The Ultimate Hands-On Hadoop: Tame your Big Data
Data Engineering and Hadoop tutorial with MapReduce, HDFS, Spark, Flink, Hive, HBase, MongoDB, Cassandra, Kafka + more!
Platform: Udemy | Rating: 4.6
Udemy coupon code for The Ultimate Hands-On Hadoop: Tame your Big Data! course.
Created by Sundog Education by Frank Kane | 14.5 hours of on-demand video | 2 downloadable resources
Hadoop Course Overview
The Ultimate Hands-On Hadoop: Tame your Big Data!
This course is comprehensive, covering over 25 different technologies in more than 14 hours of video lectures. It’s filled with hands-on activities and exercises, so you get real experience using Hadoop – it’s not just theory.
You’ll find a range of activities in this course for people at every level. If you’re a project manager who just wants to learn the buzzwords, there are web UIs for many of the activities in the course that require no programming knowledge. If you’re comfortable with command lines, we’ll show you how to work with them too. And if you’re a programmer, I’ll challenge you to write real scripts on a Hadoop system using Scala, Pig Latin, and Python.
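To give a flavor of those programming exercises, here’s a minimal PySpark word-count sketch; the HDFS path and application name are illustrative placeholders, not materials from the course:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()

# Read a text file from HDFS into an RDD of lines (the path is a placeholder).
lines = spark.sparkContext.textFile("hdfs:///user/student/sample.txt")

# Classic word count: split each line into words, pair each word with 1, sum per word.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# Print the ten most frequent words.
for word, count in counts.takeOrdered(10, key=lambda pair: -pair[1]):
    print(word, count)

spark.stop()
```

A script like this is typically launched with spark-submit so the work runs on the cluster rather than on your local machine.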
You’ll walk away from this course with a real, deep understanding of Hadoop and its associated distributed systems, and you can apply Hadoop to real-world problems. Plus a valuable completion certificate is waiting for you at the end!
What you’ll learn
- Design distributed systems that manage “big data” using Hadoop and related data engineering technologies.
- Use HDFS and MapReduce for storing and analyzing data at scale.
- Use Pig and Spark to create scripts to process data on a Hadoop cluster in more complex ways.
- Analyze relational data using Hive and MySQL.
- Analyze non-relational data using HBase, Cassandra, and MongoDB.
- Query data interactively with Drill, Phoenix, and Presto.
- Choose an appropriate data storage technology for your application.
- Understand how Hadoop clusters are managed by YARN, Tez, Mesos, Zookeeper, Zeppelin, Hue, and Oozie.
- Publish data to your Hadoop cluster using Kafka, Sqoop, and Flume.
- Consume streaming data using Spark Streaming, Flink, and Storm (see the sketch after this list).
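As a taste of that last point, here’s a minimal PySpark Structured Streaming sketch that keeps a running word count over messages arriving on a Kafka topic; the broker address and topic name are placeholders, not from the course:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("KafkaStreamSketch").getOrCreate()

# Subscribe to a Kafka topic; the broker and topic are assumed placeholders.
lines = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "orders")
              .load()
              .selectExpr("CAST(value AS STRING) AS value"))

# Split each message into words and maintain a running count per word.
counts = (lines.select(explode(split(lines.value, " ")).alias("word"))
               .groupBy("word")
               .count())

# Continuously print the updated counts to the console.
query = (counts.writeStream
               .outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()
```

Running this requires the spark-sql-kafka connector on the classpath (for example via spark-submit --packages).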
Recommended Big Data Hadoop Courses
Data Engineering Master Course: Spark/Hadoop/Kafka/MongoDB (Best seller)
Taming Big Data with MapReduce and Hadoop – Hands On!
Who this course is for
- Software engineers and programmers who want to understand the larger Hadoop ecosystem, and use it to store, analyze, and vend “big data” at scale.
- Project, program, or product managers who want to understand the lingo and high-level architecture of Hadoop.
- Data analysts and database administrators who are curious about Hadoop and how it relates to their work.
- System architects who need to understand the components available in the Hadoop ecosystem, and how they fit together.
Instructor
Sundog Education is led by Frank Kane and owned by Frank’s company, Sundog Software LLC. Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. As an Amazon “bar raiser,” he held veto authority over hiring decisions across the company, interviewed over 1,000 candidates, and hired and managed hundreds. He holds 26 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own company, Sundog Software, which has taught over one million students around the world about machine learning, data engineering, and managing engineers.