Approx. 1 month
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing. Learn the fundamental principles behind it, and how you can use its power to make sense of your Big Data.
Lesson 1 has no technical prerequisites and is a good overview of Hadoop and MapReduce for managers.
To get the most out of the class, however, you need basic programming skills in Python on a level provided by introductory courses like our Introduction to Computer Science course.
To learn more about Hadoop, you can also check out the book Hadoop: The Definitive Guide.
See the Technology Requirements for using Udacity.
Use MapReduce to reveal surprising trends in Udacity forum data.
What is “Big Data”? The dimensions of Big Data. Scaling problems. HDFS and the Hadoop ecosystem.
The basics of HDFS, MapReduce, and Hadoop clusters.
Writing MapReduce programs to answer questions about data.
MapReduce design patterns.
Answering questions about big sales data and analyzing large website logs.
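As a taste of what the MapReduce lessons cover, here is a minimal sketch of a Hadoop Streaming-style job in Python that totals sales per store. The tab-separated input format and store/amount field names are illustrative assumptions, not the actual course dataset, and `sorted()` stands in for Hadoop's shuffle-and-sort phase:

```python
# Sketch of a Hadoop Streaming-style MapReduce job in Python.
# Assumption: each input line looks like "store<TAB>amount"; this format
# is illustrative, not the course's real dataset.
from itertools import groupby

def mapper(lines):
    """Emit (store, amount) pairs, one per well-formed input line."""
    for line in lines:
        parts = line.strip().split("\t")
        if len(parts) == 2:
            store, amount = parts
            yield store, float(amount)

def reducer(pairs):
    """Sum amounts per store; Hadoop delivers pairs grouped by key."""
    for store, group in groupby(pairs, key=lambda kv: kv[0]):
        yield store, sum(amount for _, amount in group)

if __name__ == "__main__":
    data = ["A\t10.0", "B\t2.5", "A\t4.0"]
    # Between map and reduce, Hadoop sorts mapper output by key;
    # sorted() simulates that shuffle step locally.
    shuffled = sorted(mapper(data), key=lambda kv: kv[0])
    for store, total in reducer(shuffled):
        print(f"{store}\t{total}")
```

In a real Hadoop Streaming job, the mapper and reducer would be two separate scripts reading from stdin and writing to stdout, with the framework handling the shuffle between them.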
Ian Wrigley is currently the Senior Curriculum Manager at Cloudera, responsible for the team that creates all the company’s Hadoop training materials. He’s been a tech journalist, an instructor, and a course author for over 20 years, during which time he’s taught everything from C programming to copywriting for the Web. He describes his job as “teaching geeks to be geekier”.
Sarah Sproehnle is the Vice President of Educational Services at Cloudera, a company that helps develop, manage, and support Apache Hadoop. While she is a geek at heart, her passion is helping people learn complex technology. In addition to teaching people how to use Hadoop, she’s taught database administration, various programming languages, and system administration.