Approx. 3 months (work at your own pace)

Class Summary

Learn the fundamentals of parallel computing with the GPU and the CUDA programming environment! In this class, you'll learn about parallel programming by coding a series of image processing algorithms, such as you might find in Photoshop or Instagram. You'll be able to program and run your assignments on high-end GPUs, even if you don't own one yourself.

Why It’s Important to Think Parallel

Third Pillar of Science
Learn how scientific discovery can be accelerated by combining theory and experimentation with computing to fight cancer, prevent heart attacks, and spur new advances in robotic surgery.

What Will I Learn?

You'll master the fundamentals of massively parallel computing by using CUDA C/C++ to program modern GPUs. You'll learn the GPU programming model and architecture, key algorithms and parallel programming patterns, and optimization techniques. Your assignments will illustrate these concepts through image processing applications, but this is a parallel computing course, and what you learn will translate to any application domain. Most of all, we hope you'll learn how to think in parallel.

What Should I Know?

We expect students to have solid experience with the C programming language and basic knowledge of data structures and algorithms.


Lesson 1: GPU Programming Model

Project 1: Greyscale Conversion (for that classy touch!)

Lesson 2: GPU Hardware and Parallel Communication

Project 2: Smart Blurring (miracle product for removing wrinkles!)

Lesson 3: Fundamental Parallel Algorithms

Project 3: HDR Tonemapping (when 1000:1 contrast is not enough!)

Lesson 4: Using Sort and Scan

Project 4: Red Eye Removal (soothing relief for bright red eyes)

Lesson 5: Optimizing GPU Programs

Project 5: Accelerating Histograms (when fast isn't fast enough)

Lesson 6: Parallel Computing Patterns

Project 6: Seamless Image Compositing (polar bear in the swimming pool)

Lesson 7: The Frontiers and Future of GPU Computing


When does the course begin?

This class is self-paced. You can begin whenever you like and work at your own pace. It's a good idea to set goals for yourself to make sure you stick with the course.

How long will the course be available?

This class will always be available!

How do I know if this course is for me?

Take a look at the “Class Summary,” “What Should I Know,” and “What Will I Learn” sections above. If you want to know more, just enroll in the course and start exploring.

Can I skip individual videos? What about entire lessons?

Yes! The point is for you to learn what YOU need (or want) to learn. If you already know something, feel free to skip ahead. If you ever find that you’re confused, you can always go back and watch something that you skipped.

How much does this cost?

It’s completely free! If you’re feeling generous, we would love to have you contribute your thoughts, questions, and answers to the course discussion forum.

What are the rules on collaboration?

Collaboration is a great way to learn. You should do it! The key is to use collaboration as a way to enhance learning, not as a way of sharing answers without understanding them.

Why are there so many questions?

Udacity classes are a little different from traditional courses. We intersperse our video segments with interactive questions. There are many reasons for including these questions: to get you thinking, to check your understanding, and even just for fun. But really, they are there to help you learn. They are NOT there to evaluate your intelligence, so try not to let them stress you out.

What should I do while I’m watching the videos?

Learn actively! You will retain more of what you learn if you take notes, draw diagrams, make notecards, and actively try to make sense of the material.

Course Instructors


David Luebke


David Luebke helped found NVIDIA Research in 2006 after eight years teaching computer science on the faculty of the University of Virginia. Dave's research on real-time 3D computer graphics led to an early interest in GPU computing when that field was still in its infancy. Today Dave is senior director of graphics research and an NVIDIA Distinguished Inventor. Dave lives in central Virginia with his wife and three boys, plays racquetball and ultimate frisbee, and prefers college hoops to the NBA. Find him at his website and @davedotluebke on Twitter.


John Owens


John Owens is an associate professor of electrical and computer engineering at the University of California, Davis, where he leads a research group in parallel computing. He joined the faculty at UC Davis after many happy years as a student at Stanford (graduate) and Berkeley (undergraduate), and lives in Berkeley with his wife and daughter. In his free time, he enjoys puzzles, water polo, and pursuing a finite Erdős-Bacon number. John has a web page and (after his recent sabbatical at Twitter) is learning how to tweet at @jowens.


Mike Roberts

Course Developer

Mike Roberts is a computer science PhD student at Stanford University. Before coming to Stanford, Mike spent two years doing GPU computing research at Harvard University, where he was involved in an exciting interdisciplinary project to construct a nanometer-scale wiring diagram of a mouse brain. Mike also collects rare funk 45s, and he used to DJ at a Motown night with his best friend every weekend. You can see what Mike is up to on his website.


Cheng-Han Lee

Course Developer

Cheng-Han worked as a program manager at Microsoft prior to joining Udacity, and he earned his degrees in computer science from the University of Texas at Austin and the University of California, San Diego.

Outside of work, Cheng-Han is a world traveler. He has lived in Taiwan, Shanghai, Charleston (SC), Dallas, Austin, San Diego, Seattle, and now the Bay Area. In addition to traveling, he likes to find new parks to explore, new venues to visit, and new restaurants to try.