Real-world projects from industry experts
With real-world projects and immersive content built in partnership with top-tier companies, you’ll master the tech skills companies want.
Self-driving cars are a transformational technology, at the cutting edge of robotics, machine learning, and engineering. Learn the skills and techniques used by self-driving car teams at the most advanced technology companies in the world.
At 10 hours/week
Get access to the classroom immediately on enrollment
In this program, you will learn the techniques that power self-driving cars across the full stack of a vehicle’s autonomous capabilities. Using Deep Learning with radar and lidar sensor fusion, you will train the vehicle to detect and identify its surroundings to inform navigation.
Python, C++, linear algebra, and calculus.
In this course, you will develop critical Machine Learning skills that are commonly leveraged in autonomous vehicle engineering. You will learn about the life cycle of a Machine Learning project, from framing the problem and choosing metrics to training and improving models. This course will focus on the camera sensor and you will learn how to process raw digital images before feeding them into different algorithms, such as neural networks. You will build convolutional neural networks using TensorFlow and learn how to classify and detect objects in images. With this course, you will be exposed to the whole Machine Learning workflow and get a good understanding of the work of a Machine Learning Engineer and how it translates to the autonomous vehicle context.
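As an illustration of the kind of model built in this course, below is a minimal sketch of a convolutional image classifier in TensorFlow. The input shape, layer sizes, and number of classes are placeholder assumptions for demonstration, not the project’s actual specification.

```python
# Minimal sketch of a TensorFlow/Keras convolutional classifier.
# Input shape and class count are illustrative assumptions.
import tensorflow as tf

def build_classifier(input_shape=(32, 32, 3), num_classes=10):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage (training data assumed to be loaded elsewhere):
# model = build_classifier()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```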
In this course, you will learn about a key enabler for self-driving cars: sensor fusion. Besides cameras, self-driving cars rely on other sensors with complementary measurement principles to improve robustness and reliability. Therefore, you will learn about the lidar sensor and its role in the autonomous vehicle sensor suite. You will learn about the lidar working principle, get an overview of currently available lidar types and their differences, and look at relevant criteria for sensor selection. Also, you will learn how to detect objects such as vehicles in a 3D lidar point cloud using a deep-learning approach and then evaluate detection performance using a set of state-of-the-art metrics. In the second half of the course, you will learn how to fuse camera and lidar detections and track objects over time with an Extended Kalman Filter. You will get hands-on experience with multi-target tracking, where you will learn how to initialize, update and delete tracks, assign measurements to tracks with data association techniques and manage several tracks simultaneously. After completing the course, you will have a solid foundation to work as a sensor fusion engineer on self-driving cars.
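To give a flavor of the tracking machinery described above, here is a simplified sketch of the predict/update cycle for a single track. It uses a linear Kalman filter with a constant-velocity model and a lidar-style position measurement; the course’s Extended Kalman Filter additionally linearizes the nonlinear camera measurement model. The state layout, time step, and noise values are assumptions for illustration.

```python
# Simplified Kalman filter predict/update for one tracked object.
# State x = [px, py, vx, vy]; lidar measures position only.
import numpy as np

dt = 0.1                                  # time step in seconds (assumed)
F = np.array([[1, 0, dt, 0],              # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # measurement model: position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                      # process noise covariance (assumed)
R = np.eye(2) * 0.1                       # measurement noise covariance (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                         # measurement residual
    S = H @ P @ H.T + R                   # residual covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```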
In this course, you will learn all about robotic localization, from one-dimensional motion models up to using three-dimensional point cloud maps obtained from lidar sensors. You’ll begin with the bicycle motion model, an approach that uses simple motion to estimate the vehicle’s location at the next time step before gathering sensor data. Then you’ll move on to Markov localization for 1D object tracking, further leveraging motion models. From there, you will learn how to implement two scan matching algorithms, Iterative Closest Point (ICP) and Normal Distributions Transform (NDT), which work with 2D and 3D data. Finally, you will use these scan matching algorithms in the Point Cloud Library (PCL) to localize a simulated car with lidar sensing, using a 3D point cloud map obtained from the CARLA simulator.
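The heart of point-to-point ICP is recovering the rigid transform that best aligns corresponding points. The sketch below shows that single alignment step in NumPy (the classic SVD-based solution); the course itself works with PCL in C++ and adds nearest-neighbor correspondence search and iteration, so treat this as a conceptual illustration only.

```python
# One alignment step of point-to-point ICP: given corresponding point pairs,
# find the rotation R and translation t that map source onto target.
import numpy as np

def best_fit_transform(source, target):
    """source, target: (N, 3) arrays of corresponding points."""
    src_mean = source.mean(axis=0)
    tgt_mean = target.mean(axis=0)
    src_c = source - src_mean
    tgt_c = target - tgt_mean
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)   # SVD of the cross-covariance
    R = Vt.T @ U.T                              # optimal rotation
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_mean - R @ src_mean                 # optimal translation
    return R, t
```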
Path planning routes a vehicle from one point to another, and it handles how to react when emergencies arise. The Mercedes-Benz Vehicle Intelligence team will take you through the three stages of path planning. First, you’ll apply model-driven and data-driven approaches to predict how other vehicles on the road will behave. Then you’ll construct a finite state machine to decide which of several maneuvers your own vehicle should undertake. Finally, you’ll generate a safe and comfortable trajectory to execute that maneuver.
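A behavior-planning finite state machine of the kind described above can be sketched as a set of states, allowed transitions, and a cost function that scores each candidate successor. The state names, transitions, and cost interface below are assumptions for illustration, not the project’s exact design.

```python
# Illustrative behavior-planning finite state machine.
# States and transitions are placeholder assumptions.
KEEP_LANE = "KL"
PREP_LANE_CHANGE_LEFT = "PLCL"
LANE_CHANGE_LEFT = "LCL"

SUCCESSORS = {
    KEEP_LANE: [KEEP_LANE, PREP_LANE_CHANGE_LEFT],
    PREP_LANE_CHANGE_LEFT: [KEEP_LANE, PREP_LANE_CHANGE_LEFT, LANE_CHANGE_LEFT],
    LANE_CHANGE_LEFT: [KEEP_LANE],
}

def choose_next_state(current_state, cost_fn, predictions):
    """Pick the reachable state with the lowest cost given predicted traffic."""
    candidates = SUCCESSORS[current_state]
    return min(candidates, key=lambda s: cost_fn(s, predictions))
```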
This course will teach you how to control a car once you have a desired trajectory; in other words, how to actuate the throttle and the steering wheel so the car follows a trajectory described by coordinates. The course covers the most basic but also the most common controller: the Proportional-Integral-Derivative, or PID, controller. You will understand the basic principles of feedback control and how they are used in autonomous driving.
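Below is a minimal sketch of a PID controller driving the steering command from the cross-track error, the kind of feedback loop this course covers. The gains and time step are placeholder values for illustration; in practice they must be tuned for the vehicle and trajectory.

```python
# Minimal PID steering controller driven by cross-track error.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_error = 0.0
        self.integral = 0.0

    def control(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Negative feedback: steer against the cross-track error
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

# Example usage with assumed gains and loop rate:
# controller = PID(kp=0.2, ki=0.004, kd=3.0)
# steer = controller.control(cross_track_error, dt=0.02)
```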
Our knowledgeable mentors guide your learning and are focused on answering your questions, motivating you and keeping you on track.
You’ll have access to GitHub portfolio review and LinkedIn profile optimization to help you advance your career and land a high-paying role.
Tailor a learning plan that fits your busy life. Learn at your own pace and reach your personal goals on the schedule that works best for you.
We provide services customized for your needs at every step of your learning journey to ensure your success.
Thomas is originally a geophysicist, but his passion for Computer Vision led him to become a Deep Learning engineer at various startups. By creating online courses, he hopes to make education more accessible. When he is not coding, Thomas can be found in the mountains skiing or climbing.
Antje Muntzinger is a technical lead for sensor fusion at Mercedes-Benz. She wrote her PhD thesis on sensor fusion for advanced driver assistance systems and holds a diploma in mathematics. By educating more self-driving car engineers, she hopes to help realize the dream of fully autonomous driving.
Andreas Haja is an engineer, educator and autonomous vehicle enthusiast with a PhD in computer science. Andreas now works as a professor, where he focuses on project-based learning in engineering. During his career with Volkswagen and Bosch he developed camera technology and autonomous vehicle prototypes.
Aaron has a background in electrical engineering, robotics, and deep learning. Currently a Senior AV Software Engineer at Mercedes-Benz Research & Development, he previously worked as a Content Developer and Simulation Engineer at Udacity, focusing on developing projects for self-driving cars.
Before MITRE, Munir was a Motion Planning & Decision-Making Manager at Amazon. He also worked for two self-driving car companies and for Walt Disney Shanghai, building the TRON Lightcycle attraction. Munir holds a B.Eng. in Aerospace, an M.S. in Physics, and an M.S. in Space Studies.
Mathilde has a strong background in optimization and control, including reinforcement learning, and holds an engineering diploma from the French electrical engineering school Supelec. She previously worked on Tesla’s energy and optimization team.
Prior to working as a Senior Software Engineer in the autonomous vehicle industry, David Silver led the School of Autonomous Systems at Udacity. David was also a research engineer on the autonomous vehicle team at Ford. He has an MBA from Stanford and a BSE in computer science from Princeton.
A well-prepared student will be able to:
The following versions are used in this program (subject to update):