
How Self-driving Cars Work: Sensor Systems

People have been excited about self-driving cars since the early 20th century. Perhaps you are one of many enthusiasts who foresee an automated vehicle future with enhanced driving safety, personal leisure time behind the wheel, and relief from the burdens of driving.

You can be better prepared for the future by learning about how self-driving cars work. Self-driving cars rely on computers, sensor systems, algorithms, machine learning, and artificial intelligence to accurately perceive and safely navigate their environments.

This post focuses on the complex sensor systems you can find in self-driving cars. Let’s dive into the technologies that make self-driving cars possible. 

Self-driving Cars Use Sensors to Work

Like people, self-driving cars must sense their surroundings to navigate safely. People use senses like hearing, sight, taste, smell, and touch to interact with their environments. Autonomous vehicle developers equip self-driving cars with high-tech sensor systems that play an analogous role.

Illuminating the World With Lidar

Lidar (light detection and ranging), also known as 3D laser scanning, is a tool that self-driving cars use to scan their environments with lasers. A typical lidar sensor pulses thousands of beams of infrared laser light into its surroundings and waits for the beams to reflect off environmental features. Together, these pulses build a point cloud: a set of points representing the 3D forms in the surrounding space.

Lidar systems measure the time between emitting a laser pulse and sensing the reflected light on their photodetectors. Because light travels at a known, constant speed, this time of flight translates directly into distance: the longer it takes for a return signal to arrive, the farther away the object is.
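The time-of-flight arithmetic above can be sketched in a few lines. A real lidar unit performs this calculation in hardware; the 200-nanosecond echo below is just an illustrative value.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Distance to a surface from a pulse's round-trip time.

    The light travels out and back, so the one-way distance is half
    the total path the pulse covered.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# An echo received 200 nanoseconds after emission:
print(round(lidar_distance_m(200e-9), 2))  # 29.98 (meters)
```

Note how short the timescales are: a return from an object 30 meters away arrives in a fifth of a microsecond, which is why lidar can scan thousands of points per second.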

Lidar systems enable self-driving cars to detect small objects with high precision, and because lidar supplies its own illumination, it works in darkness. However, lidar performance often degrades in inclement weather such as fog, rain, or snow, which scatters the laser light.

Reading the Environment With Radar

Radar (radio detection and ranging) is useful in many contexts such as weather forecasting, astronomy, communications, ocean navigation, military operations, and autonomous driving.

Autonomous cars can emit radio waves in known directions with radar transmitters. Reflected waves that return to a car’s radar receiver help the car derive information about surrounding objects, such as their angles, ranges, and velocities.

Radar typically operates well over long distances and in most weather conditions. However, its resolution is too coarse to identify what an object is, and it can report false positives.
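One of radar's standout measurements is velocity, which comes from the Doppler shift of the reflected wave. As a hedged sketch of that relationship, the snippet below assumes a 77 GHz carrier (a common automotive radar band); the measured shift is a made-up sample value.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def radial_velocity_m_per_s(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Speed of an object toward the radar, from the measured Doppler shift.

    The wave is shifted once on the way out and again on reflection,
    hence the factor of two in the denominator.
    """
    return doppler_shift_hz * SPEED_OF_LIGHT_M_PER_S / (2 * carrier_hz)

# A 5 kHz shift measured on a 77 GHz automotive radar:
print(round(radial_velocity_m_per_s(5_000, 77e9), 2))  # 9.73 (m/s)
```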

Hearing With Sonar

Self-driving cars can use sonar (sound navigation and ranging) to detect objects and navigate. Sonar can be passive or active. Passive sonar systems simply listen for sounds made by nearby objects. Active sonar systems emit sound pulses and read the echoes returned from physical surfaces.

Self-driving cars can use sonar to detect large objects made of solid materials (e.g. metal, ceramic) at short distances. Sonar sensors don’t require light to operate. However, sonar sensors are constrained by the speed of sound (far slower than the speed of light) and sometimes report nonexistent objects.
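Active sonar ranging uses the same time-of-flight idea as lidar, just at the speed of sound, which is why sonar echoes take so much longer to arrive. The values below are illustrative.

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees Celsius

def sonar_distance_m(round_trip_time_s: float) -> float:
    """Distance to a surface from an echo's round-trip time,
    halved because the sound travels out and back."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2

# An echo arriving 20 milliseconds after the ping:
print(round(sonar_distance_m(0.02), 2))  # 3.43 (meters)
```

Compare this with the lidar example: a lidar return from 30 meters arrives in nanoseconds, while a sonar echo from a few meters takes tens of milliseconds. That speed limit is one reason sonar suits short-range tasks like parking assistance.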

Capturing Images With Cameras

Autonomous vehicles can visualize their environments with high-resolution digital camera images. Self-driving cars can use camera images to “see” and interpret environmental details (e.g. signs, traffic lights, animals) in ways that approximate human vision, a capability known as computer vision.

Self-driving cars can use many types of input data for computer vision. Examples include:

  • Multi-dimensional data from 3D scanning devices
  • Video segments
  • Camera images captured from different viewing angles

Self-driving cars can recognize objects, control vehicle motion, and model 3D scenes with image data.

Like other sensor systems, cameras have strengths and limitations. Cameras offer high-resolution imagery but do not work well in all weather conditions. Also, cameras capture only what is visually apparent, so they provide no direct distance measurements the way lidar and radar do.

Sensing Movements With Inertial Navigation Systems

Inertial navigation systems, such as inertial measurement units (IMUs) built from accelerometers and gyroscopes, detect a car’s physical movements. These devices help self-driving cars stabilize themselves and decide whether to take protective safety actions (e.g. deploying an airbag or preventing the car from rolling over).
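As a toy illustration of how an IMU feeds a safety decision, the sketch below integrates gyroscope roll-rate samples into a roll-angle estimate and checks it against a rollover threshold. The threshold and sample values are made up, and real systems also fuse accelerometer data to correct the drift that pure integration accumulates.

```python
ROLLOVER_THRESHOLD_DEG = 45.0  # hypothetical trigger angle for illustration

def update_roll_deg(roll_deg: float, gyro_rate_deg_per_s: float, dt_s: float) -> float:
    """Advance the roll-angle estimate by one time step of gyro integration."""
    return roll_deg + gyro_rate_deg_per_s * dt_s

roll = 0.0
for rate in [10.0, 20.0, 30.0, 40.0]:  # roll-rate samples in deg/s, at 10 Hz
    roll = update_roll_deg(roll, rate, 0.1)

# The car has rolled 10 degrees: well below the protective-action threshold.
print(round(roll, 1), roll > ROLLOVER_THRESHOLD_DEG)  # 10.0 False
```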

Tracking Positions With the Global Positioning System

The U.S. government operates the Global Positioning System (GPS), a radio navigation system built on a constellation of at least 24 satellites. Anyone with a GPS receiver can obtain geolocation and time information.

Self-driving cars can use GPS to geolocate with numerical coordinates (e.g. latitude, longitude) representing their physical locations in space. They can also navigate by combining real-time GPS coordinates with other digital map data (e.g. via Google Maps).

GPS positions are often accurate only to within about a five-meter radius. To compensate, self-driving cars can apply data-processing techniques like particle filtering to refine their location estimates.
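The idea behind particle filtering can be sketched in one dimension: keep many guesses ("particles") of the car's position, weight each by how well it agrees with the latest noisy GPS reading, and resample in proportion to the weights. This is a minimal illustrative sketch, not a production filter, and all the numbers are made up.

```python
import math
import random

def particle_filter_step(particles, gps_reading, gps_std=5.0,
                         motion=0.0, motion_std=0.5):
    """One predict-weight-resample cycle of a 1D particle filter."""
    # Predict: shift each particle by the car's estimated motion, plus noise.
    moved = [p + motion + random.gauss(0, motion_std) for p in particles]
    # Weight: particles near the GPS reading score higher (Gaussian likelihood).
    weights = [math.exp(-((p - gps_reading) ** 2) / (2 * gps_std ** 2))
               for p in moved]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
# Start with no idea where we are along a 100-meter stretch of road.
particles = [random.uniform(0, 100) for _ in range(1000)]
# Feed in several noisy GPS readings of roughly the same true position.
for reading in [50.2, 49.1, 50.8, 50.0]:
    particles = particle_filter_step(particles, reading)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # the estimate concentrates near the true position
```

Each reading individually is only good to about five meters, but the resampled particle cloud tightens around the true position as evidence accumulates.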

Making Good Use of Sensors

Self-driving cars typically carry many sensors with overlapping, redundant functions. This redundancy provides a backup if one sensor fails and lets the car combine the strengths of different sensor types.

Autonomous vehicle developers use data-processing techniques like sensor fusion to combine information from multiple sensors in real time. Sensor fusion can improve how self-driving cars interpret and respond to environmental variables, and can therefore make cars safer.
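One simple form of sensor fusion, offered here as a hedged sketch rather than how any particular vehicle does it, is inverse-variance weighting of two independent estimates of the same quantity. This is the measurement-update step of a one-dimensional Kalman filter: the less noisy sensor gets more weight, and the fused estimate is more certain than either input. The sensor variances below are made-up numbers.

```python
def fuse(estimate_a: float, var_a: float,
         estimate_b: float, var_b: float):
    """Fuse two independent estimates by inverse-variance weighting.

    Returns the combined estimate and its (smaller) variance.
    """
    fused = (estimate_a * var_b + estimate_b * var_a) / (var_a + var_b)
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# Say lidar reports 30.0 m with low noise (variance 0.01 m^2) while radar
# reports 30.4 m with higher noise (variance 0.25 m^2):
distance, variance = fuse(30.0, 0.01, 30.4, 0.25)
print(round(distance, 2), round(variance, 3))  # 30.02 0.01
```

The fused distance sits close to the more trustworthy lidar reading, and the fused variance is smaller than either sensor's alone, which is exactly the benefit redundant sensors provide.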

Getting Around in Self-driving Cars

Self-driving cars, like human drivers, rely on basic navigational skills to get from point A to point B:

  • Map Making and Reading. Self-driving cars combine information from their sensor systems with other data (e.g. digital maps) to create and read maps of their environments.
  • Path Planning. Artificially intelligent autonomous vehicles use their sensor systems to plan routes through their environments.
  • Obstacle Avoidance. Self-driving cars use their sensor systems in real-time to navigate safely. To drive, they must accurately detect, interpret, and react to environmental cues so they avoid obstacles like pedestrians, cyclists, buildings, and other cars.

Learn More About How Self-driving Cars Work

Udacity’s founder, Sebastian Thrun, is an expert in the field of self-driving cars. He is an experienced self-driving car developer and winning team leader.

Online learning platforms like Udacity offer you the opportunity to learn with tech experts and enhance your real-world skills. Udacity offers Nanodegree programs related to self-driving cars, such as the Self-driving Car Engineer and Sensor Fusion Engineer Nanodegree programs, built in partnership with Mercedes-Benz Research and Development North America (MBRDNA).

The Udacity team is passionate about self-driving cars and is excited to help you learn more. Consider registering for an absorbing self-driving car Nanodegree program today!

Start Learning
