Though we’re mostly unaware of it, our senses relentlessly work together to guide us through the world. Our vision, hearing, touch and smell create a mental image of the environment at every instant, helping us make basic decisions: Is it safe to cross the street? Is this sandwich still edible?

Since these processes are involuntary, we’re not aware of how complex they really are. However, the intricacy of fusing together different sensory channels reveals itself when we start teaching a machine to master the same skill.

What Is Sensor Fusion?

Just like us mammals, a machine benefits from receiving data from multiple sources. That’s the simple idea behind sensor fusion. A subdiscipline of information fusion, sensor fusion combines sensory data to reduce uncertainty and help agents make more informed decisions. 

Noise and Uncertainty Reduction

One characteristic common to all sensors is that they are susceptible to interference. A camera may be covered or blinded by sunlight; a radar may be jammed. These scenarios can result in sensory information that’s skewed, patchy or plain wrong. Therefore, virtually all real-world sensor data is made up of a signal (the part that we’re interested in) and noise (the part that we’d like to ignore).

Our uncertainty lies in the fact that we don’t know just how noisy our data is. Sensor fusion seeks to separate the signal from the noise by looking at different data sources simultaneously, thereby increasing our level of certainty.
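
As a rough illustration of this idea, consider two sensors that measure the same distance with different noise levels. The Python snippet below is a minimal sketch, with invented readings and variances; it weights each reading by the inverse of its variance, so the fused estimate is less uncertain than either reading alone.

```python
# Minimal sketch: fusing two noisy readings of the same quantity.
# All readings and noise variances below are invented for illustration.

def fuse(reading_a, var_a, reading_b, var_b):
    """Combine two readings, weighting each by the inverse of its variance."""
    weight_a = 1.0 / var_a
    weight_b = 1.0 / var_b
    fused = (weight_a * reading_a + weight_b * reading_b) / (weight_a + weight_b)
    fused_var = 1.0 / (weight_a + weight_b)
    return fused, fused_var

# Example: a lidar and a radar both estimate the distance to an obstacle (meters).
lidar_distance, lidar_var = 10.2, 0.04   # precise but not perfect
radar_distance, radar_var = 10.9, 0.25   # noisier

distance, variance = fuse(lidar_distance, lidar_var, radar_distance, radar_var)
print(f"fused distance: {distance:.2f} m, variance: {variance:.3f}")
# The fused variance (about 0.034) is smaller than either sensor's on its own.
```

The less we trust a sensor, the less its reading pulls the fused estimate, which is exactly the intuition behind the more sophisticated algorithms discussed later in this post.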

Applications of Sensor Fusion

A basic, everyday example of sensor fusion is how Google Maps combines different sources to infer not only your exact position but also which way you’re facing and whether you’re indoors or outdoors. This is made possible by combining GPS satellite data with data from your phone’s sensors, including its gyroscope, compass and accelerometer.

With the advent of autonomous machines such as driverless cars and mobile robots, sensor fusion has become a hot topic in AI. While sensors have been around for quite some time, they’re now smaller and cheaper than ever, allowing for their integration into autonomous systems — or into your smartphone for that matter.

What Types of Sensors Are There?  

Just as we have different senses, machines use separate sensors for a number of tasks. Some sensors mimic human senses; cameras, for example, mimic vision. But sensors can also go beyond the limits of human perception. Ultrasonic sonar sensors are modeled after the principle of echolocation that bats and whales (and some people) use for orientation.

Lidar (Light Detection and Ranging)

A sensor that has become very popular in the field of autonomous driving is the lidar. Originally coined as a portmanteau of light and radar, the lidar uses rapid laser pulses to produce an accurate three-dimensional image of its surroundings. But its use is not restricted to driverless cars. In fact, lidars have been producing high-resolution maps of the Earth’s surface for decades.

How Does Sensor Fusion Work?

Now that we’ve covered the general idea behind sensor fusion, let’s look at some implementation details. To begin understanding this vast field, let’s look into three different classifications of sensor fusion systems.

Sensor Fusion by Abstraction Level

Most data-driven systems post-process the raw signal in some way. When working with sensory data, the level of transformation at which we do the fusion makes a big difference. The chosen level has implications for storage, bandwidth and computation demands, as well as for the interpretability and accuracy of the resulting model.

Low-Level

Low-level sensor fusion takes raw data as input: the sensor’s individual point measurements. This approach ensures that no information is lost or distorted by processing steps before the fusion. The downside is that it requires processing an immense amount of data.

Mid-Level

At the intermediate level, data fusion operates on object hypotheses: data that has already been interpreted, either within the sensor itself or by a separate processing unit. For example, a camera might place an object straight ahead while the lidar senses it slightly to the right. Mid-level sensor fusion weights these two hypotheses to arrive at a single estimate, as in the sketch below.
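
A minimal sketch of that weighting step might look like the following Python snippet. The offsets and confidence values are invented for illustration, not taken from any real sensor.

```python
import numpy as np

# Hypothetical object hypotheses from two sensors: the lateral offset of a
# detected object relative to the vehicle's heading (meters, right = positive).
camera_offset, camera_confidence = 0.0, 0.4   # camera: "straight ahead"
lidar_offset, lidar_confidence = 0.3, 0.6     # lidar: "slightly to the right"

# Mid-level fusion works on these interpreted detections, not on raw data,
# weighting each hypothesis by how much we trust the sensor that produced it.
offsets = np.array([camera_offset, lidar_offset])
weights = np.array([camera_confidence, lidar_confidence])
fused_offset = np.average(offsets, weights=weights)

print(f"fused lateral offset: {fused_offset:.2f} m")   # 0.18 m to the right
```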

High-Level

High-level sensor fusion operates on tracks: hypotheses about an object’s movement through space. Here, too, two hypotheses are merged in a weighted manner. This time, however, the hypotheses concern not just an object’s position but also its trajectory, incorporating its past and predicted future states.
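
To see how this differs from the mid-level case, here is a toy Python sketch in which each source supplies a full track, position plus velocity, and the fused track can be projected forward in time. All numbers and weights are invented.

```python
import numpy as np

# Toy high-level fusion: each source supplies a track, i.e. a state vector
# [x, y, vx, vy] describing where the object is and how it is moving.
# The values and weights below are invented for illustration.
track_from_camera = np.array([12.0, 0.2, 8.5, 0.0])   # position (m), velocity (m/s)
track_from_radar  = np.array([12.4, 0.1, 9.1, 0.1])

w_camera, w_radar = 0.45, 0.55
fused_track = w_camera * track_from_camera + w_radar * track_from_radar

# Because a track carries velocity, the fused hypothesis also predicts where
# the object will be, for example one second from now.
predicted_position = fused_track[:2] + fused_track[2:] * 1.0
print("fused track [x, y, vx, vy]:", fused_track)
print("predicted position in 1 s:", predicted_position)
```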

Centralized vs. Decentralized vs. Distributed Sensor Fusion

Centralized

We’ve already mentioned that sensory data may be processed at different locations. In a centralized system, all data travels to a central processing unit. Such a system typically performs low-level sensor fusion, since all raw data is processed in one place. Again, the bandwidth required by such a system might quickly get out of hand.

Decentralized

In a decentralized system, there is no single fusion center: each node processes its own data locally and exchanges results with the other fusing nodes. In the extreme case, every fusion node communicates with every other node, so the number of connections grows quadratically with the number of nodes.

Distributed

Distributed systems process data locally before sending it to a central unit at which the sensor fusion takes place. Such a system can have one or more fusion nodes.

Competitive vs. Complementary vs. Coordinated Sensor Fusion

Competitive

We may combine our sensors’ signals in different ways. When we fuse data from two sensors that measure the same thing, we’re dealing with competitive, or redundant, fusion. Once again, our aim is to achieve higher accuracy than we could attain with just one sensor.

Complementary

By contrast, complementary fusion combines two or more sensors to produce a picture that no single sensor could capture on its own. For instance, an autonomous car often employs several cameras to make up for each one’s restricted field of view, producing a 360-degree image of the vehicle’s surroundings.

Here, the purpose of fusing data is not to increase accuracy, but rather to produce a new object that cannot otherwise be observed by a single sensor.

Coordinated

In coordinated sensor fusion, we use two or more sensors to look at the same object. By combining them, we achieve a new perspective that, just like with complementary fusion, couldn’t be produced by one sensor alone. For instance, we can use two two-dimensional images of the same object, taken from different angles, to produce a single three-dimensional representation, as sketched below.
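
One minimal sketch of this is stereo depth estimation: two cameras a known distance apart see the same point at slightly different image positions, and that shift (the disparity) reveals the point’s depth. The focal length, baseline and pixel coordinates below are assumptions chosen purely for illustration, under an idealized pinhole-camera model.

```python
# Toy coordinated fusion: two 2-D camera views of the same point yield a
# 3-D depth estimate. All parameters assume an idealized, rectified stereo rig.
focal_length_px = 800.0     # camera focal length, in pixels
baseline_m = 0.5            # horizontal distance between the two cameras

# Pixel column at which the same object appears in the left and right image.
u_left, u_right = 420.0, 380.0
disparity = u_left - u_right             # 40 pixels

# Classic pinhole stereo relation: depth = focal_length * baseline / disparity.
depth_m = focal_length_px * baseline_m / disparity
print(f"estimated depth: {depth_m:.1f} m")   # 10.0 m
```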

Where Is Sensor Fusion Used?

You might recall our bit on sensor fusion in autonomous driving. But general data fusion predates driverless cars and has many applications, from business analytics to oceanography. Sensor fusion, as a special application of data fusion, has grown immensely in recent years.

Any machine that moves in the real world will generally benefit from sensor fusion. This applies to robots that must learn to navigate unknown territory, such as your robot vacuum.

Another area that employs sensor fusion is the Internet of Things (IoT). In IoT, multiple data sources are fused to produce “smart” systems in both the private and public spheres, including home applications, healthcare and public transport.

Sensor Fusion Algorithms

So how do we make sense of all the incoming sensory data? And how do machines know which data to trust? Most sensor fusion algorithms make use of Kalman filters. The trick with these filters is that they output judgments not only about the world but also about how much the sensors themselves can be trusted. This recursive updating of beliefs is closely related to belief propagation.

Kalman filters allow us to incorporate prior knowledge about a sensory device into the fusion system. For instance, prior knowledge can tell us that GPS data obtained without a clear view of the sky will be nearly worthless. In this manner, the sensor fusion algorithm iteratively updates its assumptions about the world with the data it receives.
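
As a rough, simplified sketch (not a production implementation), here is a one-dimensional Kalman filter in Python. The process and measurement noise values are assumptions; the key point is that the Kalman gain decides, at every step, how much to trust the new measurement relative to the filter’s current belief.

```python
# Minimal one-dimensional Kalman filter sketch; the noise values are assumptions.

def kalman_step(x, p, measurement, process_var, measurement_var):
    """One predict-update cycle for a scalar state (e.g. a distance in meters)."""
    # Predict: we assume the state stays the same, but our uncertainty grows.
    p = p + process_var

    # Update: the Kalman gain decides how much to trust the new measurement.
    k = p / (p + measurement_var)        # gain close to 1 -> trust the sensor
    x = x + k * (measurement - x)        # blend prediction and measurement
    p = (1.0 - k) * p                    # uncertainty shrinks after the update
    return x, p

# Start with a vague prior belief (large uncertainty), then feed in readings.
x, p = 0.0, 1000.0
readings = [10.3, 10.1, 10.4, 10.2, 10.3]    # invented noisy sensor readings

for z in readings:
    x, p = kalman_step(x, p, z, process_var=0.01, measurement_var=0.25)
    print(f"estimate: {x:.2f} m, uncertainty: {p:.3f}")
```

A real system extends this to multi-dimensional states and gives each sensor its own measurement noise, which is how the filter encodes how much each device can be trusted.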

The principle of combining observations with prior knowledge comes from Bayesian statistics, which has advanced much of the machine learning world. To learn more about sensor fusion algorithms, check out our blog post.

Start Your Journey

Sensor fusion is the fundamental building block that allows machines to move about the real world safely and intelligently. Learn more about mastering this rapidly developing art. Take our Nanodegree to become a sensor fusion engineer. 

Start Learning