
Udacity CS373: Programming a Robotic Car

Unit 3: Particle Filters

Three Types of Filters

Particle filters are a family of algorithms for estimating the state of a system. Of the three filters covered in this course, particle filters are the easiest to program and the most flexible.

Quiz: State Space

  • When it comes to histogram filters and Kalman filters, what is the state space? Discrete or continuous?
Quiz                       | State Space
Class 1: Histogram Filters | Discrete / Continuous
Class 2: Kalman Filters    | Discrete / Continuous

Quiz: Belief Modality

With respect to a belief function, check which distribution — unimodal or multimodal — pertains to each filter.

Quiz                       | State Space           | Belief
Class 1: Histogram Filters | Discrete / Continuous | Unimodal / Multimodal
Class 2: Kalman Filters    | Discrete / Continuous | Unimodal / Multimodal

Quiz: Efficiency

When it comes to scaling with the number of dimensions of the state space, do the grid cells or Gaussians of each filter grow quadratically or exponentially?

Quiz                       | State Space           | Belief                | Efficiency
Class 1: Histogram Filters | Discrete / Continuous | Unimodal / Multimodal | Quadratic / Exponential
Class 2: Kalman Filters    | Discrete / Continuous | Unimodal / Multimodal | Quadratic / Exponential

Quiz: Exact or Approximate

When applied to robotics, is the histogram filter exact or approximate? What about the Kalman filter?

Quiz                       | State Space           | Belief                | Efficiency              | In Robotics
Class 1: Histogram Filters | Discrete / Continuous | Unimodal / Multimodal | Quadratic / Exponential | Exact / Approximate
Class 2: Kalman Filters    | Discrete / Continuous | Unimodal / Multimodal | Quadratic / Exponential | Exact / Approximate

Particle Filters

Let's fill in the characteristics of particle filters the same way we did with the histogram and Kalman filters.

Quiz                       | State Space           | Belief                | Efficiency              | In Robotics
Class 1: Histogram Filters | Discrete / Continuous | Unimodal / Multimodal | Quadratic / Exponential | Exact / Approximate
Class 2: Kalman Filters    | Discrete / Continuous | Unimodal / Multimodal | Quadratic / Exponential | Exact / Approximate
Class 3: Particle Filters  | Continuous            | Multimodal            | ?                       | Approximate

In terms of efficiency, the jury is still out. In certain situations particle filters scale exponentially, and it would be a mistake to use them for problems with more than about four dimensions. However, in tracking domains they tend to scale much better.

The key advantage of particle filters is that they are really easy to program. In this class you will write your own particle filter for a continuous value localization problem.

Here is the floor plan of an environment in which a robot is located and has to perform global localization. Global localization means the object has no idea where it is in space and has to find out where it is just from sensor measurements.

06.png

The robot, which is located in the upper right corner of the environment, has range sensors, represented by the blue stripes. It uses sonar sensors, which measure distance with sound, to range nearby obstacles. These measurements help the robot determine a good posterior distribution over where it is. The robot is actually in the middle of a corridor, but it doesn't know that; it is completely uncertain of its location.

In this environment the red dots are particles. Each is a discrete guess as to where the robot might be, structured as an x coordinate, a y coordinate, and a heading direction: three values that together comprise a single guess. A single guess is not a filter, however; rather, it is the set of several thousand such guesses that together generates an approximate representation of the posterior of the robot.

The essence of particle filters is to have the particles guess where the robot might be moving, but also to have them survive, a kind of "survival of the fittest," so that particles that are more consistent with the measurements are more likely to survive. As a result, places of high probability collect more particles and are therefore more representative of the robot's posterior belief. The particles, together, make up the approximate belief of the robot as it localizes itself.
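
As a preview, here is a minimal sketch of the loop you will build in this unit. The names move_particle, weight_particle, and resample are hypothetical placeholders for the steps developed below, not functions from Sebastian's code.

    def particle_filter_step(particles, motion, measurement):
        # 1. prediction: every particle guesses where the robot moved
        particles = [move_particle(p, motion) for p in particles]
        # 2. measurement update: weight each guess by how consistent it
        #    is with the actual measurement ("survival of the fittest")
        weights = [weight_particle(p, measurement) for p in particles]
        # 3. resampling: keep particles in proportion to their weights
        return resample(particles, weights)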

Using Robot Class

Sebastian has written some code that will allow us to make a robot move along the x and y coordinates as well as in the heading direction. Take a minute to familiarize yourself with this code and then see how you can use it.

    from math import *
    import random

    landmarks = [[20.0, 20.0],
                 [80.0, 80.0],
                 [20.0, 80.0],
                 [80.0, 20.0]]
    world_size = 100.0

    class robot:
        def __init__(self):
            self.x = random.random() * world_size
            self.y = random.random() * world_size
            self.orientation = random.random() * 2.0 * pi
            self.forward_noise = 0.0
            self.turn_noise    = 0.0
            self.sense_noise   = 0.0

Instantiate the robot class and assign the result to the variable myrobot:

    myrobot = robot()
    myrobot.set(10.0, 10.0, 0.0) # this is the x and y coordinate and the heading in radians
    print myrobot
  • [x=10.0 y=10.0 heading=0.0]

Next, make the robot move:

    myrobot = robot()
    myrobot.set(10.0, 10.0, 0.0) # this is the x and y coordinate and the heading in radians
    print myrobot
    myrobot = myrobot.move(0.0, 10.0) # this means the robot will move 10 meters forward and will not turn
    print myrobot
  • [x=10.0 y=10.0 heading=0.0]
  • [x=20.0 y=10.0 heading=0.0]

Now, make the robot turn:

    myrobot = robot()
    myrobot.set(10.0, 10.0, 0.0) # this is the x and y coordinate and the heading in radians
    print myrobot
    myrobot = myrobot.move(pi/2, 10.0) # this will make the robot turn by pi/2 and then move 10 meters
    print myrobot
  • [x=10.0 y=10.0 heading=0.0]
  • [x=10.0 y=20.0 heading=1.5707]

You can generate measurements with the command sense to give you the distance to the four landmarks:

    myrobot = robot()
    myrobot.set(10.0, 10.0, 0.0) # this is the x and y coordinate and the heading in radians
    print myrobot
    myrobot = myrobot.move(pi/2, 10.0) # this will make the robot turn by pi/2 and then move 10 meters
    print myrobot
    print myrobot.sense()
  • [x=10.0 y=10.0 heading=0.0]
  • [x=10.0 y=20.0 heading=1.5707]
  • [10.0, 92.195444572928878, 60.827625302982193, 70.0]

Robot Class Details

Included in the code that Sebastian has provided are a few functions to take note of:

    class robot:
        def __init__(self):
            self.x = random.random() * world_size
            self.y = random.random() * world_size
            self.orientation = random.random() * 2.0 * pi
            self.forward_noise = 0.0
            self.turn_noise    = 0.0
            self.sense_noise   = 0.0

The section of code above shows the robot's noise parameters, which at this point are all set to zero.

        def set(self, new_x, new_y, new_orientation):
            if new_x < 0 or new_x >= world_size:
                raise ValueError, 'X coordinate out of bound'
            if new_y < 0 or new_y >= world_size:
                raise ValueError, 'Y coordinate out of bound'
            if new_orientation < 0 or new_orientation >= 2 * pi:
                raise ValueError, 'Orientation must be in [0..2pi]'
            self.x = float(new_x)
            self.y = float(new_y)
            self.orientation = float(new_orientation)

        def set_noise(self, new_f_noise, new_t_noise, new_s_noise):
            # makes it possible to change the noise parameters
            # this is often useful in particle filters
            self.forward_noise = float(new_f_noise)
            self.turn_noise    = float(new_t_noise)
            self.sense_noise   = float(new_s_noise)

The set_noise function above allows you to set noises.

        def measurement_prob(self, measurement):
            # calculates how likely a measurement should be
            # which is an essential step
            prob = 1.0
            for i in range(len(landmarks)):
                dist = sqrt((self.x - landmarks[i][0]) ** 2 + (self.y - landmarks[i][1]) ** 2)
                prob *= self.Gaussian(dist, self.sense_noise, measurement[i])
            return prob

The function above, measurement_prob, accepts a measurement and tells you how plausible it is. This is key to the "survival of the fittest" aspect of particle filters. However, we will not use this function until later.
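
For reference, the Gaussian used by measurement_prob (see the Gaussian function in the full code at the end of these notes) evaluates, for each landmark i, the probability of the measured distance z~i~ given the predicted distance d~i~ and the sense noise sigma, and multiplies these probabilities together:

  • \( p(z_i \mid x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(d_i - z_i)^2}{2\sigma^2}\right), \qquad \mathrm{prob} = \prod_{i=1}^{4} p(z_i \mid x) \)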

Quiz: Moving Robot

Using your interpreter, make a robot that satisfies the following requirements:

    # starts at 30.0, 50.0, heading north (=pi/2)
    # turns clockwise by pi/2, moves 15 meters
    # senses
    # turns clockwise by pi/2, moves 10 meters
    # sense

After printing sense the first time around we get the following output:

  • [39.0, 46.0, 39.0, 46.0]

After printing sense the second time around we get the following output:

  • [32.0, 53.1, 47.1, 40.3]

Quiz: Add Noise

The following code has built-in noise variables for forward, turn, and sense:

    class robot:
        def __init__(self):
            self.x = random.random() * world_size
            self.y = random.random() * world_size
            self.orientation = random.random() * 2.0 * pi
            self.forward_noise = 0.0
            self.turn_noise    = 0.0
            self.sense_noise   = 0.0

Further below in the code you can set the noises:

    def set_noise(self, new_f_noise, new_t_noise, new_s_noise):
        # makes it possible to change the noise parameters
        # this is often useful in particle filters
        self.forward_noise = float(new_f_noise);
        self.turn_noise = float(new_t_noise);
        self.sense_noise = float(new_s_noise);

In your code, set the values as follows:

    # forward_noise = 5.0, turn_noise = 0.1, sense_noise = 5.0
    # starts at 30.0, 50.0, heading north (=pi/2)
    # turns clockwise by pi/2, moves 15 meters
    # senses
    # turns clockwise by pi/2, moves 10 meters
    # sense

Robot World

Now, you can program the robot to turn, move straight after the turn, and sense the distance to four designated landmarks (L~1~, L~2~, L~3~, L~4~). The distances from the landmarks to the robot make up the measurement vector of the robot. Since the robot lives in a world of 100x100, if it drives off one end, it appears on the other: it is a cyclic world.
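
The wraparound is implemented with a modulo, as in the move function of the full listing at the end of these notes; here is a tiny illustration with made-up numbers:

    world_size = 100.0
    x = 104.5       # the robot drove past the right edge...
    x %= world_size # ...and reappears at x = 4.5 on the left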

07.png

Creating Particles

The particle filter you are going to program maintains a set of 1,000 (N = 1000) random guesses as to where the robot might be, each represented by a dot. Each dot is a vector containing an x coordinate (for example, 38.2), a y coordinate (12.4), and a heading direction (0.18). The heading is the angle at which the robot points relative to the x axis; a robot with this heading moves slightly upwards as it drives forward.

In your code, every time you call the function robot() and assign it to a particle p[i], the elements p[i].x, p[i].y, and p[i].orientation (which is the same as heading) are initialized at random.

In order to make a particle set of 1,000 particles, you have to program a separate piece of code that assigns 1,000 of those to a list.

Quiz: Creating Particles

Fill in the code so that it assigns 1,000 particles to a list.

    N = 1000
    p = []

    print len(p)

Quiz: Robot Particles

Take each particle and simulate robot motion. Each particle should first turn by 0.1 and then move forward 5.0 meters. Go back to your code and make a new set p that is the result of the specific motion, turn by 0.1 and move forward 5.0 meters, to all of the particles in p.

    N = 1000
    p = []
    for i in range(N):
        x = robot()
        p.append(x)

Second Half of Particle Filters

The second half of particle filters works like this: suppose you have a robot that sits amid four landmarks and can measure the exact distances to them. Below, the image shows the robot's location and the distances it measures, along with "measurement noise," which is modeled as a Gaussian with zero mean. That is, there is a chance of the measurement being too short or too long, and that probability is governed by a Gaussian.
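
In symbols, each measured distance is the true distance plus zero-mean Gaussian noise, where sigma is the sense noise:

  • \( z_i = d_i + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2) \)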

08.png

Now we have a measurement vector that consists of the four values of the four distances from L~1~ to L~4~. If a particle hypothesizes that its coordinates are somewhere other than where the robot actually is (the red robot indicates the particle hypothesis), we have the situation shown below.

09.png

The particle also hypothesizes a different heading direction. You can then take the measurement vector from our robot and apply it to the particle.

10.png

However, this ends up being a very poor measurement vector for the particle. The green indicates the measurement vector we would have predicted if the red particle actually were a good match for the robot's actual location.

11.png

The closer your particle is to the correct position, the more likely the set of measurements is given that position. Here is the trick to particle filters: the mismatch between the actual measurement and the predicted measurement leads to an importance weight that tells you how important that specific particle is. The larger the weight, the more important the particle.

12.png

When you have a bunch of particles, each has its own weight; some are very plausible, while others look very implausible as indicated by the size of the particle.

13.png

Next we allow the particles to survive at random, but the probability of survival will be proportional to the weights. That is, a particle with a larger weight will survive at a higher proportion than a particle with a small weight. This means that after resampling, which is randomly drawing new particles from the old ones with replacement in proportion to the importance weight, the particles with a higher importance weight will live on, while the smaller ones will die out. The "with replacement" aspect of this selection method is important because it allows us to choose the high probability particles multiple times. This causes the particles to cluster around regions with high posterior probability.

From here you want to implement a method of setting importance weights, which is related to the likelihood of a measurement and you want to implement a method of resampling that grabs particles in proportion to those weights.

Have a look at this code:

    myrobot = robot() # this is a random initialization of the robot, which will return a random output
    myrobot = myrobot.move(0.1, 5.0)
    Z = myrobot.sense()
    print Z
    print myrobot
  • [69, 15, 53, 47]
  • [x=33.657 y=48.869 heading=0.5567]

Quiz: Importance Weight

Program a way to assign importance weights to each of the particles in the list. Make a list of 1,000 elements, where each element contains a number proportional to how important the corresponding particle is. To make things easier, Sebastian has written the function measurement_prob. It accepts a single parameter, the measurement vector Z that was just defined, and calculates as an output how likely that measurement is. Using a Gaussian, the function measures how far the predicted measurements are from the actual measurements.

    def measurement_prob(self, measurement):
        # calculates how likely a measurement should be
        # which is an essential step
        prob = 1.0
        for i in range(len(landmarks)):
            dist = sqrt((self.x - landmarks[i][0]) ** 2 + (self.y - landmarks[i][1]) ** 2)
            prob *= self.Gaussian(dist, self.sense_noise, measurement[i])
        return prob

For this function to run properly, you have to assume that there is measurement noise for the particles:

    N = 1000
    p = []
    for i in range(N):
        x = robot()
        x.set_noise(0.05, 0.05, 5.0) # this line of code assumes a certain amount of noise
        p.append(x)

Once again, please program a list of 1,000 elements in w so that each number in this vector reflects the output of the function measurement_prob applied to the measurement Z. This will help us when we want to resample our particles to create a new set of particles that better match the position of our robot.

Resampling

Resampling is hard, so here is how to derive code that you can use for any particle filter. In resampling you are given N particles, each of which has three values and a weight w. The weights are continuous values, and the sum of all of them is W:

  • \( W = \sum_i w_i \)

Normalize the weights:

  • \( \alpha_1 = \frac{w_1}{W},\ \alpha_2 = \frac{w_2}{W},\ \ldots,\ \alpha_N = \frac{w_N}{W} \)

The sum of all alphas is:

  • \( \sum_i \alpha_i = 1 \)

Resampling puts all the particles and their normalized weights into a big bag. Then it draws, with replacement, N new particles, picking each particle i with probability alpha~i~. For example, alpha~2~ might be large, so it is drawn and becomes p~2~; similarly alpha~3~ might be large and also picked as p~3~. By chance you may also pick a small one like alpha~4~, adding p~4~, and you can pick the same one again, like alpha~2~, giving you two versions of p~2~, or maybe even three!

If there are N particles to begin with, you draw N times. In the end, those particles that have a high normalized weight alpha will occur more frequently in the new set. This is resampling.
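
Here is a minimal sketch of the normalization in code, using made-up weights:

    w = [1.0, 3.0, 1.0]          # hypothetical importance weights
    W = sum(w)                   # W = 5.0
    alpha = [wi / W for wi in w] # [0.2, 0.6, 0.2]; the alphas sum to 1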

Quiz: Resampling

During the process of resampling, if you randomly draw a particle in accordance with the normalized importance weights, what is the probability of drawing each of p~1~ through p~5~?

14.png

Quiz: Never Sampled-1

Is it possible that p~1~ is never sampled in the resampling step? Check yes or no.

15.png

Quiz: Never Sampled-2

Is it possible that p~3~ is never sampled in the resampling step? Check yes or no.

16.png

Quiz: Never Sampled-3

What is the probability of never sampling p~3~? To answer this question, assume you make a new particle set with N = 5 new particles, where particles are drawn independently and with replacement.

17.png

New Particle Set

Quiz: New Particle Set

Modify the algorithm so that it takes the list of particles and importance weights and samples, N times with replacement, N new particles with probability proportional to the importance weights. You have already calculated the particles and the corresponding importance weights. Now construct a new particle set p3 = [], so that the particles in p3 are drawn from p according to the importance weights w. This is more difficult than it looks, so try it once first; you will get another chance afterwards.

    p2 = []
    for i in range(N):
        p2.append(p[i].move(0.1, 5.0))
    p = p2

    w = []
    for i in range(N):
        w.append(p[i].measurement_prob(Z))

    p3 = []
    p = p3

Quiz: Orientation

Will orientation never play a role?

  • a. Yes, never
  • b. No, eventually it matters

Programming the Orientation

Program the particle filter to run twice:

Solution:

    N = 1000
    T = 2

    myrobot = robot()

    p = []
    for i in range(N):
        r = robot()
        r.set_noise(0.05, 0.05, 5.0)
        p.append(r)

    for t in range(T): # insert a for loop; indent everything below, up to print p
        myrobot = myrobot.move(0.1, 5.0)
        Z = myrobot.sense()

        ...

    print p # only print the final distribution

When you run this code for only two steps, the orientations are not yet worked out.

What if you move ten steps forward:

    N = 1000
    T = 10 # robot moves 10 steps forward instead of 2

    myrobot = robot()

    p = []
    for i in range(N):
        r = robot()
        r.set_noise(0.05, 0.05, 5.0)
        p.append(r)

    for t in range(T):
        myrobot = myrobot.move(0.1, 5.0)
        Z = myrobot.sense()

        ...

    print p # only print the final distribution

When you run this code, you get orientations that all look alike: the orientations are all between 3.6 and 3.9, the y values are all between 53 and 55, and the x values are all around 39. This consistency is how you know the particle filter is working.

Programming

Rather than print out the particles themselves, you can print out the overall quality of the solution. To do this you already have an eval function that takes in the robot position r and a particle set p, and computes the average error of the particles relative to the robot position in x and y (not orientation). The function compares the x and y of each particle with those of the robot, computes the Euclidean distance between them, and averages all of those values.

    def eval(r, p):
        sum = 0.0
        for i in range(len(p)): # calculate mean error
            dx = (p[i].x - r.x + (world_size/2.0)) % world_size - (world_size/2.0) # normalize for the cyclic world
            dy = (p[i].y - r.y + (world_size/2.0)) % world_size - (world_size/2.0)
            err = sqrt(dx * dx + dy * dy)
            sum += err
        return sum / float(len(p))

Take the function eval and produce a sequence of performance evaluations so that when you hit the run button you return error numbers that look something like this:

19.png

Solution:

    print eval(myrobot, p)

Looks simple! However, when you print this statement over and over you do not always get the same results.

What You Programmed (You and Sebastian)

You just programmed a full particle filter. Sebastian provided a very primitive robot simulator that uses landmarks for measurements and three-dimensional robot coordinates, while you solved the estimation problem in about 30 lines of code. This is a small amount of code for something amazingly powerful; you can reuse these lines in pretty much any problem you study that requires particle filters.

Recap: The Math Behind It All

For your particle filters you had two kinds of updates:

  • 1. Measurement updates

For the measurement update you computed a posterior over the state given a measurement, which is proportional (up to normalization) to the probability of the measurement given the state, multiplied by the prior P(X). This is written as:

  • \( P(X \mid Z) \propto P(Z \mid X)\, P(X) \)

Let's flesh out this formula to show how you used it. The distribution P(X) was your set of particles: 1,000 particles together represented your prior P(X). The importance weight represents P(Z|X). Technically speaking, the particles with the importance weights are a representation of the distribution, but we want to get rid of the importance weights; by resampling, you work the importance weight back into the set of particles, so that the resulting particles represent the correct posterior P(X|Z).
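
Written with the normalizer made explicit, this is just the standard Bayes rule, where the sum in the denominator runs over all states:

  • \( P(X \mid Z) = \frac{P(Z \mid X)\, P(X)}{\sum_{X'} P(Z \mid X')\, P(X')} \)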

  • 2. Motion updates

In the motion update you computed a posterior distribution one time step later, which is the convolution of the transition probability (the motion model) with the prior. This is written as:

  • \( P(X') = \sum_X P(X' \mid X)\, P(X) \)

Similarly, let's look at how this formula operates within your particle filter. Your set of particles represents P(X), and you sample from the sum: by taking a random particle from P(X) and applying the motion model with its noise to it, you generate a random new particle X'. As a result you get a new particle set P(X') that is the correct distribution after the robot motion.
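
In code, this sampling is exactly the prediction step you wrote earlier; each move() call draws one sample from P(X'|X) for one particle:

    p2 = []
    for i in range(N):
        p2.append(p[i].move(0.1, 5.0)) # sample X' ~ P(X'|X) for particle i
    p = p2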

This math is actually the same for all of the filters we have talked about so far!

Quiz: Filters

Which filters did Sebastian use in his job talk at Stanford?

  • Histogram filters
  • Kalman filters
  • Particle filters
  • None

2012

The difference between the Google car and what you have learned so far is that the Google car follows a bicycle model, and its sensor data are different. Instead of using landmarks like our robot, the Google car uses a really elaborate road map: it takes a single snapshot, matches it to the map, and the better the match, the higher the score. Additional sensors, like GPS and inertial sensors, also differentiate the Google car from your robot model.

Despite these differences, you have a solid understanding of the gist of how the Google car figures out where it is and where other cars are. When you build an actual system, you have to dive into more elaborate methods, which is doable.

21.png

Answer Key

Answer: State Space

Quiz                       | State Space
Class 1: Histogram Filters | Discrete
Class 2: Kalman Filters    | Continuous

Answer: Belief Modality

Remember that even though histogram filters are discrete, they are able to represent multiple bumps. The Kalman filter, on the other hand, is a single Gaussian, which is by definition unimodal.

Quiz                       | State Space | Belief
Class 1: Histogram Filters | Discrete    | Multimodal
Class 2: Kalman Filters    | Continuous  | Unimodal

Answer: Efficiency

The histogram filter's biggest disadvantage is that it scales exponentially: any grid defined over k dimensions has exponentially many grid cells in the number of dimensions, so high-dimensional grids cannot be represented well. This is sufficient for 3-dimensional robot localization problems, but becomes less useful in higher dimensions. In contrast, the Kalman filter is quadratic: it is fully represented by a vector (the mean) and the covariance matrix, which is quadratic in the number of dimensions. This makes the Kalman filter a lot more efficient, because it can operate on state spaces with more dimensions.
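
In symbols, for a state space of dimension d with k grid cells per dimension (d and k are illustrative names, not from the lecture):

  • \( \text{histogram filter: } k^d \text{ grid cells} \qquad \text{Kalman filter: } d + \tfrac{d(d+1)}{2} \text{ parameters} = O(d^2) \)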

Quiz                       | State Space | Belief     | Efficiency
Class 1: Histogram Filters | Discrete    | Multimodal | Exponential
Class 2: Kalman Filters    | Continuous  | Unimodal   | Quadratic

Answer: Exact or Approximate

Both histogram filters and Kalman filters are not exact; they are approximations of the posterior distribution. Histogram filters are approximate because the world is not discrete, so grid-based localization, for example, is only an approximation. Kalman filters are approximate because they are exact only for linear systems, and the world is non-linear.

Quiz                       | State Space | Belief     | Efficiency  | In Robotics
Class 1: Histogram Filters | Discrete    | Multimodal | Exponential | Approximate
Class 2: Kalman Filters    | Continuous  | Unimodal   | Quadratic   | Approximate

Answer: Moving Robot

    myrobot = robot()
    myrobot.set(30.0, 50.0, pi/2)
    myrobot = myrobot.move(-pi/2, 15.0)
    print myrobot.sense()

    myrobot = myrobot.move(-pi/2, 10.0)
    print myrobot.sense()

  • [39.0, 46.0, 39.0, 46.0]
  • [32.0, 53.1, 47.1, 40.3]

Answer: Add Noise

    myrobot = robot()
    myrobot.set_noise(5.0, 0.1, 5.0) # here is where you add your code
    myrobot.set(30.0, 50.0, pi/2)
    myrobot = myrobot.move(-pi/2, 15.0)
    print myrobot.sense()

    myrobot = myrobot.move(-pi/2, 10.0)
    print myrobot.sense()

Notice that every time you hit run you get a different set of values.

Answer: Creating Particles

    N = 1000
    p = []
    for i in range(N): # iterate the loop 1000 times
        x = robot() # create a robot object
        p.append(x) # append it to the growing list p
    print len(p)

  • 1000

If you try to print just p you get 1000 particles, each of which has three values associated with it, an x-coordinate, a y-coordinate and an orientation (heading direction).

Answer: Robot Particles

Here is one possible solution:

    N = 1000
    p = []
    for i in range(N):
        x = robot()
        p.append(x)

    p2 = []
    for i in range(N): # go through all the particles again
        p2.append(p[i].move(0.1, 5.0)) # append to p2 the result of applying the motion to particle i
    p = p2

If you got this far, you are halfway through! However, the next half is going to be tricky.

Answer: Importance Weight

    w = []
    for i in range(N):
        w.append(p[i].measurement_prob(Z)) # the output of measurement_prob for particle i, given the measurement Z
    print w

The results return some outputs that are highly unlikely, with exponents of -146, while others are more likely, for example exponents of -5 — these are the particles that are more likely to survive.

For the final step of the particle filter algorithm, you have to sample particles from p with a probability that is proportional to a corresponding w value. Particles in p that have a large output value should be drawn more frequently than the ones with a smaller value. How hard can that be?

Answer: Resampling

To obtain the answer, you just have to normalize the importance weights by dividing each weight by the sum of the weights.

26.png

Answer: Never Sampled-1

Yes, because something with an importance weight of 0.1 is quite unlikely to be sampled into the next particle set.

27.png

Answer: Never Sampled-2

The answer is yes again, because even though the importance weight is large, it could happen that in each of the five resampling steps you would pick one of the other four.

28.png

Answer: Never Sampled-3

For p~3~ to never be drawn in the resampling phase, you would always have to draw p~1~, p~2~, p~4~ or p~5~, which together have a probability of 0.6 of being drawn. For five independent samplings to always pick one of those four, the probability is 0.6^5^, which is approximately 0.0778. In other words, there is about a 7.8 percent chance that p~3~ is missing, and about a 92 percent probability of sampling it at least once. Particles with small importance weights survive at a much lower rate than ones with large importance weights, which is exactly what you want from the resampling step.
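
As a worked equation:

  • \( P(p_3 \text{ never sampled}) = (1 - 0.4)^5 = 0.6^5 \approx 0.0778 \)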

29.png

Answer: Resampling Wheel

Represent all of the particles and importance weights in a big wheel. Each particle occupies a slice that corresponds to its importance weight: particles with a bigger weight occupy more space, and particles with a smaller weight occupy less.

30.png

Initially, guess a particle index, uniformly from the set of all indices. This can be written as:

index = U[1...N]

Suppose you select, say, w~6~. The trick is then to construct a variable beta, initialized to zero, to which you add a uniformly drawn continuous value between zero and 2*w~max~, where w~max~ is the largest of the importance weights in the set.

31.png

Since w~5~ is the largest, you will add a random value that might be as large as twice w~5~. Suppose you start at the position indicated in the picture above and add a beta that brings you to w~7~, as shown below. Now iterate the following loop: if the importance weight of the present particle doesn't suffice to reach all the way to beta, then subtract that weight from beta and increase the index by one. This is written as:

    while beta > w[index]:
        beta -= w[index]
        index = index + 1

What you have done is moved the index to the next w and removed a section of beta.

By repeating this, you will eventually get to the point where beta is smaller than w[index]; then you pick the particle associated with that index.

When we do this N times, we get N particles, and we can see that particles will be chosen in proportion to their circumference on the circle.

32.png
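
Putting the wheel together, this is the resampling loop that appears in Sebastian's full code at the end of these notes:

    p3 = []
    index = int(random.random() * N) # initial guess: uniform over particle indices
    beta = 0.0
    mw = max(w)                      # the largest importance weight
    for i in range(N):
        beta += random.random() * 2.0 * mw # step beta forward by U[0, 2*w_max]
        while beta > w[index]:
            beta -= w[index]         # this slice doesn't reach beta; consume it
            index = (index + 1) % N  # move to the next slice, wrapping around
        p3.append(p[index])          # beta landed in this particle's slice
    p = p3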

Answer: Orientation

  • b. No, eventually it matters

Orientation does matter in the second step of particle filtering because the prediction is so different for different orientations.

Answer: Filters

  • Histogram filters (1998)
  • Particle filters (2003)

Sebastian's Code: Particle Filters

    from math import *
    import random

    landmarks  = [[20.0, 20.0], [80.0, 80.0], [20.0, 80.0], [80.0, 20.0]]
    world_size = 100.0

    class robot:
        def __init__(self):
            self.x = random.random() * world_size
            self.y = random.random() * world_size
            self.orientation = random.random() * 2.0 * pi
            self.forward_noise = 0.0
            self.turn_noise    = 0.0
            self.sense_noise   = 0.0

        def set(self, new_x, new_y, new_orientation):
            if new_x < 0 or new_x >= world_size:
                raise ValueError, 'X coordinate out of bound'
            if new_y < 0 or new_y >= world_size:
                raise ValueError, 'Y coordinate out of bound'
            if new_orientation < 0 or new_orientation >= 2 * pi:
                raise ValueError, 'Orientation must be in [0..2pi]'
            self.x = float(new_x)
            self.y = float(new_y)
            self.orientation = float(new_orientation)

        def set_noise(self, new_f_noise, new_t_noise, new_s_noise):
            # makes it possible to change the noise parameters
            # this is often useful in particle filters
            self.forward_noise = float(new_f_noise)
            self.turn_noise    = float(new_t_noise)
            self.sense_noise   = float(new_s_noise)

        def sense(self):
            Z = []
            for i in range(len(landmarks)):
                dist = sqrt((self.x - landmarks[i][0]) ** 2 + (self.y - landmarks[i][1]) ** 2)
                dist += random.gauss(0.0, self.sense_noise) # add measurement noise
                Z.append(dist)
            return Z

        def move(self, turn, forward):
            if forward < 0:
                raise ValueError, 'Robot cant move backwards'
            # turn, and add randomness to the turning command
            orientation = self.orientation + float(turn) + random.gauss(0.0, self.turn_noise)
            orientation %= 2 * pi

            # move, and add randomness to the motion command
            dist = float(forward) + random.gauss(0.0, self.forward_noise)
            x = self.x + (cos(orientation) * dist)
            y = self.y + (sin(orientation) * dist)
            x %= world_size # cyclic truncate
            y %= world_size

            # set particle
            res = robot()
            res.set(x, y, orientation)
            res.set_noise(self.forward_noise, self.turn_noise, self.sense_noise)
            return res

        def Gaussian(self, mu, sigma, x):
            # calculates the probability of x for 1-dim Gaussian with mean mu and var. sigma
            return exp(- ((mu - x) ** 2) / (sigma ** 2) / 2.0) / sqrt(2.0 * pi * (sigma ** 2))

        def measurement_prob(self, measurement):
            # calculates how likely a measurement should be
            prob = 1.0
            for i in range(len(landmarks)):
                dist = sqrt((self.x - landmarks[i][0]) ** 2 + (self.y - landmarks[i][1]) ** 2)
                prob *= self.Gaussian(dist, self.sense_noise, measurement[i])
            return prob

        def __repr__(self):
            return '[x=%.6s y=%.6s orient=%.6s]' % (str(self.x), str(self.y), str(self.orientation))

    def eval(r, p): # a plain function, not a method of the robot class
        sum = 0.0
        for i in range(len(p)): # calculate mean error
            dx = (p[i].x - r.x + (world_size/2.0)) % world_size - (world_size/2.0)
            dy = (p[i].y - r.y + (world_size/2.0)) % world_size - (world_size/2.0)
            err = sqrt(dx * dx + dy * dy)
            sum += err
        return sum / float(len(p))

    # --------

    N = 1000
    T = 10

    myrobot = robot()

    p = []
    for i in range(N):
        r = robot()
        #r.set_noise(0.01, 0.01, 1.0)
        r.set_noise(0.05, 0.05, 5.0) # Sebastian's provided noise
        p.append(r)

    for t in range(T):
        myrobot = myrobot.move(0.1, 5.0)
        Z = myrobot.sense()

        p2 = []
        for i in range(N):
            p2.append(p[i].move(0.1, 5.0))
        p = p2

        w = []
        for i in range(N):
            w.append(p[i].measurement_prob(Z))

        p3 = []
        index = int(random.random() * N)
        beta = 0.0
        mw = max(w)
        for i in range(N):
            beta += random.random() * 2.0 * mw
            while beta > w[index]:
                beta -= w[index]
                index = (index + 1) % N
            p3.append(p[index])
        p = p3

        print eval(myrobot, p)

    if eval(myrobot, p) > 15.0:
        for i in range(N):
            print '#', i, p[i]
        print 'R', myrobot
CategoryNotes