## Probability

So welcome to our very first homework assignment. This is number 1. [Homework Assignment #1] Just to recap, in the class we learned about localization. [Localization] We learned about histogram filters. [Histogram filters] And we programmed some in Python. [Python] So the homework assignment will cover this plus some very basic probability. [Probability] [Question 1] In the first question, I'm going to ask you some very basic probability questions. We have a random variable X with probability 0.2. What's the probability of the complement? We have 2 random variables, X and Y, whose probabilities individually are 0.2, and X and Y are independent. [X, Y independent] What's the probability of the joint X, Y? And we have the variable X with probability P(X) = 0.2, and we have 2 conditionals, P(Y|X) and P(Y|¬X), both 0.6. What's the probability of Y? Here you have to apply total probability.

## Probability Solution

[Question 1] The correct answer in the first case is 0.8. This is just 1 minus 0.2. If X and Y are independent, then we just take the product of those 2 things, which is 0.04. And in the last case, it turns out Y is independent of X, but just by coincidence, because the probability of Y is the same regardless of what X says, and therefore the outcome is 0.6. Put differently, no matter what value X assumes, whether it's X or ¬X, Y always has probability 0.6, so it must be that P(Y) is 0.6. You can actually compute this using total probability, where P(Y) equals P(Y|X) times P(X) plus P(Y|¬X) times P(¬X). When you plug in the numbers, you get 0.6 times 0.2 plus 0.6 times 0.8, and if you regroup this, or you put the 0.2 and the 0.8 together into one, you end up with 0.6.
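These three answers are easy to check in Python, the language the class uses. This is not part of the lecture, just a quick sketch with the numbers from the question (the variable names are my own):

```python
p_x = 0.2

# (1) Complement rule: P(not X) = 1 - P(X)
p_not_x = 1 - p_x                                   # 0.8

# (2) Independence: P(X, Y) = P(X) * P(Y)
p_y = 0.2
p_joint = p_x * p_y                                 # 0.04

# (3) Total probability: P(Y) = P(Y|X) P(X) + P(Y|not X) P(not X)
p_y_given_x = 0.6
p_y_given_not_x = 0.6
p_y_total = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)

print(round(p_not_x, 4), round(p_joint, 4), round(p_y_total, 4))
```

Because P(Y|X) and P(Y|¬X) are equal, the total-probability sum collapses to 0.6 regardless of P(X), which is exactly the "coincidental independence" noted above.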

## Localization

Let me ask you a localization question. You remember a robot operating in a planar environment usually has 3 coordinates: an x-coordinate, a y-coordinate, and a heading direction--often called orientation. Now, flying robots have more coordinates. If you can orient the robot fully in free space, then you have x, y, and z, and you also have 3 rotation angles--often called roll, pitch, and yaw. If you built a localization system for robots with higher-dimensional state spaces, I wonder how the memory used would scale for our histogram-based localization method. Does memory scale linearly, quadratically, exponentially, or none of the above in the number of state variables used in localization? Again, for a robot operating on a plane, the number of state variables will be three. If you were to look at a flying robot with x, y, z, roll, pitch, and yaw, you would get six such variables, and I wonder how the memory use of the basic histogram localization scales. Please check exactly one of those four boxes over here.

## Localization Solution

The answer is exponential. Suppose we resolve each variable at a granularity of 20 different values, so there are 20 different values for x, 20 for y, and 20 for θ. Then the joint table over all of those will have 20^N cells, where N is the number of state dimensions. That's an exponential expression. There is unfortunately no easy way around it. The biggest disadvantage of the grid-based localization method, or histogram method, is that memory scales exponentially, which means it's not applicable even to problems with 6 dimensions, because you can't really allocate memory for 6 dimensions.
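To make the scaling concrete, here is a small sketch (not from the lecture; the 20-bins-per-dimension figure is the one assumed above, and the function name is my own):

```python
def histogram_cells(bins_per_dim, n_dims):
    """Number of cells in a joint histogram grid: bins^N grows exponentially in N."""
    return bins_per_dim ** n_dims

# Planar robot: (x, y, theta) -> 3 dimensions
planar = histogram_cells(20, 3)      # 8,000 cells

# Flying robot: (x, y, z, roll, pitch, yaw) -> 6 dimensions
flying = histogram_cells(20, 6)      # 64,000,000 cells

print(planar, flying)
```

At, say, 8 bytes per cell the 6-dimensional grid already needs roughly half a gigabyte, and doubling the resolution per axis multiplies that by 2^6 = 64, which is why the grid approach breaks down so quickly in higher dimensions.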

## Bayes Rule

I'm now going to quiz you on Bayes Rule. Say you own a house, and you know that the house might catch fire in your absence, but the probability of it catching fire--"F" over here--is small. It's a 10th of a percent--0.001. Let's say every afternoon you talk to your neighbor, and every afternoon you ask your neighbor, "Does my house burn?" Of course, you're a little bit paranoid if you do this, but for the sake of the argument, let's just assume you do this every afternoon. This afternoon he comes back and says, "Yes, it burns"--that's the event B. You happen to know that the neighbor is not very truthful. In fact, every time you ask him a question, you know there is a 0.1 chance--a 10% chance--the neighbor will just produce a lie and a 0.9 chance the neighbor actually speaks the truth. So you ask him exactly one question--"Does my house burn?" He says, "Yes, it burns," but you know that the probability of this being a lie is 0.1. So in applying Bayes Rule, I'd like you to first compute the non-normalized posterior--p bar, where the bar stands for non-normalized--of fire given that the neighbor just said, yes, it burns. The same for the opposite event of no fire given that the neighbor just said, yes, it burns. After you've done this, I'd like you to compute the normalized values that have to add up to 1. Please enter all 4 values for this homework assignment.

## Bayes Rule Solution

Here are my answers. The non-normalized posterior for fire is the prior, 0.001, times the probability that the neighbor correctly said, yes, it burns, which is 0.9. He lies with a probability of 0.1, so the complement is 0.9. This gives us 0.0009. For the complement, the prior of no fire is 0.999, but now the neighbor would have lied, which multiplies with 0.1, which gives us 0.0999. Now, these two values don't add up to 1. The normalizer will be 1 over the sum of these two things, which is about 9.92. Multiplying each with the normalizer gives us approximately 0.0089 and 0.9911. So the answer your neighbor gave you--yes, it burns--raised your probability from 0.001 to 0.0089. It's still small, but it's significantly larger. In fact, it's approximately 9 times as large as the initial probability. The reason why that is the case is that it relates to the 0.9 probability of speaking the truth divided by the total non-normalized mass of about 0.1. It's not exactly a factor of 9 because of normalization, but it's approximately 9.
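The same calculation can be written out in a few lines of Python. This is my own sketch of the steps above, not code from the class; the variable names are assumptions:

```python
p_fire = 0.001           # prior P(F): house catches fire
p_b_given_fire = 0.9     # neighbor tells the truth: says "it burns" and it does
p_b_given_no_fire = 0.1  # neighbor lies: says "it burns" but it doesn't

# Non-normalized posteriors (the "p bar" values)
pbar_fire = p_b_given_fire * p_fire              # 0.0009
pbar_no_fire = p_b_given_no_fire * (1 - p_fire)  # 0.0999

# Normalize so the two posteriors add up to 1
eta = 1.0 / (pbar_fire + pbar_no_fire)           # about 9.92
p_fire_given_b = eta * pbar_fire                 # about 0.0089
p_no_fire_given_b = eta * pbar_no_fire           # about 0.9911

print(round(p_fire_given_b, 4), round(p_no_fire_given_b, 4))
```

Note that the normalizer η is shared by both hypotheses, which is why you can compute the non-normalized values first and divide by their sum at the end.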

## Congratulations

Congratulations. You made it through homework assignment number 1. You learned about robot localization with a technique that I often call histogram filters. You've implemented it successfully and learned a lot about statistics. This is all just a single class. Congratulations. That's really awesome. Now, next week we talk about Kalman filters and tracking other cars in traffic, and you're going to implement a Kalman filter, so I'll see you in the next class.