Monte Carlo Localization

This project began as my course project for computational physics, but I became so interested in the topic that I invested far more time and effort than a course project requires.

Monte Carlo Localization (MCL) is a powerful method for robot localization, which is simply the determination of the robot’s location within its environment.

The MCL algorithm takes a map of the environment and a stream of sensor measurements as inputs. The first step is to update the weight of each possible location on the map based on the current sensor reading. The second step is re-sampling, where the probability of drawing a certain location from the configuration space is proportional to its weight. Weighting and re-sampling then loop until the algorithm converges to the true location with a certain confidence level.
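The loop described above can be sketched in a few lines of Python. This is only a minimal illustration with a toy inverse-difference weighting, not the project’s actual code or its weighting scheme (which the text defines precisely further on); all names here are mine.

```python
import random

def mcl_step(particles, sensor_reading, world_map):
    # Weight each particle's location by how well the map value there
    # agrees with the current sensor reading (closer match -> larger weight).
    weights = [1.0 / (1.0 + abs(world_map[p] - sensor_reading)) for p in particles]
    # Re-sample: draw locations with probability proportional to weight.
    return random.choices(particles, weights=weights, k=len(particles))

# Uniform prior: every location equally represented among the particles.
world_map = [1, 20, 40, 60, 80]              # toy reflectance map
particles = list(range(len(world_map))) * 4  # 20 particles, uniform
for reading in [22, 22, 22]:                 # repeated sensor readings
    particles = mcl_step(particles, reading, world_map)
# After a few updates, most particles typically cluster at index 1
# (map value 20, the closest to the reading of 22).
```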

First, the map has to be created. For this specific project, the map encodes how much light is reflected from the floor, where 1 indicates black and 100 indicates white. The map is therefore an array of integers from 1 to 100, each value representing how much light is reflected at a specific location. The second input is the sensor reading; the sensor used here is a light sensor that can either measure ambient light or emit light and measure the reflection. In this project, it operates in reflected-light mode. Initially, the probability distribution over locations is uniform: the robot is equally likely to be anywhere in the environment.

To estimate how likely it is that a certain location is occupied by the robot at a given instant, a weight is assigned to that location, calculated as follows. First, the sensor reading is compared to each value in the map and the absolute value of the difference is taken. For example, if the sensor reading is 22 and the map contains the 5 values {1, 20, 40, 60, 80}, we get the differences {21, 2, 18, 38, 58}, indicating how far each map value is from the current reading. To put this in the form of a probability, we divide the differences by the sensor reading, which gives a measure of how unlikely each position is (the bigger the difference, the more unlikely). Subtracting these numbers from 1 then indicates how likely each position is to be true, yielding approximately {0.05, 0.91, 0.18, -0.73, -1.64}. We notice 2 things here. First, locations whose map values are close to the sensor reading score higher than the others: the location with value 20 has a 91% chance of being correct, while the location with value 40 scores only 18%, which makes sense because the sensor is reading 22. Second, we get negative values, which is expected because some locations are very highly unlikely to be true (like 60 and 80 when the sensor is actually reading 22). Finally, to magnify the contrast and get rid of the negative values, we multiply the positive numbers by 100 and divide the magnitudes of the negative ones by 100, giving the weights, approximately {4.5, 90.9, 18.2, 0.0073, 0.016}.
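The weighting scheme above can be written out directly; this is my own sketch of it, and the function name is mine.

```python
def weigh(sensor_reading, map_values):
    """Weight each map location against a sensor reading, as in the text:
    likelihood = 1 - |map value - reading| / reading, then multiply the
    positives by 100 and divide the magnitude of the negatives by 100."""
    weights = []
    for v in map_values:
        likelihood = 1 - abs(v - sensor_reading) / sensor_reading
        weights.append(likelihood * 100 if likelihood > 0 else -likelihood / 100)
    return weights

print(weigh(22, [1, 20, 40, 60, 80]))
# -> approximately [4.5, 90.9, 18.2, 0.0073, 0.016]
```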

Here is the key step of the algorithm: re-sampling from the configuration space with a probability proportional to each location’s weight.
We assign a random number from 1 to M, where M is the number of locations (particles), to a variable named index. We then create one more variable, β, set to twice the maximum weight multiplied by a random number, rand, where 0 < rand ≤ 1. Next, we check whether the weight of the location at that random index is > β. If so, we take this location into the sample. If not, we decrease β by that location’s weight, increment the index by 1, and check again for the next location. We loop like this until we pick a location for the sample.

To illustrate this, let’s consider the following case. Suppose we have 5 locations with the following weights:

Location:  1   2   3   4   5
Weight:    7   3   2   6   1
Suppose we draw index = 2 and β = rand × 2 × max weight (7) = 8. We check whether the weight of the second location is > β, which is not true (3 is not > 8). So we decrease β by the current location’s weight (3), giving 8 − 3 = 5, increment the index to 3, and check again. Obviously, location 3 is not picked either, and we loop once more. On the next iteration, location 4 is picked, which makes sense because it has a high weight. This concept, known as the resampling wheel, is illustrated below.
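The wheel above can be sketched in a few lines of Python. This follows the per-draw procedure described in the text (a fresh β for every draw), with 0-based indices; the function name is mine.

```python
import random

def spin(weights):
    """One draw from the resampling wheel described above."""
    max_w = max(weights)
    index = random.randrange(len(weights))  # random starting location (0-based)
    beta = random.random() * 2 * max_w      # beta = rand * 2 * max weight
    while weights[index] <= beta:
        beta -= weights[index]              # decrease beta by this weight...
        index = (index + 1) % len(weights)  # ...and move to the next location
    return index                            # weight > beta: pick this location

weights = [7, 3, 2, 6, 1]
picks = [spin(weights) for _ in range(1000)]
# The heavy locations (weights 7 and 6) dominate the sample.
```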



Motion Model
One more essential step in MCL is the motion model, i.e. how the robot is moving and in which direction. With this information in hand, the probability distribution is shifted to the next location. For example, if the robot is moving in a straight line and it’s known that it takes 0.5 seconds to reach the next location, then if the robot is 95% confident it’s at location 3, for the next re-sampling it’s assumed to be 95% confident it’s at location 4.
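A deterministic version of this motion update is just a shift of the belief array. Here is a minimal sketch, assuming the robot advances exactly one location per step and the track wraps around; the function name is mine.

```python
def shift(belief, steps=1):
    """Move the whole probability distribution 'steps' locations forward."""
    n = len(belief)
    return [belief[(i - steps) % n] for i in range(n)]

# 95% confident at location 3 (index 2) -> 95% confident at location 4 (index 3).
print(shift([0.0, 0.0, 0.95, 0.05, 0.0]))
# -> [0.0, 0.0, 0.0, 0.95, 0.05]
```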


For experimental purposes, I used the LEGO NXT 2 kit to build a simple robot capable of moving, fitted with a light sensor, as shown in the picture opposite.
I also created a portion of the map as a set of rectangles filled with grayscale colors corresponding to the light-reflection model (a map value of 100 maps to an RGB level of 255, 60 to 0.6 × 255 = 153, and 1 to nearly black), as shown below.
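The mapping from map values to grayscale fill levels is a simple linear scale; a small sketch (the function name is mine):

```python
def reflectance_to_gray(v):
    """Map a reflectance value (1..100) to an 8-bit grayscale level,
    with 100 -> 255 (white) and low values -> near 0 (black)."""
    return round(v / 100 * 255)

print(reflectance_to_gray(100))  # -> 255
print(reflectance_to_gray(60))   # -> 153
```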



I used the NXT Logging 2 software to program the robot to collect data from the light sensor while moving in a straight line. The same software is then used to upload the logged data, which I feed into my algorithm as sensor readings.
I had to test several values for the sampling rate and motor speed to find settings that give good sensor readings while covering the whole environment at an adequate speed. Since the environment consists of 16 locations, my aim was to cover each location in half a second. So I set the motor power to 20% and the sampling rate to 4 samples per second, and I take the average of every 2 samples per location. The robot’s motion over the environment is shown in the sequence of pictures below.
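The averaging of logged samples into per-location readings can be sketched as follows, assuming the log is a flat list of light values recorded at 4 Hz; the function name is mine.

```python
def per_location_readings(log_samples):
    """Average each consecutive pair of samples: at 4 samples per second
    and ~0.5 s per location, every 2 samples belong to one location."""
    return [(a + b) / 2 for a, b in zip(log_samples[::2], log_samples[1::2])]

print(per_location_readings([10, 12, 50, 54, 81, 79]))
# -> [11.0, 52.0, 80.0]
```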


I tested my code in several situations with different levels of uncertainty. The sensor readings themselves carry some error and contribute to the uncertainty, but I also created additional sources. The first was changing the robot’s motion to a slightly inclined line instead of a straight one. The second was running the experiment at different times with different battery levels while keeping the same motor-power setting, so the robot’s speed changed and added more noise. The last was simulating an extreme case where the sensor is very noisy, causing a high level of uncertainty.
I used histograms to visualize the results: the x-axis marks the locations, and the y-axis shows how many times each location was sampled in an iteration. Such histograms are shown below for 3 different cases:

In the first case, noise in the sensor readings is the only significant factor contributing to the uncertainty.



In the first iteration, the probability distribution is uniform and the sampling considers almost all locations: the maximum number of times any location was sampled is 5, and that happened only once. In the next iteration, the sampling is clearly biased, and the algorithm assigns a higher probability to the robot being at location 5, which the motion model feeds into the next iteration as location 6. As the algorithm loops and more sensor readings are fed in, it converges to the true location, which it reached after 5 iterations.
In the second case, I created more uncertainty by slightly rotating the rear wheel so that the robot moves along an inclined path rather than a straight line. This added uncertainty meant the algorithm needed 3 more iterations to converge to the true location.
For the third case, I simulated a large amount of uncertainty in the sensor readings by manually inserting values into the sensor-readings array in the code. These values are equally probable for different locations (a sensor reading of 50 makes the algorithm believe it’s equally likely to be at a location whose map value is 40 or 60). Meanwhile, I placed the same sequence of values in different regions of the map (for instance, locations 11 to 15 hold 60, 80, 40, 20, 100 and locations 74 to 78 also hold 60, 80, 40, 20, 100).
Results for second and third cases are shown below.