Purpose

  • Explore how to use Bayesian reasoning in a somewhat challenging environment.
  • Understand the power of probabilistic reasoning in the presence of noise.
  • Use Bayesian reasoning when observations are nonlinear.

This will be accomplished in the same bzrflag environment used for the rest of the labs.

What you should learn:

  • When you complete this lab, you should be well-informed in the following two areas:
    • Applying Bayes rule in a sequential estimation problem.
    • Implementing a grid filter in a stationary environment.
  • For each of these areas, you should be able to talk and write intelligently about the following:
    • Memory required to store elements of the grid.
    • Time required for the filter to converge.
    • Sensitivity of results to noise levels.
    • Sensitivity of results to the parameters of the likelihood function.

Description

Many robots in the world use some combination of sonars and lasers to sense distances to objects. Although these sensors are useful for providing information about when a robot is too close to an obstacle, it's only been within the last decade that algorithms have existed for taking these sensor readings and turning them into accurate maps of obstacles in the world. Naturally, the laser readings are more accurate than the sonar readings.

The key breakthrough in using laser range finders to build maps of the environment was the application of Bayes rule to the problem of mapping observations from the laser into estimates of the states of obstacles in the environment. More specifically, people applied Bayes rule in a sequential estimation problem using the extension known as the Bayes filter. The Bayes filter, in its general form, does not provide enough details to actually build a map of the environment from sensor readings. In addition to the general filter, robots needed some way to efficiently represent obstacles. They also needed some way to model the observation process: something accurate enough to allow maps to be built, but simple enough to be computable in realistic time.

In this lab, you'll use a Bayes filter to estimate the locations of obstacles in the environment. You'll represent obstacles using an occupancy grid and you'll use a very simple sensor model. (In fact, the observations that you'll make don't match any realistic sensor; the sensor created in the BZRFlag environment is designed specifically so that a very simple model suffices.) The occupancy grid discretizes the world into a grid of cells. Let S denote the set of all such cells in the environment. Each cell is either occupied or unoccupied by an obstacle. If we let i and j represent the x index and y index of a cell, then the set of relevant states in the world consists of an “occupied” or “unoccupied” label for each cell: <math>S=\{s_{i,j}\ :\ s_{i,j}\ =\ \text{occupied}\ \text{or}\ s_{i,j}\ =\ \text{unoccupied}\}</math>.
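
For concreteness, here is a minimal sketch of how the occupancy grid can be stored as an array of occupancy probabilities, assuming NumPy and using placeholder values for the grid size and the prior:

<pre>
import numpy as np

# Placeholders: pick the number of cells and the prior that match your setup.
WORLD_CELLS = 800          # hypothetical number of unit cells per side
PRIOR_OCCUPIED = 0.5       # initial belief p(s_{i,j} = occupied)

# belief[i, j] stores p(s_{i,j} = occupied);
# p(s_{i,j} = unoccupied) is just 1 - belief[i, j].
belief = np.full((WORLD_CELLS, WORLD_CELLS), PRIOR_OCCUPIED)
</pre>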

As the robot moves around the environment, it has a sensor that can take noisy sensor readings of the cells around it. For the purposes of this lab, we set this sensor to be a square that is centered on the robot. The square is known as the sensor footprint of the robot, meaning the set of states surrounding the robot that are sensed. Outside of this square, no information is obtained about obstacles in the world. Within this square, information is returned about obstacles in the world, but this information is noisy. Sometimes the sensor says that a cell is occupied when it is not, and sometimes the sensor says that a cell is unoccupied when it is. We call these two errors a false alarm and a missed detection, respectively.

The size of the sensor is set by the --occgrid-width command line option to bzrflag, as you saw in the search lab. In that lab, we gave you a ridiculously large occupancy grid, so that you could have a discretization of the world that was easy and consistent. In this lab you will get a much smaller occupancy grid, 100×100. Interestingly, this sensor can see all cells within this grid; there is no occlusion (like there normally would be with a range sensor, since range sensors typically cannot see through things).

The sensor will return one of two readings for each cell within the sensor footprint: a hit or a miss. Since the sensor cannot see outside of the sensor square, we do not get to observe obstacles throughout the entire environment; there are many cells for which we do not get observations. Thus, the set of observations is O={hit, miss, no_data} for each cell in the world.

Your job is to translate these observations into estimates of the four corners of all obstacles in the world by applying the Bayes filter. Since we are assuming that obstacles are placed and sized so that their corners align on integer values, and since we are discretizing the world into cells of unit area, you can apply a form of the Bayes filter known as the Grid Filter. The term “grid” simply means that we represent states of the world in a grid of x and y values. For each cell in the grid, you should estimate the probability that it is occupied. (You automatically get the probability that it is not occupied, since p(s=not occupied) = 1 - p(s=occupied).) You'll update this estimate as you get more observations. This means that you should apply Bayes rule to translate observations into estimates of the true state. The pseudo-code for this lab looks something like the following:

for each state s_{i,j} of the world
    observe o_{i,j} from the set {hit, miss, no_data}
    update p(s_{i,j} = occupied | o_{i,j}) = p(o_{i,j} | s_{i,j} = occupied) p(s_{i,j} = occupied) / p(o_{i,j})
move the robot and repeat
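
Here is a minimal Python sketch of the per-cell update in this pseudo-code, assuming the belief array from the earlier sketch; TRUE_POSITIVE and TRUE_NEGATIVE are placeholders for the sensor parameters described below:

<pre>
TRUE_POSITIVE = 0.97   # placeholder for p(hit | occupied) from the server
TRUE_NEGATIVE = 0.90   # placeholder for p(miss | unoccupied) from the server

def update_cell(belief, i, j, obs):
    """Apply Bayes rule to one cell given obs in {'hit', 'miss', 'no_data'}."""
    if obs == 'no_data':
        return                              # no observation, belief unchanged
    prior = belief[i, j]
    if obs == 'hit':
        p_obs_occ = TRUE_POSITIVE           # p(hit | occupied)
        p_obs_unocc = 1.0 - TRUE_NEGATIVE   # p(hit | unoccupied) = false positive
    else:  # 'miss'
        p_obs_occ = 1.0 - TRUE_POSITIVE     # p(miss | occupied) = false negative
        p_obs_unocc = TRUE_NEGATIVE         # p(miss | unoccupied)
    # The normalizer p(o_{i,j}) sums over both possible states of the cell.
    p_obs = p_obs_occ * prior + p_obs_unocc * (1.0 - prior)
    belief[i, j] = p_obs_occ * prior / p_obs
</pre>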

When you are confident in your probabilities, set a threshold. If the probability of a cell being occupied is high enough, conclude that the cell is occupied. Then, search the occupied cells to determine where the corners of the obstacles are. Print out the locations of all four corners of all obstacles in the world.
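
As a rough sketch of that last step, assuming the belief array from the earlier sketches, a single axis-aligned rectangular obstacle, and a threshold you would tune yourself:

<pre>
import numpy as np

OCCUPIED_THRESHOLD = 0.95   # placeholder; tune to your noise levels

def rectangle_corners(belief):
    """Corners (in grid indices) of one axis-aligned block of occupied cells."""
    occupied = belief > OCCUPIED_THRESHOLD
    if not occupied.any():
        return []
    # For a single rectangle, the corners follow from the bounding box of the
    # occupied cells. Multiple or L-shaped obstacles need a connected-component
    # pass first (not shown), and grid indices still have to be converted back
    # to world coordinates.
    rows, cols = np.nonzero(occupied)
    i_min, i_max = rows.min(), rows.max()
    j_min, j_max = cols.min(), cols.max()
    return [(i_min, j_min), (i_min, j_max), (i_max, j_min), (i_max, j_max)]
</pre>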

There are four things that you will need to focus on to get this pseudo-code to work in practice. First, you will need to create a likelihood function. This is easily done by getting two parameters from the server: the probability of a true positive and the probability of a true negative. These two numbers completely specify <math>p(o_{i,j}|s_{i,j})</math>. The numbers can be set as command line parameters (see bin/bzrflag -h), and your agent obtains them from the server as part of the response to the 'constants' command.

P(o = hit | s = occupied) is approximated by the true-positive rate
P(o = miss | s = occupied) is approximated by the false-negative rate, i.e., 1 - (true-positive rate)

P(o = hit | s = unoccupied) is approximated by the false-positive rate, i.e., 1 - (true-negative rate)
P(o = miss | s = unoccupied) is approximated by the true-negative rate
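
In code, all four of those quantities reduce to the two server parameters; a small lookup table (parameter names here are placeholders) makes that explicit:

<pre>
def likelihood_table(true_positive, true_negative):
    """p(o | s) for every (observation, state) pair, given the two parameters."""
    return {
        ('hit',  'occupied'):   true_positive,
        ('miss', 'occupied'):   1.0 - true_positive,   # false-negative rate
        ('hit',  'unoccupied'): 1.0 - true_negative,   # false-positive rate
        ('miss', 'unoccupied'): true_negative,
    }
</pre>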

Second, you can improve the efficiency of your algorithm by noting that obstacles are made up of several occupied cells. If you are clever, you can observe that the probability of a cell being occupied is pretty low if none of the cells around it are occupied, and it's pretty high if all of the cells around it are occupied. Although you don't need to do this to get the lab to work, it may make your algorithm converge more quickly. However, it may also take more time to implement since you will need to look at cells around a particular cell to update its posterior probability.
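
One purely heuristic way to act on that observation (this is optional, and the blending weight below is made up) is to nudge a cell's belief toward the average belief of its neighbors:

<pre>
NEIGHBOR_WEIGHT = 0.1   # made-up blending weight

def smooth_cell(belief, i, j):
    """Blend a cell's belief toward the mean belief of its (up to 8) neighbors."""
    neighbors = [belief[x, y]
                 for x in (i - 1, i, i + 1)
                 for y in (j - 1, j, j + 1)
                 if (x, y) != (i, j)
                 and 0 <= x < belief.shape[0]
                 and 0 <= y < belief.shape[1]]
    belief[i, j] = ((1.0 - NEIGHBOR_WEIGHT) * belief[i, j]
                    + NEIGHBOR_WEIGHT * sum(neighbors) / len(neighbors))
</pre>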

Third, you will have to start with a prior probability that makes sense to you. I started with a high prior probability of a cell being occupied since the likelihood that I used did a better job concluding that a cell was not occupied than that a cell was occupied. You should experiment with a few things to see what works for you.

Fourth, you should be smart in how you have your robots move in the environment so that you build up your map as quickly as possible. The MATLAB code that implements the Grid Filter has the robot move in pseudo-random directions. This isn't very smart and you should be able to do much better.
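
For example, a simple lawnmower-style sweep already beats random wandering; the sketch below (with made-up bounds and spacing arguments) generates sweep waypoints you could feed to whatever movement code you already have:

<pre>
def lawnmower_waypoints(x_min, x_max, y_min, y_max, spacing):
    """Back-and-forth sweep waypoints covering a rectangular region."""
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints.extend([(x_min, y), (x_max, y)])
        else:
            waypoints.extend([(x_max, y), (x_min, y)])
        left_to_right = not left_to_right
        y += spacing
    return waypoints
</pre>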

Implementation

BE SURE TO UPDATE YOUR COPY OF BZRFLAG. CHANGES HAVE BEEN MADE

Visualizing this lab is incredibly helpful, so that you can see what you are doing. I do not recommend using Gnuplot for visualizing this lab; you can if you want to, but other methods probably work better. Those of you who have been using things besides Gnuplot to visualize may already have a decent way to visualize this lab - all you need to do is display a grid that is gray-scaled based on your belief of whether or not each cell is occupied. If you know OpenGL (say from CS 455), it is very well suited to this type of visualization. There is some sample Python code here that should be usable even if you don't know OpenGL. Just call init_window() when you first start your agent, then update_grid() and draw_grid() in turn whenever you want to update the window (every time you update the grid, I would think). If you want to see what the code looks like when used properly, change runme.sh to use grid_lab_agent instead of agent2 (i.e., use blind.py to run the pre-compiled grid_lab_agent.pyc in bots/compiled).
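
In rough outline, the call pattern is something like the sketch below; the exact signatures come from the sample file, and in particular the assumption that update_grid() takes your 2D array of occupancy probabilities is mine, so check the sample code before relying on it.

<pre>
init_window()            # once, when your agent starts
while True:
    # ... request occgrids and run your Bayes updates on `belief` ...
    update_grid(belief)  # assumed to accept the 2D array of occupancy probabilities
    draw_grid()          # redraw the visualization window
</pre>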

It can be rather slow to request the occgrid every tick for 10 tanks, depending on your language and how you implement your code. You can choose how many tanks you want to use for this lab - if your code can handle 10 or 20 tanks just fine without slowing down, feel free to use 20 tanks. If it is more efficient for you to use 1 or 2 tanks, go ahead. Remember that you will be passing off to the TA, and he probably will not be happy if he has to sit waiting 15 minutes for you to finish (a minute or two or three is probably reasonable, given that you have to send tanks through the whole world looking for obstacles).

Run bzrflag with the following parameters (plus some world, and any other necessary parameters): 
<pre>--default-true-positive=.97 --default-true-negative=.9 --occgrid-width=100 --no-report-obstacles</pre>
As a reminder, see the [[BZRC Protocol]] for how to use the occgrid command.

What to Turn In

When you are done, you pass the lab off to the TA in person.  The TA will check that you correctly locate obstacles on a world you have never seen, which you should expect to be tricky (at least one world; we reserve the right to ask for more than one).  We will know the correct locations for the corners of the obstacles, and you will have to reproduce them (you should be able to do so without off-by-one errors; off-by-one errors may lose you some points, though at most 5).  If an obstacle is L-shaped, or something more complicated, it is acceptable to break it up into rectangles and report those corners, or to report the L.  When you are testing, it might be helpful to know that the .bzw file for the map contains all of the obstacle corners (or at least lets you figure them out), and that is what we will use during passoff (i.e., if you can reproduce the corners in the .bzw file for the worlds you test on, you should be just fine).
The TA will also have you show where you implemented the Grid Filter; you'll be asked to explain how you implemented the likelihoods. The lab write-up should include:
  1. the amount of time you and your partner each spent on the lab,
  2. a summary of what you learned,
  3. a discussion of how you got your tanks to search the world (how many tanks your code could handle reasonably, whether or not you did anything tricky like looking at neighboring cells in the grid, how you moved your tanks around, what you did when tanks got stuck, etc.),
  4. a discussion of what happens when the sensor has different parameters (e.g., it has a bigger range, it returns noisier estimates, you have an incorrect model for the amount of noise it returns, it only detects moving objects, etc.) - some of this you can test by varying the parameters you pass to bzrflag, and some you may have to just speculate on (like the moving objects bit),
  5. a discussion of how you would apply the grid filter in the capture the flag tournament, especially given the fact that you might get shot if you spend too much time wandering around the opponent's territory without having a good map, and that asking for the occupancy grid costs you computation time.

The point breakdown for this lab will be as follows:

  • 60 points for correctly passing off to the TA (getting the answers right on an unseen world)
  • 40 points for your written report:
    • 10 points for your discussion of how your tanks searched the world
    • 10 points for discussing what happens as the parameters of the sensor vary
    • 10 points for your thought about how you will use the grid filter in the final tournament
    • 10 points for other