This page documents my use of the Pololu 3pi robotics platform as a base for a “micromouse,” an autonomous maze solving robot. This was done during 2012 with IEEE’s Student Branch at UCSD.
The goal: to create a working micromouse robot without any funding.
Most teams working with UCSD IEEE had a budget to buy the parts necessary for their robot. Because I was already working on a separate project with funding, building this robot was done as a challenge rather than as a funded project. All of the components were either repurposed from old projects and scrap boards, or were passives found in the lab.
The platform for the robot was based on the Pololu 3pi, which a previous team had attempted to use and slightly mangled in the process. I used it only as a simple platform with two independently controllable wheels; the rest of its sensors and onboard hardware went unused.
The heart of the system was an OpenLPC2148 development board with an NXP LPC2148 ARM microcontroller. It had a built-in analog-to-digital converter and UART peripherals, both of which were used extensively. The MCU also had plenty of RAM (32 KB), which was useful for storing the maze.
One method of navigating the maze is to track the locations of the walls around the robot. I used three IR emitter/detector pairs on the left, right, and front of the robot. The IR emitters were switched with transistors, allowing the low-power MCU pins to drive the higher-current LEDs. The detectors were IR-sensitive phototransistors set up as common-emitter amplifiers; the output from each BJT collector was buffered by an LM324 op-amp to present a low-impedance source to the ADC.
The final challenge in making the “mouse” navigate reliably was wheel position feedback. The 3pi was primarily designed as a line-following robot, not a wall-following one, so there are no encoders on the motors.
To solve this problem I printed circles of alternating black and white lines on a laser printer, then used a hot iron to transfer the toner to white plastic. After cutting down the plastic and gluing it to the insides of the wheels, I could use an IR detector to determine the color of the disk at a given point. Because detection had to happen at very close range, I used a small IR detector/emitter device pre-soldered to a breakout board; luckily the IEEE project room had a few lying around. These boards output an analog signal that depends on how much IR light is reflected, but only have a range of a centimeter or so.
Unlike the wall position sensors, the wheel encoders only needed a binary output. The analog signal from each detector was fed to an op-amp set up as a Schmitt trigger with an adjustable threshold. This gave a reliable digital signal from the wheel encoders despite differences in IR detector position and variation between the encoder disks.
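The Schmitt trigger's hysteresis is what keeps the encoder signal clean: a single threshold would chatter as the disk edge passes the sensor. The same idea can be sketched in software; the threshold values below are illustrative only, since the real circuit used an adjustable analog threshold:

```c
#include <stdbool.h>

/* Hypothetical ADC thresholds; the real trigger point was set with a pot. */
#define THRESH_HIGH 600  /* above this: white stripe */
#define THRESH_LOW  400  /* below this: black stripe */

/* Returns the current stripe state, changing it only when the reading
 * crosses the far threshold (hysteresis). Readings inside the dead band
 * between THRESH_LOW and THRESH_HIGH simply hold the previous state. */
static bool encoder_state(int adc_reading, bool prev_state)
{
    if (adc_reading > THRESH_HIGH) return true;
    if (adc_reading < THRESH_LOW)  return false;
    return prev_state;  /* dead band: no change, no chatter */
}
```

A noisy reading hovering around a single threshold would toggle constantly; with the dead band, small fluctuations near 500 counts never flip the output.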
At one point I added a rear position sensor and a port for an XBee RF module, but neither was used in the project.
The initial software to make the micromouse work is fairly simple: the robot continuously turns on the IR LEDs and reads back the detector voltages through the ADC. Assuming the robot is pointed straight, the difference between the left and right sensors gives the horizontal offset of the mouse in the maze, or, if a threshold value is exceeded, indicates an opening in the maze on the left or right. The front sensor reading indicates whether the robot is facing a wall and can be used to reposition the robot in the center of each maze square (or cell).
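The per-cycle sensing step can be sketched as follows. The thresholds and the sign convention of the offset are assumptions for illustration, not the calibrated values used on the mouse:

```c
#include <stdbool.h>

/* Illustrative thresholds only; the real values came from calibration. */
#define WALL_OPEN_THRESH  200  /* side reading below this: no wall there */
#define FRONT_WALL_THRESH 500  /* front reading above this: wall ahead   */

typedef struct {
    int  offset;      /* left - right; sign convention is an assumption */
    bool open_left;   /* opening in the maze on the left                */
    bool open_right;  /* opening in the maze on the right               */
    bool wall_ahead;  /* facing a wall; used to re-center in the cell   */
} wall_state_t;

/* Interpret one set of ADC readings from the three IR sensor pairs. */
static wall_state_t read_walls(int left, int front, int right)
{
    wall_state_t w;
    w.open_left  = left  < WALL_OPEN_THRESH;
    w.open_right = right < WALL_OPEN_THRESH;
    w.wall_ahead = front > FRONT_WALL_THRESH;
    /* The left-right difference only measures lateral offset when both
     * walls are actually present; otherwise report zero. */
    w.offset = (!w.open_left && !w.open_right) ? left - right : 0;
    return w;
}
```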
While traveling straight, the wheel encoder ticks are counted and the difference between the two counts is used as the error input to a PID loop. With the PID constants tuned, this keeps the robot traveling in a straight line for most of the length of a full-sized (16 × 16 cell) maze.
Of course, if there is any offset from the starting cell, the mouse will slowly drift into the wall. To solve this, the IR sensors are used in conjunction with the encoders to hold a straight path. At each cell the IR sensors are used to straighten the robot; it then drives forward and checks the sensor error. If the error thresholds are exceeded, the sensor error is added as an offset to the PID error input. The combination of self-centering at each cell and blending position and encoder feedback kept the micromouse from colliding with the walls.
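Put together, the straight-line control loop amounts to a PID step on a blended error term: the encoder count difference plus, when the wall sensors report drift, an IR offset. A minimal sketch with made-up gains (the real constants were hand-tuned):

```c
/* Hypothetical gains; the actual constants were tuned by hand. */
#define KP 0.8f
#define KI 0.05f
#define KD 0.2f

typedef struct {
    float integral;    /* accumulated error */
    float prev_error;  /* for the derivative term */
} pid_state_t;

/* One PID iteration. error = (left_count - right_count), optionally plus
 * a scaled IR wall-sensor offset when the drift threshold is exceeded.
 * Returns a differential correction: add it to one wheel's speed and
 * subtract it from the other to steer back onto a straight line. */
static float pid_step(pid_state_t *s, float error)
{
    s->integral += error;
    float derivative = error - s->prev_error;
    s->prev_error = error;
    return KP * error + KI * s->integral + KD * derivative;
}
```

Each control tick reads the encoders (and, at cell boundaries, the IR sensors), forms the blended error, and applies the returned correction to the two motor speeds.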
CAMM – California Micromouse Competition
Up until the time of the competition, everything came together quite nicely. My micromouse was able to navigate the maze and map out where it was and where it had been. A few tricky maze configurations tripped it up when it couldn’t self-calibrate, but that was to be expected. I used a flood-fill algorithm, which worked some of the time, but it still had a few bugs to work out: it would navigate to the center, but if the first path was blocked it had trouble finding alternate routes. With a few more days of debugging I imagine I could have fixed the bugs in my maze-solving routine.
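The flood-fill idea itself is straightforward; the bugs were in the details. The core distance-propagation step can be sketched as a breadth-first fill from the goal cells, here assuming a 16 × 16 maze with a per-cell wall bitmask (the data layout on the actual mouse may have differed, and this sketch is the clean version, not the buggy competition code):

```c
#include <string.h>

#define N 16
/* Wall bitmask per cell: 1 = north, 2 = east, 4 = south, 8 = west
 * (a hypothetical layout for this sketch). */
static unsigned char walls[N][N];
static int dist[N][N];  /* steps to the nearest goal cell, -1 = unreached */

/* Breadth-first flood fill from the four center goal cells: every cell
 * ends up labeled with its step count to the goal, respecting the walls
 * known so far. The mouse then always moves to the lowest-numbered
 * reachable neighbor. */
static void flood_fill(void)
{
    static int qx[N * N], qy[N * N];
    int head = 0, tail = 0;
    memset(dist, -1, sizeof dist);

    const int goals[4][2] = {{7, 7}, {7, 8}, {8, 7}, {8, 8}};
    for (int i = 0; i < 4; i++) {
        dist[goals[i][1]][goals[i][0]] = 0;
        qx[tail] = goals[i][0]; qy[tail] = goals[i][1]; tail++;
    }

    const int dx[4] = {0, 1, 0, -1};   /* N, E, S, W */
    const int dy[4] = {-1, 0, 1, 0};
    while (head < tail) {
        int x = qx[head], y = qy[head]; head++;
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= N || ny < 0 || ny >= N) continue;
            if (walls[y][x] & (1 << d)) continue;  /* wall blocks this move */
            if (dist[ny][nx] != -1) continue;      /* already labeled */
            dist[ny][nx] = dist[y][x] + 1;
            qx[tail] = nx; qy[tail] = ny; tail++;
        }
    }
}
```

Re-running the fill whenever a new wall is discovered is what lets the mouse abandon a blocked route and pick an alternate one; getting that re-fill step right was exactly where my version still had bugs.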
An unavoidable issue
In the end, my robot’s downfall was a critical flaw in the IR detectors: they had no shielding from other light sources and were very sensitive. Under most circumstances they worked well, but in environments with large amounts of lower-frequency IR they were useless. Fortunately, most of the labs used fluorescent lights, and even the room the competition was held in seemed to work extremely well; we were allowed to calibrate the sensors before the competition. Unfortunately, that held only until additional lighting was turned on during the competition. The extra light drowned out the sensors despite my calibration, and all I could do was watch my robot slowly drive itself into the wall.
In the end I didn’t place very high, considering I was only able to hobble through 5 or 6 cells, but it was a great challenge to get my robot to that point. The only other problem I had during the competition was that a wire feeding power to the IR phototransistors had come unsoldered; getting that back in place was a little tricky, but very doable.
From start to finish, creating this robot for the micromouse competition was an awesome experience. If I were to do it again, it would be nice not to have a budget constraint of $0.
The most exciting part was getting the control code working and letting the mouse explore the maze randomly. The most disappointing part was the competition itself; however, it is satisfying to know what the problem was and that it could have been corrected by changing the sensors used.
Regrettably I didn’t get any video with the full maze setup (debugging was more important at the time). A short clip during testing is available on YouTube. Please excuse the vertical video.
Additional information can be found at the project’s wiki page.
Messy code can be found on the project’s git repository.