Introduction

For this lab we again split into two groups. Group 1 worked on the radio and GUI implementation while Group 2 integrated more functionality into LABron's maze-exploring skills, including starting on a 660 Hz sound and encoding LABron's surroundings. Group 1 was Shubhom and David. Group 2 was Rohan and Ho-Jung. Below is a summary of the steps needed to advance LABron through yet another preliminary challenge before he enters the Finals (held in December).

Group 1: Setting Up Radios

The radios we used for this part of the lab were Nordic nRF24L01+ transceivers. We needed two of them: one on LABron (to communicate information about the maze) and one on a second Arduino Uno serving as our base station, which receives information from LABron. Communication between the two radios is relatively straightforward, requiring us only to set our identifier channels (70 and 71 for Team 23) and modify the existing sketches for inter-radio communication. In integrating this into LABron, we decided that radio communication should only happen at intersections (when new information is present) and should carry information about position, walls, and treasures. The decision on how to encode this information was left to Group 2 while Group 1 forged ahead to work on the GUI. Using the existing sketches, it became evident that some radios were dysfunctional, so several devices had to be tried (and powered externally) before the examples ran as expected. We then decided that the maximum payload of 32 bytes was more than enough for communication between LABron and the base station, so a single packet transmission at each intersection could encode all the information LABron needed to send. As a result, the provided sketch did not need significant modification for radio transmission.
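Getting the two radios talking takes only a few lines on top of the provided GettingStarted sketch. The sketch below is a minimal illustration using the RF24 library; the CE/CSN pin choices and the pipe addresses are assumptions standing in for the values derived from our team identifiers (70 and 71).

```cpp
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>

// CE and CSN pins (pin choices here are an assumption for illustration)
RF24 radio(9, 10);

// Transmit/receive pipe addresses; placeholder values standing in for
// the addresses derived from our team identifiers (70 and 71)
const uint64_t pipes[2] = { 0x0000000046LL, 0x0000000047LL };

void setup() {
  Serial.begin(9600);
  radio.begin();
  radio.setRetries(15, 15);            // up to 15 retries, 15 * 250 us apart
  radio.setPALevel(RF24_PA_MIN);       // short range, so keep power low
  radio.setDataRate(RF24_250KBPS);     // slower data rate for reliability
  radio.openWritingPipe(pipes[0]);     // LABron writes on the first pipe
  radio.openReadingPipe(1, pipes[1]);  // the base station uses the swapped pair
  radio.startListening();
}

void loop() {
  // Maze navigation runs here; transmissions happen only at intersections
  // (see the per-intersection packet sketch in the Group 2 section).
}
```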

Setting up a GUI Interface

The next step after enabling radio communication was to link the base station's serial communication packets with the provided GUI using the provided APIs. First, to test the GUI functionality, we hard-coded a maze: we created an array defining wall locations that could be fed as packets to be decoded and rendered by the GUI. This relied on the encoding scheme Group 2 defined (see below), and it ended up being successful (pictured right).
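To exercise the GUI before trusting the full radio chain, we fed it a hard-coded maze. The snippet below is only a sketch of that test: the wall values are arbitrary, and printIntersectionToGUI() is a hypothetical stand-in for formatting the output exactly as the provided Maze GUI API specifies.

```cpp
// Hard-coded test maze using the metapacket encoding described below
// (walls in the upper nibble, treasure bits in the lower nibble).
// Unlisted cells default to zero; the values shown are arbitrary.
byte testMaze[9][9] = {
  { 0xC0, 0x40, 0x40 },
  { 0x80, 0x00, 0x20 },
};

// Hypothetical stand-in for the GUI output: the real version prints the
// coordinate and wall/treasure fields in the format the Maze GUI API defines.
void printIntersectionToGUI(byte x, byte y, byte meta) {
  Serial.print(x);
  Serial.print(',');
  Serial.print(y);
  Serial.print(',');
  Serial.println(meta, BIN);
}

void setup() {
  Serial.begin(9600);
  for (byte x = 0; x < 9; x++) {
    for (byte y = 0; y < 9; y++) {
      printIntersectionToGUI(x, y, testMaze[x][y]);
    }
  }
}

void loop() {}
```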

Group 2: Mapping the Maze

One of the challenges of the lab was maintaining information about a 9 x 9 map of intersections. To maintain this information, we created a 9 x 9 byte array in which each entry is a byte encoding information about the walls and treasures at that intersection. Specifically, the 8-bit "metapackets" were encoded as follows (a packing sketch appears after the list):

Bit 7: Wall West
Bit 6: Wall North
Bit 5: Wall East
Bit 4: Wall South
Bit 3: Treasure Shape
Bit 2: Treasure Shape
Bit 1: Treasure Color
Bit 0: Treasure Color
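
A small packing helper following this layout is sketched below; the mapping of particular shapes and colors onto the two-bit codes is an assumption here.

```cpp
// Pack one intersection's wall and treasure information into a single
// byte, following the bit layout listed above.
byte encodeIntersection(bool wallWest, bool wallNorth,
                        bool wallEast, bool wallSouth,
                        byte treasureShape, byte treasureColor) {
  byte meta = 0;
  if (wallWest)  meta |= (1 << 7);
  if (wallNorth) meta |= (1 << 6);
  if (wallEast)  meta |= (1 << 5);
  if (wallSouth) meta |= (1 << 4);
  meta |= (treasureShape & 0x03) << 2;  // bits 3:2 -- treasure shape
  meta |= (treasureColor & 0x03);       // bits 1:0 -- treasure color
  return meta;
}
```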

Note that these directions are all absolute: internal logic translates the readings from the left, right, and forward wall sensors into wall positions based on the current intersection and LABron's current heading. Because the metapackets use absolute directions, additional logic tracks LABron's absolute heading on the map and translates any direction changes, which occur at intersections, into the cardinal directions (the desired behavior on IR hat detection, by contrast, is simply to stop). Two global state variables representing LABron's current coordinates were also created and updated at each intersection. Then, at each intersection, one two-byte packet is sent from the onboard radio to the base station: the first byte is the (x, y) coordinate, with x and y each encoded in 4 bits, and the second byte is the 8-bit metapacket stored in the local map at that (x, y) intersection. The base station contains a decoder that decodes these packets and translates them into strings printed over Serial, as defined in the Maze GUI API.
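A sketch of the per-intersection transmission is below, assuming the radio object from the setup sketch in the Group 1 section and the global map and coordinate variables just described; the function and variable names are illustrative.

```cpp
byte mazeMap[9][9];        // one encoded metapacket per intersection
byte curX = 0, curY = 0;   // LABron's current intersection coordinates

// At each intersection, pack the coordinate into the first byte
// (x in the high nibble, y in the low nibble) and send it together
// with that intersection's metapacket from the local map.
void sendIntersectionUpdate() {
  byte packet[2];
  packet[0] = (curX << 4) | (curY & 0x0F);
  packet[1] = mazeMap[curX][curY];

  radio.stopListening();                     // switch to transmit mode
  bool ok = radio.write(packet, sizeof(packet));
  radio.startListening();                    // resume listening afterwards

  if (!ok) {
    Serial.println("radio write failed");    // error handling left to the caller
  }
}
```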

Decoding Packets

Our maze setup differs from the GUI's: our robot's compass is rotated 90 degrees clockwise relative to the GUI's compass. Therefore, we have to rotate the wall-direction bits left by one position so that they map correctly onto the GUI.
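A minimal sketch of that one-bit rotation on the base station might look like this; the decoder then unpacks the adjusted nibbles into the strings the GUI expects.

```cpp
// Rotate the four wall bits (upper nibble: W N E S) left by one position
// so the robot's compass frame lines up with the GUI's, then reattach
// the untouched treasure bits from the lower nibble.
byte remapWallsForGUI(byte meta) {
  byte walls   = (meta >> 4) & 0x0F;
  byte rotated = ((walls << 1) | (walls >> 3)) & 0x0F;  // left-rotate by one
  return (rotated << 4) | (meta & 0x0F);
}
```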

LABron Listens for a Whistle

Once we'd taken care of the bulk of the new material due for this lab, we went back and began integrating already-functioning components into the behavior of our robot. The first item for consideration was using a 660 Hz signal to have our robot begin the maze (pictured left).
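A sketch of the start-on-whistle behavior is below. detect660Hz() stands in for the FFT-based tone detection carried over from earlier labs, and startMazeExploration() is a hypothetical entry point into the navigation logic; both names are assumptions for illustration.

```cpp
// Placeholder for the FFT-based tone detection from earlier labs:
// should return true once the 660 Hz bin exceeds a calibrated threshold.
bool detect660Hz() {
  return false;  // replace with the real microphone/FFT check
}

// Hypothetical entry point into the maze-navigation logic.
void startMazeExploration() {
  // drive to the first intersection, begin mapping, etc.
}

void waitForStartWhistle() {
  // Sit still at the start tile until the 660 Hz whistle is heard.
  while (!detect660Hz()) {
    delay(10);  // poll the microphone; motors stay off
  }
  startMazeExploration();
}
```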

LABron Stops for IR Hat

LABron was programmed to stop when he detects the IR hat of another robot (pictured right), using logic similar to that used for initiating motion on the start whistle.
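The same pattern covers the stopping behavior; detectIRHat() below is a placeholder for the IR detection developed previously, and the surrounding names are illustrative.

```cpp
// Placeholder for the IR hat detection developed in earlier labs.
bool detectIRHat() {
  return false;  // replace with the real phototransistor/FFT check
}

// Called from the main navigation loop: halt while another robot's
// IR hat is detected, then fall through and resume navigating.
void checkForOtherRobots() {
  while (detectIRHat()) {
    // stop the drive servos here (motor-halt code omitted)
    delay(10);
  }
}
```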

LABron Ignores Decoys

We made sure that LABron still ignores the saturating signal of the decoy hardware. If LABron confused this signal with the IR hat, he would exhibit the stopping behavior that all robots are required to display during this lab when another robot is detected. Fortunately, LABron was designed too intelligently to fall for the decoy, thanks to the filters developed in earlier labs and milestones.

Full Integration

LABron has now fully integrated all the various parts of this lab. He navigates a maze, transmits wall information back to a base station, and responds to both IR and audio signals.

Conclusion

The video above shows the synchronized map update, with LABron using the radio to wirelessly transmit his wall readings, position, and direction to create an internal map of the maze. This was LABron's most challenging lab to date. On top of the full integration from Milestone 2, including beginning at the start whistle, LABron had to add radio capabilities for wireless transmission, navigate the maze while storing not only his own state but also all the information he had about the maze, and transmit all of this remotely. Some of the challenges faced in this lab included getting the radio communication to work as intended, as the radios were easily damaged by mistakes in supplied power or by previous usage. Furthermore, integrating the full array of sensors and program logic required more careful consideration of the data types and data structures used. Two future plans for improving LABron are the incorporation of interrupt-driven logical decisions and the writing of more memory-efficient code and algorithms, as the current implementation requires over 90% of system memory.