Updated Website Introduction
Our project was to build a search and rescue robot that can explore one floor of a building and detect any people inside. The system is intended for situations where a building must be evacuated and may contain dangerous conditions, such as a gas leak. Using a robot to explore a floor of a hazardous building has the potential to better protect rescue workers by letting them avoid entering the building if no one is inside. It can also save time by searching the building while rescue workers put on the necessary safety equipment. The goal of our robot is to autonomously navigate one floor of a building and to detect and notify rescue personnel of anyone in the building.
We used a mecanum chassis for our robot to allow for high maneuverability and effective obstacle avoidance. A Raspberry Pi 4B runs all of the computation on the robot, such as determining the shortest path to follow, so no external input is needed to control it. An IMU provides acceleration data so the robot can estimate where it is on the map and follow the planned path. A lidar scanner measures distances to nearby points so the robot can avoid obstacles. A webcam records video, which the Pi uses to detect people and notify the rescue workers. At the outset of the project we had wanted to use SLAM (Simultaneous Localization and Mapping) so the robot could determine its location and build a map of its surroundings entirely on its own, but we were not able to integrate our obstacle avoidance and path planning systems with it in time, so we rely on a pre-loaded map for navigation.
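As a rough illustration of how these sensors feed into the software, here is a minimal ROS 2 node in Python that subscribes to the lidar and IMU topics mentioned in our status reports (/scan and /imu). The node name and the logging are placeholders for illustration only; the nodes actually running on the robot are more involved.

```python
# Minimal illustrative sketch: listen to the robot's two sensor topics.
# The node name and log messages are placeholders, not the real nodes.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu, LaserScan


class SensorMonitor(Node):
    def __init__(self):
        super().__init__('sensor_monitor')
        # LaserScan messages published by the RPLIDAR C1 driver
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        # Imu messages published from the MPU6050
        self.create_subscription(Imu, '/imu', self.on_imu, 10)

    def on_scan(self, msg: LaserScan):
        # Closest return in this sweep; obstacle avoidance uses data like this
        self.get_logger().info(f'closest return: {min(msg.ranges):.2f} m')

    def on_imu(self, msg: Imu):
        # Acceleration feeds the odometry/localization estimate
        self.get_logger().info(f'accel x: {msg.linear_acceleration.x:.2f} m/s^2')


def main():
    rclpy.init()
    rclpy.spin(SensorMonitor())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```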
Final Video
Andy’s Status Report for December 6
Since the last status report, I have completed the path-planning simulation, working path-planning code that can be deployed on the robot, and obstacle avoidance code that is fully implemented but not yet tested. We are no longer using SLAM and Nav2; instead, we use a preloaded map, the A* algorithm, and dynamic path re-planning. I also worked on the final presentation slides and gave the presentation last Wednesday. For the rest of the time, I will help with further testing of path planning and with fixing any problems in obstacle avoidance. If time allows, I will use lidar data to replace the preloaded map.
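For illustration, here is a minimal sketch of A* over a preloaded occupancy grid of the kind described above. The 4-connected movement, Manhattan heuristic, and toy map are simplifying assumptions, not the exact planner deployed on the robot.

```python
# Illustrative A* over a preloaded occupancy grid (0 = free cell, 1 = obstacle).
import heapq


def a_star(grid, start, goal):
    def h(a, b):
        # Manhattan distance heuristic (admissible on a 4-connected grid)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), start)]        # (f-score, cell)
    came_from = {}
    g_score = {start: 0}

    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            # Walk parents back to the start to reconstruct the path
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                tentative = g_score[current] + 1
                if tentative < g_score.get(nxt, float('inf')):
                    g_score[nxt] = tentative
                    came_from[nxt] = current
                    heapq.heappush(open_set, (tentative + h(nxt, goal), nxt))
    return None                                  # no path exists


# Toy example: plan across a 4x4 map with a small wall
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (3, 3)))
```

Dynamic re-planning can then be as simple as re-running the same search from the robot’s current cell whenever an obstacle appears on the remaining path.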
Team Status Report for December 6
The most significant risk at this point is simply that there is still a lot of work to do and the final demo is on Monday. To mitigate this, we are spending as much time as possible on the project between now and then.
The main change to our design is that we will no longer be using the L515 lidar scanner. This change was necessary because the L515 is no longer supported, which made it too difficult to work with and to integrate with ROS2 Jazzy. Instead, we will use the RPLIDAR C1 for obstacle avoidance (and SLAM if there is time) and a thermal camera for detection.
The last time the schedule was updated was before the final presentation. The most up-to-date schedule can be seen in the final presentation slides.
While we have not been able to test all of the systems, since some do not yet have working implementations, we have been able to conduct the following tests:
- Movement: The robot is able to move forward, backward, turn, and rotate. The robot’s speed was estimated and is roughly equal to the goal of 0.3 m/s from our design requirements.
- Dimensions: The robot’s dimensions are within those specified by the use case requirements.
- Battery life: While the full system is running, the battery lasts far longer than the required 20 minutes without being recharged.
- Path planning: The robot is able to follow a pre-planned global path and update it with local path planning based on odometry data. However, this reduces the robot’s speed, since it must continuously reorient itself to correct its direction (a sketch of this kind of heading correction is shown after this list).
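To illustrate the reorientation mentioned in the path-planning item above, here is a hedged sketch of a simple proportional heading correction toward the next waypoint, computed from the pose estimated by odometry. The gain, forward speed cap, and function interface are assumptions rather than the robot’s actual controller.

```python
# Hedged sketch of a heading-correction step (not the controller on the robot):
# turn toward the next waypoint and slow down while the heading error is large.
import math


def heading_correction(pose_x, pose_y, pose_yaw, waypoint, k_ang=1.5, v_max=0.3):
    """Return (linear, angular) velocity commands toward the next waypoint."""
    # Bearing from the robot's estimated position to the waypoint
    target_yaw = math.atan2(waypoint[1] - pose_y, waypoint[0] - pose_x)
    # Smallest signed angle between the current heading and that bearing
    error = math.atan2(math.sin(target_yaw - pose_yaw), math.cos(target_yaw - pose_yaw))
    # Scaling forward speed by the heading error is what slows the robot down
    # while it reorients, as noted above.
    linear = v_max * max(0.0, math.cos(error))
    angular = k_ang * error
    return linear, angular


# Example: robot at the origin facing +x, next waypoint up and to the right
print(heading_correction(0.0, 0.0, 0.0, (1.0, 1.0)))
```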
Jeremy’s Status Report for December 6
Progress since the last status report:
- I finished setting up motor control from the RPi. The RPi can now drive the motors to move the robot forward and backward, turn it, and rotate it. Some rewiring was needed to move the motor controllers from the RPi 5 to the RPi 4, and I also had to change the orientation of the mecanum wheels because the previous setup resulted in unsteady movement.
- I mounted the MPU6050 on the robot, soldered it so that I could connect jumper wires, and connected it to the RPi. The MPU6050 now sends IMU data to the RPi, and a ROS node publishes that data to the topic /imu.
- I set up the RPLIDAR C1, got its data displayed on the RPi, and published it to the ROS topic /scan. After spending a lot of time trying to get the L515 lidar scanner to publish data to a ROS node, I decided to pivot to the RPLIDAR C1, because there was no available software compatible with both the L515 (a device that is no longer supported) and ROS2 Jazzy (the current version of ROS).
- I integrated path planning, odometry, and motor control on the actual robot. I took the software Andy had written to run path planning in simulation and modified it to work on the robot. There were some difficult bugs to solve here, such as a race condition where the path-planning node would sometimes publish its data before the controller node had subscribed to the ROS topic, so the controller node never received the path and failed to send velocities to the motors (one common way to avoid this is sketched below).
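One common way to avoid this kind of publish-before-subscribe race in ROS 2 is to give the path publisher transient-local ("latched") durability, so a late-joining subscriber still receives the most recent message. The sketch below shows the idea; the topic name and message type are assumptions, and this is not necessarily the exact fix used on the robot.

```python
# Sketch of a "latched" path publisher in ROS 2. With TRANSIENT_LOCAL durability
# the last published path is re-delivered to subscribers that join late.
# The node name, topic name, and message type here are assumptions.
import rclpy
from rclpy.node import Node
from rclpy.qos import DurabilityPolicy, QoSProfile
from nav_msgs.msg import Path


class PathPublisher(Node):
    def __init__(self):
        super().__init__('path_planner')
        latched = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)
        self.pub = self.create_publisher(Path, '/planned_path', latched)

    def publish_path(self, path_msg: Path):
        # Even if the controller subscribes after this call, the QoS settings
        # above keep the last message available for it.
        self.pub.publish(path_msg)
```

For this to work, the controller’s subscription must also request transient-local durability; otherwise it only sees paths published after it connects.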
Schedule update:
- I am currently behind schedule: by this point I had wanted to have everything completed, with only some final testing left before the demo, but there is still a lot more to do.
- While it is impossible to get completely back on schedule at this point, I am spending a lot of time on the project and trying to get as much of the robot functioning as I can.
Deliverables to complete next:
- The main thing I want to complete next is integrating obstacle avoidance with the robot. Since I have already gotten the RPLIDAR C1 data published to a ROS topic and Andy has been working on the obstacle avoidance simulation, I’m optimistic that we can get obstacle avoidance completed before the final demo (a sketch of the kind of check this involves appears after this list).
- If there is still time after implementing obstacle avoidance, I will also try to get SLAM completed before the final demo. Since the RPLIDAR C1 works and provides a 360-degree scan, this might be possible, but slam_toolbox has been difficult to run before and might also be hard to integrate, so I’m not sure I’ll be able to complete it. The priority right now is obstacle avoidance; once that is done, I’ll see whether I can get SLAM working too.
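As a rough picture of what the obstacle-avoidance integration involves, here is a hedged sketch of a simple check on the /scan data: report an obstacle if any lidar return in a forward-facing cone is closer than a threshold. The cone width, stop distance, and the assumption that angle 0 points straight ahead are placeholders, not the final design.

```python
# Hedged sketch of an obstacle check on RPLIDAR data (not the final logic).
# Assumes scan angles are signed with 0 pointing straight ahead of the robot.
import math
from sensor_msgs.msg import LaserScan

STOP_DISTANCE = 0.4                     # metres; placeholder safety margin
CONE_HALF_ANGLE = math.radians(30)      # placeholder forward cone


def obstacle_ahead(scan: LaserScan) -> bool:
    for i, r in enumerate(scan.ranges):
        angle = scan.angle_min + i * scan.angle_increment
        # Ignore returns outside the forward cone or outside the valid range
        if abs(angle) <= CONE_HALF_ANGLE and scan.range_min < r < STOP_DISTANCE:
            return True
    return False
```

A check like this could run on every /scan message and trigger the local re-planning described in the team status report whenever it returns True.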
Soren’s Status Report for December 6
This week, I collected much more data to test our detection algorithms on by recording videos with the camera on the L515 LiDAR and with the thermal camera (which I was able to connect using the new Lepton breakout board and confirm is working). The detection subsystem is therefore finally done, in time for our final demo. What remains is testing with full system integration. This is likely to go smoothly, since the data we have collected and tested the detection algorithms on so far is very similar to, and in the exact form of, what the robot will be seeing in its environment when it is deployed.
Final Presentation Slides
Soren’s Status Report for November 22
I spent this week working on our person detection system using visible light as a backup to using thermal imaging (a thermal camera plus algorithms that detect whether there are people in thermal images). This meant connecting the two visible-light cameras we got this week from the ECE inventory to our Raspberry Pi, adjusting our detection algorithms to work on visual data instead of thermal data, and making sure they would successfully detect people in the environment in which the robot will be deployed. I was able to get the first camera connected, but it may have a hardware problem: while it did capture pictures, and those images did respond somewhat to what the camera should be seeing (for instance, covering the camera made the captured image dark, while leaving it uncovered made the image light), the pictures didn’t really represent anything at all; the images were essentially an entire screen of blue pixels. I’ve been working on setting up the other camera we were lent (the webcam, which should connect fairly simply over USB) with our Raspberry Pi, but I have not finished doing so.

I also worked on our visual-data detection. On Monday, Jeremy let me take some pictures of him in the lab (from approximately the robot’s point of view, i.e. from the ground), and I’ve been using these images to test whether our detection is working. Currently I’m trying to do the visual detection with HOG, but so far it is not working; if I can’t get it to work with HOG, then more advanced methods (such as CNNs) might be necessary. Next week, our new FLIR breakout board is expected to arrive, so I hope to have this subsystem done using either visual or thermal data. Either way, I expect to be able to get a camera set up and working with our Pi, along with a detection algorithm tuned to the robot’s environment.
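For reference, here is a minimal sketch of the HOG-based detection approach mentioned above, using OpenCV’s built-in HOG descriptor with its default people detector. The image path, resize resolution, and detection parameters are placeholders rather than tuned values.

```python
# Minimal HOG person-detection sketch with OpenCV's default people detector.
# The file name and parameters below are placeholders for illustration.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread('test_frame.jpg')             # e.g. one of the lab photos
image = cv2.resize(image, (640, 480))            # HOG is sensitive to image scale

# Each rectangle is a candidate person; weights are the SVM confidence scores
rects, weights = hog.detectMultiScale(image, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
print(f'detected {len(rects)} person candidate(s)')
```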
As I’ve worked on this project, I’ve mostly picked up new knowledge of computer vision techniques for our person-detection needs (for instance, background subtraction, edge detection, HOG, and CNNs). YouTube videos as well as course content from the CMU computer vision course have been very helpful for this. In particular, on a subject like this, I found videos very helpful for showing visually how each of these algorithms and techniques works. I’ve also searched online for information on OpenCV and what features are available in that library.
Andy’s Status Report for November 22
This week, I focused on selecting a new power source and building the simulation for our robot. I calculated the power requirements for all components and identified a power bank that should meet our needs. On the simulation side, I completed the basic software setup, and my next step is to begin running and testing the simulation. We are currently behind schedule due to several component failures, so I will need to devote more time to the project for the remainder of the semester.
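To show the kind of power-budget arithmetic this involved, here is a hypothetical worked example. Every wattage and the power-bank capacity below are placeholder values for illustration only; the 20-minute runtime requirement is the only number taken from our design requirements.

```python
# Hypothetical power-budget check; all wattages and the bank capacity are
# placeholder values, not our measured numbers.
component_power_w = {
    'raspberry_pi_4b': 7.0,    # placeholder
    'drive_motors':    12.0,   # placeholder
    'rplidar_c1':      2.0,    # placeholder
    'camera':          1.5,    # placeholder
}

total_w = sum(component_power_w.values())
bank_capacity_wh = 40.0        # placeholder power-bank rating

runtime_min = bank_capacity_wh / total_w * 60
print(f'total draw ~{total_w:.1f} W, estimated runtime ~{runtime_min:.0f} min')
print('meets the 20-minute requirement:', runtime_min >= 20)
```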
Throughout this project, I’ve found that watching YouTube videos of similar projects is an effective learning strategy. These videos are especially helpful when I’m working on tasks that are new or unfamiliar to me. They allow me to quickly understand fundamental concepts, compare different approaches, and verify my own design decisions.
