Updated Website Introduction
Our project was to build a search and rescue robot that could explore one floor of a building and detect any people. This system would be used when a building needs to be evacuated and may have dangerous conditions, such as a gas leak. Using a robot to explore a floor of a hazardous building has the potential to better protect rescue workers by allowing them to avoid entering the building if no one is inside. It could also save time by searching the building while rescue workers put on the necessary safety equipment. The goal of our robot is to autonomously navigate one floor of a building and to detect and notify rescue personnel of anyone inside.
We used a mecanum chassis for our robot to allow for high maneuverability and effective obstacle avoidance. A Raspberry Pi 4B runs all of the computation on the robot, such as determining the shortest path to follow, with no external input needed to control the robot. An IMU provides acceleration data so the robot can estimate where it is on the map and follow the planned path. A Lidar scanner measures distances to surrounding objects so the robot can avoid obstacles. A webcam records video, which the Pi processes to detect people and notify the rescue workers. At the outset of the project we had wanted to use SLAM (Simultaneous Localization and Mapping) so the robot could determine its location and build a map of its surroundings entirely on its own, but we were not able to integrate our obstacle avoidance and path planning systems in time, so we rely on a pre-loaded map for navigation.
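For readers curious how these pieces hang together in ROS 2, the sketch below shows the general shape of a bringup launch file for a system like ours. It is only an illustration: the package and node names are placeholders, not the actual names in our code.

```python
# bringup_sketch.launch.py -- illustrative only. The package name 'sar_robot' and
# the executable names below are placeholders, not the names used in our codebase.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(package='sar_robot', executable='imu_node'),        # MPU6050 -> /imu
        Node(package='sar_robot', executable='lidar_node'),      # RPLIDAR C1 -> /scan
        Node(package='sar_robot', executable='planner_node'),    # plans a path over the pre-loaded map
        Node(package='sar_robot', executable='controller_node'), # follows the path, drives the motors
        Node(package='sar_robot', executable='detector_node'),   # webcam person detection + notification
    ])
```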
Final Video
Team Status Report for December 6
The most significant risk at this point is that there is still a lot of work to do and the final demo is on Monday. To mitigate this, we are spending as much time as we can on the project between now and then.
The main change to our design is that we will no longer be using the L515 Lidar scanner. This change was necessary because the L515 is no longer supported, which made it too difficult to work with and integrate with ROS2 Jazzy. Instead we will be using the RPLIDAR C1 for obstacle avoidance (and SLAM if there is time) and a thermal camera for detection.
The last time the schedule was updated was before the final presentation. The most up-to-date schedule can be seen in the final presentation slides.
While we have not been able to test all of the systems, since some do not yet have working implementations, we have been able to conduct some tests:
- Movement: The robot is able to move forwards, backwards, turn, and rotate. The speed of the robot was estimated and is approximately equal to our design requirement of 0.3 m/s.
- Dimensions: The robot’s dimensions are within those specified by the use case requirements.
- Battery life: The battery has lasted far longer than the required 20 minutes on a single charge while the full system is running.
- Path planning: The robot is able to follow a pre-planned global path and update it with local path planning based on odometry data. However, this does reduce the robot's speed, since the robot must continuously reorient itself to correct its heading.
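As a rough illustration of what this kind of local correction looks like, here is a minimal waypoint-following sketch that steers toward the next point on the path using odometry. The topic names (/odom, /cmd_vel), the gains, and the waypoint tolerance are assumptions, and our actual controller differs (a mecanum robot can also strafe), so treat this as a sketch of the idea rather than our implementation.

```python
# follow_path_sketch.py -- minimal waypoint-following sketch (not our exact controller).
# Assumes odometry on /odom and a velocity command topic /cmd_vel; gains are illustrative.
import math
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

class FollowPath(Node):
    def __init__(self, waypoints):
        super().__init__('follow_path_sketch')
        self.waypoints = waypoints          # list of (x, y) points in the map frame
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)

    def on_odom(self, msg):
        if not self.waypoints:
            self.cmd_pub.publish(Twist())   # end of path: stop
            return
        x = msg.pose.pose.position.x
        y = msg.pose.pose.position.y
        gx, gy = self.waypoints[0]
        dist = math.hypot(gx - x, gy - y)
        if dist < 0.1:                      # waypoint reached, move to the next one
            self.waypoints.pop(0)
            return
        q = msg.pose.pose.orientation       # extract yaw from the quaternion
        yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                         1.0 - 2.0 * (q.y * q.y + q.z * q.z))
        heading_error = math.atan2(gy - y, gx - x) - yaw
        heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
        cmd = Twist()
        cmd.linear.x = min(0.3, dist)       # cap at the 0.3 m/s design target
        cmd.angular.z = 1.0 * heading_error # turn toward the waypoint; gain is illustrative
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(FollowPath([(1.0, 0.0), (1.0, 1.0)]))

if __name__ == '__main__':
    main()
```

The continual reorientation mentioned above shows up here as the angular term: whenever the heading error grows, the controller trades forward speed for turning.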
Jeremy’s Status Report for December 6
Progress of things completed since last status report:
- I finished setting up motor control from the RPi. The RPi can now control the motors and move the robot forwards, backwards, turn it, and rotate it. Some rewiring was needed to connect the motor controllers to the RPi 4 instead of the RPi 5, and I also had to change the orientation of the mecanum wheels, since the previous arrangement resulted in unsteady movement (the standard mecanum wheel-speed mixing this kind of drive uses is sketched after this list).
- I mounted the MPU6050 on the robot, soldered it so I could connect jumper wires, and connected it to the RPi. I got the MPU6050 to send IMU data to the RPi and publish the data from a ROS node to the /imu topic (a minimal publisher of this kind is sketched after this list).
- I set up the RPLIDAR C1 and got its data displayed on the RPi and published to the ROS topic /scan. After spending a lot of time trying to get the L515 Lidar scanner to publish data to a ROS node, I decided to pivot to the RPLIDAR C1, because there was no available software compatible with both the L515 (a no-longer-supported device) and ROS2 Jazzy (the current version of ROS).
- I integrated the path planning, odometry, and motor control on the actual robot. I took the software that Andy had written to run path planning in simulation and modified it to work with the actual robot. There were some difficult bugs to solve here, such as a race condition where the path planning node would sometimes publish the path before the controller node had subscribed to the ROS topic, so the controller node never received the path and failed to send velocities to the motors (one way to avoid this kind of race is sketched after this list).
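For reference, this is the standard wheel-speed mixing a mecanum drive uses to turn a body velocity into four wheel speeds. It is only a sketch: the wheel order, sign conventions, wheel radius, and chassis dimensions below are assumptions, not the exact values or code used on our robot.

```python
# mecanum_mix_sketch.py -- standard mecanum wheel-speed mixing (illustrative only).
# Wheel order, signs, and the geometry constants below are assumptions.
def mecanum_wheel_speeds(vx, vy, wz, lx=0.10, ly=0.10, wheel_radius=0.04):
    """Convert a body velocity (vx forward, vy left, wz counter-clockwise, SI units)
    into angular speeds for the wheels: front-left, front-right, rear-left, rear-right."""
    k = lx + ly                              # half wheelbase + half track width
    fl = (vx - vy - k * wz) / wheel_radius
    fr = (vx + vy + k * wz) / wheel_radius
    rl = (vx + vy - k * wz) / wheel_radius
    rr = (vx - vy + k * wz) / wheel_radius
    return fl, fr, rl, rr

# Example: pure sideways motion at 0.2 m/s -- all four wheels spin and the robot strafes.
print(mecanum_wheel_speeds(0.0, 0.2, 0.0))
```

Getting the signs to match the physical wheel orientation is exactly the kind of thing that causes the unsteady movement mentioned above when it is wrong.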
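The MPU6050 node is conceptually similar to the following minimal sketch, which reads the sensor over I2C and publishes sensor_msgs/Imu on /imu. It assumes the smbus2 Python library, I2C bus 1, and the chip's default address 0x68, and it omits calibration and covariances, so it is an illustration rather than our exact node.

```python
# imu_publisher_sketch.py -- minimal MPU6050 -> /imu publisher (illustrative only).
# Assumes the smbus2 library, I2C bus 1, and the MPU6050 default address 0x68.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu
from smbus2 import SMBus

MPU_ADDR = 0x68

def read_word(bus, reg):
    # MPU6050 registers hold big-endian signed 16-bit values.
    high = bus.read_byte_data(MPU_ADDR, reg)
    low = bus.read_byte_data(MPU_ADDR, reg + 1)
    val = (high << 8) | low
    return val - 65536 if val >= 32768 else val

class ImuPublisher(Node):
    def __init__(self):
        super().__init__('imu_publisher_sketch')
        self.bus = SMBus(1)
        self.bus.write_byte_data(MPU_ADDR, 0x6B, 0)   # wake the MPU6050 out of sleep mode
        self.pub = self.create_publisher(Imu, '/imu', 10)
        self.create_timer(0.02, self.publish_sample)  # ~50 Hz

    def publish_sample(self):
        msg = Imu()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'imu_link'
        # Default full-scale ranges: +/-2 g (16384 LSB/g) and +/-250 deg/s (131 LSB per deg/s).
        msg.linear_acceleration.x = read_word(self.bus, 0x3B) / 16384.0 * 9.81
        msg.linear_acceleration.y = read_word(self.bus, 0x3D) / 16384.0 * 9.81
        msg.linear_acceleration.z = read_word(self.bus, 0x3F) / 16384.0 * 9.81
        msg.angular_velocity.x = math.radians(read_word(self.bus, 0x43) / 131.0)
        msg.angular_velocity.y = math.radians(read_word(self.bus, 0x45) / 131.0)
        msg.angular_velocity.z = math.radians(read_word(self.bus, 0x47) / 131.0)
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ImuPublisher())

if __name__ == '__main__':
    main()
```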
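One general ROS 2 pattern for avoiding the publish-before-subscribe race described above is to make the path publisher transient-local ("latched"), so the last message is redelivered to subscribers that join late. The sketch below shows the idea with an assumed topic name; it is not a copy of our actual fix, and the subscriber must request a matching transient-local QoS to receive the latched message.

```python
# latched_path_sketch.py -- transient-local ("latched") publishing, illustrative only.
# The topic name '/planned_path' is an assumption.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, DurabilityPolicy
from nav_msgs.msg import Path

class PathPublisher(Node):
    def __init__(self):
        super().__init__('path_publisher_sketch')
        qos = QoSProfile(
            depth=1,
            reliability=ReliabilityPolicy.RELIABLE,
            durability=DurabilityPolicy.TRANSIENT_LOCAL,  # keep the last message for late joiners
        )
        self.pub = self.create_publisher(Path, '/planned_path', qos)
        path = Path()
        path.header.frame_id = 'map'
        self.pub.publish(path)  # safe even if the controller has not subscribed yet

def main():
    rclpy.init()
    rclpy.spin(PathPublisher())

if __name__ == '__main__':
    main()
```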
Schedule update:
- I am currently behind schedule: by this point I had wanted to have everything completed and to only need some final testing before the demo, but there is still a lot more to do.
- While it is impossible to get completely back on schedule at this point, I am spending a lot of time on the project now and trying to get as much working on the robot as I can.
Deliverables to complete next:
- The main thing I want to complete next is integrating obstacle avoidance on the robot. Since I have already gotten the data from the RPLIDAR C1 published to a ROS topic and Andy has been working on obstacle avoidance in simulation, I'm optimistic that we can get obstacle avoidance completed before the final demo (a minimal example of reacting to /scan data is sketched after this list).
- If there is still time after implementing obstacle avoidance, I will also try to get SLAM working before the final demo. Since the RPLIDAR C1 works and provides a 360 degree scan this might be possible, but slam_toolbox has been difficult to run before and might also be hard to integrate, so I'm not sure I'll be able to complete this. The priority right now is obstacle avoidance, and then I'll see if I can get SLAM to work too (a possible slam_toolbox bringup is sketched after this list).
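To illustrate the simplest possible way a node can react to the RPLIDAR data, here is a safety-stop sketch that commands zero velocity when something is within 0.4 m of the front of the robot. A real local planner would steer around obstacles rather than stop; the topic names, the thresholds, and the assumption that the scan's zero angle points forward are all placeholders.

```python
# obstacle_stop_sketch.py -- minimal safety-stop behavior from /scan (illustrative only).
# Assumes the scan's zero angle points forward; thresholds and topics are placeholders.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class ObstacleStop(Node):
    def __init__(self):
        super().__init__('obstacle_stop_sketch')
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, scan):
        # Collect valid ranges within +/-30 degrees of straight ahead.
        front = []
        for i, r in enumerate(scan.ranges):
            angle = scan.angle_min + i * scan.angle_increment
            if abs(angle) < math.radians(30) and math.isfinite(r) and r >= scan.range_min:
                front.append(r)
        if front and min(front) < 0.4:
            self.cmd_pub.publish(Twist())   # obstacle ahead: command zero velocity
            self.get_logger().warn('Obstacle within 0.4 m, stopping')

def main():
    rclpy.init()
    rclpy.spin(ObstacleStop())

if __name__ == '__main__':
    main()
```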
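And this is roughly what a slam_toolbox bringup could look like once odometry is available. The frame and topic names below are the common ROS conventions, not values verified against our robot, so this is a sketch of the plan rather than a working configuration.

```python
# slam_launch_sketch.py -- possible slam_toolbox bringup (not yet integrated on the robot).
# Frame and topic names are the usual ROS conventions, not verified against our setup.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            parameters=[{
                'odom_frame': 'odom',       # frame our odometry is published in
                'base_frame': 'base_link',  # robot body frame
                'map_frame': 'map',
                'scan_topic': '/scan',      # RPLIDAR C1 data
            }],
        ),
    ])
```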
Final Presentation Slides
Team Status Report for November 22
The most significant risk at this point is that we might not have enough time to finish, integrate, and test all of the different subsystems. We are trying to mitigate this by spending more time on the project for the remainder of the semester. We want to have at least a somewhat working system by the end, so if need be our contingency plan is to relax some of the use case requirements so that at least some of the subsystems are working together and integrated. There has also been the ongoing difficulty of various components breaking at different points over the course of the semester.
The main design change since last week is that we will be using a Raspberry Pi 4 instead of a Raspberry Pi 5. This is because the RPi 5 broke due to a short circuit that occurred while we were setting up the motors. The switch to an RPi 4 was necessary because only RPi 4s, and not RPi 5s, were available in the ECE inventory. This also changes the powering system we will be using; the one benefit is that the RPi 4 powering system is simpler, since the RPi 4 requires less current than the RPi 5.
Jeremy’s Status Report for November 22
This week the main things I worked on were setting up the new L515 Lidar scanner and running slam_toolbox and Nav2 in simulation. I was able to get the data from the Lidar scanner displayed on my laptop. I spent a lot of time debugging the Lidar scanner software, since the current versions of the Intel RealSense software are not compatible with the L515, so I had to find and download an older version. For simulation, I was able to run a Turtlebot3 simulation in Gazebo on the Raspberry Pi in ROS2 Jazzy. I worked on setting up slam_toolbox and Nav2 in this simulation but have not finished yet, as I am still working through errors related to not having the correct reference frame for the simulated robot's odometry (the transform slam_toolbox expects is sketched below).
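For context, slam_toolbox expects an odom to base_link transform on TF in addition to the /scan topic. If nothing in the simulation is already broadcasting that transform, a small node along the lines of the sketch below can derive it from the odometry topic; the frame and topic names here are the common defaults and may not match the simulated robot exactly.

```python
# odom_tf_sketch.py -- broadcast the odom -> base_link transform slam_toolbox expects.
# Illustrative only; frame and topic names are the common defaults, not verified here.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster

class OdomTf(Node):
    def __init__(self):
        super().__init__('odom_tf_sketch')
        self.broadcaster = TransformBroadcaster(self)
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)

    def on_odom(self, msg):
        # Re-publish the odometry pose as a TF transform from 'odom' to 'base_link'.
        t = TransformStamped()
        t.header.stamp = msg.header.stamp
        t.header.frame_id = 'odom'
        t.child_frame_id = 'base_link'
        t.transform.translation.x = msg.pose.pose.position.x
        t.transform.translation.y = msg.pose.pose.position.y
        t.transform.translation.z = msg.pose.pose.position.z
        t.transform.rotation = msg.pose.pose.orientation
        self.broadcaster.sendTransform(t)

def main():
    rclpy.init()
    rclpy.spin(OdomTf())

if __name__ == '__main__':
    main()
```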
I am currently behind schedule, as by this point I had wanted to have the SLAM subsystem and path planning fully working and to only need to integrate, test, and improve them. To catch up, I plan to continue working on the project over Thanksgiving break during the time that I am on campus. By the end of next week I hope to have the SLAM and path planning subsystems fully working, with the Lidar data and odometry processed and fed into the SLAM and path planning algorithms.
Over the course of the semester, I have learned a lot about setting up a Raspberry Pi, the ROS2 development environment, and setting up a Lidar scanner. I did not know very much about any of these beforehand, so I had to learn about them to do my parts of the project. The main tools and learning strategies I used to acquire this knowledge were reading the documentation for these components, watching YouTube tutorials, and asking AI models for setup guides and debugging help. I also learned by trying different setups and methods and seeing what worked and what caused problems.
Team Status Report for November 15
The main risks and difficulties at this point are a result of hardware for the system breaking. Neither the Lidar scanner nor the thermal camera is working at this point, and we suspect that both of these hardware components are broken. An additional difficulty is that we are now out of budget. To manage this, we have ordered a replacement Lidar scanner and possible cameras we could use for detection from the ECE inventory. A further risk from the hardware difficulties is that the project is now behind schedule and we do not have much time left to complete it. To manage this we are trying to simplify the design of the project and aiming to finish as many subsystems as we can as quickly as possible (for example, the motor control and movement subsystem is almost complete).
There have been some design changes as a result of the different hardware components no longer working. We have ordered a new Intel RealSense L515 Lidar scanner from the ECE inventory, and we will be using it instead of the RPLIDAR A1. We also plan to simplify our design by using normal camera data instead of a thermal camera. We ordered two cameras from the ECE inventory which we can test to see if they work for the detection subsystem.
The original schedule is no longer achievable as a result of the hardware difficulties we encountered. We are now aiming to finish and integrate whatever we can before the final demo.
Full system testing and validation has not been a recent focus since we are not yet close to getting the full system working. Our original plan for validation was to test the robot in an environment, such as the HH1300 wing, as if it were in real use after a disaster.
Jeremy’s Status Report for November 15
At the start of this week, I was working on integrating the Lidar data with the slam_toolbox algorithm. I was having some difficulty with this since our robot has no odometry system, and slam_toolbox requires odometry data as input in addition to Lidar data. I began working on Lidar-only odometry and tried setting up rf2o_laser_odometry, but was unable to get it working.

Then, on Monday I ran into the problem that the Lidar scanner was no longer detecting any points and began reporting that all points were infinitely far away with 0 intensity. I tried many things to debug this issue. At first I thought it might be an issue of the Lidar scanner not getting enough power, but after examining and reconfiguring the powering system I decided this was not the problem. I then thought it might be a software issue created by something I had done while setting up the laser odometry system. I tried uninstalling and reinstalling all the software on the RPi related to the Lidar scanner, including ROS, but the problem remained. I also tried setting up the Lidar scanner with my Windows laptop using Slamtec's official RoboStudio software, but the Lidar scanner still did not detect any points. This led me to conclude that the issue was a hardware problem with the RPLIDAR A1. I tested the continuity of the wires connecting the Lidar scanner to the MicroUSB converter board and found that they were all connected correctly. I also tried using a different MicroUSB to USB cable. At this point I think the only possible explanations are that either the Lidar scanner itself or the board that converts its outputs to MicroUSB is broken. I emailed Slamtec's support staff but have not yet received a response.

Since we do not have the budget to buy another RPLIDAR A1, I requested the Intel RealSense L515 Lidar scanner from the ECE inventory.
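For reference, a quick way to see this symptom on the ROS side is a check along the following lines, which counts how many returns in each scan are finite; a healthy scanner should report mostly finite ranges with nonzero intensities. This is just an illustrative diagnostic, assuming the driver publishes sensor_msgs/LaserScan on /scan.

```python
# scan_health_sketch.py -- quick check of whether the Lidar is returning real points.
# Illustrative diagnostic, assuming the driver publishes sensor_msgs/LaserScan on /scan.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanHealth(Node):
    def __init__(self):
        super().__init__('scan_health_sketch')
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, scan):
        finite = sum(1 for r in scan.ranges if math.isfinite(r))
        self.get_logger().info(
            f'{finite}/{len(scan.ranges)} finite returns, '
            f'max intensity {max(scan.intensities, default=0.0):.1f}')

def main():
    rclpy.init()
    rclpy.spin(ScanHealth())

if __name__ == '__main__':
    main()
```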
My progress at this point is behind schedule as a result of the difficulties with the Lidar scanner. In fact, I have less working right now than I did last week: the Lidar scanner was working last week, and now it is broken and I spent most of my time trying to fix it. On Monday I will pick up the new Lidar scanner from the ECE inventory and will hopefully be able to catch up on my schedule. Andy and I are also finishing up the motor control system over the weekend, so Andy will be able to help me with the SLAM system next week, which should allow me to get back on schedule. By the end of next week I hope to have set up the new Lidar scanner, set up laser odometry, and integrated the results with slam_toolbox and Nav2. If I accomplish those then I will also be able to do the global path planning.
At this point I have not been able to run any tests, since the SLAM subsystem is not yet at a point where it can be tested. I am planning to test the SLAM subsystem by feeding in Lidar data and examining the generated map to determine its accuracy, for example by checking the map's dimensions against measured distances in the environment (as sketched below). I will revise the testing plan further once I am able to get some output from the SLAM subsystem.
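One simple way to start examining the generated map is to read back the occupancy grid that slam_toolbox publishes and compare its dimensions against measured distances in the real environment. The sketch below assumes the map is published on /map as a nav_msgs/OccupancyGrid (the usual ROS convention) and just reports its size and occupied-cell count; it is an illustration of the test idea, not a finished test.

```python
# map_check_sketch.py -- inspect the published occupancy grid so its dimensions can be
# compared against measured distances in the real environment. Illustrative only.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import OccupancyGrid

class MapCheck(Node):
    def __init__(self):
        super().__init__('map_check_sketch')
        # Waits for the next periodic map update from the SLAM node.
        self.create_subscription(OccupancyGrid, '/map', self.on_map, 1)

    def on_map(self, grid):
        info = grid.info
        occupied = sum(1 for c in grid.data if c > 50)   # cells the map marks as obstacles
        self.get_logger().info(
            f'map is {info.width * info.resolution:.2f} m x '
            f'{info.height * info.resolution:.2f} m at {info.resolution:.3f} m/cell, '
            f'{occupied} occupied cells')

def main():
    rclpy.init()
    rclpy.spin(MapCheck())

if __name__ == '__main__':
    main()
```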
