Updated Website Introduction

Our project was to build a search and rescue robot that can explore one floor of a building and detect any people inside. The system would be used when a building needs to be evacuated and may have dangerous conditions, such as a gas leak. Using a robot to explore a floor of a hazardous building has the potential to better protect rescue workers by allowing them to avoid entering the building if no one is inside. It can also save time by searching the building while rescue workers put on the necessary safety equipment. The goal of our robot is to autonomously navigate one floor of a building and detect and notify rescue personnel of anyone inside.


We used a mecanum chassis for our robot to allow for high maneuverability and effective obstacle avoidance. A Raspberry Pi 4B runs all of the computation on the robot, such as determining the shortest path to follow, with no external input needed to control the robot. An IMU provides acceleration data so the robot can track where it is on the map and follow the planned path. A Lidar scanner measures distances to nearby surfaces so the robot can avoid obstacles. A webcam records video, which the Pi uses to detect people and notify the rescue workers. At the outset of the project we had wanted to use SLAM (Simultaneous Localization and Mapping) so the robot could determine its location and build a map of its surroundings entirely on its own, but we were not able to integrate our obstacle avoidance and path planning systems in time, so we rely on a pre-loaded map for navigation.
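
As a rough illustration of how these pieces fit together, the sketch below shows a minimal ROS 2 node that listens to the Lidar and IMU topics and publishes velocity commands. The node structure and topic names (/scan, /imu, /cmd_vel) are simplified assumptions for illustration, not the exact code running on the robot.

```python
# Illustrative sketch only: a single node that consumes the sensor topics
# described above and publishes velocity commands for the mecanum base.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan, Imu
from geometry_msgs.msg import Twist


class NavigationNode(Node):
    def __init__(self):
        super().__init__('navigation_node')
        # Lidar ranges feed obstacle avoidance; IMU data feeds odometry.
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        self.create_subscription(Imu, '/imu', self.on_imu, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_scan(self, msg: LaserScan):
        pass  # check ranges and adjust the local path if something is too close

    def on_imu(self, msg: Imu):
        pass  # integrate acceleration/orientation to track the robot's pose


def main():
    rclpy.init()
    rclpy.spin(NavigationNode())


if __name__ == '__main__':
    main()
```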

Andy’s Status Report for December 6

Since the last status report, I have completed the path planning simulation, written working path planning code that can be deployed on the robot, and fully implemented (though not yet tested) the obstacle avoidance code. We are no longer using SLAM and Nav2. We are now using a preloaded map, the A* algorithm, and dynamic path re-planning. I also worked on the final presentation slides and gave the presentation last Wednesday. For the rest of the time, I will help with further testing of path planning and fixing any problems with obstacle avoidance. If time allows, I will use Lidar data to replace the preloaded map.
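
For illustration, here is a minimal sketch of A* over a preloaded occupancy grid. The grid format (0 = free, 1 = obstacle), 4-connected moves, and Manhattan heuristic are simplifying assumptions, not necessarily the exact implementation on the robot.

```python
# Minimal A* sketch over a preloaded occupancy grid (0 = free, 1 = obstacle).
import heapq


def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]                 # heap of (f = g + h, cell)
    g = {start: 0}
    came_from = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g[cur] + 1
                if new_g < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = new_g
                    came_from[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_set, (new_g + h, (nr, nc)))
    return None  # no path found


# Dynamic re-planning then amounts to marking newly detected obstacles in the
# grid and calling astar() again from the robot's current cell.
```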

Team Status Report for December 6

The most significant risk at this point is that there is still a lot of work to do and the final demo is Monday. To mitigate this, we are spending as much time as we can on the project between now and then.

The main change to our design is that we will no longer be using the L515 Lidar scanner. This change was necessary because the L515 is no longer supported, which made it too difficult to work with and to integrate with ROS2 Jazzy. Instead, we will use the RPLIDAR C1 for obstacle avoidance (and SLAM if there is time) and a thermal camera for detection.

The last time the schedule was updated was before the final presentation. The most up-to-date schedule can be seen in the final presentation slides.

While we have not been able to test all of the systems, since some do not yet have working implementations, we have been able to conduct some tests:

  • Movement: The robot is able to move forwards, backwards, turn, and rotate. The robot's speed was estimated and is about equal to our design-requirement goal of 0.3 m/s.
  • Dimensions: The robot’s dimensions are within those specified by the use case requirements.
  • Battery life: The battery has been able to last far longer than the requirement of 20 minutes without being charged while the full system is running.
  • Path planning: The robot is able to follow a pre-planned global path and update it with local path planning based on odometry data. However, this does decrease the robot's speed, since it must continuously reorient itself to correct its direction (a rough sketch of this follow-and-reorient behavior is shown after this list).
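
As a rough sketch of the follow-and-reorient behavior described in the path planning item above (the gain, speed, and odometry source are illustrative assumptions, not our tuned values):

```python
# Sketch: steer toward the next waypoint by correcting heading from odometry.
import math


def velocity_toward(waypoint, pose, k_ang=1.5, lin_speed=0.3):
    """Return (linear, angular) velocity toward a waypoint.

    pose = (x, y, yaw) from odometry; waypoint = (x, y) on the planned path.
    """
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    heading_error = math.atan2(dy, dx) - pose[2]
    # Wrap the error into [-pi, pi] so the robot turns the short way around.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    # Slow down while reorienting, which is why re-planning reduces average speed.
    linear = lin_speed * max(0.0, math.cos(heading_error))
    angular = k_ang * heading_error
    return linear, angular
```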

Jeremy’s Status Report for December 6

Progress of things completed since last status report:

  • I finished setting up motor control from the RPi. The RPi can now move the robot forward and backward, turn it, and rotate it. Some rewiring was needed to move the motor controllers from the RPi 5 to the RPi 4, and I also had to change the orientation of the mecanum wheels since the previous setup resulted in unsteady movement.
  • I mounted the MPU6050 on the robot, soldered it so I could connect jumper wires, and connected it to the RPi. I got the MPU6050 to send IMU data to the RPi and publish the data from a ROS node to the topic /imu.
  • I set up the RPLIDAR C1 and got its data displayed on the RPi and published to the ROS topic /scan. After spending a lot of time trying to get the L515 Lidar scanner to publish data to a ROS node, I decided to pivot to the RPLIDAR C1 because there was no available software compatible with both the L515 (a no-longer-supported device) and ROS2 Jazzy (the current version of ROS).
  • I integrated the path planning, odometry, and motor control with the actual robot. I took the software that Andy had written to run path planning in simulation and modified it to work on the robot. There were some difficult bugs to solve here, such as a race condition where the path planning node would sometimes publish the path before the controller node subscribed to the ROS topic, so the controller node never received the path and failed to send velocities to the motors (one way to avoid this kind of race is sketched after this list).
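
For reference, one common way to avoid that kind of publish-before-subscribe race in ROS2 is to give the path topic transient-local ("latched") durability, so a late-joining subscriber still receives the last published message. The sketch below is illustrative (the topic name and message type are assumptions), not necessarily the exact fix used on the robot.

```python
# Sketch: "latched" QoS so a subscriber that starts late still gets the path.
from rclpy.qos import QoSProfile, DurabilityPolicy
from nav_msgs.msg import Path

latched_qos = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)

# In the path planning node:
#   self.path_pub = self.create_publisher(Path, '/planned_path', latched_qos)
# In the controller node (the subscriber must request the same durability):
#   self.create_subscription(Path, '/planned_path', self.on_path, latched_qos)
```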

Schedule update:

  • I am currently behind schedule: by this point I had wanted to have everything completed, with only some final testing left before the demo, but there is still a lot more to do.
  • While it is impossible to get completely back on schedule at this point, I am spending a lot of time on the project now and trying to get as much of the robot functioning as I can.

Deliverables to complete next:

  • The main thing I want to complete next is integrating the obstacle avoidance with the robot. Since I have already gotten the data from the RPLIDAR C1 published to a ROS node and Andy has been working on the obstacle avoidance simulation, I'm optimistic that we can get obstacle avoidance completed before the final demo (a minimal sketch of a lidar-based check is shown after this list).
  • If there is still time after implementing obstacle avoidance, I will also try to get SLAM completed before the final demo. Since the RPLIDAR C1 works and provides a 360-degree scan, this might be possible, but slam_toolbox had been difficult to run before and might also be hard to integrate, so I'm not sure if I'll be able to complete this. The priority right now is obstacle avoidance, and if there is time left I'll see if I can get SLAM to work too.
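
To give an idea of what the first step of that integration could look like, here is a minimal sketch of a lidar-based check using the /scan data that is already published; the 0.3 m stop distance and 30-degree forward arc are assumptions for illustration, not final parameters.

```python
# Sketch: stop (or trigger re-planning) when anything in the forward arc is close.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


class ObstacleStop(Node):
    def __init__(self):
        super().__init__('obstacle_stop')
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_scan(self, msg: LaserScan):
        # Look only at beams within +/- 30 degrees of straight ahead
        # (assumes angle 0 points forward on this mounting).
        for i, r in enumerate(msg.ranges):
            angle = msg.angle_min + i * msg.angle_increment
            if abs(angle) < math.radians(30) and msg.range_min < r < 0.3:
                self.cmd_pub.publish(Twist())  # zero velocity = stop
                self.get_logger().warn('Obstacle within 0.3 m, stopping')
                return


def main():
    rclpy.init()
    rclpy.spin(ObstacleStop())


if __name__ == '__main__':
    main()
```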

Soren’s Status Report for Dec. 6

This week, I collected much more data to test our detection algorithms on by recording videos with the camera on the L515 LiDAR and with the thermal camera (which I was able to connect using the new Lepton breakout board and confirm is working). Thus the detection subsystem is finally done, in time for our final demo. What remains is testing with full system integration. This is likely to go smoothly, as the data we have collected and tested the detection algorithms on up to this point is very similar to, and in the exact form of, what the robot will see in its environment when it is deployed.
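
As a purely illustrative example of the kind of thermal detection the Lepton enables (our actual detection algorithm, thresholds, and frame format may differ), one could threshold a frame for roughly human-temperature pixels and keep blobs above a minimum size:

```python
# Illustrative only: flag possible people in a thermal frame by temperature
# thresholding and blob size; not necessarily the algorithm in our subsystem.
import numpy as np
import cv2


def detect_person(frame_celsius: np.ndarray, min_area=200):
    """frame_celsius: 2D array of per-pixel temperatures in degrees C."""
    mask = ((frame_celsius > 30.0) & (frame_celsius < 40.0)).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```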