Team Status Report for November 22

The most significant risk at this point is that we might not have enough time to finish, integrate, and test all of the different systems. We are trying to mitigate this by spending more time on the project for the remainder of the semester. We want to have at least a somewhat working system by the end, so if need be our contingency plan is to relax some of the use case requirements so that some of the systems are still working together and integrated. An additional difficulty has been that various components have broken at different points over the course of the semester.

The main design change since last week is that we will be using a Raspberry Pi 4 instead of a Raspberry Pi 5. This change was forced by the RPi 5 breaking due to a short circuit that occurred while we were setting up the motors, and the switch to an RPi 4 was necessary because only RPi 4s, not RPi 5s, were available in the ECE inventory. This also changes the powering system we will be using: since the RPi 4 requires less power than the RPi 5, its powering setup is simpler.

Jeremy’s Status Report for November 22

This week the main things I worked on were setting up the new L515 Lidar scanner and running slam_toolbox and Nav2 in simulation. I was able to get the data from the Lidar scanner displayed on my laptop. I spent a lot of time debugging the Lidar scanner software, since the current versions of the Intel RealSense software are not compatible with the L515, so I had to find and download an older version. For simulation, I was able to run a TurtleBot3 simulation in Gazebo on the Raspberry Pi in ROS2 Jazzy. I worked on setting up slam_toolbox and Nav2 in this simulation but have not yet finished, as I am still working through errors related to not having the correct reference frame for the simulated robot’s odometry.
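
For reference, the launch configuration I am iterating on looks roughly like the sketch below. It points slam_toolbox at the frames the TurtleBot3 simulation normally publishes ('odom' and 'base_footprint'); those frame names are assumptions on my part, and getting them to match the simulated robot’s TF tree is exactly the kind of thing I still need to sort out to fix the odometry reference frame errors.

```python
# slam_sim.launch.py -- a minimal sketch, not our final launch file.
# Assumes the TurtleBot3 Gazebo simulation publishes odometry in the 'odom'
# frame with the robot base at 'base_footprint' (the usual defaults).
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            name='slam_toolbox',
            output='screen',
            parameters=[{
                'use_sim_time': True,           # Gazebo provides /clock
                'odom_frame': 'odom',           # frame of the simulated odometry
                'base_frame': 'base_footprint', # robot base frame in the sim
                'map_frame': 'map',
                'scan_topic': '/scan',
            }],
        ),
    ])
```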

I am currently behind schedule: by this point I had wanted to have the SLAM and path planning subsystems fully working, leaving only integration, testing, and improvement. To catch up I plan to continue working on the project over Thanksgiving break during the time that I am on campus. By the end of next week I hope to have the SLAM and path planning subsystems fully working, with the Lidar data and odometry processed and fed into the SLAM and path planning algorithms.

Over the course of the semester, I have learned a lot about setting up a Raspberry Pi, working in the ROS2 development environment, and setting up a Lidar scanner. I did not know very much about any of these beforehand, so I had to learn about them to do my parts of the project. The main tools and learning strategies I used to acquire this knowledge were reading the documentation for these components, watching YouTube tutorial videos, and asking AI models for setup guides and debugging help. I also learned by trying different setups and methods and seeing what worked and what caused problems.

Andy’s Status Report for November 15

Last week, I focused primarily on the robot’s motor system. I successfully achieved full control of one of the motors, and since all wiring is already in place, extending this control to all four motors should be straightforward. Our next step is to test the remaining motors, which we expect to complete tomorrow.

Overall, I am on schedule, though slightly delayed due to hardware failures with the LiDAR and thermal camera, which have slowed our group’s progress. My next tasks are to help set up the new LiDAR unit and begin work on SLAM and path planning.

For verification of the motor system, I will test whether the robot can move forward smoothly and execute accurate left and right turns and rotations, as these maneuvers are essential for navigation and obstacle avoidance. I will create a test script that commands the robot to perform different kinds of maneuvers and observe whether the robot carries them out correctly. I will also measure the robot’s straight-line speed to verify that it meets the design requirement of at least 0.3 m/s.
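
As a rough sketch of what that test script might look like (assuming we keep driving the motor drivers from GPIO through gpiozero; the pin numbers below are placeholders, not our actual wiring):

```python
# motor_maneuver_test.py -- a sketch of the planned maneuver/speed test.
# The GPIO pin numbers are placeholders and must match our real wiring.
import time
from gpiozero import Robot, Motor

# Hypothetical (forward, backward) control pins for the left and right sides.
robot = Robot(left=Motor(17, 18), right=Motor(22, 23))

def run(action, seconds, speed=0.8):
    action(speed)          # e.g. robot.forward, robot.left, robot.right
    time.sleep(seconds)
    robot.stop()

# Basic maneuvers: forward, then in-place left and right rotations.
run(robot.forward, 2.0)
run(robot.left, 1.0)
run(robot.right, 1.0)

# Straight-line speed check against the >= 0.3 m/s requirement:
# mark out a known distance on the floor and time the robot over it.
distance_m = 2.0
robot.forward(1.0)
start = time.time()
input("Press Enter when the robot crosses the 2 m mark... ")
elapsed = time.time() - start
robot.stop()
print(f"approx. speed: {distance_m / elapsed:.2f} m/s (requirement: >= 0.3 m/s)")
```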

Soren’s Status Report for November 15

I spent part of this week putting together some visuals of how the person detection techniques I’ve been exploring work and how they perform on online thermal imaging datasets. Unfortunately, while the hardware components for the imaging and detection subsystems were working last week, the FLIR breakout board we were using stopped working earlier this week, and I have spent much of this week looking into alternatives and seeing how the detection algorithms will work with a different system. Most likely, we will use a cheap Raspberry Pi camera and use HOG for detecting people, which works just as well on visual data as on thermal data. Because of this setback I am somewhat behind on this portion of the project; however, I am confident that next week I will be able to hook up a new camera to our RPi and make small modifications to our HOG detection algorithms to work on visual data, so that I will be back on track.
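
To make the planned switch concrete, one common way to run a HOG person detector on regular camera frames is with OpenCV’s built-in detector, roughly as in the sketch below; the camera index and detection parameters are placeholders to tune once the new camera is hooked up.

```python
# hog_detect.py -- a minimal sketch of HOG person detection on camera frames.
# The device index and detectMultiScale parameters are assumptions to be tuned.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)            # placeholder device index for the Pi camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))
    # Returns bounding boxes (x, y, w, h) for detected people.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```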

The tests I have been able to run so far on our detection system have been on online thermal imaging datasets. Many more tests will be needed, since the available online datasets are very different from what our system will see when it is actually deployed (for instance, many of the images are taken outdoors and the people are very far away). Once I have the hardware for this subsystem working again, I will use it to capture video and images of what our system will see once it is actually deployed, and make sure that our detection algorithm does not fail to see people that are in frame in the environment (for instance, the detection algorithm right now sometimes misses people that are very far away and appear small in images from the online dataset, but people on the other side of a room should be detected by our system). We will place some people in a room in different orientations and positions with respect to the camera and make sure that the detection algorithm detects everyone in all cases. I will likely go through each of the frames that we capture in a test run, flag whether or not each one contains a person, and make sure that our detection algorithm meets the required false negative (<1%) and false positive (<50%) rates, using my flagging as the ground truth. If these test requirements are met, then our system will be very effective and robust, as it will be extremely unlikely to fail to detect someone across back-to-back frames over the period of time that the person is in view while the detection system is deployed.
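
The scoring itself should be straightforward once the frames are flagged; a sketch of the comparison I have in mind is below. The CSV file and its columns are hypothetical placeholders for however we end up recording my flags and the detector’s output.

```python
# eval_detection.py -- a sketch of scoring test-run frames against my hand flags.
# The file name and column names are hypothetical placeholders.
import csv

false_neg = false_pos = pos_frames = neg_frames = 0
with open("flagged_frames.csv") as f:           # columns: frame_id, has_person, detected
    for row in csv.DictReader(f):
        has_person = row["has_person"] == "1"   # my hand flag (ground truth)
        detected = row["detected"] == "1"       # detector's output for the frame
        if has_person:
            pos_frames += 1
            if not detected:
                false_neg += 1
        else:
            neg_frames += 1
            if detected:
                false_pos += 1

print(f"false negative rate: {false_neg / max(pos_frames, 1):.2%} (target < 1%)")
print(f"false positive rate: {false_pos / max(neg_frames, 1):.2%} (target < 50%)")
```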

Team Status Report for November 15

The main risks and difficulties at this point are the result of hardware for the systems breaking. Neither the Lidar scanner nor the thermal camera is working at this point, and we suspect that both of these hardware components are broken. An additional difficulty is that we are now out of budget. To manage this, we have ordered a replacement Lidar scanner, as well as possible cameras we could use for detection, from the ECE inventory. A further risk from the hardware difficulties is that the project is now behind schedule and we do not have much time left to complete it. To manage this we are trying to simplify the design of the project and aiming to finish as many subsystems as we can as quickly as possible (for example, the motor control and movement subsystem is almost complete).

There have been some design changes as a result of the different hardware components no longer working. We have ordered a new Intel RealSense L515 Lidar scanner from the ECE inventory, and we will be using this Lidar scanner instead of the RPLIDAR A1. We also plan to simplify our design by using normal camera data instead of a thermal camera. We ordered two cameras from the ECE inventory which we can test to see if they work for the detection subsystem.

As a result of the hardware difficulties we encountered, the original schedule is no longer achievable. We are now aiming to finish and integrate whatever we can before the final demo.

Full system testing and validation has not been considered much recently since we are not very close to getting the full system working. Our original plan for validation was to test the robot in an environment, such as the HH1300 wing, as if it were in real use after a disaster.

Jeremy’s Status Report for November 15

At the start of this week, I was working on integrating the Lidar data with the slam_toolbox algorithm. I was having some difficulty with this since there is no odometry system for our robot, and slam_toolbox requires odometry data as input in addition to Lidar data. I began working on installing Lidar-only odometry and tried setting up rf2o_laser_odometry, but was unable to get it working.

Then, on Monday I ran into the problem that the Lidar scanner was no longer detecting any points and began reporting that all points were infinitely far away with 0 intensity. I tried many things to debug this issue. At first I thought it might be an issue of the Lidar scanner not getting enough power, but after examining and reconfiguring the powering system I decided this was not the problem. I then thought it might be a software issue created by something I had done while setting up the laser odometry system. I tried uninstalling and reinstalling all the software on the RPi related to the Lidar scanner, including ROS, but the problem remained. I also tried setting up the Lidar scanner with my Windows laptop using Slamtec’s official RoboStudio software, but the Lidar scanner still did not detect any points. This led me to conclude that the issue was a hardware problem with the RPLIDAR A1. I tested the connectivity of the wires connecting the Lidar scanner to the MicroUSB converter board and found that they were all connected correctly. I also tried using a different MicroUSB to USB cable.

At this point I think the only possible explanations are that either the Lidar scanner itself or the board that converts its outputs to MicroUSB is broken. I emailed Slamtec’s support staff but have not yet received a response. Since we do not have the budget to buy another RPLIDAR A1, I requested the Intel RealSense L515 Lidar scanner from the ECE inventory.
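
For reference, the quick sanity check I kept running against the Lidar’s /scan topic while debugging looked roughly like the following; it just counts how many range readings are finite and assumes the standard sensor_msgs/LaserScan output from the RPLIDAR driver.

```python
# scan_check.py -- a rough sketch of the /scan sanity check used while debugging.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanCheck(Node):
    def __init__(self):
        super().__init__('scan_check')
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg):
        finite = [r for r in msg.ranges if math.isfinite(r)]
        max_intensity = max(msg.intensities) if msg.intensities else 0.0
        # A healthy scan has many finite ranges and nonzero intensities;
        # the broken RPLIDAR reported zero finite points and zero intensity.
        self.get_logger().info(
            f'{len(finite)}/{len(msg.ranges)} finite ranges, max intensity {max_intensity}')


rclpy.init()
rclpy.spin(ScanCheck())
```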

My progress at this point is behind schedule as a result of the difficulties with the Lidar scanner. In fact, I have less accomplished right now than I did last week, since last week the Lidar scanner was working and now it is broken, and most of my time this week went into trying to fix it. On Monday I will pick up the new Lidar scanner from the ECE inventory and will hopefully be able to catch up on my schedule. Andy and I are also finishing up the motor control system over the weekend, so Andy will be able to help me with the SLAM system next week, which should allow me to get back on schedule. By the end of next week I hope to have set up the new Lidar scanner, set up laser odometry, and integrated the results with slam_toolbox and Nav2. If I accomplish those, then I will also be able to do the global path planning.

At this point I have not been able to run any tests yet since the SLAM subsystem is not yet at a point where it can be tested. I am planning to test the SLAM subsystem by taking in Lidar data input and examining the generated map to determine its accuracy. I will revise the testing plan further once I am able to get some output from the SLAM subsystem.

Andy’s Status Report for November 8

Last week, I focused on assembling the robot, and the assembly is now mostly complete. All components have been mounted except for the thermal camera and LiDAR, though both have been tested and successfully connected to the system. The robot is now powered and the motors are operational, but I still need to verify that the control pins are correctly wired and develop the motor control script.
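
For the pin verification, I expect something as simple as toggling each suspected control pin on its own and watching which wheel responds; a sketch of that idea is below, with placeholder pin numbers since the real wiring still needs to be confirmed.

```python
# pin_check.py -- a sketch for confirming which GPIO pin drives which motor input.
# Pin numbers are placeholders; replace them with the pins we actually wired.
import time
from gpiozero import OutputDevice

candidate_pins = [17, 18, 22, 23, 24, 25]   # hypothetical control pins

for pin in candidate_pins:
    out = OutputDevice(pin)
    print(f"Driving GPIO {pin} high for 2 s -- note which wheel (if any) spins.")
    out.on()
    time.sleep(2)
    out.off()
    out.close()
```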

My goal for this week is to implement and test the motor control logic to ensure reliable operation before the interim demo. Next week, I plan to extend the script to include turning functionality and verify its performance. Although I am slightly behind schedule, I have been making steady progress and expect to catch up by dedicating a few extra hours over the next two weeks.

Soren’s Status Report for November 8

I spent most of this week working on the parts of the thermal imaging and processing subsystem other than the processing algorithm itself: the hardware setup for the thermal camera, getting it to communicate with the Raspberry Pi, collecting data from the camera, and having the subsystem display what images are being picked up. I am currently on track because essentially all of the hardware setup for this component of the system is done. Next week I plan on taking a closer look at our processing algorithm and making sure that it is optimized for the environment in which it will be deployed, given the data we are now able to collect.
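
As a rough illustration of the data collection and display side, the capture loop looks something like the sketch below; this assumes the thermal camera’s breakout board enumerates as a standard USB video device, which may not match our exact setup, and the device index is a placeholder.

```python
# thermal_view.py -- a sketch of grabbing and displaying frames on the RPi.
# Assumes the thermal camera's breakout board shows up as a USB video device.
import cv2

cap = cv2.VideoCapture(0)            # placeholder device index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Normalize to 8-bit so raw thermal values are visible on screen.
    frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
    cv2.imshow('thermal', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```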

Team Status Report for November 8

The most significant risk at this point is that there is not a lot of time left to finish the project but there is still a lot left to do. This means that if there are unexpected difficulties in integrating the different systems together, then we might not have enough time to solve all of the problems. This risk is mitigated somewhat by the interim demo: we will try to get an initial version of most systems working for it, and it will give us a chance to test and get feedback on our work so far. There have been no design changes since last week, and no changes to the schedule.

Jeremy’s Status Report for November 8

This week, I set up slam_toolbox and Nav2 on the Raspberry Pi and was able to get both to run in simulations on the RPi. I also connected the Lidar scanner to the RPi and was able to get its output to display on the RPi. In addition, I worked with Andy on setting up the powering system and the robot chassis. Together we connected the powering system to the RPi and got it working, and attached the batteries, motor controllers, and RPi to each other and to the robot chassis. I also wrote an initial version of a script that we can try to use to send instructions from the RPi to the motor controllers.
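
That initial script is essentially a thin wrapper around the GPIO pins; a simplified sketch of the idea is below. It assumes an L298N-style driver with one PWM enable pin and two direction pins per motor channel, and the pin numbers are placeholders rather than our confirmed wiring.

```python
# motor_command.py -- a simplified sketch of the initial motor command script.
# Assumes an L298N-style driver (one PWM enable pin, two direction pins per
# channel); the pin numbers are placeholders for our actual wiring.
import time
import RPi.GPIO as GPIO

EN, IN1, IN2 = 12, 20, 21          # hypothetical pins for one motor channel

GPIO.setmode(GPIO.BCM)
GPIO.setup([EN, IN1, IN2], GPIO.OUT)
pwm = GPIO.PWM(EN, 1000)           # 1 kHz PWM on the enable pin
pwm.start(0)

def drive(speed_percent):
    """Positive speed drives forward, negative backward, 0 stops."""
    GPIO.output(IN1, speed_percent > 0)
    GPIO.output(IN2, speed_percent < 0)
    pwm.ChangeDutyCycle(min(abs(speed_percent), 100))

try:
    drive(60)                      # forward at 60% duty cycle
    time.sleep(2)
    drive(0)
finally:
    pwm.stop()
    GPIO.cleanup()
```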

At this point my progress is mostly on schedule, since I have completed my goals of setting up slam_toolbox and Nav2 on the RPi and testing the Lidar scanner. However, there is still a lot more to do. By next week I plan to connect the Lidar scanner and slam_toolbox so that the Lidar output is used as the input to slam_toolbox, and then use the output of slam_toolbox as the input to Nav2. I also want to complete a test of the RPi controlling the physical robot.