The most significant risk at this point is that little time remains before the project is due while a lot of work remains. If unexpected difficulties arise when integrating the different systems, we may not have enough time to resolve them all. This risk is mitigated somewhat by the interim demo: we will try to have an initial version of most systems working for it, and it will give us a chance to test our work so far and gather feedback. There have been no design changes since last week and no changes to the schedule.
Jeremy’s Status Report for November 8
This week, I set up slam_toolbox and Nav2 on the Raspberry Pi and got both running in simulations on the RPi. I also connected the Lidar scanner to the RPi and got its output to display there. In addition, I worked with Andy on setting up the power system and the robot chassis: together we connected the power system to the RPi, and attached the batteries, motor controllers, and RPi to each other and to the chassis. I also wrote an initial version of a script that we can use to send instructions from the RPi to the motor controllers.
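The motor script is still in an early state, but the core idea is the standard differential-drive conversion from a desired (linear, angular) velocity to left and right wheel commands. The sketch below illustrates that math; the wheel base, maximum wheel speed, and function names here are placeholders for illustration, not values taken from our actual chassis or script.

```python
# Sketch of the differential-drive math behind the motor script.
# WHEEL_BASE and MAX_WHEEL_SPEED are assumed placeholder values,
# not measurements from our chassis.

WHEEL_BASE = 0.20        # meters between the two drive wheels (assumed)
MAX_WHEEL_SPEED = 0.50   # meters/second at 100% duty cycle (assumed)

def wheel_speeds(linear, angular):
    """Convert a (linear m/s, angular rad/s) command into left/right
    wheel speeds using the standard differential-drive equations."""
    left = linear - angular * WHEEL_BASE / 2.0
    right = linear + angular * WHEEL_BASE / 2.0
    return left, right

def duty_cycle(speed):
    """Map a wheel speed to a signed PWM duty cycle in [-100, 100],
    clamping at the motor's assumed maximum speed."""
    fraction = max(-1.0, min(1.0, speed / MAX_WHEEL_SPEED))
    return 100.0 * fraction
```

For example, a pure rotation command (linear = 0) produces equal and opposite wheel speeds, which spins the robot in place.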
At this point my progress is mostly on schedule, since I have completed my goals of setting up slam_toolbox and Nav2 on the RPi and testing the Lidar scanner. However, there is still a lot left to do. By next week I plan to connect the Lidar scanner and slam_toolbox so that the scanner's output feeds slam_toolbox, and then feed slam_toolbox's output into Nav2. I also want to complete a test of the RPi controlling the physical robot.
Team Status Report for November 1
The main risk at this point is that we have not yet tested and integrated our systems. We hope to have a mostly functional initial version of the full system working for our live demo in a bit over a week. To manage this risk, we plan to start testing and integrating our systems as soon as possible (over the weekend and early next week) so that we have time to deal with unexpected issues before the demo.
Since last week, there have been several minor design modifications to the hardware and software we will be using. We ordered an additional part for interfacing between the Raspberry Pi and the thermal camera, and a few more parts for setting up the physical robot. On the software side, we changed our SLAM and local path planning software because our previous selections were not compatible with current, maintained versions of ROS: we will now use SLAM Toolbox instead of Hector SLAM, and Nav2 instead of TEB for local path planning.
Jeremy’s Status Report for November 1
This week I worked on setting up ROS, SLAM, and the local path planning software on the Raspberry Pi. I successfully installed ROS2 Jazzy Jalisco on the Raspberry Pi. For SLAM and local path planning, I ran into the problem that our previously chosen software, Hector SLAM and TEB, is compatible only with ROS1, which has been deprecated and thus cannot be used. So, I found replacements that work with ROS2 Jazzy Jalisco: SLAM Toolbox for SLAM and Nav2 for local path planning. I have begun setting up this new software on the Raspberry Pi.
I am still a little behind schedule, since I had also wanted to connect and test the Lidar scanner with the SLAM algorithm by this point. To get back on schedule, I plan to finish setting up the new SLAM and local path planning software over the weekend and run the Lidar scanner test then as well, so that I can spend next week on the next steps. By the end of next week, in addition to setting up SLAM and local path planning and testing the Lidar scanner, I plan to finish integrating these with the robot controls so that we can do a full system test before the live demo.
Team Status Report for October 25
The most significant risk for the team at this point is that we have not implemented very much yet. We have spent most of our time on design and have not yet seen how everything will work in practice, so unexpected problems could arise. Additional risks include remaining uncertainty in the designs of the powering subsystem and the global path planning algorithm.
There was one main change to the design since the previous status report, concerning obstacle detection for local path planning. Since the Lidar scanner may sit too high off the ground to detect short obstacles, we determined that an additional sensor is needed closer to the ground for use in local path planning so the robot can avoid them. We decided on an ultrasonic sensor since it is cheap, simple to use, and effective for this task. There was enough extra money in the budget for the sensor, so the main cost is that it is one more component to set up, which we believe is manageable since it should be relatively easy to use.
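To illustrate how the ultrasonic sensor would be used, here is a sketch of the ranging math, assuming an HC-SR04-style sensor that reports a round-trip echo time. The exact part and the stopping-distance threshold below are placeholder assumptions, not finalized choices.

```python
# Rough sketch of ultrasonic ranging math, assuming an HC-SR04-style
# sensor. The sensor reports how long a sound pulse takes to reach an
# obstacle and return, so distance = time * speed_of_sound / 2.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def echo_to_distance(echo_seconds):
    """Convert a round-trip echo time to a one-way distance in meters."""
    return echo_seconds * SPEED_OF_SOUND / 2.0

def is_obstacle(echo_seconds, threshold_m=0.3):
    """Flag an obstacle if it is within an assumed stopping distance."""
    return echo_to_distance(echo_seconds) < threshold_m
```

In the local planner, a reading that flags an obstacle would mark the cells ahead of the robot as occupied so Nav2 routes around them.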
Jeremy’s Status Report for October 25
This week I worked on setting up the Raspberry Pi. I successfully installed an operating system on the Raspberry Pi and began configuring it. I am behind schedule now, since I had planned to have ROS set up on the Raspberry Pi by this point and to have downloaded and run initial versions of the SLAM and TEB algorithms from GitHub. To get back on schedule, I plan to spend more time on the project next week. It will also help that more of the parts will have arrived by then, such as the Lidar scanner and a keyboard that will make it easier to interact with the Raspberry Pi. By the end of next week, my goal is to have finished setting up ROS, SLAM, and TEB on the Raspberry Pi, and to have connected the Lidar scanner to the Raspberry Pi and used it to perform an initial test of the SLAM algorithm.
Jeremy’s Status Report for October 18
Since the last status report, the main thing I have worked on is the design of the SLAM subsystem and of path planning (both global and local). I finalized the overall design: scan matching-based SLAM similar to Hector SLAM, Dijkstra's algorithm for global path planning, and TEB for local path planning. These designs are described in detail in the Design Report, which I worked on significantly along with the rest of the team.
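As a quick illustration of the global planner choice, here is a minimal sketch of Dijkstra's algorithm on a 2D occupancy grid like the one SLAM produces (0 = free, 1 = occupied). This is the textbook algorithm the design is based on, not our actual planner code, and it reports only path length rather than the full path.

```python
# Minimal Dijkstra's algorithm on a 2D occupancy grid with unit step
# costs and 4-connected movement. Illustrative sketch only.
import heapq

def dijkstra(grid, start, goal):
    """Return the length of a shortest path from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    heap = [(0, start)]  # (distance so far, (row, col))
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None
```

With unit step costs this reduces to breadth-first search; the priority queue matters once we weight cells, for example to keep the robot away from walls.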
My progress is on schedule at the moment, since my earlier goal was to have the design completed by this point. I had also hoped to have an initial version of the software working, but that was not yet feasible since the team and I were still finishing the design and working out the details. I did, however, find GitHub repositories with code for both Hector SLAM, which we can use as a starting point for SLAM, and TEB, which we can use as a starting point for local path planning. These are listed in the Design Report.
By next week, my goal is to have these initial versions of the code running correctly on the Raspberry Pi. This will involve working on the software and setting it up in the ROS environment. Additionally, I will order the necessary hardware (mainly the Lidar scanner, plus any wires needed to interface with the RPi) and connect it to the RPi.
Design Review Report
Team Status Report for October 4
The main risk at this point is that we are still working out the details of how many of the systems will work, and we aren't yet sure what problems will arise as we try to implement our designs. For example, we are still figuring out exactly how the pathing algorithm will work and how the robot will decide where to go (this will come into focus once we have an initial version of the SLAM system completed, since we will then see exactly what the input to the pathing algorithm looks like). To manage these risks, we plan to start implementing our software on the actual Raspberry Pi so we can refine the designs as we see how they work in practice. We have ordered the Raspberry Pi and it is ready for pickup from the ECE Inventory, so we can get it first thing Monday morning.
Last week much of the design was still uncertain, but we developed it substantially for the design review presentation. There have not been any major changes since the presentation, though we are continuing to refine the design in greater detail for the design review report. There have also been no changes to the schedule since the design review presentation.
Jeremy’s Status Report for October 4
This week I worked on the design of the SLAM subsystem. After doing some research, I determined that a scan matching-based 2D Lidar SLAM method would probably be the most effective for our use case. As part of this research I read the papers A Review of 2D Lidar SLAM Research (Yan et al. 2025) and A Flexible and Scalable SLAM System with Full 3D Motion Estimation (Kohlbrecher et al. 2011). The first explains many SLAM systems, which I used to evaluate which would be most effective for our use case. The second goes into greater detail on a specific scan matching-based algorithm called Hector SLAM, which the authors also released as open-source software. I think this is a good initial design for our SLAM system, and that the open-source software can be used as a starting point for ours. I also worked on the slides for the design review presentation.
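To give a sense of the scan matching idea behind Hector SLAM: each new Lidar scan is aligned against the map built so far by searching for the transform that minimizes the mismatch. The toy sketch below brute-forces translations only; real scan matchers like Hector SLAM also estimate rotation and use gradient-based optimization against the occupancy grid, so this is purely illustrative.

```python
# Toy scan matching: find the (dx, dy) translation that best aligns a
# new scan with points already in the map. Brute-force grid search over
# translations only; illustrative, not how Hector SLAM is implemented.

def alignment_error(scan, map_points, dx, dy):
    """Average distance from each shifted scan point to its nearest
    map point (lower means better alignment)."""
    total = 0.0
    for (x, y) in scan:
        sx, sy = x + dx, y + dy
        total += min(((sx - mx) ** 2 + (sy - my) ** 2) ** 0.5
                     for (mx, my) in map_points)
    return total / len(scan)

def match_scan(scan, map_points, search=1.0, step=0.1):
    """Search translations in [-search, search] on a coarse grid and
    return the best-aligning (dx, dy) offset."""
    best = (float("inf"), 0.0, 0.0)
    steps = int(round(2 * search / step)) + 1
    for i in range(steps):
        for j in range(steps):
            dx = -search + i * step
            dy = -search + j * step
            err = alignment_error(scan, map_points, dx, dy)
            if err < best[0]:
                best = (err, dx, dy)
    return best[1], best[2]
```

The recovered offset is the robot's motion estimate between scans, which is what lets SLAM track pose without wheel odometry.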
My progress right now is on schedule. My goal was to have an initial design for the SLAM algorithm by the end of this week, which I think I have accomplished.
By next week, I hope to have completed an initial version of the software that will demonstrate that the design is feasible, and to begin working on implementing it on the Raspberry Pi. I think that this should be doable by using the open-source software for Hector SLAM as a starting point for our software for the SLAM system.
