Soren’s Status Report for Oct. 18

I spent the past couple of weeks primarily on the design report and on finding solutions to a few of the outstanding gaps in our design. First, I decided that we should use the Raspberry Pi's on-board WiFi to allow our robot to communicate with the users (i.e., the rescue team) of our system. Second, I searched for a way to convert the CVBS analog video signal that comes out of the thermal camera model we have picked out into a digital form that our Raspberry Pi can use for image processing, and I have found a component that should accomplish just this.

I am slightly behind schedule on the thermal image processing portion of the project. Next week, I need to finish testing the first few thermal image processing approaches we are trying, using thermal imaging data available online, and determine whether other techniques (such as a HOG-type algorithm, as discussed in the design review report) should be used to solve this problem.
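To make the HOG-type idea concrete, here is a minimal, numpy-only sketch of computing simplified HOG-style features from a thermal frame. This is not our actual pipeline: real HOG normalizes over overlapping blocks rather than single cells, and the cell size and bin count here are arbitrary illustration values.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG-style features: per-cell histograms of gradient
    orientations, weighted by gradient magnitude (per-cell normalization
    instead of true block normalization)."""
    gy, gx = np.gradient(img.astype(float))       # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180    # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    bin_w = 180.0 / bins
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a // bin_w).astype(int) % bins
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()
    # normalize each cell's histogram (simplification of block norm)
    norm = np.linalg.norm(hist, axis=2, keepdims=True) + 1e-9
    return (hist / norm).reshape(-1)
```

The resulting feature vector would then be fed to a classifier (in the classic HOG pipeline, a linear SVM) to decide person vs. not-person.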

Andy’s Report for October 18

For the week before fall break, I completed the design for the robot base and computation unit and finalized the design report with my teammates. I have worked out most of the technical details, including the wiring plan and the control program for the mecanum wheels. These details are documented in the design report, along with thorough justifications for them.
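As a small sketch of the control math behind the mecanum wheels, this is the standard inverse-kinematics mapping from a desired body velocity to the four wheel angular speeds. The geometry parameters (lx, ly, r) are placeholder values, not our robot's actual dimensions:

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.1, ly=0.1, r=0.03):
    """Map a body velocity (vx forward, vy left, wz yaw rate) to
    wheel angular speeds [FL, FR, RL, RR] in rad/s.

    lx, ly: half the wheelbase / half the track width (m);
    r: wheel radius (m). Standard mecanum inverse kinematics
    for an X-configuration of the rollers.
    """
    k = lx + ly
    fl = (vx - vy - k * wz) / r   # front-left
    fr = (vx + vy + k * wz) / r   # front-right
    rl = (vx + vy - k * wz) / r   # rear-left
    rr = (vx - vy + k * wz) / r   # rear-right
    return [fl, fr, rl, rr]
```

A quick sanity check: pure forward motion drives all four wheels equally, pure strafing drives diagonal pairs together, and pure rotation drives the left and right sides in opposite directions.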

The only remaining issue is the power plan. While the current design should meet the voltage requirements, I am not fully confident in its feasibility, since I have limited experience with circuits and batteries. I plan to work with my teammates to refine and verify the power plan in the coming weeks.

Overall, I am roughly on schedule, with only a slight delay caused by the uncertainty in the power system; it should be easy to catch up once that issue is resolved. My next steps are to assist in implementing the SLAM and path planning algorithms and to begin developing the control program. Completing these components will help us finalize the configuration of the computing unit as soon as possible.

Soren’s Status Report for 10/4

This week I delivered our group's design presentation and continued working on our algorithms for detecting people in infrared imaging data. I am on track on this portion of the project; next week I plan to test the accuracy of the algorithms we have so far using IR imaging datasets found online.

This week I also began thinking about how our system will represent, store, and keep track of the robot's surroundings (some of the more detailed aspects of the path planning portion of our design). I think we are somewhat behind here, as the design review pointed out that we had not considered some important details of path planning. To help move things along, I will likely take up part of this work in addition to IR data processing. Next week I will produce a detailed design of how our system will take in information from the Lidar scanner, store what it learns about its environment, and use that information to navigate (including the exact policy by which it will explore a building's floors and rooms), as well as what additional functionality we may want in this portion.
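One common way to store what the robot knows about its surroundings is a 2D occupancy grid updated from each Lidar scan. Below is a deliberately simplified sketch of that idea (the grid resolution, the coarse ray sampling, and the hard 0/0.5/1 cell values are all illustration choices, not our design; real systems use log-odds updates and Bresenham ray tracing):

```python
import numpy as np

def update_grid(grid, pose, angles, ranges, res=0.1):
    """Write one Lidar scan into a 2D occupancy grid.

    grid: 2D array, 0.5 = unknown, 0 = free, 1 = occupied.
    pose: (x, y, theta) of the robot in metres / radians.
    Cells along each beam are marked free; the cell at the
    beam endpoint is marked occupied.
    """
    x, y, th = pose

    def cell(px, py):
        return int(px // res), int(py // res)

    def in_bounds(i, j):
        return 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]

    for a, r in zip(angles, ranges):
        ex = x + r * np.cos(th + a)
        ey = y + r * np.sin(th + a)
        # coarse sampling along the beam marks traversed cells free
        for t in np.linspace(0.0, 1.0, max(int(r / res), 1), endpoint=False):
            i, j = cell(x + t * (ex - x), y + t * (ey - y))
            if in_bounds(i, j):
                grid[i, j] = 0.0
        i, j = cell(ex, ey)
        if in_bounds(i, j):
            grid[i, j] = 1.0
    return grid
```

A representation like this would also be the natural input to the navigation policy, since frontier cells (free cells bordering unknown ones) mark where exploration should continue.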

Andy’s Status Report for October 4

This week, I worked with my teammates on the design review presentation and continued developing the vehicle design. After evaluating different options, I decided that a chassis equipped with mecanum wheels would be the best foundation for our vehicle, as it should provide strong mobility and maneuverability. Overall, I am on schedule.

My next step is to explore the motor controller and control algorithms, focusing on how to make the robot turn or rotate effectively. By next week, I plan to finalize most of the design, order the required parts, and complete the design report.

Additionally, I looked into another robot platform called iRobot Create. It is a highly programmable and capable system, but it is no longer widely available. The inventory currently lists iRobot Vacuum models, so I plan to investigate whether they can serve as a substitute. If they are suitable, I may adjust our vehicle design accordingly.

Team Status Report for October 4

The main risk at this point is that we are still working out the details of how several of the systems will work, and we aren't yet sure what problems will arise as we try to implement our designs. For example, we are still figuring out exactly how the pathing algorithm will work and how the robot will decide where to go (though this will come into focus once we have an initial version of the SLAM system completed, so we can see exactly what the input to the pathing algorithm will look like). To manage these risks, we plan to start implementing our software on the actual Raspberry Pi so we can refine the designs as we see how they will be implemented. We have ordered the Raspberry Pi and it is ready for pickup from the ECE Inventory, so we can get it first thing Monday morning.
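Once the SLAM system produces a map, one simple baseline for the pathing algorithm is breadth-first search over free cells of an occupancy grid, which returns a shortest 4-connected path. This is only a sketch of the idea (the 0 = free / 1 = occupied encoding is an assumption for illustration, not our finalized map format):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 = free, 1 = occupied.
    start/goal are (row, col) tuples; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:           # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cur
                q.append(nxt)
    return None                   # goal unreachable
```

On the real robot this would likely be replaced by A* or a cost-aware planner, but BFS gives us a correct reference implementation to test the rest of the stack against.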

Last week much of the design was still uncertain, but we fleshed it out considerably for the design review presentation. There have been no major changes to the design since then, though we are still refining it and adding detail for the design review report. There have also been no changes to the schedule since the presentation.

Jeremy’s Status Report for October 4

This week I worked on the design for the SLAM subsystem. After doing some research, I determined that a scan matching-based 2D Lidar SLAM method would probably be the most effective for our use case. I read the papers A Review of 2D Lidar SLAM Research (Yan et al. 2025) and A Flexible and Scalable SLAM System with Full 3D Motion Estimation (Kohlbrecher et al. 2011) as part of this research. The first explained many SLAM systems, which I used to evaluate which would be most effective for our use case. The second went into greater detail on a specific scan matching-based algorithm called Hector SLAM, which is also available as open-source software. I think that this is a good initial design for our SLAM system, and that the open-source software can be used as a starting point for ours. I also worked on the slides for the design review presentation.
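To make the scan-matching idea concrete, here is a toy sketch of the core loop: score a candidate pose by how many beam endpoints land on occupied map cells, then greedily hill-climb over small pose perturbations. Hector SLAM itself uses Gauss-Newton optimization over a bilinearly interpolated map gradient; this brute-force version only illustrates the principle, and the step sizes and resolution are arbitrary.

```python
import numpy as np

def match_score(grid, pose, angles, ranges, res=0.1):
    """Count beam endpoints that land on occupied map cells."""
    x, y, th = pose
    ex = x + ranges * np.cos(th + angles)
    ey = y + ranges * np.sin(th + angles)
    i = np.floor(ex / res).astype(int)
    j = np.floor(ey / res).astype(int)
    ok = (i >= 0) & (i < grid.shape[0]) & (j >= 0) & (j < grid.shape[1])
    return grid[i[ok], j[ok]].sum()

def align(grid, pose0, angles, ranges, steps=(0.1, 0.1, 0.05), iters=20):
    """Greedy hill climb on the match score around an initial pose guess."""
    pose = np.asarray(pose0, dtype=float)
    best = match_score(grid, pose, angles, ranges)
    for _ in range(iters):
        improved = False
        for k in range(3):                       # x, y, theta
            for s in (steps[k], -steps[k]):
                cand = pose.copy()
                cand[k] += s
                score = match_score(grid, cand, angles, ranges)
                if score > best:
                    pose, best, improved = cand, score, True
        if not improved:                         # local optimum reached
            break
    return pose
```

The real algorithm converges much faster and to sub-cell accuracy, but the objective (make the scan agree with the map) is the same.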

My progress right now is on schedule. My goal was to have an initial design for the SLAM algorithm by the end of this week, which I think I have accomplished.

By next week, I hope to have completed an initial version of the software that will demonstrate that the design is feasible, and to begin working on implementing it on the Raspberry Pi. I think that this should be doable by using the open-source software for Hector SLAM as a starting point for our software for the SLAM system.

Andy’s Status Report for September 27

At the beginning of this week, I looked deeper into drone design and found that the requirements were more complex than expected. Beyond the standard components, we also needed ESCs, a flight controller, a gyroscope, and an altimeter. These essential parts alone would consume over half of our budget, and adding a USB camera and a high-resolution thermal camera would push us well beyond our limits. Because of this, we decided to pivot away from drones.

I then explored the idea of indoor search-and-rescue drones. This use case was more reasonable since limiting operations to indoor spaces removes the challenge of long-range control. However, I still faced difficulties with the RC control aspects of drone design.

After meeting with our instructors, we formally decided to switch from drones to indoor ground vehicles. I began researching suitable platforms and initially considered the iRobot Create, but since it is no longer widely available, I shifted focus to off-the-shelf robot car kits, such as the ELEGOO UNO. While these kits are basic, they provide a reliable chassis we can expand upon without losing time building a vehicle from scratch. This means we can put more time into algorithm design, which we are better at.

Next, I will focus on how to handle vehicle rotation and how to integrate Lidar into the system. Although the project change has set me slightly behind schedule, I am confident I can quickly catch up with this more practical and achievable direction.

Soren’s Status Report for September 27

This week I continued learning about and working on algorithms and techniques for detecting people using IR imaging data, and looked into which specific hardware components (cameras and controllers) would be best suited for our project. Significant time this week also went into the design review presentation and slides.

We have pivoted to a different project idea, since our original plan of using a drone to search an area for people would not have been able to cover a wide enough area to make for an effective product. The new idea still makes use of detecting people in IR imaging, so this component of the project remains useful and is not one we are behind schedule on. In the next week I hope to finally place an order for an IR camera based on what I have learned this week, and to test whether simpler algorithms (such as edge detection or thresholding) will be sufficient to detect people (and reject non-people) for our use case, or whether more advanced methods (CNNs, for instance) will be needed.
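A minimal sketch of the thresholding approach: threshold the frame near human body temperature, then group hot pixels into 4-connected blobs and reject blobs too small to be a person. The temperature threshold and minimum area below are placeholder values for illustration, not calibrated numbers from our design:

```python
import numpy as np

def detect_warm_blobs(ir, thresh=35.0, min_area=20):
    """Return connected groups of pixels warmer than `thresh`.

    ir: 2D array of temperatures (or raw intensities).
    Blobs smaller than `min_area` pixels are discarded as noise.
    """
    hot = ir > thresh
    seen = np.zeros_like(hot, dtype=bool)
    blobs = []
    for i, j in zip(*np.nonzero(hot)):
        if seen[i, j]:
            continue
        # iterative flood fill over 4-connected hot neighbours
        stack, cells = [(i, j)], []
        seen[i, j] = True
        while stack:
            r, c = stack.pop()
            cells.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < hot.shape[0] and 0 <= nc < hot.shape[1]
                        and hot[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    stack.append((nr, nc))
        if len(cells) >= min_area:
            blobs.append(cells)
    return blobs
```

Testing on online IR datasets should tell us quickly whether this kind of baseline produces too many false positives (hot radiators, sunlit windows), which is exactly the case where we would escalate to HOG-style features or a CNN.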