Soren’s Status Report for Oct. 25

This week I continued working on, and testing, our person detection algorithm that uses thermal imaging data. Some time was also spent on the individual and group ethics assignment. I am currently on schedule. Next week I hope to set up the algorithm on the Raspberry Pi and connect the thermal camera to it, since much of this week's work went into setting up the Raspberry Pi. As the next stage of testing and optimizing the detection algorithm, I think it will be important to get a sense of what data the thermal imaging camera will actually pick up in practice; the testing data available online might not represent what our system will actually see.

Andy’s Status Report for October 25

This week, I worked with my teammates to set up the Raspberry Pi and adjusted the powering plan for our robot. Some other time was spent on the ethics assignment. Due to the problems with the previous powering plan, I am now behind schedule: according to the original plan, I should already be working on robot assembly. I plan to spend more time working on the project with my teammates. Next week, I will first work on chassis assembly and help with software tasks, and I will start robot assembly as soon as I have all the parts I need.

Team Status Report for October 25

The most significant risk for the team at this point is that we have not actually implemented very much yet. We have spent most of our time on design and have not yet seen how everything will work in practice, so unexpected problems could arise as we implement and integrate the system. Additional risks include remaining uncertainty in the designs of the powering subsystem and the global path planning algorithm.

There was one main change to the design since the previous status report, concerning obstacle detection for local path planning. Since the Lidar scanner may be mounted too high off the ground to detect short obstacles, we determined that an additional sensor was needed closer to the ground so that local path planning can avoid these obstacles. We decided to use an ultrasonic sensor, since it is cheap and simple to use while still being effective for this task. There was enough extra money in the budget for this sensor, so the most significant cost is that it is an additional component to set up, which we believe we can handle since it should be relatively easy to use.
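
The core logic for an ultrasonic sensor is simple enough to sketch now. Assuming an HC-SR04-style sensor, which reports distance through the width of an echo pulse (the round-trip travel time of a ping), the distance computation could look like the following. This is a minimal sketch; the GPIO pin handling is omitted, and the 30 cm threshold is a placeholder, not a value from our design:

```python
SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air at ~20 C

def echo_to_distance_cm(pulse_duration_s: float) -> float:
    """Convert an ultrasonic echo pulse width to a one-way distance.

    The echo pulse width is the round-trip travel time of the ping,
    so the distance to the obstacle is half the total sound path.
    """
    return pulse_duration_s * SPEED_OF_SOUND_CM_S / 2

def is_obstacle(pulse_duration_s: float, threshold_cm: float = 30.0) -> bool:
    """Flag an obstacle for local path planning if it is closer
    than the stopping threshold."""
    return echo_to_distance_cm(pulse_duration_s) < threshold_cm
```

A 1 ms echo pulse, for example, corresponds to roughly 17 cm, well inside the placeholder threshold.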

Jeremy’s Status Report for October 25

This week I worked on setting up the Raspberry Pi. I successfully downloaded an operating system to the Raspberry Pi and began setting it up. I am behind schedule now, since I had planned to have ROS set up on the Raspberry Pi by this point and also to have downloaded and run initial versions of the SLAM and TEB algorithms from GitHub on the Raspberry Pi. To get back on schedule, I plan to spend more time on the project next week. It will also help that more of the parts will have arrived by then, such as the Lidar scanner and a keyboard, which will make it easier to interact with the Raspberry Pi. By the end of next week, my goal is to have finished setting up ROS, SLAM, and TEB on the Raspberry Pi, and also to have connected the Lidar scanner to the Raspberry Pi and used it to perform an initial test of the SLAM algorithm.

Team Status Report for October 18

There have been many minor changes to the design since the previous status report as we have worked on it further and added more details. The current design is described in detail in the Design Report. There have not been any changes to the design since the Design Report was completed.

One main risk we face now is the powering plan. The current plan is not guaranteed to work, as none of us is an expert in circuits and batteries. We will work on this together later to resolve the problem.

Another risk right now is that we have not yet implemented or integrated the system, so we don't yet know what problems we'll run into as we do. To mitigate this risk, we plan to order the parts early this week and start connecting everything to the Raspberry Pi and the chassis. This way we will discover any implementation problems sooner rather than later.

Part A: Our design considers global factors in that our search and rescue robot is designed to work on a floor of any building. We have made the robot versatile: it can discover the layout of a building on its own and does not assume layouts typical of buildings found locally. Additionally, our design does not rely on a floor plan of the building being available ahead of time, since for many buildings this may not be the case. As a result, the robot should be useful for buildings with a variety of architectures across the world, making it helpful for urban search and rescue in any city.

Part B: Our design takes cultural and ethical factors seriously, especially when it comes to saving lives. In rescue situations, people naturally expect technology to be dependable, and it can be hard to accept if someone isn't saved because of an error in a new system. That's why we focus on making our robot highly reliable, with a very low false negative rate, as it should never miss a person in need. The robot is also meant to reduce the risks faced by rescue workers by handling dangerous tasks on their behalf. In this way, our project supports the shared belief that the lives of rescue workers are as important as the lives of those who need help.

Part C: Our design addresses the environmental concern of keeping humans safe from dangers that may appear in a building setting. Our project is also committed to a low-energy-consumption solution to search and rescue inside a building. While this is intended primarily to make our system as effective as possible, by letting it explore as much building area as possible without being limited by power, it also has the secondary effect of minimizing the use of environmental resources, i.e. power. Finally, by allowing a rescue team to quickly locate and evacuate people at a disaster site, our system could also speed up containment of the site to prevent dangerous substances from polluting or destroying the surrounding environment (such as in a gas leak or building fire), thereby mitigating the environmental damage such a scenario could bring.

A was written by Jeremy; B was written by Andy; C was written by Soren.

 

Jeremy’s Status Report for October 18

Since the last status report, the main thing I have worked on is the design of the SLAM subsystem and of path planning (both global and local). I finalized the overall design, with scan-matching-based SLAM similar to Hector SLAM, Dijkstra's algorithm for global path planning, and TEB for local path planning. These designs are described in detail in the Design Report, which I worked on significantly along with the rest of the team.
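
To make the global path planning choice concrete, here is a minimal sketch of Dijkstra's algorithm on a 2D occupancy grid with 4-connected, unit-cost moves. This is purely illustrative; our actual implementation will work on the map produced by SLAM, and the grid format here is an assumption:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 2D occupancy grid (0 = free, 1 = blocked)
    using Dijkstra's algorithm with 4-connected moves of unit cost.
    Returns the path as a list of (row, col) cells, or None if the
    goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None  # unreachable
    path, cell = [goal], goal
    while cell != start:  # walk predecessors back to the start
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```

With uniform move costs this reduces to breadth-first search, but the priority-queue form extends directly to weighted costs (e.g. penalizing cells near obstacles).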

My progress is on schedule at the moment; the goal from before was to have completed the design by this point. I had also wanted to have an initial version of the software working by now, but that was not feasible since the team and I were still finishing the design and figuring out the details. I did, however, find GitHub repositories with code for Hector SLAM, which we can use as a starting point for SLAM, and for TEB, which we can use as a starting point for local path planning. These are listed in the Design Report.

By next week, my goal is to have these initial versions of the code running correctly on the Raspberry Pi. This will involve working on the software and setting it up in the ROS environment. Additionally, I will order the necessary hardware for these (mainly just the Lidar scanner, plus any wires needed to interface with the RPi) and have this connected to the RPi.
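
One small piece of glue code that an initial Lidar-to-SLAM test will need is converting a raw scan (range readings at evenly spaced bearings, as in a ROS LaserScan message) into Cartesian points in the robot frame. A minimal sketch, with the parameter names assumed rather than taken from our design:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, max_range):
    """Convert a lidar scan (list of range readings) into (x, y)
    points in the robot frame, dropping invalid or out-of-range
    returns. Reading i is at bearing angle_min + i * angle_increment."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r < max_range:
            theta = angle_min + i * angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

A point cloud in this form is what a scan-matching SLAM front end aligns against the map from one scan to the next.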

Soren’s Status Report for Oct. 18

I spent the past couple of weeks primarily on the design report and on finding solutions to a few outstanding gaps in our design. First, I decided that we should use the Raspberry Pi's on-board WiFi to let our robot communicate with the users of our system (i.e. the rescue team). Second, I searched for a way to convert the CVBS analog video signal that comes out of the thermal camera model we've picked into a digital form that the Raspberry Pi can use for image processing. I have found a component that should accomplish just this.

I am slightly behind schedule on the thermal image processing portion of the project. Next week, I need to finish testing the first few thermal image processing approaches we are trying, using thermal imaging data available online, and determine whether other techniques (such as a HOG-type algorithm, as discussed in the design review report) should be used to solve this problem.
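
Since a HOG-type algorithm is one of the candidates, here is a minimal sketch of its core building block: a magnitude-weighted histogram of gradient orientations over an intensity patch. This is purely illustrative (a real detector would compute per-cell histograms with block normalization, likely via a library such as OpenCV) and operates on a plain 2D list of intensities:

```python
import math

def gradient_orientation_histogram(image, n_bins=9):
    """Magnitude-weighted histogram of gradient orientations (0-180
    degrees) over a 2D intensity array -- the basic building block of
    a HOG descriptor. Uses central differences on interior pixels."""
    hist = [0.0] * n_bins
    bin_width = 180.0 / n_bins
    rows, cols = len(image), len(image[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = image[r][c + 1] - image[r][c - 1]
            gy = image[r + 1][c] - image[r - 1][c]
            magnitude = math.hypot(gx, gy)
            # Fold orientation into [0, 180) -- gradient sign is ignored.
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            bin_index = min(int(angle / bin_width), n_bins - 1)
            hist[bin_index] += magnitude
    return hist
```

For a thermal image, a warm person against a cooler background produces strong, characteristically oriented gradients, which is what makes this kind of descriptor a plausible fit.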

Andy’s Report for October 18

For the week before fall break, I completed the design for the robot base and computation unit and finalized the design report with my teammates. I have worked out most of the technical details, including the wiring plan and the control program for the mecanum wheels. These details are documented in the design report, along with justifications for them.
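
The heart of a mecanum wheel control program is the inverse kinematics mapping a body velocity command to the four wheel speeds. A minimal sketch under one common sign convention (an X-configuration base; the axis conventions and geometry parameters here are assumptions, not values from our design report):

```python
def mecanum_wheel_speeds(vx, vy, wz, lx, ly, wheel_radius):
    """Inverse kinematics for an X-configuration mecanum base.

    Maps a body velocity command (vx forward, vy left, wz yaw rate,
    counterclockwise positive) to wheel angular speeds, returned as
    (front_left, front_right, rear_left, rear_right).
    lx and ly are half the wheelbase and half the track width."""
    k = lx + ly
    fl = (vx - vy - k * wz) / wheel_radius
    fr = (vx + vy + k * wz) / wheel_radius
    rl = (vx + vy - k * wz) / wheel_radius
    rr = (vx - vy + k * wz) / wheel_radius
    return fl, fr, rl, rr
```

Driving straight forward spins all four wheels equally, while a pure sideways command spins diagonal pairs in opposite directions, which is what lets a mecanum base strafe.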

The only remaining issue is the powering plan. While the current design should work in terms of voltage requirements, I am not fully confident about its feasibility since I have limited experience with circuits and batteries. I plan to work with my teammates to refine and verify the powering plan in the coming weeks.

Overall, I am on schedule, with only a slight delay caused by the power system uncertainty. It should be easy to catch up once the issue is resolved. My next steps are to assist in implementing the SLAM and path planning algorithms and to begin developing the control program. Completing these components will help us determine the final configuration of the computing unit as soon as possible.

Soren’s Status Report for 10/4

This week I delivered our group's design presentation and continued working on our algorithms for detecting people in infrared imaging data. I am on track on this portion of the project; next week I plan to test the accuracy of the algorithms we have so far using IR imaging datasets available online.

This week I also began thinking about how our system will represent, store, and keep track of the robot's surroundings (some of the more detailed aspects of the path planning portion of our design). I think we are somewhat behind on this portion, as it was pointed out in the design review that we had not considered some important details of path planning. To help move things along, it seems likely that I will take up part of this portion of the project in addition to IR data processing. Next week I will work on a detailed design of how our system will take in information about its environment from the Lidar scanner, store that information, and use it to navigate (including the exact policy by which it will navigate), as well as what additional functionality we may want here beyond just exploring a building's floor and rooms.
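
One standard way to store what the robot knows about its environment, which we may well end up adopting, is an occupancy grid updated in log-odds form, where each Lidar observation nudges a cell toward occupied or free. A minimal sketch, with the hit/miss probabilities chosen purely for illustration:

```python
import math

class OccupancyGrid:
    """Minimal occupancy grid storing per-cell log-odds of occupancy.

    Positive log-odds means probably occupied, negative means probably
    free, and zero means unknown. Log-odds makes each sensor update a
    simple addition instead of a Bayes-rule multiplication."""

    def __init__(self, rows, cols, p_hit=0.9, p_miss=0.4):
        self.log_odds = [[0.0] * cols for _ in range(rows)]
        self.l_hit = math.log(p_hit / (1 - p_hit))     # added on a hit
        self.l_miss = math.log(p_miss / (1 - p_miss))  # added on a miss

    def update(self, r, c, occupied):
        """Fold one observation of cell (r, c) into the map."""
        self.log_odds[r][c] += self.l_hit if occupied else self.l_miss

    def probability(self, r, c):
        """Recover the occupancy probability from the log-odds value."""
        return 1.0 / (1.0 + math.exp(-self.log_odds[r][c]))
```

A grid like this would serve both SLAM (as the map being built) and path planning (as the obstacle map that Dijkstra's algorithm and TEB plan against), which is part of why it seems like a natural choice to evaluate.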