This week, I collected much more data to test our detection algorithms on, recording videos with the camera on the L515 LiDAR as well as with the thermal camera (which I was able to connect using the new Lepton breakout board and confirm is working); a sketch of the recording setup is below. With that, the detection subsystem is finally done, in time for our final demo. What remains is testing with full system integration. This is likely to go smoothly, as the data we have collected and tested the detection algorithms on up to this point is very similar to, and in exactly the form of, what the robot will see in its environment once deployed.
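For reference, recording color frames from the L515 goes through the RealSense SDK; here is a minimal sketch assuming the pyrealsense2 Python bindings, with the resolution, frame rate, and output filename as placeholders rather than our exact capture settings:

```python
import numpy as np
import pyrealsense2 as rs
import cv2

# Stream the L515's built-in color camera through the RealSense SDK.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)  # placeholder settings
pipeline.start(config)

writer = cv2.VideoWriter("test_run.avi", cv2.VideoWriter_fourcc(*"MJPG"), 30, (1280, 720))
try:
    for _ in range(300):  # roughly 10 seconds of test footage at 30 fps
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        if color:
            writer.write(np.asanyarray(color.get_data()))
finally:
    writer.release()
    pipeline.stop()
```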
Soren’s Status Report for Nov. 22
I spent this week working on our person detection system using visible light as a backup to thermal imaging (the thermal camera and the algorithms that detect people in thermal images). This meant connecting the two visible-light cameras we got from the ECE inventory this week to our Raspberry Pi, adjusting our detection algorithms to work on visual data instead of thermal data, and making sure the system would successfully detect people in the environment in which it will be deployed. I was able to get the first camera connected, but it may have problems: while it did capture pictures, and those pictures did respond somewhat to what the camera should be seeing (covering the camera made the captured image all dark, while leaving it uncovered made the image light), the pictures didn’t actually represent anything; each one was essentially a full screen of blue pixels. I’ve been working on setting up the other camera we were lent (a webcam, which should connect fairly simply over USB) with our Raspberry Pi, but I have not finished doing so.

I also worked on our visual-data detection. On Monday, Jeremy let me take some pictures of him in the lab (from approximately the robot’s point of view, i.e., from the ground), and I’ve been using these images to test whether our detection is working. Currently I’m trying to do the visual detection with HOG, as in the sketch below, but so far it is not working; if I can’t get it to work with HOG, more advanced methods (CNNs) may be necessary. Next week our new FLIR breakout board is expected to arrive, so I hope to have this subsystem done using either visual or thermal data. Either way, I expect to get a camera set up and working with our Pi, along with a detection algorithm tuned to the robot’s environment.
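This is roughly the shape of the HOG pipeline I’ve been testing, using OpenCV’s pretrained HOG + linear-SVM pedestrian detector; the image filename and the detectMultiScale parameters are placeholders, not tuned values:

```python
import cv2

# OpenCV ships a HOG descriptor with a pretrained linear SVM for pedestrians.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("lab_test_frame.jpg")  # placeholder test image

# Sliding-window detection over an image pyramid; stride/scale trade
# speed against recall and will need tuning for our viewpoint.
rects, weights = hog.detectMultiScale(
    img, winStride=(8, 8), padding=(8, 8), scale=1.05
)

for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```

One caveat I’ve come across: the default detector is trained on roughly eye-level images of upright pedestrians, so our ground-level viewpoint may itself be part of why HOG is struggling.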
As I’ve worked on this project, I’ve mostly picked up new knowledge of computer vision techniques for our person detection needs (for instance, background subtraction, edge detection, HOG, and CNNs); YouTube videos as well as course content from CMU’s computer vision course have been very helpful for this. In particular, on a subject like this, I found videos especially helpful for showing visually how each of these algorithms/techniques works. I’ve also searched online for information on OpenCV and what features are available in that library.
Soren’s Status Report for Nov. 15
I spent part of this week putting together visuals of how the person detection techniques I’ve been exploring work and how they perform on online thermal imaging datasets. Unfortunately, while the hardware components for the imaging and detection subsystems were working last week, the FLIR breakout board we were using stopped working earlier this week, and I have spent much of the week looking into alternatives and seeing how the detection algorithms would work with a different system. Most likely, we will use a cheap Raspberry Pi camera and use HOG for detecting people, which should work just as well on visual data as on thermal data. Because of this setback I am somewhat behind on this portion of the project; however, I am confident that next week I will be able to hook up a new camera to our RPi and make small modifications to our (HOG) detection algorithms so they work on visual data, which will put me back on track.
The tests I have been able to run so far on our detection system have used online thermal imaging datasets. Many more tests will be needed, as the available online datasets are very different from what our system will see when actually deployed (for instance, many of the images are taken outdoors, with people very far away). Once I have the hardware for this subsystem working again, I will use it to capture video/images of what our system will see once deployed, and make sure the detection algorithm does not fail to see people in frame in that environment (for instance, the detection algorithm currently sometimes misses people who are very far away and appear small in images from the online dataset, but people on the other side of a room should be detected by our system). We will place people in a room at different orientations/positions with respect to the camera and make sure the detection algorithm detects everyone in all cases. I will likely go through each frame captured in a test run, flag it for whether or not it contains a person, and check that our detection algorithm meets the required false negative (<1%) and false positive (<50%) rates, using my flagging as the ground truth; a sketch of this scoring is below. If these test requirements are met, our system will be very effective and robust, as it will be extremely unlikely to fail to detect someone in back-to-back frames over the period that a person is in view.
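A minimal sketch of how that scoring could work over the hand-flagged frames (the frame flags here are toy placeholder data):

```python
def detection_rates(ground_truth, predictions):
    """Per-frame false negative and false positive rates.

    ground_truth[i]: True if frame i was hand-flagged as containing a person.
    predictions[i]:  True if the detector reported a person in frame i.
    """
    fn = sum(gt and not p for gt, p in zip(ground_truth, predictions))
    fp = sum(p and not gt for gt, p in zip(ground_truth, predictions))
    n_pos = sum(ground_truth)           # frames that actually contain a person
    n_neg = len(ground_truth) - n_pos   # frames that do not
    return (fn / n_pos if n_pos else 0.0,
            fp / n_neg if n_neg else 0.0)

# Toy example with six flagged frames (placeholder data):
gt = [True, True, True, False, False, False]
pred = [True, True, False, True, False, False]
fn_rate, fp_rate = detection_rates(gt, pred)
print(f"FN rate: {fn_rate:.1%} (target <1%), FP rate: {fp_rate:.1%} (target <50%)")
```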
Soren’s Status Report for Nov. 8
I spent most of this week working on the parts of the thermal imaging and processing subsystem beyond the processing algorithm itself: the hardware setup for the thermal camera, getting it to communicate with the Raspberry Pi, collecting data from the camera, and having the subsystem display what images are getting picked up; a sketch of the capture path is below. I am currently on track, because essentially all of the hardware setup for this component of the system is done. Next week I plan to take a closer look at our processing algorithm and make sure it is optimized for the environment in which it will be deployed, given the data we are now able to collect.
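For reference, a minimal sketch of the capture path, assuming the camera is read over SPI with the pylepton library (the library choice, device path, and display scaling are assumptions, not necessarily our exact setup):

```python
import numpy as np
import cv2
from pylepton import Lepton  # assumes the groupgets pylepton library

# Grab one 80x60 raw frame from the Lepton over SPI.
with Lepton("/dev/spidev0.0") as lepton:  # placeholder SPI device path
    frame, frame_id = lepton.capture()

# Stretch the raw 14-bit values to the full 16-bit range, then keep the
# top 8 bits so the frame can be saved/displayed as ordinary grayscale.
cv2.normalize(frame, frame, 0, 65535, cv2.NORM_MINMAX)
np.right_shift(frame, 8, frame)
cv2.imwrite("thermal_frame.png", np.uint8(frame))
```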
Soren’s Status Report for Nov. 1
This week, I mostly worked on learning how to connect the thermal camera we are using to the Raspberry Pi, so that we can collect thermal imaging data representative of the environment and data our system will actually be operating on. I am hoping that next week I will have successfully connected the camera to the Pi (a module needed for this should arrive next week) and collected a good amount of data to test our vision algorithm. I am currently behind on this part of the project because I did not realize that we would need to order another module to connect the camera, but if I can finish this part of the vision component next week, I will be back on track.
Soren’s Status Report for Oct. 25
This week I continued working on, and testing, our person detection algorithm for thermal imaging data. Some time also went to the individual and group ethics assignment. I am currently on schedule; next week I hope to work on setting up the algorithm on the Raspberry Pi and connecting the thermal camera to it, as much of what we worked on this week was setting up the Raspberry Pi. For the next stage of testing and optimizing the detection algorithm, I think it will be important to get a sense of what data the thermal camera will actually pick up in practice, since the testing data available online might not represent well what our system will actually be seeing.
Soren’s Status Report for Oct. 18
I spent the past couple of weeks primarily on the design report and on finding solutions to a few of the outstanding gaps in our design. First, I decided that we should use the Raspberry Pi’s on-board WiFi to let our robot communicate with the users of our system (i.e., the rescue team). Second, I searched for a way to convert the CVBS analog video signal that comes out of the thermal camera model we’ve picked for our design into a digital form our Raspberry Pi can use for image processing, and I have found a component that should accomplish just this; a sketch of how reading it might look is below.
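Assuming the converter enumerates as a standard USB video (UVC/V4L2) device on the Pi, which is typical for such capture dongles, reading its digitized frames could look roughly like this (the device index is a placeholder):

```python
import cv2

# The CVBS-to-digital converter should appear as an ordinary V4L2/UVC device.
cap = cv2.VideoCapture(0)  # placeholder device index
if not cap.isOpened():
    raise RuntimeError("capture device not found")

ok, frame = cap.read()  # frame arrives as a BGR numpy array once digitized
if ok:
    cv2.imwrite("cvbs_frame.png", frame)
cap.release()
```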
I am slightly behind schedule on the thermal image processing portion of the project. Next week, I need to finish testing the first few approaches to thermal image processing using thermal imaging data available online, and determine whether other techniques (such as a HOG-type algorithm, as discussed in the design review report) should be used to solve this problem.
Soren’s Status Report for Oct. 4
This week I delivered our group’s design presentation and continued working on our algorithms for detecting people in infrared imaging data. Overall, I am on track on this portion of the project; next week I plan to test the accuracy of the algorithms we have so far using IR imaging datasets found online.
This week I also began thinking about how our system will represent, store, and keep track of the robot’s surroundings (some of the more detailed aspects of the path planning portion of our design). I think we are somewhat behind on this portion of the project, as it was pointed out to us in the design review that we had not considered some important details of path planning. To help move things along, it seems likely that I will take up part of this portion of the project in addition to IR data processing. Next week I will work out a detailed design/plan of how our system will take in information about its environment from the LiDAR scanner, store that information, and use it to navigate (as well as the exact policy by which it will navigate), plus what functionality we may want beyond just how the robot will explore a building’s floor/rooms; one candidate representation is sketched below.
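One standard way to store what the robot knows about its surroundings, which I’m considering as a starting point, is a 2D occupancy grid updated from LiDAR range returns. A minimal sketch, with the grid size, resolution, and scan format all assumed for illustration:

```python
import math
import numpy as np

CELL_M = 0.05                               # assumed resolution: 5 cm per cell
UNKNOWN, OCCUPIED = 0, 1
grid = np.full((400, 400), UNKNOWN, dtype=np.int8)  # 20 m x 20 m assumed map
ORIGIN = (200, 200)                         # robot starts at the grid center

def mark_scan(grid, pose, angles, ranges):
    """Mark one LiDAR scan in the grid.

    pose: (x_m, y_m, heading_rad) of the robot in world coordinates.
    angles/ranges: beam angles (robot frame, rad) and distances (m);
    inf means no return for that beam.
    """
    x, y, th = pose
    for a, r in zip(angles, ranges):
        if math.isinf(r):
            continue
        wx = x + r * math.cos(th + a)       # beam endpoint, world frame
        wy = y + r * math.sin(th + a)
        row = ORIGIN[0] + int(wy / CELL_M)
        col = ORIGIN[1] + int(wx / CELL_M)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = OCCUPIED       # an obstacle was observed here

# Example: a fake three-beam scan from the robot sitting at the origin.
mark_scan(grid, (0.0, 0.0, 0.0), [-0.1, 0.0, 0.1], [2.0, 2.1, float("inf")])
```

A grid like this also gives the navigation policy something concrete to work against; for example, frontier-based exploration picks the boundary between known and unknown cells as the next goal.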
Soren’s Status Report for Sept. 27
This week I continued learning about and working on algorithms and techniques for detecting people using IR imaging data, and looked into which specific hardware components (cameras and controllers) would be best suited for our project. Significant time this week also went into the design review presentation and slides.
We have pivoted to a different project idea, because our original idea of using a drone to search an area for people would not have been able to cover a wide enough area to make for an effective product. The new idea will still make use of detecting people via IR imaging, so this component of the project remains useful and is not one we are behind schedule on. In the next week I hope to finally place an order for an IR camera, based on what I have found out about them this week, and to test whether simpler algorithms (such as edge detection or thresholding) will be sufficient to detect people (and reject non-people) for our use case, or whether more advanced methods (CNNs, for instance) will be needed.
Soren’s Status Report for Sept. 20
The main thing I worked on this week was learning about and implementing algorithms that take thermal imaging data and tell where the sources of heat in a given image are; a sketch of the basic approach is below. For the most part I am on schedule, since I should be able to work on putting a thermal image processing algorithm on a Raspberry Pi next week, as outlined in our planned project schedule. However, quite a few more adjustments to the thermal image processing algorithm will likely be needed (in terms of the kind of data being received, and possible optimizations); I think this can be accomplished in the coming week and the week after.
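The basic approach is simple intensity thresholding: heat sources show up as bright regions in thermal images, so thresholding plus contour extraction localizes them. A minimal sketch, where the threshold value and minimum blob area are placeholders that will need tuning on real data:

```python
import cv2

# Load an 8-bit grayscale thermal image (hotter = brighter).
img = cv2.imread("thermal_sample.png", cv2.IMREAD_GRAYSCALE)

# Keep only pixels above an assumed "warm" intensity threshold.
_, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

# Each connected bright blob is a candidate heat source.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 50:  # placeholder area cutoff to reject noise
        x, y, w, h = cv2.boundingRect(c)
        print(f"heat source near ({x + w // 2}, {y + h // 2}), bbox {w}x{h}")
```

Whether bright blobs like these correspond to people rather than other warm objects is the harder question, and will need more testing.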
