Soren’s Status Report for Nov. 15

I spent part of this week putting together visuals showing how the person detection techniques I have been exploring work and how they perform on online thermal imaging datasets. Unfortunately, while the hardware for the imaging and detection subsystems was working last week, the FLIR breakout board we were using stopped working earlier this week, and I have spent much of this week investigating alternatives and checking how the detection algorithms will work with a different system. Most likely we will use an inexpensive Raspberry Pi camera and use HOG for detecting people, which works just as well on visual data as on thermal data. This setback has put me somewhat behind on this portion of the project. However, I am confident that next week I will be able to hook up a new camera to our RPi and make small modifications to our HOG detection algorithm so that it works on visual data, which will put me back on track.

The tests I have been able to run so far on our detection system have used online thermal imaging datasets. Many more tests will be needed, since the available online datasets differ significantly from what our system will see when actually deployed (for instance, many of the images are taken outdoors, with people very far away). Once the hardware for this subsystem is working again, I will use it to capture video and images representative of our deployment environment and verify that the detection algorithm does not miss people who are in frame. (The algorithm currently sometimes misses people in the online dataset images who are very far away and appear small, but people on the other side of a room should be detected by our system.)

We will place people in a room in different orientations and positions with respect to the camera and confirm that the detection algorithm detects everyone in all cases. I will then go through each frame captured in a test run, flag whether or not it contains a person, and check that our detection algorithm meets the required false negative (<1%) and false positive (<50%) rates, using my flagging as ground truth. If these requirements are met, our system will be very effective and robust: it will be extremely unlikely to fail to detect someone across back-to-back frames over the whole period that person is in view of the deployed system.
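The per-frame evaluation described above can be sketched as follows. The helper and the frame counts here are hypothetical; the <1% false negative and <50% false positive thresholds come from the report:

```python
# Sketch of the flag-and-compare evaluation: ground_truth[i] is my
# hand flag for frame i (person present or not), detections[i] is
# the detector's output for that frame. The data below is made up.

def evaluate(ground_truth, detections):
    """Return (false_negative_rate, false_positive_rate)."""
    positives = [d for g, d in zip(ground_truth, detections) if g]
    negatives = [d for g, d in zip(ground_truth, detections) if not g]
    # FN rate: fraction of person-frames the detector missed.
    fn_rate = positives.count(False) / len(positives) if positives else 0.0
    # FP rate: fraction of empty frames the detector fired on.
    fp_rate = negatives.count(True) / len(negatives) if negatives else 0.0
    return fn_rate, fp_rate

# Hypothetical run: 100 frames with a person (1 missed) and
# 100 empty frames (20 spurious detections).
gt = [True] * 100 + [False] * 100
det = [True] * 99 + [False] + [False] * 80 + [True] * 20
fn, fp = evaluate(gt, det)
print(fn, fp)  # 0.01 0.2
```

Computing the two rates separately matters here because the requirements are asymmetric: a missed person (<1% allowed) is far more costly for our system than a spurious detection (<50% allowed).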
