Linsey’s Status Report for 3/11

I helped put together the design review report for its submission deadline and continued working on the machine learning architecture. For the report, I wrote the more general sections: the abstract, introduction, use case requirements, and related work. Much of this expanded upon our presentation, but I carefully examined the requirements, made sure to hit all the discussion points, and reviewed these sections with my teammates. In the other sections, I wrote the material pertaining to the machine learning architecture, again building off of the presentation. I had to organize which of my references pertained to which parts of the architecture and understand exactly how my part integrates with Angie's and Ayesha's. I also drew a whole-system diagram for the report. For the machine learning architecture, with Angie's help I labeled each data piece so that every sample has a target, which let me generate the train and test sets. I started testing the architecture I had written on the training data, but I am working through dimension errors. Resolving them will take some shape examination, but they hopefully shouldn't be much of a barrier to further progress.
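Dimension errors like these usually come down to tracking how each convolution shrinks the input volume. As a rough sketch (the layer sizes here are hypothetical, not our actual network), the standard output-shape formula can be used to trace a volume through a stack of 3D convolutions and spot where a mismatch arises:

```python
def conv3d_out_shape(in_shape, kernel, stride=1, padding=0):
    """Output (D, H, W) of a 3D convolution, per the standard formula:
    out = (in + 2*padding - kernel) // stride + 1 along each axis."""
    return tuple((d + 2 * padding - kernel) // stride + 1 for d in in_shape)

# Trace a made-up input volume (depth, height, width) through three layers.
shape = (32, 64, 64)
for kernel, stride in [(3, 1), (3, 2), (3, 2)]:
    shape = conv3d_out_shape(shape, kernel, stride)
    print(shape)  # (30, 62, 62) -> (14, 30, 30) -> (6, 14, 14)
```

Printing the shape after each layer like this makes it clear which layer produces a volume the next layer cannot accept.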

I am slightly behind, because I wanted to have the machine learning architecture fully training by this point. However, I took a true break for spring break, and I will be back on track after overcoming the dimension errors.

I hope to deliver the fully trained machine learning architecture by the end of this week.

Linsey’s Status Report for 2/25

At the start of this week, I finished the slides for the design review. To fully flesh out my approach, I read a few articles to understand the differences between CNN architectures. I have worked with 2D CNN architectures before and am most familiar with them. However, since the range-Doppler maps we will input are 3D, I concluded that a 3D CNN architecture would be the best fit for the task. I learned that 3D CNNs differ in that the kernel slides along all three dimensions. After pinning down my architecture, it was fairly easy to find code implementing such a network (I was unable to attach the Python file, though I wanted to). However, it's important that the parameters be relevant to the radar classification task. I found a very interesting research paper on exactly this topic, and I plan to read it and change the network's parameters accordingly (currently the network is set up for classifying CT images). While I made progress on the architecture, I struggled with generating inputs from the dataset. We have the range-azimuth and range-Doppler data, but a range-Doppler map still needs to be constructed from it. Furthermore, the label for each training example will be a cubelet outlining the human in the frame (something I didn't previously understand). Making both the heat maps and the cubelets will therefore be a big focus for me moving forward. Lastly, I drafted the introduction and use case requirements sections of the design report, and I added relevant details about the ML architecture to the architecture, design requirements, design trade studies, and test, verification, and validation sections.
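To make "the kernel slides in all three dimensions" concrete, here is a minimal single-channel, valid-mode 3D convolution in NumPy. This is purely illustrative; the real network would use a deep learning framework's built-in 3D convolution layer:

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution: the kernel slides along all
    three axes (depth, height, width), one step at a time."""
    D, H, W = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for d in range(out.shape[0]):
        for h in range(out.shape[1]):
            for w in range(out.shape[2]):
                # Element-wise product of the kernel with the current
                # 3D window, summed to a single output value.
                out[d, h, w] = np.sum(volume[d:d+kd, h:h+kh, w:w+kw] * kernel)
    return out

vol = np.random.rand(8, 16, 16)        # made-up 3D input volume
k = np.ones((3, 3, 3)) / 27.0          # simple 3x3x3 averaging kernel
print(conv3d(vol, k).shape)            # (6, 14, 14)
```

The output is smaller along every axis (in - kernel + 1), which is exactly the shrinkage that has to be accounted for when stacking 3D layers.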

My progress is behind. I was hoping to get the network training by now, but the input and label generation was much more complicated than I expected. However, as long as I can get the network training reliably by spring break, I think we are on track to leave an ample amount of time for integration.

By the end of this week, I hope to get the network training. The bulk of that work will be generating the inputs and the targets.
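As a sketch of what target generation might look like (the shapes and coordinates here are made up for illustration), a cubelet label can be represented as a binary 3D mask over the input volume, with ones inside the box bounding the human:

```python
import numpy as np

def cubelet_mask(volume_shape, corner, size):
    """Binary 3D target: ones inside the cubelet outlining the human,
    zeros everywhere else."""
    mask = np.zeros(volume_shape, dtype=np.uint8)
    d0, h0, w0 = corner
    dd, hh, ww = size
    mask[d0:d0+dd, h0:h0+hh, w0:w0+ww] = 1
    return mask

# Hypothetical example: an 8x12x12 cubelet inside a 32x64x64 volume.
target = cubelet_mask((32, 64, 64), corner=(10, 20, 20), size=(8, 12, 12))
print(target.sum())  # 8 * 12 * 12 = 1152 voxels marked as "human"
```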

 

Team Status Report for 2/18

A high-risk factor for this project is training the network to completion (a very time-consuming task) and then testing it on the radar data, only for it not to work due to differences between the training and test data. To mitigate this risk, Linsey spoke with researchers in Belgium this week to better understand the data we are training the network on. We learned that we must construct range-Doppler maps from our radar data in order to improve image resolution and successfully detect humans. By learning from these researchers, we can make our data better for the network and thus better for detecting humans. There aren't currently any contingency plans in place. Because Angie has already started collecting data using the radar and Linsey has confirmed the dataset, we will soon be able to compare our own range-Doppler maps against the dataset's maps to ensure a smooth integration process on this end.
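A range-Doppler map is conventionally built from FMCW radar IQ samples with two FFTs: one along fast time (samples within a chirp) for range, and one along slow time (across chirps) for Doppler. A minimal sketch with made-up cube dimensions and random data standing in for real captures:

```python
import numpy as np

# Hypothetical radar frame: rows are chirps (slow time),
# columns are ADC samples within a chirp (fast time).
num_chirps, num_samples = 64, 128
rng = np.random.default_rng(0)
iq = rng.standard_normal((num_chirps, num_samples)) \
     + 1j * rng.standard_normal((num_chirps, num_samples))

# Range FFT along fast time, then Doppler FFT along slow time,
# shifted so zero Doppler sits in the middle of the map.
range_fft = np.fft.fft(iq, axis=1)
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# Magnitude in dB; the small epsilon avoids log10(0).
range_doppler_map = 20 * np.log10(np.abs(doppler_fft) + 1e-12)
print(range_doppler_map.shape)  # (64, 128): Doppler bins x range bins
```

A moving human shows up as energy away from the zero-Doppler row, which is exactly why waving arms makes detection easier.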

We added a temperature sensor and speaker to our design. Since our use case is reaching areas where traditionally used infrared can't (i.e., fire SAR missions), it's extremely important that our drone attachment can withstand high temperatures: fires can measure around 600 degrees Celsius, and we know the plastic chassis and radar will start deteriorating at 125 degrees Celsius. To prevent this, our temperature sensor will alert the user through our web application when the temperature reaches 100 degrees Celsius. On the victim side of our system, the speaker will emit a loud beeping noise. By making victims aware of the device's presence, the beeping can cue them to wave their arms, which will help our system detect them more easily via the Doppler shift. The temperature warning system will make our device more user friendly and help ensure its functionality, and the beeping helps the device function better by alerting victims that it's there. The cost of the temperature sensor and speaker is very low and will have no impact on our budget.

No changes have occurred to the schedule.

To develop our design, we employed the following engineering principles: usability, ethics, and safety.

Linsey’s Status Report for 2/18

This week I planned on developing the machine learning architecture. I started by loading our chosen dataset and examining its structure. In doing so, I found several confusing points that weren't answered in the dataset's readme file. I then emailed the authors of the paper "Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach" (the paper on which we're basing our approach) and have since been corresponding with them. They pointed out that the dataset didn't include range-Doppler signatures, which we absolutely must train on to successfully detect moving humans, and they pointed me to another dataset that is much more suitable. The authors also attached four extremely helpful research papers about drones and using mmWave radar to detect humans. I read and reviewed each of them and learned two very important things. First, due to the low resolution of radar data, it will be necessary to construct range-Doppler maps to gain more information and achieve higher resolution, both of which will in turn aid in detecting humans. Second, although I knew I wanted to implement a CNN architecture, one of the papers, "Radar-camera Fusion for Road Target Classification," pointed out a high-performing CNN architecture that I would like to implement.

Because I spent this week understanding and laying out the architecture, I wasn't able to get to implementation, which puts me a week behind schedule. However, I think this time was very well spent, because my progress going forward now feels much more structured. Although I'm behind schedule, I think that by implementing the network this week, our overall progress will still be on track.

By the end of this week, I would like to get the network training. Since I am using a CNN architecture on high-dimensional training data, training will take a long time, including tuning hyperparameters, so it's very important that I start early.
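Part of why this will take so long is that every hyperparameter combination is a full training run. A toy sketch of the bookkeeping (the values below are placeholders, not our chosen grid):

```python
import itertools

# Hypothetical hyperparameter grid to sweep while tuning the network.
learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [8, 16]
kernel_sizes = [3, 5]

# Each element of the product is one complete training run.
configs = list(itertools.product(learning_rates, batch_sizes, kernel_sizes))
print(len(configs))  # 3 * 2 * 2 = 12 runs
```

Even this tiny grid means twelve end-to-end trainings, which is why starting early matters.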

Before this project, I had only been briefly exposed to radar in 18-220. To learn more about radar and its collected data, I read the four research papers recommended by the authors of the aforementioned paper. As for the ML architecture, I am a machine learning minor and have implemented my own CNNs in Intermediate Deep Learning (10-417).

Linsey’s Status Report for 2/11

This week I presented the project proposal for my group. As a group, we decided what information we wanted to convey in the presentation. Afterwards, I reviewed the slides and helped make them better for presenting: engaging and readable. During our meeting, Professor Fedder stressed that we must be able to answer "why" questions about our use case and requirements. To be prepared and to communicate our proposal effectively, I researched current search and rescue drone applications and their standards, the drone market, and different radars, specifically our mmWave radar but also other possible choices for our device, like lidar. After performing this research, I distilled what I learned into readable slides. I did my best to make our device easily imaginable by equating its size to an iPhone 12 and its weight to half a bottle of water, and by finishing our user requirements with a clear price comparison. Once the slides were polished, I practiced and timed the presentation.

Our progress is on schedule. To stay on track, I need to research ML algorithms that would best achieve our goal of human detection classification. I will perform this research and hopefully start training one architecture on our dataset, which we have already chosen.