Linsey’s Status Report for 4/29

This week, Ayesha and I got the temperature sensor working with the Raspberry Pi. I had never worked with Raspberry Pis before; with Angie's help, I learned how to re-image one and connect it to my local Wi-Fi. Although Ayesha and I had previously gotten the temperature sensor working with the Arduino, we wanted to ensure that it was taking accurate readings with the RPi as well. We found a guide online for our temperature sensor, got everything connected, and verified its readings against the ambient thermometer we acquired a week ago. I'll show pictures of that process in the team status report.

Additionally, Ayesha and I worked on getting the GPS sensor working with the RPi to see if Ayesha's web app was receiving data correctly. Angie had already written a script to format and send the data from the GPS to the web app, so we tested that script. We used the same step-by-step guide that Angie used, but we were not able to get the GPS to fix on a location. Therefore, Ayesha wrote a dummy script that sent fixed GPS coordinates to her web app, and we confirmed that the data was received successfully!

Alone, I worked on integrating the speaker with the RPi. There wasn't a straightforward guide for the speaker kit, but I realized that speakers are more about the hardware connection than the actual module itself, because little to no code is involved. I was working on the speakers at home, so I didn't have the soldering tools and wire cutters necessary to actually assemble them. However, I was able to copy over a ".wav" file of me saying, "Wave your arms if you are able. This will help us detect you better." I also installed the necessary audio libraries on the Pi, so once the speakers are connected properly via hardware, playing the message should be easy.

Lastly, I retrained the machine learning network with 3600 more samples Angie collected. Unfortunately, the F1 score dropped to .33. After discussing this with Angie, we concluded it's because all the new human-labeled samples featured humans deep breathing, a hard-to-detect case that I don't believe is smart for us to focus on at the moment. So, I am hopeful that with more samples of humans waving their arms, the F1 score will rise above the .5 that was our previous best.
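
For reference, a dummy sender like Ayesha's could look something like the sketch below. This is my own minimal reconstruction rather than her actual script; the endpoint URL and the payload field names are assumptions.

    import requests

    # Hypothetical endpoint and payload fields -- Ayesha's actual web app
    # may expect a different route or different field names.
    WEB_APP_URL = "http://example.com/api/gps"

    def send_fixed_coordinates():
        payload = {
            "latitude": 40.4433,    # fixed dummy coordinates
            "longitude": -79.9436,
        }
        response = requests.post(WEB_APP_URL, json=payload, timeout=5)
        response.raise_for_status()

    if __name__ == "__main__":
        send_fixed_coordinates()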
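
Once the speaker hardware is assembled, the playback side should indeed be minimal. A sketch of what I have in mind, assuming ALSA's aplay utility is available on the Pi and using a hypothetical file path:

    import subprocess

    # Hypothetical path to the recorded prompt copied onto the Pi.
    MESSAGE_WAV = "/home/pi/wave_arms.wav"

    def play_message():
        # aplay ships with ALSA on Raspberry Pi OS; check=True raises on failure.
        subprocess.run(["aplay", MESSAGE_WAV], check=True)

    if __name__ == "__main__":
        play_message()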

My progress is on schedule.

In the next week, I will get the speakers fully working. I will retrain the network once Angie gives me more data. Once I get an F1 score close to .7, I will integrate my part with Ayesha's.

Linsey’s Status Report for 4/22

This week I migrated the existing 3D CNN model to TensorFlow. I translated the PyTorch code to TensorFlow because I found TensorFlow's functionality easier to work with. Part of this was getting the shape dimensions to work, because the Gent University dataset was 166 x 127 x 195, whereas our own training data was much smaller. After this translation, I initially trained the model on the data that had been collected. The model reached a .99 validation accuracy. However, I wasn't convinced by this model, because the vast majority of the data it was trained on (the data that we had collected) didn't contain humans. Therefore, Angie collected 1800 human and 1800 no-human samples at a higher resolution. I then adjusted the shapes of the network to accommodate 128 x 32 x 8 inputs and trained the network to a .99 validation accuracy and a 1.00 F1 score. Additionally, this week the speaker and the ambient temperature thermometer arrived. I have started getting those to work with the Raspberry Pi and plan on completing that process by tomorrow. Lastly, I worked on the final presentation slides.
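
To give a sense of the reshaped network, a TensorFlow sketch is below. The filter counts, single input channel, and binary sigmoid head are my own illustrative assumptions, not necessarily the exact architecture:

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_model(input_shape=(128, 32, 8, 1)):
        """3D CNN for radar volumes; filter counts are illustrative."""
        inputs = tf.keras.Input(shape=input_shape)
        x = inputs
        for filters in (64, 64, 128, 256):
            x = layers.Conv3D(filters, kernel_size=3, padding="same",
                              activation="relu")(x)
            # padding="same" keeps the small depth axis (8) from shrinking to zero
            x = layers.MaxPool3D(pool_size=2, padding="same")(x)
            x = layers.BatchNormalization()(x)
        x = layers.GlobalAveragePooling3D()(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)  # human / no human
        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model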

My progress is on schedule.

Tomorrow, I will get the temperature sensor and speaker working with the Raspberry Pi. I will also be working with Ayesha to integrate our parts, i.e., integrate the front end with my machine learning architecture. I will test the machine learning architecture on truly unseen data, which Angie has already collected.

Linsey’s Status Report for 4/8

This week I trained the machine learning architecture on the dataset. By coding informative print statements, I gathered that the gradients were exploding during learning, causing NaNs to appear after one iteration. Therefore, I implemented gradient clipping to overcome this. Previously, this concept was something that had just been discussed in class; I had never actually implemented it before, and I was able to find helpful documentation and integrate it with the 3D convolutional architecture. Additionally, I cast the output tensor to integers to allow a few decimal places of error when comparing it against the target tensor. After training the architecture for 15 epochs, it reached a highest accuracy of 45%. Because the dataset it was running on isn't entirely representative of the data we will collect ourselves, I am not discouraged by the low metric.
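
The clipping itself is a single call between the backward pass and the optimizer step. A minimal sketch of the pattern (model, criterion, optimizer, and train_loader are assumed to already exist, and max_norm=1.0 is an illustrative value):

    import torch

    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        # Rescale gradients so their global norm never exceeds max_norm,
        # taming the exploding gradients that were producing NaNs.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()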

My progress is on schedule.

Currently, I am waiting for Angie to process the data collected by our own radar (I have contributed to its collection over the past week). Once I receive that data, I will retrain the network on our own data. Additionally, Ayesha and I will be testing the temperature sensor and speaker this week.

In terms of the tests that I have run for the machine learning architecture, I have verified that all the shapes work for the architecture to run without errors. Once I receive the processed data, I will focus on achieving our F1 score metric. This metric is not built into PyTorch, so I will be figuring out how to collect it at the same time as training. For the temperature sensor, Ayesha and I will be interacting with it through an Arduino. Using a heat gun and a comparative thermometer, we will test the sensor's accuracy so that it can correctly detect when our device is in dangerously high temperatures. For the speaker, we will just be testing that it outputs the corresponding message, like "Please wave your arms to aid in detection for rescue."
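
Since the F1 score isn't built into core PyTorch, my plan is to accumulate confusion counts during each validation pass and compute it directly. A sketch, assuming binary 0/1 prediction and label tensors:

    import torch

    def f1_score(preds: torch.Tensor, labels: torch.Tensor) -> float:
        """Binary F1 computed from 0/1 prediction and label tensors."""
        tp = ((preds == 1) & (labels == 1)).sum().item()
        fp = ((preds == 1) & (labels == 0)).sum().item()
        fn = ((preds == 0) & (labels == 1)).sum().item()
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom else 0.0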

Linsey’s Status Report for 4/1

This week I was able to successfully migrate my architecture and data to my personal AFS ECE space and make significant progress on getting the architecture to run. Previously, I struggled to find a place to run my machine learning architecture, because this course doesn't provide AWS credits and I was only able to successfully SCP to the AFS Andrew space, which truncated my data at 2 GB according to that space's quota. By changing my default home directory to the AFS ECE space and working with Samuel to SCP my data directly to the AFS ECE space, I am now able to run my architecture, which has been a huge relief. Since then, I have generated training and validation datasets. That was somewhat challenging: because the data is so high-dimensional, it not only took time, but I also had to generate the sets piecemeal, since doing it all at once allocated too much memory and threw an error.

Then, I worked on the architecture itself. By more closely examining the 3D architecture I had coded locally, inspired by https://keras.io/examples/vision/3D_image_classification/, I found a helpful research paper that found 4 repetitions of a 3D convolutional layer, max pooling, and batch normalization to be effective at extracting features from 3D data. Therefore, I set out to code this. I went back and forth between using TensorFlow and PyTorch. I feel that TensorFlow is easier to work with, but during data generation it required converting between NumPy arrays and tensors, which took up too much memory. Therefore, I have settled on creating my architecture in PyTorch, which I am not as familiar with, but I am working through the different syntax. Currently, I am working out the dimensions by hand and verifying them in code to make sure they match the input shapes.
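
A sketch of how those four repeated blocks might look in PyTorch. The channel widths and the classifier head are placeholders of my own, not the paper's exact parameters:

    import torch.nn as nn

    class Radar3DCNN(nn.Module):
        """Four repetitions of Conv3d -> MaxPool3d -> BatchNorm3d, then a classifier."""
        def __init__(self, in_channels=1, num_classes=2):
            super().__init__()
            blocks, widths = [], [in_channels, 64, 64, 128, 256]
            for c_in, c_out in zip(widths, widths[1:]):
                blocks += [
                    nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.MaxPool3d(kernel_size=2),  # halves each spatial dimension
                    nn.BatchNorm3d(c_out),
                ]
            self.features = nn.Sequential(*blocks)
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(256, num_classes)
            )

        def forward(self, x):  # x: (batch, channels, depth, height, width)
            return self.head(self.features(x))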

Due to my weekend schedule, I will be spending a significant amount of time on the machine learning architecture tomorrow rather than today. Taking that into account, I am right on schedule to have a functioning architecture outputting some metrics by the interim demo.

In the next week, I will demo a machine learning architecture that is able to perform inference on one of the pieces of training data. I will also have corresponding loss, accuracy, and F1 score metrics.

Linsey’s Status Report for 3/25

Although I spent a lot of time on the project this week, I am frustrated because I didn't make any progress. After trying many times unsuccessfully to migrate my code to AWS, I realized that for it to work I would have to use a tier that costs money, and I then learned that this course doesn't provide AWS credits. Therefore, Tamal pointed me towards the ECE machines and lab computers. I first tried the ECE machines, because I can SSH into them remotely, and I accomplished this. However, the size of the data I am working with is too large for the Andrew directories, so it is necessary to use the AFS directories. After reading the guide online, I wasn't able to successfully change my default directory. I emailed IT services; they responded with some help that didn't work for me and haven't responded to my most recent follow-up, which leaves me with the option of working in person on the lab computers. Tomorrow, I plan on trying this option.

My progress is behind. This is not at all where I planned on being. By working in the lab tomorrow, I plan on getting the architecture training and finally catching up.

This week, my deliverables will include a trained architecture and integration with at least one of the subsystems (either the frontend or the hardware).

Linsey’s Status Report for 3/18

This week I worked to migrate my code to AWS. Previously, making the training and target datasets locally was working fine. However, when I tried to concatenate those two datasets and split them into training and validation sets, it stopped working locally due to running out of memory. Therefore, I migrated everything to AWS to overcome those issues. However, this has proven much more difficult than I thought. I spent hours trying to fix the SSH pipeline in VSCode and installing the necessary packages in the AWS environment. SSH was finicky and sometimes wouldn't connect at all; fixing that took a lot of Stack Overflow and GitHub forum threads. Once I got into the AWS environment, everything was successfully copied over: the code files and the data. However, every time I tried to install PyTorch in the AWS environment, it would disconnect me. Additionally, once it logged me out of the SSH window, it wouldn't let me back in a second time. I've looked at many pages on this and am still struggling to get it to work.

My progress is behind. I am very frustrated by the AWS migration. I hope I can figure this out soon, because after that, running and training the architecture will be very simple.

By the end of the week, I hope to have successfully run the architecture on AWS and started integration with either the web application or radar.

Linsey’s Status Report for 3/11

I have helped put together the design review report for its submission deadline and have continued working on the machine learning architecture. For the report, I wrote the more general sections like the abstract, introduction, use case requirements, and related work. Much of this was expanding upon our presentation, but I carefully examined the requirements, made sure to hit all the discussion points, and reviewed all these sections with my teammates. Additionally, in the other sections, I wrote material pertaining to the machine learning architecture. Again, I built off of the presentation. I did have to organize which of my references pertained to which parts of the architecture and really understand exactly how my part integrates with Angie's and Ayesha's. I also drew out a whole system diagram for the report. For the machine learning architecture, with Angie's help I labeled each data piece so that each one has a target. Therefore, I was able to generate train and test sets. I started testing the architecture that I had written out with the training data, but I am working through dimension errors. This will take some shape examination but hopefully shouldn't be that much of a barrier to further progress.
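
My usual trick for hunting down dimension errors like these is to push a dummy tensor through the network layer by layer and print each output shape. A sketch, assuming the layers live in an nn.Sequential named features and using made-up input dimensions:

    import torch

    # model is the network under test (assumed to expose an nn.Sequential
    # called `features`); the input dimensions below are made up.
    x = torch.zeros(1, 1, 64, 32, 32)  # (batch, channel, depth, height, width)
    for layer in model.features:
        x = layer(x)
        print(f"{type(layer).__name__:>16}: {tuple(x.shape)}")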

I am slightly behind, because I wanted to have the machine learning architecture fully training at this point. However, I truly took a break for spring break and will be back on track after overcoming the dimension errors.

I hope to deliver the machine learning architecture fully trained by the end of this week.

Linsey’s Status Report for 2/25

At the start of this week, I finished up the slides for the design review. To fully flesh out my approach, I read a few articles to understand the differences between CNN architectures. I have worked with 2D CNN architectures before and am most familiar with them. However, since the range-Doppler maps that we will input are 3D, I found in the end that a 3D CNN architecture would be the most fitting for the task. I learned that 3D CNNs differ in that the kernel slides along all three dimensions, as illustrated in the sketch below. After pinning down my architecture, it was fairly easy to find code implementing such a network (I was unable to attach the Python file, but I did want to). However, it's important that the parameters are relevant to the radar classification task. I found a very interesting research paper on exactly this topic, and I plan on reading it and changing the parameters of the network accordingly (currently the network is configured for classifying CT images). While I made progress on the architecture, I struggled with generating inputs from the dataset. We have the range-azimuth and range-Doppler data, but from that a range-Doppler map needs to be constructed. Furthermore, the label for each training data piece will be a cubelet outlining the human in the frame (this is something I didn't previously understand). Therefore, making both the heat maps and the cubelets will be a big focus for me moving forward. Lastly, I drafted the introduction and use case requirements sections of the design report. I also added relevant details about the ML architecture in the architecture, design requirements, design trade studies, and test, verification, and validation sections.
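
To make the kernel-sliding difference concrete, here is a tiny illustration (written with PyTorch purely for brevity; the tensor sizes are arbitrary). A 2D convolution slides its kernel over height and width, while a 3D convolution also slides over depth:

    import torch
    import torch.nn as nn

    conv2d = nn.Conv2d(1, 8, kernel_size=3)  # kernel slides over (H, W)
    conv3d = nn.Conv3d(1, 8, kernel_size=3)  # kernel slides over (D, H, W)

    print(conv2d(torch.zeros(1, 1, 32, 32)).shape)      # torch.Size([1, 8, 30, 30])
    print(conv3d(torch.zeros(1, 1, 16, 32, 32)).shape)  # torch.Size([1, 8, 14, 30, 30])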

My progress is behind. I was hoping to get the network training by now, but the input and label generation was much more complicated than I expected. However, as long as I can get the network reliably training by spring break, I think we are on a good track for leaving an ample amount of time for integration.

By the end of this week, I hope to get the network training. The bulk of that work will be generating the inputs and the targets.

Linsey’s Status Report for 2/18

This week I planned on developing the machine learning architecture. I started by loading up our chosen dataset and examining its structure. Upon this analysis, I found several confusing points that weren't answered in the dataset's readme file. I then emailed the authors of the paper "Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach" (the paper on which we're basing our approach) and have since been corresponding with them. They pointed out that the dataset didn't include range-Doppler signatures, which we absolutely have to train on to successfully detect moving humans, and they pointed me to another dataset which is much more suitable. The authors also attached 4 extremely helpful research papers that spoke more about drones and using mmWave radar to detect humans. I read and reviewed each of those and learned two very important things. First, due to the low resolution of radar data, it will be necessary to construct range-Doppler maps to gain more information and achieve higher resolution, both of which will in turn aid in detecting humans. Second, although I knew I wanted to implement a CNN architecture, one of the papers, "Radar-camera Fusion for Road Target Classification," pointed out a high-performing CNN architecture that I would like to implement.
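
For my own reference, the standard recipe for a range-Doppler map is two FFTs over a frame of chirps. A NumPy sketch, assuming the raw frame arrives as a (num_chirps, num_samples) complex array (the exact layout from our radar may differ):

    import numpy as np

    def range_doppler_map(frame: np.ndarray) -> np.ndarray:
        """frame: (num_chirps, num_samples) complex baseband samples."""
        range_bins = np.fft.fft(frame, axis=1)         # fast-time FFT -> range
        doppler = np.fft.fft(range_bins, axis=0)       # slow-time FFT -> Doppler
        doppler = np.fft.fftshift(doppler, axes=0)     # center zero Doppler
        return 20 * np.log10(np.abs(doppler) + 1e-12)  # magnitude in dB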

Because I spent this week understanding and laying out the architecture, I wasn’t able to get to implementation. This puts me a week behind schedule. However, I think this time was very well spent, because my progress going forward now feels a lot more structured. Although I’m behind schedule, I think by implementing the network this week, our progress will still be on track.

By the end of this week, I would like to get the network training. Since I am using a CNN architecture on high-dimensional training data, this process will take a long time, including tuning hyperparameters. Therefore, it's very important I get this started early.

Before this project, I had only been exposed to radar briefly in 18-220. To learn more about radar and its collected data, I read the 4 research papers recommended by the authors of the aforementioned paper. For the ML architecture, I am a machine learning minor and have implemented my own CNNs in Intermediate Deep Learning (10-417).

Linsey’s Status Report for 2/11

This week I presented the project proposal for my group. As a group, we decided what information we wanted to convey in the presentation. Afterwards, I reviewed the slides and helped make them more engaging and readable for presenting. During our meeting, Professor Fedder stressed that we must be able to answer "why" questions about our use case and requirements. To be prepared and to communicate our proposal effectively, I researched current search and rescue drone applications and their standards, the drone market, and different radars: our mmWave radar specifically, to gain more knowledge about it, but also other possible choices for our device, like lidar. After performing this research, I distilled what I learned into readable slides. I did my best to make our device easily imaginable by equating its size with an iPhone 12 and its weight with half a bottle of water, and by finishing our user requirements with a clear price comparison. Once the slides were polished, I practiced and timed the presentation.

Our progress is on schedule.

In order to stay on track, I need to research ML algorithms that would best achieve our goal of human detection classification. I will perform this research and hopefully start training one architecture on our dataset, which we have already chosen.