Team Status Report for 4/29

A significant risk is that the ML architecture, upon retraining, will not reach an F1 score of .7. However, the last time we added data, the F1 score jumped by .17. Our current best F1 score is .5, so we hope that at the very least we can reach an F1 score of .67. Another risk is that the hardware doesn't integrate well with the RPi. However, the temperature sensor is fully integrated, the GPS sensor is integrated, and the speaker is on its way to integration.

No changes were made to the system design, but we narrowed our application scope for the demo: instead of using the IMU sensor data to calculate when a radar frame should have inference performed on it, we will use a key press to start inference. This has no impact on the cost.

Here are pictures of Ayesha and me testing the temperature sensor connected to the RPi. We used a hairdryer to raise the temperature and watched the ambient thermometer and temperature sensor readings increase accordingly.

Testing Details:

  • Machine learning architecture: We unit tested on 600 held-out test samples (collected across diverse scenes, roughly half with humans and half without), measuring the resulting F1 score and accuracy. We also recorded the network's inference time (see the sketch after this list).
  • Hardware
    • Temperature sensor: We connected it to the RPi and observed the sensor's output. By comparing the sensor's room-temperature readings against the ambient thermometer's readings, we confirmed that the sensor works at baseline. Using the hairdryer, we saw the readings on both the temperature sensor and the ambient thermometer increase.
    • Radar: We tested real-time data acquisition at 5Hz on the laptop and connected it to the RPi, but have not tested real-time data acquisition over WiFi yet.
    • GPS/IMU sensor: We connected it to the RPi, logged 4927 locations, and compared them to the actual stationary location. The location data is precise enough for our updated use case, with a standard deviation of 1.5 m, but the reported location is off by roughly 20 miles, so compensation is required to output an acceptable location.
  • Web application: We measured that the website updates in ~100 ms. Through HTTP requests, we also confirmed that the web application is able to receive formatted data.
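
As a reference for how we collect the machine learning numbers above, the evaluation looks roughly like the sketch below. It assumes the current TensorFlow model and the 600 test samples are already loaded, uses scikit-learn for the metrics, and model, X_test, and y_test are placeholder names rather than our exact script.

    # Evaluation sketch: accuracy, F1, and per-sample inference time on the
    # held-out test set (placeholder variable names).
    import time
    from sklearn.metrics import accuracy_score, f1_score

    start = time.perf_counter()
    probs = model.predict(X_test, verbose=0)    # forward pass over all test samples
    elapsed = time.perf_counter() - start

    preds = (probs > 0.5).astype(int).ravel()   # binary human / no-human decision
    print("Accuracy:", accuracy_score(y_test, preds))
    print("F1 score:", f1_score(y_test, preds))
    print(f"Inference time per sample: {1000 * elapsed / len(X_test):.1f} ms")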

Findings:

  • The GPS is currently ~20 miles off, so we may need to apply an offset to get accurate readings from the sensor (a rough sketch of such a correction is below).
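
If the error turns out to be a roughly constant offset, one simple correction is to calibrate against a known reference point and add that difference to every subsequent reading. This is only a sketch; the coordinates and function names are placeholders.

    # Hypothetical constant-offset correction for the GPS readings.
    KNOWN_LAT, KNOWN_LON = 40.4433, -79.9436        # surveyed location of the stationary test (placeholder)

    def calibrate(raw_lat, raw_lon):
        """Offset between one raw reading and the known reference point."""
        return KNOWN_LAT - raw_lat, KNOWN_LON - raw_lon

    def correct(raw_lat, raw_lon, offset):
        d_lat, d_lon = offset
        return raw_lat + d_lat, raw_lon + d_lon

    offset = calibrate(40.7000, -80.2000)           # one raw sensor reading (placeholder values)
    print(correct(40.7000, -80.2000, offset))       # -> (40.4433, -79.9436)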

Linsey’s Status Report for 4/29

This week, I got the temperature sensor working with the Raspberry Pi (RPi) with Ayesha. I had never worked with Raspberry Pis before; with Angie's help, I learned how to re-image one and configure it for my local Wi-Fi. Although Ayesha and I had previously gotten the temperature sensor working with the Arduino, we wanted to ensure that it was taking accurate readings with the RPi as well. We found a guide online for our temperature sensor, got everything connected, and verified its readings against the ambient thermometer we acquired a week ago. I'll show pictures of that process in the team status report. Additionally, Ayesha and I worked on getting the GPS sensor working with the RPi to see whether Ayesha's web app was receiving data correctly. Angie had already written a script to format and send the data from the GPS to the web app, so we tested that script. We used the same step-by-step guide that Angie used, but we were not able to get the GPS to fix on a location. Therefore, Ayesha wrote a dummy script that sent fixed GPS coordinates to her web app, and we confirmed that the data was received on her web app!

Alone, I worked on integrating the speaker with the RPi. There wasn't a straightforward guide for the speaker kit, but I realized that speakers are more about the hardware connection than the module itself, because little to no code is involved. I was working on the speakers at home, so I didn't have the soldering tools and wire cutters necessary to actually assemble it. However, I was able to copy over a “.wav” file of me saying, “Wave your arms if you are able. This will help us detect you better.” I also installed the necessary audio libraries on the Pi, so once the speakers are connected properly via hardware, playing the message should be easy (see the short sketch below).

Lastly, I retrained the machine learning network with 3600 more samples Angie collected. Unfortunately, the F1 score dropped to .33. After discussing this with Angie, we believe it's because all of the new human-labeled samples featured humans who were only breathing deeply, and I don't think it's wise for us to focus on that hard-to-detect case at the moment. So, I have hope that with more samples of humans waving their arms, the F1 score will increase from the .5 that is our best so far.
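
For the speaker, once the hardware is assembled, the playback step should look something like the sketch below. It assumes ALSA's aplay utility is available on the Pi, and the file path is a placeholder.

    # Sketch of playing the recorded prompt through the speaker once it is wired up.
    import subprocess

    PROMPT = "/home/pi/wave_your_arms.wav"   # hypothetical path to the copied .wav file

    def play_prompt():
        # aplay blocks until playback finishes; check=True raises if playback fails
        subprocess.run(["aplay", PROMPT], check=True)

    if __name__ == "__main__":
        play_prompt()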

My progress is on schedule.

In the next week, I will get the speakers fully working. I will retrain the network once Angie gives me more data. Once I get an F1 score close to .7, I will integrate my part with Ayesha's.

Linsey’s Status Report for 4/22

This week I migrated the existing 3D CNN model to TensorFlow. I translated the PyTorch code to TensorFlow because I found that TensorFlow has better functionality that is easier to work with. Part of this was getting the shape dimensions to work, because the Gent University dataset was 166 x 127 x 195, whereas our own training data is much smaller. After this translation, I initially trained the model on the data that had been collected. The model reached a .99 validation accuracy. However, I wasn't convinced by this model, because the vast majority of the data it was training on (the data that we had collected) didn't contain humans. Therefore, Angie collected 1800 human and 1800 no-human samples at a higher resolution. I then adjusted the shapes of the network to accommodate 128 x 32 x 8 and trained the network to a .99 validation accuracy and a 1.00 F1 score. Additionally, this week the speaker and the ambient thermometer arrived. I have started getting those working with the Raspberry Pi and plan on completing that process by tomorrow. Lastly, I worked on the final presentation slides.
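
As a rough illustration of the translation, the Keras version of the network takes single-channel 128 x 32 x 8 volumes and ends in a sigmoid for the human / no-human decision. The layer widths below are illustrative, not the exact trained architecture.

    # Illustrative Keras sketch of the 3D CNN after adjusting the input shape.
    import tensorflow as tf
    from tensorflow.keras import layers

    def build_model(input_shape=(128, 32, 8, 1)):
        inputs = tf.keras.Input(shape=input_shape)
        x = layers.Conv3D(16, kernel_size=3, padding="same", activation="relu")(inputs)
        x = layers.MaxPool3D(pool_size=2)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Conv3D(32, kernel_size=3, padding="same", activation="relu")(x)
        x = layers.MaxPool3D(pool_size=2)(x)
        x = layers.BatchNormalization()(x)
        x = layers.GlobalAveragePooling3D()(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)
        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model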

My progress is on schedule.

Tomorrow, I will get the temperature sensor and speaker working with the Raspberry Pi. I will also be working with Ayesha to integrate our parts, i.e., integrate the frontend with my machine learning architecture. I will test the machine learning architecture on truly unseen data, which Angie has already collected.

Linsey’s Status Report for 4/8

This week I trained the machine learning architecture on the dataset. By adding informative print statements, I found that the gradients were exploding during learning, causing NaNs to appear after one iteration. Therefore, I implemented gradient clipping to overcome this. Previously, this concept was something that had only been discussed in class; I had never actually implemented it before, but I was able to find helpful documentation and integrate it with the 3D convolutional architecture. Additionally, I cast the output tensor to integers to allow some decimal places of error when comparing it against the target tensor. After training the architecture for 15 epochs, it achieved a peak accuracy of 45%. Because the dataset it was running on isn't entirely representative of the data we will collect ourselves, I am not discouraged by the low metric.
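
The clipping itself is a one-line addition between the backward pass and the optimizer step. The sketch below assumes a standard PyTorch training loop; loader, criterion, and optimizer are placeholder names, and the max-norm value is illustrative.

    # Gradient clipping to stop the exploding gradients from producing NaNs.
    import torch

    for frames, labels in loader:
        optimizer.zero_grad()
        outputs = model(frames)
        loss = criterion(outputs, labels)
        loss.backward()
        # Rescale gradients so their total norm never exceeds 1.0
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()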

My progress is on schedule.

Currently, I am waiting for Angie to process the data collected by our own radar (I have contributed to its collection over the past week). Once I receive that data, I will retrain the network on our own data. Additionally, Ayesha and I will be testing the temperature sensor and speaker this week.

In terms of the tests that I have run for the machine learning architecture, I have verified that all the shapes work so the architecture runs without errors. Once I receive the processed data, I will focus on achieving our F1 score metric. This metric is not built into PyTorch, so I will figure out how to collect it during training. For the temperature sensor, Ayesha and I will interact with it through an Arduino. Using a heat gun and a comparison thermometer, we will test the accuracy of the temperature sensor so that it can correctly detect when our device is in dangerously high temperatures. For the speaker, we will simply test that it outputs the corresponding message, such as “Please wave your arms to aid in detection for rescue.”
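
One way I am considering collecting the F1 score during training is to accumulate confusion counts over the validation set each epoch and compute the metric by hand, since PyTorch doesn't provide it directly. The variable names below are placeholders.

    # Sketch of computing F1 from confusion counts after each epoch.
    import torch

    tp = fp = fn = 0
    for frames, labels in val_loader:
        with torch.no_grad():
            preds = (torch.sigmoid(model(frames)) > 0.5).int().squeeze()
        tp += int(((preds == 1) & (labels == 1)).sum())
        fp += int(((preds == 1) & (labels == 0)).sum())
        fn += int(((preds == 0) & (labels == 1)).sum())

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    print(f"Validation F1: {f1:.3f}")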

Team Status Report for 4/1

The most significant risk that could jeopardize this project is collecting enough data of moving humans with our own radar to train the machine learning architecture. On Friday, we met to test the radar. Angie brought the newer radar in hopes of getting the green board to work. However, it refused to work with her computer. We will instead meet tomorrow to test the older radar on moving humans. Angie will visualize the corresponding Doppler effect on her computer. Not only will this prepare us for the interim demo, it will also ensure that the collected radar data is suited for our machine learning architecture. After this assurance, Angie will be able to send Linsey the radar data, so that she can start training the architecture on our data as soon as possible.

Another general risk is the integration timeline. We spoke about this as a team and assured ourselves that we have enough time to integrate components. Ayesha and Linsey have already initiated integration of the web application frontend and the machine learning architecture. Ayesha set up the Django REST API that will connect the two components, and together they'll migrate their code to the same location for integration.
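
As an illustration of what that connection could look like, below is a hypothetical Django REST Framework endpoint for receiving detection results from the machine learning side; the endpoint and field names are placeholders, not Ayesha's actual implementation.

    # Hypothetical DRF view that accepts a POSTed detection result as JSON.
    from rest_framework import serializers, status
    from rest_framework.decorators import api_view
    from rest_framework.response import Response

    class DetectionSerializer(serializers.Serializer):
        latitude = serializers.FloatField()
        longitude = serializers.FloatField()
        human_detected = serializers.BooleanField()

    @api_view(["POST"])
    def detection(request):
        serializer = DetectionSerializer(data=request.data)
        if serializer.is_valid():
            # In the real app this would be stored and pushed to the frontend map
            return Response(serializer.validated_data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)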

There were no system design changes. Just to clarify, we plan on using the older radar for our project, because it works much better with our computers, and at this point in the project, we need something reliable.

Linsey’s Status Report for 4/1

This week I was able to successfully migrate my architecture and data to my personal AFS ECE space and make significant progress on getting the architecture to run. Previously, I struggled to find a place to run my machine learning architecture, because this course doesn't provide AWS credits, and I was only able to successfully SCP to the AFS Andrew space, which truncated my data at 2 GB according to that space's quota. By changing my default home directory to the AFS ECE space and working with Samuel to SCP my data directly to the AFS ECE space, I am now able to run my architecture, which has been a huge relief. Since then, I have generated training and validation datasets. That was somewhat challenging: because the data is so high-dimensional, it not only took time, but I also had to do it piecemeal, since generating everything at once allocated too much memory and threw a CPU error.

Then, I worked on the architecture itself. By more closely examining the 3D architecture I had coded locally, inspired by https://keras.io/examples/vision/3D_image_classification/, I found a helpful research paper that reported that four iterations of a 3D convolutional layer, max pooling, and batch normalization were effective at extracting features from 3D data. Therefore, I set out to code this. I went back and forth between using TensorFlow and PyTorch. I feel that TensorFlow is easier to work with, but during data generation it required converting between NumPy arrays and tensors, which took up too much memory. Therefore, I have settled on creating my architecture in PyTorch, which I am less familiar with, but I am working through the different syntax. Currently, I am working out the dimensions by hand and verifying them in code to make sure they match the input shapes.
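
The block structure from the paper translates naturally into a small PyTorch feature extractor like the sketch below; the channel counts, kernel size, and activation are illustrative choices, not my final dimensions.

    # Sketch of the 4x (Conv3d -> MaxPool3d -> BatchNorm3d) feature extractor.
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),
            nn.BatchNorm3d(out_ch),
        )

    feature_extractor = nn.Sequential(
        conv_block(1, 8),     # single-channel 3D radar volume in
        conv_block(8, 16),
        conv_block(16, 32),
        conv_block(32, 64),   # followed by flatten + fully connected layers for classification
    )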

Due to the workflow of my weekend, I will be spending a significant amount of time on the machine learning architecture tomorrow, not today. Taking that into account, I am right on schedule to have a functioning architecture that outputs some metrics by the interim demo.

In the next week, I will demo a machine learning architecture that is able to perform inference on one of the pieces of training data. I will also have corresponding loss, accuracy, and F1 score metrics.

Linsey’s Status Report for 3/25

Although I spent a lot of time on the project this week, I am frustrated because I didn't make any progress. After trying many times unsuccessfully to migrate my code to AWS, I realized that for it to work I would have to use a tier that costs money. I then learned that this course doesn't provide AWS credits. Therefore, Tamal pointed me towards the ECE machines and lab computers. I first tried the ECE machines, because I can SSH into them remotely, and I accomplished this. However, the size of the data I am working with is too large for the Andrew directories, so it is necessary to use the AFS directories. After reading the guide online, I wasn't able to successfully change directories. I emailed IT services; they responded with some help that didn't work for me and haven't responded to my most recent follow-up, which leaves me with the option of working in person on the lab computers. Tomorrow, I plan on trying this option.

My progress is behind. This is not at all where I planned on being. By working in the lab tomorrow, I plan on getting the architecture training and finally catching up.

This week, my deliverables will include a trained architecture and integration with at least one of the other subsystems (either the frontend or the hardware).

Linsey’s Status Report for 3/18

This week I worked to migrate my code to AWS. Previously, making the training and target datasets locally was working fine. However, when I tried to concatenate those two datasets and split them into training and validation sets, it stopped working locally because I ran out of memory. Therefore, I migrated everything to AWS to overcome those issues. However, this has proven much more difficult than I thought. I spent hours trying to fix the SSH pipeline in VSCode and installing the necessary packages in the AWS environment. SSH was finicky and sometimes wouldn't connect at all; fixing that involved a lot of Stack Overflow and digging through Git forums. Once I got into the AWS environment, everything was successfully copied over: the code files and the data. However, every time I tried to install PyTorch in the AWS environment, it would disconnect me. Additionally, once it logged me out of the SSH window, it wouldn't let me back in a second time. I've looked at many pages about this and am still struggling to get it to work.

My progress is behind. I am very frustrated by the AWS migration. I hope I can figure this out soon, because after that, running and training the architecture will be very simple.

By the end of the week, I hope to have successfully run the architecture on AWS and started integration with either the web application or radar.

Team Status Report for 3/11

A significant risk we identified was the real-time functionality of our system, specifically the radar. We found that our current radar module didn't support it, but that using a different radar module (also from CyLab) would be more feasible. That radar has already been procured, so this risk has been mitigated.

The radar change was necessary to support the real-time functionality of our system. This is driven by the use-case requirement that our device help in time-pressured search and rescue situations. No additional cost will be incurred, because the radar is provided courtesy of CyLab. Because this radar captures the same data (range-Doppler and range-azimuth coordinates), from an integration viewpoint it will be the same for the machine learning architecture.

The new radar is certainly a new tool. For the integration of the machine learning architecture with the web application, we identified the Django REST API as a good fit, because it supports Python programs, which is how the machine learning architecture will be implemented. Lastly, while writing the design report, we realized that there was no clear delineation of when the system would use the captured radar data, perform inference, and identify a human. Therefore, we established that we will use the IMU data to determine when the drone has zero horizontal acceleration and is upright, and only then perform inference on the radar data by running the machine learning architecture.
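
A rough sketch of that trigger logic is below; the thresholds and the IMU accessor functions are placeholders for whatever the sensor library actually exposes.

    # Run inference only when the drone is roughly upright and not accelerating horizontally.
    ACCEL_THRESHOLD = 0.2    # m/s^2 of horizontal acceleration treated as "zero" (placeholder)
    TILT_THRESHOLD = 10.0    # degrees from vertical treated as "upright" (placeholder)

    def should_run_inference(imu):
        ax, ay = imu.horizontal_acceleration()     # hypothetical IMU accessor
        roll, pitch = imu.orientation_degrees()    # hypothetical IMU accessor
        steady = abs(ax) < ACCEL_THRESHOLD and abs(ay) < ACCEL_THRESHOLD
        upright = abs(roll) < TILT_THRESHOLD and abs(pitch) < TILT_THRESHOLD
        return steady and upright

    # Main loop (pseudo): if should_run_inference(imu), run the network on the latest radar frame.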