Linsey’s Status Report for 4/8

This week I trained the machine learning architecture on the dataset. By adding informative print statements, I determined that the gradients were exploding during learning, causing NaNs to appear after one iteration. I therefore implemented gradient clipping to overcome this. Previously, gradient clipping was something that had only been discussed in class; I had never implemented it before, but I was able to find helpful documentation and integrate it with the 3D convolutional architecture. Additionally, I cast the output tensor to integers to allow a few decimal places of error when comparing the model's output to the target tensor. After training the architecture for 15 epochs, it achieved a highest accuracy of 45%. Because the dataset it was trained on isn't entirely representative of the data we will collect ourselves, I am not discouraged by the low metric.
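As a rough illustration of the clipping step, here is a minimal PyTorch sketch; the tiny stand-in model and the max_norm=1.0 threshold are placeholder assumptions, not my actual training code:

```python
import torch
from torch import nn

model = nn.Linear(8, 2)                       # stand-in for the real 3D CNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(4, 8)
targets = torch.randint(0, 2, (4,))

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
# clip the gradient norm so one bad batch cannot blow the weights up to NaN;
# max_norm=1.0 is a placeholder threshold, not the value actually used
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```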

My progress is on schedule.

Currently, I am waiting for Angie to process the data collected by our own radar (I have contributed to its collection over the past week). Once I receive that data, I will retrain the network on our own data. Additionally, Ayesha and I will be testing the temperature sensor and speaker this week.

In terms of the tests I have run for the machine learning architecture, I have verified that all of the tensor shapes line up so the architecture runs without errors. Once I receive the processed data, I will focus on achieving our F1 score metric. This metric is not built into PyTorch, so I will be figuring out how to collect it at the same time as training. For the temperature sensor, Ayesha and I will interact with it through an Arduino. Using a heat gun and a comparative thermometer, we will test the sensor's accuracy so that it can correctly monitor whether our device is in dangerously high temperatures. For the speaker, we will simply verify that it outputs the corresponding message, such as "Please wave your arms to aid in detection for rescue."
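One way to collect F1 during training is to accumulate true-positive, false-positive, and false-negative counts each batch and compute F1 at the end of the epoch. The sketch below shows that idea only; the binary human-present / human-absent labeling is my assumption, not the final implementation:

```python
import torch

def update_counts(preds, targets, counts):
    """Accumulate TP/FP/FN for a binary (human / no human) detection task."""
    preds, targets = preds.bool(), targets.bool()
    counts["tp"] += (preds & targets).sum().item()
    counts["fp"] += (preds & ~targets).sum().item()
    counts["fn"] += (~preds & targets).sum().item()

def f1_from_counts(counts):
    precision = counts["tp"] / max(counts["tp"] + counts["fp"], 1)
    recall = counts["tp"] / max(counts["tp"] + counts["fn"], 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

# per-epoch usage: reset counts, call update_counts() after every batch, then read F1
counts = {"tp": 0, "fp": 0, "fn": 0}
update_counts(torch.tensor([1, 0, 1, 1]), torch.tensor([1, 0, 0, 1]), counts)
print(f1_from_counts(counts))
```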

Ayesha’s Status Report for 4/8

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I personally accomplished a couple of things. First, I worked with my group members to collect radar data for training, which is explained in the team status report. It took some time for us to capture 1000 frames, but it was a success. I also worked with Angie to integrate the GPS data with the web application. This is still in progress because I ran into some issues using WebSockets with Django to receive the data, so I am currently still debugging that. Linsey and I also worked on testing the temperature sensor, but we did not have enough wire at home, so we will meet again in the next few days to properly test it with the Arduino; we have already set up the code to run that test. Lastly, I looked into different views in the HERE Maps API to make the pin functionality more useful, since right now the view is flat and does not give the user much information about what the location looks like.
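For the WebSocket side, my current approach looks roughly like the consumer below. This is a hedged sketch assuming Django Channels; the route name and message fields are placeholders, not our final schema:

```python
# consumers.py -- sketch of a Channels consumer that receives GPS readings
import json
from channels.generic.websocket import AsyncWebsocketConsumer

class GPSConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        reading = json.loads(text_data)          # e.g. {"lat": 40.44, "lon": -79.94}
        # echo the parsed coordinates back for now; later this would update the map pin
        await self.send(text_data=json.dumps(
            {"lat": reading.get("lat"), "lon": reading.get("lon")}
        ))

# routing.py -- wire the consumer to a WebSocket URL
from django.urls import re_path
websocket_urlpatterns = [re_path(r"ws/gps/$", GPSConsumer.as_asgi())]
```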

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule, but I need to resolve the WebSocket issue and have the GPS data integrated within the next week in order to stay on schedule. My primary focus will therefore be collecting data and debugging the socket issue, or finding a better way to implement it.

What deliverables do you hope to complete in the next week?

As mentioned above, I hope to finish integrating the GPS with the web application, and also to test the temperature sensor so that I can integrate it as well. Once I resolve the GPS issue, the temperature sensor should be easy to integrate with the front end since it will use the same logic. Lastly, I will collect more radar data depending on how the model does in the training stage.

Team Status Report for 4/8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk that could jeopardize the success of the project is running into issues with training, since the existing dataset alone will not produce an algorithm that works for our use case. We managed this risk by collecting ~1000 frames of our own data for training purposes, and we will continue to collect more as the model trains to ensure that our use case is addressed. Specifically, we will try to collect data outdoors and in fog (from fog machines), since this is what our use case aims to address. In the meantime, we are working on integrating the other hardware and software components, such as the temperature sensor and GPS with the frontend, so that the remaining features are ready for final integration and testing.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The most significant change, as mentioned previously, was going back to the original radar we had chosen instead of the green board. Besides that, we have not made any significant changes to our system design. The costs have been mitigated by collecting more data this week with our original radar: we collected about 1000 frames of various scenes (static, one person, one person waving arms, two people, obstructions, etc.) from various distances. Now we can speed up the training process by using this data to train the ML algorithm. Another thing we are doing to mitigate costs is integrating the other parts, like testing and integrating the GPS/IMU sensors and the temperature sensor with the web application, so that by the time training is done, we can integrate the software portions and be finished.

Now that you are entering the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify that your contribution to the project meets the engineering design requirements or the use case requirements?

We are planning to run tests after our model is trained by using the radar to capture a live feed and comparing its detection accuracy to that achieved on the previously gathered data. Since classification is not our primary concern, our entire testing plan is focused on the accuracy of human detection in the radar images. Similar to how we gathered training data, we will attach our radar to a Swiffer to test it at various heights, and run tests on static scenes, humans breathing, humans waving their arms, obstructed humans, and humans interacting with other humans, all from various distances, heights, and angles to exercise the Doppler and azimuth axes. We will analyze these results against the anticipated measures by comparing the detection accuracy percentages, the F1 score of the algorithm, and the latency of displaying the detection results on the web application.
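As a rough idea of how the per-frame comparison could be scored, here is a hedged sketch; the model, the preprocessed frame tensors, and the 0/1 human-present labels are assumed inputs, and it only times the inference portion of the latency, not the full path to the web application:

```python
import time
import torch

def evaluate_frames(model, frames, labels):
    """Return (detection accuracy, mean inference latency in ms) over captured frames."""
    model.eval()
    correct, latencies = 0, []
    with torch.no_grad():
        for frame, label in zip(frames, labels):
            start = time.perf_counter()
            pred = model(frame.unsqueeze(0)).argmax(dim=1).item()
            latencies.append((time.perf_counter() - start) * 1000.0)
            correct += int(pred == label)
    return correct / len(labels), sum(latencies) / len(latencies)
```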

Angie’s Status Report for 4/1

What did you personally accomplish this week on the project?

This week, I tested integration of the Raspberry Pi with the radar and used the OpenRadar project code to stream data from the green board without using Texas Instruments' mmWave Studio, which is only available on Windows.
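A rough sketch of the capture path being tested, assuming OpenRadar's DCA1000 data loader is used for the raw stream; the chirp, receiver, and sample counts below are placeholders rather than our actual radar configuration:

```python
from mmwave.dataloader import DCA1000   # OpenRadar (PreSense) package

NUM_CHIRPS, NUM_RX, NUM_SAMPLES = 128, 4, 256   # assumed frame dimensions

dca = DCA1000()
raw = dca.read()                                 # one raw frame of ADC samples over UDP
frame = DCA1000.organize(raw, num_chirps=NUM_CHIRPS,
                         num_rx=NUM_RX, num_samples=NUM_SAMPLES)
print(frame.shape, frame.dtype)                  # complex ADC cube: (chirps, rx, samples)
```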

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My schedule is behind because we met to collect data of all of us, but we were not able to collect it since the AWR1642 could not connect. After swapping it out for the AWR1843, we will collect data on Sunday.

What deliverables do you hope to complete in the next week?

  • Interim demo of real-time data collection
  • Test sending data to web app via wifi
  • Test sending alert to web app that system is not stationary or is too hot, based on IMU and temperature data

Team Status Report for 4/1

The most significant risk that could jeopardize this project is collecting enough data of moving humans with our own radar to train the machine learning architecture. On Friday, we met to test the radar. Angie brought the newer radar in hopes of getting the green board to work. However, it refused to work with her computer. We will instead meet tomorrow to test the older radar on moving humans. Angie will visualize the corresponding Doppler effect on her computer. Not only will this prepare us for the interim demo, it will also ensure that the collected radar data is suited for our machine learning architecture. After this assurance, Angie will be able to send Linsey the radar data, so that she can start training the architecture on our data as soon as possible.

Another risk in general is the integration timeline. We spoke about this as a team and assured ourselves that we have enough time to integrate components. Ayesha and Linsey have already initiated integration of the web application frontend and the machine learning architecture. Ayesha set up the Django REST framework API that will connect the two components, and together they will migrate their code to the same location for integration.
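As a sketch of what that connection could look like on the web-application side, here is a minimal Django REST framework view; the endpoint name and payload fields are placeholders, not our agreed-upon API:

```python
# views.py -- minimal endpoint for the ML side to post detection results
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

class DetectionResultView(APIView):
    def post(self, request):
        payload = request.data   # e.g. {"human_detected": true, "confidence": 0.87}
        if "human_detected" not in payload:
            return Response({"error": "missing human_detected field"},
                            status=status.HTTP_400_BAD_REQUEST)
        # later: persist the result and push it to the frontend
        return Response(payload, status=status.HTTP_201_CREATED)

# urls.py -- route for the endpoint
from django.urls import path
urlpatterns = [path("api/detections/", DetectionResultView.as_view())]
```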

There were no system design changes. Just to clarify, we plan on using the older radar for our project, because it works much better with our computers, and at this point in the project, we need something reliable.

Linsey’s Status Report for 4/1

This week I was able to successfully migrate my architecture and data to my personal AFS ECE space and make significant progress on getting the architecture to run. Previously, I struggled to find a place to run my machine learning architecture, because this course doesn't provide AWS credits and I was only able to SCP to the AFS Andrew space, which truncated my data at that space's 2 GB quota. By changing my default home directory to the AFS ECE space and working with Samuel to SCP my data directly there, I am now able to run my architecture, which has been a huge relief.

Since then, I have generated training and validation datasets. That was somewhat challenging: because the data is so high dimensional, it not only took time, but I also had to do it piecemeal, since doing it all at once allocated too much memory and threw a CPU error. Then I worked on the architecture itself. By examining more closely the 3D architecture I had coded locally, inspired by https://keras.io/examples/vision/3D_image_classification/, I found a helpful research paper showing that four repetitions of a 3D convolutional layer, max pooling, and batch normalization are effective at extracting features from 3D data, so I set out to code this. I went back and forth between TensorFlow and PyTorch. I find TensorFlow easier to work with, but during data generation it required converting between NumPy arrays and tensors, which took up too much memory. Therefore, I have settled on building my architecture in PyTorch, which I am less familiar with, but I am working through the different syntax. Currently, I am working out the dimensions by hand and verifying them in code to make sure they match the input shapes.
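To make that structure concrete, here is a minimal PyTorch sketch of four repeated Conv3d, MaxPool3d, and BatchNorm3d blocks; the channel widths and input dimensions are placeholder assumptions, not my actual configuration:

```python
import torch
from torch import nn

class Radar3DCNN(nn.Module):
    """Four blocks of Conv3d -> ReLU -> MaxPool3d -> BatchNorm3d, then a small classifier head."""

    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        widths = [in_channels, 16, 32, 64, 128]   # assumed channel widths
        layers = []
        for i in range(4):
            layers += [
                nn.Conv3d(widths[i], widths[i + 1], kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool3d(kernel_size=2),
                nn.BatchNorm3d(widths[i + 1]),
            ]
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(widths[-1], num_classes))

    def forward(self, x):
        return self.head(self.features(x))

# quick shape check on a dummy (batch, channel, depth, height, width) input
print(Radar3DCNN()(torch.zeros(2, 1, 16, 64, 64)).shape)  # torch.Size([2, 2])
```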

Due to the workflow of my weekend, I will be spending a significant amount of time on the machine learning architecture tomorrow rather than today. Taking that into account, I am right on schedule to have a functioning architecture outputting some metrics by the interim demo.

In the next week, I will demo a machine learning architecture that is able to perform inference on one of the pieces of training data. I will also have corresponding loss, accuracy, and F1 score metrics.