Angie’s Status Report for 4/29

What did you personally accomplish this week on the project?

  • This week, I worked with Linsey and Ayesha to integrate the GPS with the Raspberry Pi. We could not get the GPS to fix on satellites indoors, so we obtained a fix by placing the module outdoors for nearly an hour. The initial results are that the GPS data is precise (the majority of locations are within 2 meters of the average location, and the standard deviation is 1.5 m) but inaccurate, landing about 20 miles away from the actual location. To maximize location accuracy, we will compensate for the location shift and filter the latitudes and longitudes over time to reduce the influence of outliers (see the sketch after this list).

  • I collected 3600 more samples of data to train the neural network in hopes of increasing the F1 score. Heeding feedback from the final presentation, I recorded the movement of non-human animals, such as a wasp. The earlier human data over-represented relatively stationary movements such as deep breathing, which dropped the F1 score back to 0.33, so I collected data in which the humans move dramatically (including behind barriers), such as by waving their arms.
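
Below is a minimal sketch in Python of the planned offset compensation and outlier filtering. The calibration point, variable names, and window of recent fixes are illustrative assumptions, not our actual code.

    # Hypothetical sketch: estimate a constant (lat, lon) shift from fixes logged
    # at a known surveyed point, then median-filter recent fixes to suppress outliers.
    from statistics import median

    def estimate_offset(raw_fixes, calib_lat, calib_lon):
        """raw_fixes: list of (lat, lon) tuples logged at the known point."""
        avg_lat = sum(lat for lat, _ in raw_fixes) / len(raw_fixes)
        avg_lon = sum(lon for _, lon in raw_fixes) / len(raw_fixes)
        return calib_lat - avg_lat, calib_lon - avg_lon

    def filtered_location(recent_fixes, offset):
        """recent_fixes: list of the most recent (lat, lon) fixes to median-filter."""
        lat = median(lat for lat, _ in recent_fixes) + offset[0]
        lon = median(lon for _, lon in recent_fixes) + offset[1]
        return lat, lon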

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Progress is behind on GPS integration due to the lack of a connection to CMU-DEVICE and trouble getting a GPS fix. I will catch up by resolving the errors behind the slow GPS fix time and the module not remembering its position between reboots.

What deliverables do you hope to complete in the next week?

  • Resolve problems with getting a GPS fix quickly (such as by downloading almanacs from the internet)
  • Test system on CMU-DEVICE
  • 3D print the chassis and put all parts in chassis

Team Status Report for 4/29

A significant risk is that the ML architecture, upon retraining, will not reach an F1 score of 0.7. However, the last time we added data, the F1 score jumped by 0.17. Our current best F1 score is 0.5, so we hope we can reach at least 0.67. Another risk is that the hardware does not integrate well with the RPi. However, the temperature sensor is fully integrated, the GPS sensor is integrated, and the speaker is on its way to integration.

No changes were made to the overall system design. However, to narrow our application for the demo, instead of using the IMU sensor data to calculate when a radar frame should have inference performed on it, we will use a key press to start inference (a minimal sketch is below). This has no impact on the cost.
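
The sketch below illustrates the key-press trigger in Python; get_latest_frame() and run_inference() are placeholders for our actual radar acquisition and model code.

    def get_latest_frame():
        # Placeholder: return the most recent range-Doppler frame from the radar.
        ...

    def run_inference(frame):
        # Placeholder: run the neural network and return True if a human is detected.
        ...

    def demo_loop():
        while True:
            input("Press Enter to run inference on the latest radar frame...")
            frame = get_latest_frame()
            print("Human detected" if run_inference(frame) else "No human detected")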

Here are Ayesha and I testing the temperature sensor connected to the RPi. We used a hairdryer to raise the temperature and watched both the ambient thermometer and the temperature sensor reading increase accordingly (a minimal reading sketch is below).
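
For reference, here is a minimal reading sketch in Python. It assumes a DS18B20-style 1-Wire sensor exposed by the Pi's w1-therm kernel driver; our actual sensor and wiring may differ.

    # Illustrative only: read degrees Celsius from a 1-Wire temperature sensor.
    import glob
    import time

    def read_temp_c():
        device_file = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
        with open(device_file) as f:
            lines = f.readlines()
        # The driver appends "t=<millidegrees C>" to the second line of a valid read.
        if "YES" in lines[0]:
            return int(lines[1].split("t=")[1]) / 1000.0
        return None

    if __name__ == "__main__":
        while True:
            print("Temperature:", read_temp_c(), "C")
            time.sleep(1)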

Testing Details:

  • Machine learning architecture: We unit tested on 600 held-out test samples (collected from diverse scenes, roughly half with humans and half without), measuring the resulting F1 score and accuracy. We also recorded the inference time of the network.
  • Hardware
    • Temperature sensor: We connected it to the RPi and observed the sensor's output. By comparing the sensor's room-temperature readings with the ambient thermometer's readings, we confirmed that the sensor was working at baseline. Using the hairdryer, we saw both the temperature sensor reading and the ambient thermometer reading increase.
    • Radar: We tested real-time data acquisition at 5Hz on the laptop and connected it to the RPi, but have not tested real-time data acquisition over WiFi yet.
    • GPS/IMU sensor: We connected it to the RPi, logged 4927 locations, and compared them to the actual stationary location. The location data is precise enough for our updated use case, with a standard deviation of 1.5 m, but the reported location is off by about 20 miles, requiring compensation to output an acceptable location.
  • Web application: We have measured that the website updates in ~100 ms. Through HTTP requests, we also confirmed that the web application is able to receive formatted data (a minimal receiver sketch follows this list).
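
The receiver sketch below assumes a Flask-style JSON endpoint; our web app's actual framework, route, and field names may differ, so treat everything here as illustrative.

    # Minimal receiver sketch: validate that the POSTed JSON carries the fields
    # the Pi-side script formats, then acknowledge it.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/sensor-data", methods=["POST"])  # hypothetical route
    def sensor_data():
        payload = request.get_json(force=True)
        required = {"timestamp", "latitude", "longitude", "temperature_c"}
        if not required.issubset(payload):
            return jsonify({"error": "missing fields"}), 400
        # In the real app this would update the map pin and temperature display.
        return jsonify({"status": "ok"}), 200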

Findings:

  • The GPS is currently ~20 miles off, so we may need to apply an offset to get accurate readings from the sensor.

Linsey’s Status Report for 4/29

This week, I got the temperature sensor working with the Raspberry Pi alongside Ayesha. I had never worked with Raspberry Pis before; with Angie's help, I learned how to re-image one and configure it for my local Wi-Fi. Although Ayesha and I had previously gotten the temperature sensor working with the Arduino, we wanted to ensure that it was taking accurate readings with the RPi as well. We found a guide online for our temperature sensor, got everything connected, and verified its readings by comparing them with the ambient thermometer we acquired a week ago. I'll show pictures of that process in the team status report.

Additionally, Ayesha and I worked on getting the GPS sensor working with the RPi to see if Ayesha's web app was receiving data correctly. Angie had already written a script to format and send the data from the GPS to the web app, so we tested that script. We used the same step-by-step guide that Angie used, but we were not able to get the GPS to fix on a location. Therefore, Ayesha wrote a dummy script that sent fixed GPS coordinates to her web app, and we confirmed that the data was received successfully.

On my own, I worked on integrating the speaker with the RPi. There wasn't a straightforward guide for the speaker kit, but I realized that speakers are more about the hardware connection than the module itself, because little to no code is involved. I was working on the speakers at home, so I didn't have the soldering tools and wire cutters necessary to assemble them. However, I was able to copy over a ".wav" file of me saying, "Wave your arms if you are able. This will help us detect you better." I also installed the necessary audio libraries on the Pi, so once the speakers are connected properly, playing the message should be easy (a minimal playback sketch is below).

Lastly, I retrained the machine learning network with the 3600 additional samples Angie collected. Unfortunately, the F1 score dropped to 0.33. After discussing this with Angie, we concluded it is because all the new human-labeled samples had humans deep breathing, and I don't believe it's smart for us to focus on that hard-to-detect case at the moment. So, I have hope that with more samples of humans waving their arms, the F1 score will increase from the 0.5 that was our previous best.
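
The playback sketch below assumes ALSA's aplay utility is available on the Pi; the file name is a placeholder for the recording I copied over.

    # Minimal playback sketch: play the recorded prompt once the speaker is wired up.
    import subprocess

    MESSAGE_WAV = "wave_your_arms.wav"  # placeholder path to the copied .wav file

    def play_message():
        subprocess.run(["aplay", MESSAGE_WAV], check=True)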

My progress is on schedule.

In the next week, I will get the speakers fully working. I will retrain the network once Angie gives me more data. Once I get an F1 score close to 0.7, I will integrate my part with Ayesha's.

Ayesha’s Status Report for 4/29

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I worked a lot on integrating the hardware with the software. I worked with my group mates to connect the GPS to the Raspberry Pi, which is still giving us some issues because we were unable to get the GPS to fix. I also worked with Linsey to get the temperature sensor reading values correctly, which we tested against an ambient thermometer and even at hotter temperatures with a hairdryer; pictures will be in the team status report. I was also able to run a script that sends data from the Raspberry Pi to the web application and has it update within 100 ms (a minimal sending sketch is below), which is very exciting because now we can send temperature sensor data and also GPS data once that fixes. Earlier in the week, I also spent a lot of time practicing my final demo presentation.
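
The sending sketch below shows the Pi-side step as an HTTP POST of JSON; the URL and field names are illustrative assumptions, not our actual endpoint.

    # Minimal sending sketch: post one formatted reading to the web application.
    import time
    import requests

    WEB_APP_URL = "http://example.com/sensor-data"  # placeholder endpoint

    def send_reading(lat, lon, temp_c):
        payload = {
            "timestamp": time.time(),
            "latitude": lat,
            "longitude": lon,
            "temperature_c": temp_c,
        }
        resp = requests.post(WEB_APP_URL, json=payload, timeout=2)
        resp.raise_for_status()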

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Our progress is slightly behind because we wanted to be done integrating by the end of this week, but we are very close. In order to catch up, we are collecting the sensor data this weekend and adjusting the web application code to accept requests that read that data along with the GPS data. We will also finish training, since we are currently retraining on more data, and once that is done we will be able to integrate the software and test sending images through the Raspberry Pi.

What deliverables do you hope to complete in the next week?

Next week, along with the actual assignments that are due, such as the poster, we hope to finish integrating the Raspberry Pi's data sending to the web application. We also hope to have integrated the software into a fully finished product that we can focus on testing, specifically under poor network conditions.

Linsey’s Status Report for 4/22

This week I migrated the existing 3D CNN model to TensorFlow. I translated the PyTorch code to TensorFlow because I found that TensorFlow has functionality that is easier to work with. Part of this was getting the shape dimensions to work, because the Gent University dataset was 166 x 127 x 195, whereas our training data is much smaller. After this translation, I initially trained the model on the data that had been collected. The model reached a 0.99 validation accuracy. However, I wasn't convinced by this model, because the vast majority of the data it was trained on (the data that we had collected) didn't have humans. Therefore, Angie collected 1800 human and 1800 no-human samples at a higher resolution. I then adjusted the shapes of the network to accommodate 128 x 32 x 8 inputs (a minimal sketch of such an architecture is below) and trained the network to a 0.99 validation accuracy and 1.00 F1 score. Additionally, this week the speaker and the ambient temperature thermometer arrived. I have started getting those to work with the Raspberry Pi and plan on completing that process by tomorrow. Lastly, I worked on the final presentation slides.
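
Below is a minimal TensorFlow sketch of a 3D CNN accepting the adjusted 128 x 32 x 8 input shape, assuming a single-channel input and a binary human / no-human output. The layer sizes are illustrative, not our exact architecture.

    import tensorflow as tf

    # Illustrative 3D CNN for 128 x 32 x 8 single-channel input volumes.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 32, 8, 1)),
        tf.keras.layers.Conv3D(16, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling3D(pool_size=2),
        tf.keras.layers.Conv3D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling3D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # human / no human
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])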

My progress is on schedule.

Tomorrow, I will get the temperature sensor and speaker working with the Raspberry Pi. I will also be working with Ayesha to integrate our parts, i.e. integrate the front end with my machine learning architecture. I will test the machine learning architecture on truly unseen data, which Angie has already collected.

Team Status Report for 4/22

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risks that could jeopardize the success of the project are:

  • The system may not be effective at distinguishing humans from non-humans, which manifests as a high rate of false positives and false negatives. This risk can be managed by changing the type of data that is input and how the data is preprocessed. In previous weeks, both have been changed, especially by increasing the velocity resolution.
  • High latency when transmitting system data wirelessly to the web app. Although it is crucial to have full-resolution radar data during each transmission, the data rate of the GPS and temperature data can be reduced to lower latency, and the location is estimated with a Kalman filter (a minimal sketch follows this list).
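
The sketch below treats latitude and longitude as independent, nearly constant values filtered by a scalar Kalman filter; the noise parameters are illustrative assumptions, not our tuned values.

    # Minimal scalar Kalman filter for smoothing one GPS coordinate over time.
    class ScalarKalman:
        def __init__(self, process_var=1e-6, meas_var=1e-4):
            self.x = None          # current estimate
            self.p = 1.0           # estimate variance
            self.q = process_var   # process noise variance
            self.r = meas_var      # measurement noise variance

        def update(self, z):
            if self.x is None:
                self.x = z
                return self.x
            self.p += self.q                   # predict
            k = self.p / (self.p + self.r)     # Kalman gain
            self.x += k * (z - self.x)         # correct with the new measurement
            self.p *= (1 - k)
            return self.x

    lat_filter, lon_filter = ScalarKalman(), ScalarKalman()
    # smoothed = (lat_filter.update(raw_lat), lon_filter.update(raw_lon))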

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

  • The dimensions of the input data were changed so that the Doppler resolution was doubled and the range was halved. This change was necessary to reduce latency while providing higher-quality data, since the time spent collecting data beyond 5 m reduces the frequency at which data can be sent, and that data is extraneous anyway.
  • The doubled Doppler resolution also provides finer details that can help identify a human (such as the strong returns from hand movement and the weaker returns from individual fingers). Additionally, the input data is preprocessed to reduce noise in the range-Doppler map, which is expected to improve the accuracy of the neural network since the noise is less likely to be erroneously identified as a human (a minimal preprocessing sketch follows this list).
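
The preprocessing sketch below zeroes out range-Doppler bins beneath an estimated noise floor; the exact thresholding we use may differ, so take this as an illustration only.

    import numpy as np

    def denoise_range_doppler(rd_map, num_std=2.0):
        """rd_map: 2-D array of range-Doppler magnitudes."""
        noise_floor = np.median(rd_map)                    # robust noise estimate
        threshold = noise_floor + num_std * rd_map.std()
        return np.where(rd_map >= threshold, rd_map, 0.0)  # suppress sub-threshold bins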

Angie’s Status Report for 4/22

What did you personally accomplish this week on the project?

  • This week, I collected a new dataset of 3600 samples (1800 with humans, 1800 without humans) which was used to train the neural network. Compared to the old dataset, the new dataset has doubled velocity resolution and halved range to 5 meters, which is advantageous for our use case since the data beyond 5 meters is superfluous and increases latency.
  • I collected 600 samples of test data (300 with humans, 300 without humans), which is not used to train the neural network but to gauge its performance on data it has never seen.
  • The above test data, along with the real-time data, is preprocessed as shown in the picture below:

  • Ayesha and I wrote code to send radar, GPS, and IMU data from the Raspberry Pi through an HTTP request.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Progress is on schedule.

What deliverables do you hope to complete in the next week?

  • Work with Ayesha to write code to send temperature data
  • 3D print the chassis to contain the system
  • Collect metrics on latency for the integrated system

Ayesha’s Status Report for 4/22

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I personally accomplished implementing and testing the ability to send location data to the web application and have it update and display live. This took a lot of time, debugging, and testing, but I was able to successfully practice sending HTTP requests, add the location as a pin to the map, and write a file outline for the Raspberry Pi code to base its data formatting on. This was my goal for the week, so I am ecstatic to have accomplished it and have it working smoothly. I also worked on sending more information to the web application, like the time of the capture. Finally, I have been working on the final demonstration slides and preparing the necessary materials for that.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

In the next week, I plan to integrate the software portions of the project and also test the new temperature sensor and speaker that we ordered to make sure they are working properly. I also plan to integrate the temperature sensor with the web application through the Raspberry Pi.

Angie’s Status Report for 4/8

What did you personally accomplish this week on the project?

  • During the interim demo, I demonstrated the radar subsystem collecting range-doppler data in real time.
  • Together with Linsey and Ayesha, I collected and labeled radar data consisting of indoor scenes with or without humans in different orientations, at different positions, in different poses (standing, sitting, lying on the ground), performing different tasks (hand waving, jumping, deep breathing). In total, we collected 25 ten-second scenes at a rate of four frames per second, leaving us with 1000 labeled samples.
  • Ayesha and I started integrating the GPS and IMU data with the Raspberry Pi by writing and testing socket code for the Raspberry Pi to send, and the web app to receive, data over WiFi (a minimal socket sketch follows this list).
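
The socket sketch below shows a Pi-side sender and a receiver exchanging one JSON-encoded reading over TCP; the host, port, and payload fields are placeholders, not our actual configuration.

    import json
    import socket

    def send_reading(host, port, reading):
        # Pi side: open a TCP connection and send one newline-terminated JSON record.
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(json.dumps(reading).encode() + b"\n")

    def receive_one(port):
        # Web-app side: accept one connection and parse a single JSON record.
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with conn, conn.makefile() as f:
                return json.loads(f.readline())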

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Progress is on schedule, according to the updated schedule presented at the interim demo.

What deliverables do you hope to complete in the next week?

  • Integrate web app with radar, location, and temperature data and conduct corresponding integration tests for latency, data rate, etc.
  • Train neural network on our own dataset
  • Continue exploring radar preprocessing such as denoising and mitigation of multipath returns

What tests have you run and are planning to run?

  • Radar subsystem testing: The radar meets our use case requirements, being able to clearly register a human with moving arms from within 5 meters. The radar is able to do so even when the human is obstructed by glass and plastic or partially obstructed by metal chairs. However, it would be good for us to collect more radar data of moving non-human objects to test how well the radar can specifically distinguish humans from non-humans, measured by its accuracy and F1 score.
  • Integration testing: The radar and GPS modules are able to stream data directly to the Raspberry Pi via UART at 4 frames per second. To meet the use case requirements, the web app must be able to receive all of that data at the same rate via WiFi within 3 seconds of collection. This metric will be tested once the server that receives data from our system is written (a minimal latency-measurement sketch is below).
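
The latency-measurement sketch below stamps each sample when it is collected on the Pi and compares against the arrival time on the web app; it assumes both clocks are NTP-synchronized, and the field name is illustrative.

    import time

    def stamp(sample):
        # Pi side: record the collection time on each sample before sending it.
        sample["collected_at"] = time.time()
        return sample

    def latency_seconds(sample):
        # Web-app side: elapsed time since collection (should stay under 3 s).
        return time.time() - sample["collected_at"]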