Team Status Report for 4/29

A significant risk is that the ML architecture, upon retraining, will not reach an F1 score of 0.7. However, the last time we added data, the F1 score jumped by 0.17. Our current best F1 score is 0.5, so we hope to reach an F1 score of at least 0.67. Another risk is that the hardware does not integrate well with the RPi. However, the temperature sensor is fully integrated, the GPS sensor is integrated, and the speaker is on its way to integration.

One change was made to the system design: to narrow our application for the demo, instead of using the IMU sensor data to calculate when a radar frame should have inference performed on it, we will use a key press to start inference. This has no impact on cost.

Here are Ayesha and I testing the temperature sensor connected to the RPi. We used a hairdryer to raise the temperature and watched the ambient thermometer reading and the temperature sensor reading increase accordingly.

Testing Details:

  • Machine learning architecture: We unit tested on 600 held-out test samples (collected from diverse scenes, roughly half containing humans and half not), measuring the resulting F1 score and accuracy. We also recorded the network's inference time.
  • Hardware
    • Temperature sensor: We connected it to the RPi and observed the sensor's output. By comparing the sensor's room-temperature readings to an ambient thermometer's readings, we confirmed that the temperature sensor was working at baseline. Using the hairdryer, we saw the readings of both the temperature sensor and the ambient thermometer increase.
    • Radar: We tested real-time data acquisition at 5 Hz on the laptop and connected the radar to the RPi, but have not yet tested real-time data acquisition over WiFi.
    • GPS/IMU sensor: We connected it to the RPi, logged 4927 location fixes, and compared them to the actual stationary location. The data is precise enough for our updated use case, with a standard deviation of 1.5 m, but the readings are offset by about 20 miles, so compensation is required to output an acceptable location.
  • Web application: We measured that the website updates in ~100 ms. Through HTTP requests, we also verified that the web application is able to receive formatted data.
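For reference, the F1 metric used in the machine-learning unit tests can be computed from counts of true/false positives and negatives; a minimal sketch (the function and variable names here are our own, not part of the test harness):

```python
def f1_score(y_true, y_pred):
    """Compute F1 for binary human (1) / no-human (0) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives means precision or recall is 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

In practice a library implementation (e.g. scikit-learn's `f1_score`) would be used on the 600 held-out samples; the sketch just makes the metric explicit.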

Findings:

  • The GPS is currently ~20 miles off, so we may need to apply an offset to get accurate readings from the sensor.
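A minimal sketch of how such an offset correction might work, assuming a batch of fixes logged at a known stationary point (the function names and sample coordinates below are hypothetical, not our actual calibration code):

```python
def compute_offset(logged_fixes, true_location):
    """Average (lat, lon) error of logged GPS fixes against a known
    stationary ground-truth point."""
    n = len(logged_fixes)
    mean_lat = sum(lat for lat, _ in logged_fixes) / n
    mean_lon = sum(lon for _, lon in logged_fixes) / n
    return mean_lat - true_location[0], mean_lon - true_location[1]

def correct_fix(fix, offset):
    """Subtract the calibrated offset from a raw GPS fix."""
    return fix[0] - offset[0], fix[1] - offset[1]
```

The correction assumes the ~20-mile error is a constant bias; if it drifts, the offset would need periodic recalibration.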

Team Status Report for 4/22

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risks that could jeopardize the success of the project are:

  • The system may not be effective at distinguishing humans from non-humans, which manifests as a high rate of false positives and false negatives. This risk can be managed by changing the type of data that is input and how it is preprocessed; in previous weeks, both have been changed, most notably by increasing the velocity resolution.
  • High latency when transmitting system data wirelessly to the web app. Although full-resolution radar data is crucial during each transmission, the data rates of the GPS and temperature data can be reduced to lower latency, and the location is estimated using a Kalman filter.
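As a rough illustration of the Kalman-filter location smoothing mentioned above, here is a minimal 1-D sketch (one filter per coordinate, constant-position model); the class name and noise variances are our own illustrative choices, with the measurement variance loosely based on the observed 1.5 m standard deviation:

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter for a nearly stationary coordinate.
    q: process noise variance, r: measurement noise variance (illustrative)."""

    def __init__(self, q=1e-4, r=2.25):  # r ~ (1.5 m)^2
        self.q, self.r = q, r
        self.x = None  # state estimate
        self.p = 1.0   # estimate variance

    def update(self, z):
        if self.x is None:          # initialize from the first measurement
            self.x = z
            return self.x
        self.p += self.q                  # predict: variance grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward the measurement
        self.p *= (1 - k)
        return self.x
```

A full implementation would track latitude and longitude jointly and could fuse IMU data, but the scalar version captures the idea of trading off measurement noise against the estimate.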

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

  • The dimensions of the input data were changed so that the Doppler resolution was doubled and the range was halved. This change was necessary to reduce latency while providing higher-quality data: the time spent waiting to collect data beyond 5 m reduces the frequency at which data can be sent, and that data is extraneous anyway.
  • The doubled Doppler resolution provides finer details that can help identify a human (such as strong returns from the movement of hands and weaker returns from individual fingers). Additionally, the input data is preprocessed to reduce noise in the range-Doppler map, which is expected to improve the accuracy of the neural network, since the noise is less likely to be erroneously identified as a human.
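The report does not spell out the exact preprocessing, but one simple form of noise reduction on a range-Doppler map is thresholding cells against an estimated noise floor; the sketch below shows that idea (the specific rule, function name, and parameters are illustrative, not our actual pipeline):

```python
import numpy as np

def denoise_rd_map(rd_map, n_sigma=3.0):
    """Zero out range-Doppler cells whose magnitude falls below
    mean + n_sigma * std of the map, a crude noise-floor threshold."""
    mag = np.abs(rd_map)
    floor = mag.mean() + n_sigma * mag.std()
    out = rd_map.copy()
    out[mag < floor] = 0  # suppress cells indistinguishable from noise
    return out
```

Real radar pipelines often use CFAR (constant false alarm rate) detection instead, which adapts the threshold per cell; the global threshold here is just the simplest stand-in.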

Team Status Report for 4/8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk that could jeopardize the success of the project is running into issues with training, since our current dataset may not produce an algorithm that works for our use case. We managed this risk by collecting ~1000 frames of our own data for training purposes, and we will continue to collect more data as the model trains to ensure that our use case is addressed. Specifically, we will try to collect data outdoors and in fog (from fog machines), since this is what our use case aims to address. In the meantime, we are integrating the other hardware and software components, such as the temperature sensors and GPS with the frontend, so that the remaining features can be ready for final integration and testing.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The most significant change, as mentioned above, was going back to the original radar we had chosen instead of the green board. Besides that, we have not made any significant changes to our system design. The costs have been mitigated by collecting more data this week with our original radar: about 1000 frames of various scenes (static, one person, one person waving arms, two people, obstructions, etc.) from various distances. Now we can speed up the training process by using this data to train the ML algorithm. We are also mitigating costs by integrating other parts, such as testing and integrating the GPS/IMU sensors and the temperature sensors with the web application, so that by the time training is done, we can integrate the software portions and finish.

Now that you are entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify that your contribution to the project meets the engineering design requirements or the use case requirements?

We are planning to run tests after our model is trained by using the radar to capture a live feed and comparing the detection accuracies to those on the previously gathered data. Since classification is not our primary concern, our entire testing plan is focused on the accuracy of human detection in the radar images. Similar to how we gathered data, we will attach our radar to a Swiffer to test it at various heights, and run tests on static scenes, humans breathing, humans waving arms, obstructed humans, and humans interacting with each other, all from various distances, heights, and angles to exercise the Doppler and azimuth axes. We will analyze these results against the anticipated measures by comparing the detection accuracy percentages, the F1 score of the algorithm, and the latency of displaying the detection results on the web application.

Team Status Report for 4/1

The most significant risk that could jeopardize this project is collecting enough data of moving humans with our own radar to train the machine learning architecture. On Friday, we met to test the radar. Angie brought the newer radar in hopes of getting the green board to work. However, it refused to work with her computer. We will instead meet tomorrow to test the older radar on moving humans. Angie will visualize the corresponding Doppler effect on her computer. Not only will this prepare us for the interim demo, it will also ensure that the collected radar data is suited for our machine learning architecture. After this assurance, Angie will be able to send Linsey the radar data, so that she can start training the architecture on our data as soon as possible.

Another risk in general is integration timeline. We spoke about this as a team and assured ourselves that we have enough time to integrate components. Ayesha and Linsey have already initiated integration of the web application frontend and the machine learning architecture. Ayesha installed the Django REST API that will connect the two components, and together they’ll migrate their code to the same location for integration.

There were no system design changes. Just to clarify, we plan on using the older radar for our project, because it works much better with our computers, and at this point in the project, we need something reliable.

Team Status Report for 3/25

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

As we begin to test our system more extensively, a significant risk that could jeopardize the success of the project is damage to the system during testing, which is especially concerning since we use borrowed components that cost over $1000 in total. Even when the components (other than the temperature sensor) are not exposed to high temperatures, environmental conditions may hasten damage such as corrosion of the antenna, which is visible on the AWR1642 radar module that came with the green board. To prevent the same from happening to the AWR1843, we have chosen to enclose the system in a radome when testing in high-moisture conditions such as fog. As a contingency plan, we have two radar modules available in case one fails.

Another risk is the dataset not being sufficient to train the neural network to detect a human. Right now, the neural network has only been trained on the publicly available Smart Robot dataset, which detects a corner reflector, which has a very different radar signature compared to a human. To mitigate this risk, our contingency plan is to train the neural network on our own dataset of 3D range-Doppler-azimuth data of actual humans, which we are collecting continually throughout the semester.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

No changes were made this week to the existing design of the system.

Team Status Report for 3/18

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risks we currently face come from image capture. This process will take the longest because there are many parameters to tune, and all of the adjustments that need to be made will add time. The associated risk is delaying our integration: specifically, we need the radar images to test our machine learning algorithm and start attempting human detection. This risk is being managed by running the training and all other parts of the project in parallel with image capture. The software components are being developed now so that they are fully ready by the time the images are ready for training. Switching the radar is one contingency already in place to help capture images better.


Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

As mentioned in Angie’s status report from last week, the main change made to the existing design was the radar. We have switched back to the original radar, which became available from someone in CyLab that Angie had been communicating with; they also left a green board for us, which helped steer our design choice. Another design element that changed is the maps API: instead of the Google Maps API, we will use the HERE Maps API because there were too many payment issues with Google Maps. HERE has the same marker functionality that we wanted from Google Maps, so it is still a very usable API for us, and it has been used by past projects so we know it is doable. The last change is that we are deploying the machine learning algorithm on EC2 instead of locally because of space constraints. There are no extra costs incurred.


Provide an updated schedule if changes have occurred.

No changes have occurred.

Team Status Report for 3/11

A significant risk we identified was the real-time functionality of our system, specifically the radar. We found that our current radar module didn’t support real-time operation, but that using a different radar module (also from CyLab) would be more feasible. This radar is already procured, so this risk has been mitigated.

The radar change was necessary to support the real-time functionality of our system, driven by the use case requirement that our device help in time-pressured search and rescue situations. No additional cost will be incurred, because the radar is provided courtesy of CyLab. Because this radar captures the same data (range-Doppler and range-azimuth coordinates), integration with the machine learning architecture will be unchanged.

The new radar is certainly a new tool to learn. For the integration of the machine learning architecture with the web application, we identified the Django REST API as a good fit because it supports Python programs, which is how the machine learning architecture will be implemented. Lastly, while writing the design report, we realized that there was no clear delineation of when the radar would use the captured data, perform inference, and identify a human. Therefore, we established that we will use the IMU data to determine when the drone has zero horizontal acceleration and is upright, and subsequently perform inference on the data by running the machine learning architecture.
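The IMU trigger condition described above can be sketched as a simple check on the accelerometer reading (the function name, thresholds, and axis convention here are illustrative assumptions, not a finalized spec):

```python
def ready_for_inference(accel, gravity=9.81, horiz_tol=0.3, upright_tol=0.5):
    """Decide whether to run inference on the current radar frame.

    accel: (ax, ay, az) in m/s^2 from the IMU, with z pointing up.
    Triggers when horizontal acceleration is ~zero and the vertical
    axis reads ~1 g, i.e. the drone is hovering upright.
    """
    ax, ay, az = accel
    horizontal_ok = (ax ** 2 + ay ** 2) ** 0.5 < horiz_tol   # ~no lateral motion
    upright_ok = abs(az - gravity) < upright_tol             # ~level attitude
    return horizontal_ok and upright_ok
```

In practice the raw IMU samples would likely need low-pass filtering before this check, since accelerometer readings on a drone are noisy.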

Team Status Report for 2/25

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

A risk that could jeopardize the success of the project is the radar’s limitation, without the DCA1000EVM, in quickly processing and streaming raw radar data in real time. This requires a workaround in how the data is stored and sent, such as recording and saving a file and then streaming the data afterwards, which increases the total time from data collection, to wireless transmission to the base station computer, to classification by the neural network. In order to fit the time constraint of 3 seconds, lower-frame-rate data may need to be sent, reducing the quality of the data and possibly reducing the F1 score of the neural network.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Changes were made to the circuit after assessing available parts, mostly concerning communication between components, which does not affect cost. The communication between the radar and the Raspberry Pi was changed from SPI to USB 3, since the radar evaluation module has a USB port. The communication between the GPS and the Raspberry Pi was changed from I2C to UART, because in the GPS/IMU module we bought, the GPS communicates separately from the IMU, which still uses I2C. Since no buck converters were available at TechSpark and there was no estimate of when they would be restocked, a linear regulator, which is less expensive, was used to convert the battery’s 9 V to 5 V; this decreases battery life due to lower efficiency compared to a buck converter. Instead of connecting the temperature sensor output to a transistor gate, a 0.33 V offset was simply added to the output voltage, for simplicity, since the range of expected output voltages (0.3-1.8 V) falls within the voltages accepted by the Raspberry Pi’s GPIO pins (0-3.3 V).

Provide an updated schedule if changes have occurred.

One week was taken off the circuit schedule and replaced with work on preprocessing the radar data for input to the neural network.


How have you have adjusted your team work assignments to fill in gaps related to either new design challenges or team shortfalls?

Since we are communicating internationally with the authors of the Smart Robot drone radar dataset for clarification on preprocessing steps, such as cubelet extraction from the radar data, which adds time, we are building the other parts of the project earlier, such as the circuit, so that the overall work is rearranged rather than delayed.

Team Status Report for 2/18

A high risk factor for this project is training the network to completion (a very time consuming task) and then testing it with the radar data only for it to not work due to the differences in training and test data. To mitigate this risk, Linsey spoke with researchers in Belgium this week to better understand the data we are training the network on. We learned that we must construct range-Doppler maps from our radar data in order to improve image resolution and successfully detect humans. By learning from these researchers, we can make our data better for the network and thus better for detecting humans. There aren’t currently any contingency plans in place. Because Angie has already started collecting data using the radar and Linsey has confirmed the dataset, we will be able to soon compare our own range-Doppler maps and the dataset’s maps to ensure a smooth integration process on this end.

We added a temperature sensor and speaker to our design. Since our use case is reaching areas where traditionally used infrared can’t (i.e., fire SAR missions), it’s extremely important that our drone attachment can withstand high temperatures, since fires can measure around 600 degrees Celsius. We know that the plastic chassis and radar will start deteriorating at 125 degrees Celsius. To stop this from happening, our temperature sensor will alert the user when the temperature reaches 100 degrees Celsius. This alert will be shown through our web application. On the victim side of our application, the speaker will emit a loud beeping noise. By making victims aware of the device’s presence, they can be cued to wave their arms, which will help our system detect them more easily via the Doppler shift. The temperature warning system will make our device more user friendly and help ensure the functionality of our device. The beeping noise also helps our device function better by alerting victims that it’s there. The cost of the temperature sensor and speaker is very low and will have no impact on our budget.

No changes have occurred to the schedule.

To develop our design, we employed the following engineering principles: usability, ethics, and safety.

Team Status Report for 2/11

The most significant risks that could jeopardize the success of the project are related to integration. The first one will be gathering meaningful images to perform the ML algorithms on. As of now, we have acquired the radar and plan to begin image capturing within the next two weeks. With this comes the challenge of figuring out how to best position and use the radar so that the images it captures can be used with the ML algorithms we train. We will be using a dataset of standstill drone image captures to train our model, but until we begin image capture with the radar, the radar image quality is still a large risk that could delay the integration of the software with the hardware if the radar images are significantly different than the dataset images. These risks are being managed by starting the radar image capture as early as possible (i.e. within the next two weeks), since the ML training process will not be significantly far along before we start image capture. Therefore, we have allotted time to examine the radar captures together and ensure that they work with our dataset. In addition, we have looked into other radar image datasets and sources to find these datasets in case we find that our dataset is drastically different in comparison to our radar images.

One change we made to the existing design of the system is that we narrowed our project scope down to fire search and rescue missions. While we were planning on doing search and rescue missions that did not involve metal, since that would interfere with the radar, we did not explicitly narrow down the scope further than that. We received feedback from our TA that our use case scope was not extremely clear in our presentation and that narrowing it would be very helpful in order to make clearer goals for ourselves and allow us to come up with a more specific testing plan. This change incurs no extra costs, since it allows us to create more specific plans going forward and narrow down our needs. In addition to this change, we also added the creation of a 3D-printed chassis to encapsulate our device and have it rest on the drone legs. We had not previously included this in our project spec, but we needed something that would safely keep our entire device together and allow it to attach to any drone that could hold its weight. This did not incur many extra costs, since Angie has experience with 3D printing and was confident she would be able to create this chassis with ease. We had to allot one week to design and print this chassis to hold our radar and Raspberry Pi, which will occur once we acquire the Raspberry Pi, since we have already acquired the radar. This did not add extra time, since it can be done in parallel with many of the other tasks and does not have many dependencies.

As of now we are on schedule with our project, and plan to stay on track with our plans for the next few weeks.

Our project includes considerations for public health and safety concerns because of our use case. Our project is designed to help first responders stay safe by limiting the amount of time they are exposed in high danger areas. Our project also focuses on improving the efficiency and cost of search and rescue missions by using an mmWave radar. Currently, infrared sensors are more commonly used but can provide unclear results due to the flames. Since our radar’s waves would not be blocked by the fire, our project should allow for better human detection in fire, and thus help save more people.