Team Status Report for 2/25

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk is the radar’s limitation without the DCA1000EVM: it cannot process and stream raw radar data in real time. This requires a workaround in how the data is stored and sent, such as recording and saving a file and then streaming it afterwards, which increases the total time from data collection to wireless transmission to the base station computer to classification by the neural network. To fit the 3-second time constraint, lower-frame-rate data may need to be sent, reducing the quality of the data and possibly the F1 score of the neural network.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Changes were made to the circuit after assessing available parts, mostly concerning communication between components, and they do not affect cost. The communication between the radar and the Raspberry Pi was changed from SPI to USB 3, since the radar evaluation module has a USB port. The communication between the GPS and the Raspberry Pi was changed from I2C to UART, because in the GPS/IMU module we bought, the GPS communicates separately from the IMU, which still uses I2C. Since no buck converters were available at TechSpark and there is no estimate of when they will be restocked, a linear regulator, which is less expensive, was used to convert the 9 V battery to 5 V; this decreases battery life due to lower efficiency compared to a buck converter. Finally, instead of connecting the temperature sensor output to a transistor gate, a 0.33 V offset was simply added to the output, since the expected output range (0.3-1.8 V) falls within the 0-3.3 V accepted by the Raspberry Pi’s GPIO pins.
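As an illustration of the new GPS link, below is a minimal sketch of reading NMEA sentences from the GPS over UART on the Raspberry Pi using pyserial; the port name, baud rate, and sentence filter are assumptions for illustration, not the module’s confirmed settings.

```python
import serial  # pyserial

# Assumed port and baud rate; the real values depend on the GPS module.
with serial.Serial("/dev/serial0", baudrate=9600, timeout=1.0) as gps:
    for _ in range(10):
        line = gps.readline().decode("ascii", errors="replace").strip()
        if line.startswith("$GPGGA"):  # fix data: time, latitude/longitude, altitude
            print(line)
```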

Provide an updated schedule if changes have occurred.

One week was taken off the circuit schedule and replaced with work on preprocessing the radar data for input to the neural network.


How have you adjusted your team work assignments to fill in gaps related to either new design challenges or team shortfalls?

We are corresponding internationally with the authors of the Smart Robot drone radar dataset for clarification on preprocessing steps such as cubelet extraction of the radar data, which adds time. To compensate, we are building other parts of the project, such as the circuit, earlier, so the overall work is rearranged rather than delayed.

Angie’s Status Report for 2/25

What did you personally accomplish this week on the project? 

This week I worked on and presented the design review presentation, finalized the circuit design for the attachment, and changed how the data will be sent and preprocessed. The AWR1843 radar cannot stream real-time data without the DCA1000EVM module, but it can record and save data, so the plan changed to alternating between two files: writing data to one every two seconds while streaming the other, which gives non-real-time but acceptably fast results (see the sketch below). I also discussed with Linsey how the inputs to the neural network will be preprocessed and formatted.
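Below is a minimal sketch of this double-buffered record-and-stream loop; record_chunk and stream_file are hypothetical stand-ins for the radar’s record/save interface and the wireless link, not real SDK calls.

```python
import itertools
import threading
import time

def record_chunk(path, duration_s):
    """Stand-in: the real version would save ~2 s of raw radar frames."""
    time.sleep(duration_s)

def stream_file(path):
    """Stand-in: the real version would send the file to the base station."""

BUFFERS = ["buffer_a.bin", "buffer_b.bin"]

def capture_loop():
    streamer = None
    for i in itertools.count():
        # Stream the previously recorded file in the background while
        # the radar records the next two-second chunk to the other file.
        if i > 0:
            streamer = threading.Thread(
                target=stream_file, args=(BUFFERS[(i + 1) % 2],))
            streamer.start()
        record_chunk(BUFFERS[i % 2], duration_s=2.0)
        if streamer is not None:
            streamer.join()
```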

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I went a week ahead on the circuit schedule and a week behind on the preprocessing schedule, so I am on schedule overall. To catch up on preprocessing, I will work on processing the radar data into inputs for the neural network instead of working on the circuit.

What deliverables do you hope to complete in the next week?

Next week, I hope to obtain both the GPS and IMU so that I can finalize the plastic chassis design and 3D print it right afterwards. I also plan to do more preprocessing of the data and near-real-time wireless streaming of radar data through the Raspberry Pi.

Linsey’s Status Report for 2/25

At the start of this week, I finished the slides for the design review. To fully flesh out my approach, I read a few articles to understand the differences between CNN architectures. I have worked with 2D CNN architectures before and am most familiar with them; however, since the range-Doppler maps we will input are 3D, I concluded that a 3D CNN architecture would be the most fitting for the task. 3D CNNs differ in that the kernel slides along all three dimensions. After pinning down my architecture, it was fairly easy to find code implementing such a network (I was unable to attach the Python file, but I did want to). However, it’s important that the parameters suit the radar classification task: I found a very interesting research paper on exactly this topic, and I plan on reading it and changing the parameters of the network accordingly (currently the network is configured for classifying CT images). A minimal sketch of the 3D CNN idea appears below.

While I made progress on the architecture, I struggled with generating inputs from the dataset. We have the range-azimuth and range-Doppler data, but a range-Doppler map still needs to be constructed from it. Furthermore, the label for each training example will be a cubelet outlining the human in the frame (something I didn’t previously understand). Making both the heat maps and the cubelets will therefore be a big focus for me moving forward.

Lastly, I drafted the introduction and use case requirements sections of the design report, and added relevant details about the ML architecture to the architecture, design requirements, design trade studies, and test, verification, and validation sections.
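For concreteness, here is a minimal PyTorch sketch of a 3D CNN whose kernels slide along all three dimensions; the input shape, channel counts, and layer sizes are illustrative assumptions, not our final architecture.

```python
import torch
import torch.nn as nn

class RangeDoppler3DCNN(nn.Module):
    """Toy 3D CNN: Conv3d kernels slide over all three input dimensions."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # -> (batch, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 cubes, each 16 frames of 64x64 range-Doppler maps.
logits = RangeDoppler3DCNN()(torch.randn(4, 1, 16, 64, 64))
```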

My progress is behind. I was hoping to have the network training by now, but the input and label generation was much more complicated than I expected. However, as long as I can get the network reliably training by spring break, I think we are on track to leave ample time for integration.

By the end of this week, I hope to get the network training. The bulk of that work will be generating the inputs and the targets.


Ayesha’s Status Report for 2/25

This week I worked mostly on continuing to set up a base for the web application. I created a Django app for our site and wrote some basic HTML and CSS files for a login page. I focused on laying out each page and outlining what needs to be done for each one, such as the login page, a map tracker page, and a photo page (a minimal sketch of the routing is below). I am also deciding how the user experience should flow through the site: what should be automatically loaded or redirected, and what the user should navigate to themselves based on what they want. Next week, I will work more on implementing the actual functionality of each page. In addition, I have been working on the design review report, specifically the architecture, design requirements, design trade studies, testing, and project management sections. For the first four, I have been focusing on the front end and the specific implementation and design details for the web app. For the project management section, I am focusing on how we are splitting up our work and the timelines.
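Below is a minimal sketch of how the pages described above might be routed in Django; the URL patterns and template names are placeholders, not the final layout.

```python
# urls.py (sketch): login via Django's built-in auth view, plus
# placeholder routes for the map tracker and photo pages.
from django.contrib.auth import views as auth_views
from django.urls import path
from django.views.generic import TemplateView

urlpatterns = [
    path("login/", auth_views.LoginView.as_view(
        template_name="login.html"), name="login"),
    path("map/", TemplateView.as_view(
        template_name="map.html"), name="map_tracker"),
    path("photos/", TemplateView.as_view(
        template_name="photos.html"), name="photos"),
]
```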


My progress is on schedule. Next week I plan to request a purchase for the Google Maps API and have a base site set up so that I can work on marker functionality and style tweaks. My goal is to have all of that done by the time my teammates are ready to integrate so that I don’t have to work on both the marker functionality and the integration in parallel.

Ayesha’s Status Report for 2/18

This week I worked on setting up the GitHub repository for my web app and adapting some files to set up a basic web application for our site. I set up a login/registration page based on a previous site I made, and created placeholder pages to plan out how we want to structure our own web app. I also looked into purchasing the Google Maps API and how to integrate it into our site with the ability to save markers (a sketch of how markers might be stored is below). Furthermore, I worked on our design review slides: I laid out all of the information that needs to be added to each slide, including fleshing out a more thorough testing plan with specific metrics and more clearly defined outputs than we had previously. This gave us a clear idea of how testing will look and what defines a passing test. It took a lot of time and research but was extremely helpful in narrowing down our scope.
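As one way to support saved markers server-side, here is a hedged sketch of a Django model; the field names are assumptions for illustration.

```python
# models.py (sketch): one row per saved map marker.
from django.db import models

class Marker(models.Model):
    latitude = models.FloatField()
    longitude = models.FloatField()
    label = models.CharField(max_length=100, blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
```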

My progress is on schedule. Next week I plan to start changing the style of some pages and laying out specifics for our site. The week after, I plan to start implementing the Google Maps API. I also plan to work on the design report in the upcoming weeks.

The courses that covered the engineering, science, and math principles we used were 10-301 (Introduction to Machine Learning) and 17-437 (Web Application Development). These classes were most useful because 10-301 taught me many important machine learning principles, and 17-437 included a full web application project.

Angie’s Status Report for 2/18

This week, I acquired parts for the standalone attachment, including the Raspberry Pi and temperature sensor. Once the GPS module is ordered and arrives, all required parts for the circuit (the others were sourced from TechSpark and previous coursework) will be in hand and the whole circuit can be integrated. Literature also confirms that a suitable radome for patch antennas can be 3-D printed with PLA, a material available at CMU. The knowledge to build the circuit was learned in 18-220. I will incorporate the simple slotted radome below into our 3-D printed chassis (Karthikeya et al.).

I also confirmed that I can use the drone, but I will still build the standalone attachment for the MVP, which adds time to the schedule. Because I focused on the circuit instead of the radar data, parts of my schedule have swapped: I previously planned to build the circuit after working on processing the radar data, so I am behind schedule on data processing. Next week, I plan to finish implementing real-time data collection from the radar and generating range-Doppler maps as input to the neural network (a sketch of the map generation is below).
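As a reference point for the map generation step, here is a minimal NumPy sketch of computing a range-Doppler map from one frame of raw ADC data; the array shape, and the omission of windowing and calibration, are simplifying assumptions.

```python
import numpy as np

def range_doppler_map(adc_frame: np.ndarray) -> np.ndarray:
    """adc_frame: complex samples shaped (num_chirps, num_samples)."""
    range_fft = np.fft.fft(adc_frame, axis=1)           # fast time -> range
    doppler_fft = np.fft.fftshift(
        np.fft.fft(range_fft, axis=0), axes=0)          # slow time -> Doppler
    return 20 * np.log10(np.abs(doppler_fft) + 1e-12)   # magnitude in dB
```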

Team Status Report for 2/18

A high-risk factor for this project is training the network to completion (a very time-consuming task) and then testing it with the radar data, only for it to fail due to differences between the training and test data. To mitigate this risk, Linsey spoke with researchers in Belgium this week to better understand the data we are training the network on. We learned that we must construct range-Doppler maps from our radar data in order to improve image resolution and successfully detect humans. By learning from these researchers, we can make our data better for the network and thus better for detecting humans. There aren’t currently any contingency plans in place. Because Angie has already started collecting data using the radar and Linsey has confirmed the dataset, we will soon be able to compare our own range-Doppler maps with the dataset’s maps to ensure a smooth integration process on this end.

We added a temperature sensor and speaker to our design. Since our use case is reaching areas where traditionally used infrared can’t (i.e., fire SAR missions), it is extremely important that our drone attachment can withstand high temperatures: fires can measure around 600 degrees Celsius, and we know the plastic chassis and radar will start deteriorating at 125 degrees Celsius. To prevent this, the temperature sensor will alert the user through our web application when the temperature reaches 100 degrees Celsius (sketched below). On the victim side of our application, the speaker will emit a loud beeping noise; by making victims aware of the device’s presence, it cues them to wave their arms, which helps our system detect them more easily via the Doppler shift. Together, the temperature warning makes the device more user friendly and helps ensure its functionality, and the beeping improves detection by alerting victims that the device is there. The cost of the temperature sensor and speaker is very low and will have no impact on our budget.
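A minimal sketch of the alert logic on the Raspberry Pi follows; read_temperature_c, notify_web_app, and beep_speaker are hypothetical stand-ins for the sensor read-out, the web application notification, and the speaker driver.

```python
DETERIORATION_C = 125    # chassis and radar begin deteriorating here
ALERT_THRESHOLD_C = 100  # warn with a safety margin below that point

def read_temperature_c():
    """Stand-in for reading the temperature sensor."""
    return 25.0

def notify_web_app(temp_c):
    """Stand-in for pushing a warning to the web application."""

def beep_speaker():
    """Stand-in for driving the speaker."""

def check_temperature():
    temp_c = read_temperature_c()
    if temp_c >= ALERT_THRESHOLD_C:
        notify_web_app(temp_c)  # user-side warning
        beep_speaker()          # victim-side audible cue
```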

No changes have occurred to the schedule.

To develop our design, we employed the following engineering principles: usability, ethics, and safety.

Linsey’s Status Report for 2/18

This week I planned to develop the machine learning architecture. I started by loading our chosen dataset and examining its structure. Upon this analysis, I found several confusing points that weren’t answered in the dataset’s readme file, so I emailed the authors of the paper “Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach” (the paper on which we’re basing our approach), and I have since been corresponding with them. They pointed out that the dataset didn’t include range-Doppler signatures, which we absolutely have to train on to successfully detect moving humans, and they pointed me to another dataset that is much more suitable. The authors also attached 4 extremely helpful research papers on drones and using mmWave radar to detect humans. I read and reviewed each of them and learned two very important things. First, due to the low resolution of radar data, it will be necessary to construct range-Doppler maps to gain more information and achieve higher resolution, both of which will in turn aid in detecting humans. Second, although I knew I wanted to implement a CNN architecture, one of the papers, “Radar-camera Fusion for Road Target Classification,” pointed out a high-performing CNN architecture that I would like to implement.

Because I spent this week understanding and laying out the architecture, I wasn’t able to get to implementation, which puts me a week behind schedule. However, I think this time was very well spent, because my progress going forward now feels a lot more structured. By implementing the network this week, I think our progress will still be on track.

By the end of this week, I would like to get the network training. Since I am using a CNN architecture on high-dimensional training data, this process, including hyperparameter tuning, will take a long time. Therefore, it’s very important that I start early.

Before this project, I had only been exposed to radar briefly in 18-220. To learn more about radar and its collected data, I read the 4 research papers recommended by the authors of the aforementioned paper. For the ML architecture, I am a machine learning minor and have implemented my own CNNs in Intermediate Deep Learning (10-417).

Angie’s Status Report for 2/11

After consulting with Akarsh Prabhakara from CyLab about our project, we received an AWR1843Boost radar that we could immediately test with. I also consulted Professor Sebastian Scherer from AirLab about drones to use for the project, and we received relevant literature about detecting vital signs with drone-based radar. Before capturing any data that will be used for training, I tested basic functionality by setting up an indoor scene with a 3 m by 3 m cardboard wall and recording a moving human (me) in front of and behind it. Below are the point clouds obtained using constant false alarm rate (CFAR) detection, and range-Doppler plots clearly indicating the moving human both in front of and behind the cardboard obstruction. Our progress is on schedule. Next week, I hope to finalize the drone situation, which will inform the decision of whether to order parts. I will continue to collect data and also work to isolate a human’s radar signature from moving background clutter.
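For reference, here is a minimal sketch of the cell-averaging variant of CFAR (CA-CFAR) in one dimension; the guard/training window sizes and threshold scale are illustrative, not the values used on the AWR1843.

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=4, scale=3.0):
    """Flag cells whose power exceeds a scaled local noise estimate.

    power: 1D array of per-cell power values.
    """
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    half = num_train + num_guard
    for i in range(half, n - half):
        lead = power[i - half : i - num_guard]         # training cells before
        lag = power[i + num_guard + 1 : i + half + 1]  # training cells after
        noise = (lead.sum() + lag.sum()) / (2 * num_train)
        detections[i] = power[i] > scale * noise
    return detections
```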