Ayesha’s Status Report for 4/1

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I spent a lot of time reading about the REST framework to integrate the web app with the ML algorithm. I installed the framework into the web application and have been working on moving my files over to the ECE machines, since that is the only place with enough storage to run the ML algorithm. Understanding how the framework would allow the two software portions to integrate took a lot of time, because there is a very specific structure to follow and I had to make sure the framework would work with the model we implemented. I also worked with my team to gather data for training the ML algorithm and testing the radar image capture functionality. We met up and attached our radar to a Swiffer handle so that we could hold it up at a height of about 5 meters; the goal was to test the capture abilities of the radar while one of us lay on the ground and waved our arms to be detected via the Doppler shift. Unfortunately, the radar would not connect to the computer, so we will instead meet on Sunday to redo this using the radar that is with Angie.
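Since the exact endpoint structure isn't finalized yet, here is a rough sketch of the JSON contract the REST integration might use between the ML side and the web app. All of the names here (`build_detection_response`, the `"detections"` field, the box format) are hypothetical placeholders, not our actual schema:

```python
import json

def build_detection_response(boxes, confidences):
    """Package ML detections into the JSON body a REST endpoint might return.

    Hypothetical schema: each detection is a bounding box plus a confidence.
    """
    if len(boxes) != len(confidences):
        raise ValueError("each box needs a confidence score")
    return json.dumps({
        "detections": [
            {"box": [float(v) for v in box], "confidence": float(conf)}
            for box, conf in zip(boxes, confidences)
        ]
    })

def parse_detection_response(body):
    """What the web-app side would do with the response before rendering."""
    data = json.loads(body)
    return [(d["box"], d["confidence"]) for d in data["detections"]]
```

Agreeing on a contract like this early should make it easier to develop the two sides in parallel before the real endpoint exists.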

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Personally, my progress is on schedule, but I need to spend more time helping gather data so that I can begin integrating; otherwise, all of the parts and the integration will fall behind.

What deliverables do you hope to complete in the next week?

In the next week, I hope to gather more radar data so that the machine learning algorithm can train better. I also hope to finish integrating the ML model with the web app so that the software works cohesively. Lastly, as a stretch goal, if all goes well with capturing data, I hope to work with Angie on getting the GPS data so that I can start working on the marker display functionality, since I will have to hardcode that data until I have a real input source.

Ayesha’s Status Report for 3/25

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I accomplished everything I wanted to. I integrated the HERE Maps API into my project and tested the map display and marker addition functionalities. I spent a lot of time reading about how HERE works and how to use its different features, as well as adapting it to the code I already had. I also implemented the zoom and scroll features on the map. Here is a picture displaying the map with an example marker. In addition, I spent a lot of time adjusting the style to make sure that the map fit the layout I had set up on the website.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is now on schedule, which is great.


What deliverables do you hope to complete in the next week?

In the next week, I hope to integrate the web app with the ML portion so that the detection can be displayed where the blue box currently is.

Linsey’s Status Report for 3/25

Although I spent a lot of time on the project this week, I am frustrated because I didn't make any progress. After trying many times unsuccessfully to migrate my code to AWS, I realized that for it to work I would have to use a tier that costs money. I then learned that this course doesn't provide AWS credits. Therefore, Tamal pointed me toward the ECE machines and lab computers. I first tried the ECE machines, because I can ssh into them remotely, and I accomplished this. However, the size of the data I am working with is too large for the Andrew directories, so it is necessary to use the AFS directories. After reading the guide online, I wasn't able to successfully change directories. I emailed IT services; they responded with some help that didn't work for me, and they haven't responded to my most recent follow-up, which leaves me with the option of working in person on the lab computers. Tomorrow, I plan on trying this option.

My progress is behind. This is not at all where I planned on being. By working in the lab tomorrow, I plan on getting the architecture training and finally catching up.

This week, my deliverables will include a trained architecture and integration with at least one of the subsystems (either the frontend or the hardware).


Linsey’s Status Report for 3/18

This week I worked to migrate my code to AWS. Previously, making the training and target datasets locally was working fine. However, when I tried to concatenate those two datasets and split them into training and validation sets, it stopped working locally because I ran out of memory. Therefore, I migrated everything to AWS to overcome those issues. However, this has proven much more difficult than I thought. I spent hours trying to fix the ssh pipeline in VS Code and installing the necessary packages in the AWS environment. The ssh connection was finicky and sometimes wouldn't connect at all; fixing that took a lot of Stack Overflow threads and Git forums. Once I got into the AWS environment, everything was successfully copied over: the code files and the data. However, every time I tried to install PyTorch in the AWS environment, it would disconnect me. Additionally, once it logged me out of the ssh window, it wouldn't let me back in a second time. I've looked at many pages on this and am still struggling to get it to work.
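As a side note on the local memory issue: if the two dataset halves can be indexed lazily, PyTorch's `ConcatDataset` and `random_split` can combine and split them without copying everything into one big array. A minimal sketch, using small placeholder tensors in place of our real data:

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, random_split

# Placeholder datasets standing in for the two halves of our real data.
part_a = TensorDataset(torch.randn(60, 3), torch.zeros(60))
part_b = TensorDataset(torch.randn(40, 3), torch.ones(40))

full = ConcatDataset([part_a, part_b])  # no copy; indexes into the parts lazily
n_train = int(0.8 * len(full))
train_set, val_set = random_split(
    full,
    [n_train, len(full) - n_train],
    generator=torch.Generator().manual_seed(0),  # reproducible split
)
```

Whether this avoids the memory blow-up depends on how the underlying datasets load their samples, but it at least defers any copying to `DataLoader` time.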

My progress is behind. I am very frustrated by the AWS migration. I hope I can figure this out soon, because after that, running and training the architecture will be very simple.

By the end of the week, I hope to have successfully run the architecture on AWS and started integration with either the web application or radar.

Linsey’s Status Report for 3/11

I helped put together the design review report for its submission deadline and continued working on the machine learning architecture. For the report, I wrote the more general sections, like the abstract, introduction, use case requirements, and related work. Much of this was expanding upon our presentation, but I carefully examined the requirements, made sure to hit all the discussion points, and reviewed all of these sections with my teammates. Additionally, in the other sections, I wrote material pertaining to the machine learning architecture. Again, I built off of the presentation. I did have to organize which of my references pertained to which parts of the architecture and really understand exactly how my part integrates with Angie's and Ayesha's. I also drew out a whole-system diagram for the report. For the machine learning architecture, with Angie's help I labeled each data piece so that they all have targets. Therefore, I was able to generate train and test sets. I started testing the architecture that I had written out with the training data, but I am working through dimension errors. This will take some shape examination but hopefully shouldn't be much of a barrier to further progress.

I am slightly behind, because I wanted to have the machine learning architecture fully training at this point. However, I truly took a break for spring break and will be back on track after overcoming the dimension errors.

I hope to deliver the machine learning architecture fully trained by the end of this week.

Linsey’s Status Report for 2/25

At the start of this week, I finished up the slides for the design review. To fully flesh out my approach, I read a few articles to understand the differences between CNN architectures. I have worked with 2D CNN architectures before and am most familiar with them. However, since the range-Doppler maps that we will input are 3D, I found that a 3D CNN architecture would be the most fitting for the task. I learned that 3D CNNs differ in that the kernel slides along all three dimensions. After pinning down my architecture, it was fairly easy to find code that implemented this kind of network (I was unable to attach the Python file, but I did want to). However, it's important that the parameters are relevant to the radar classification task. I found a very interesting research paper on exactly this topic, and I plan on reading it and changing the parameters of the network accordingly (currently the network is for classifying CT images). While I made progress on the architecture, I struggled with generating inputs from the dataset. We have the range-Azimuth and range-Doppler data, but from that a range-Doppler map needs to be constructed. Furthermore, the label for each training data piece will be a cubelet outlining the human in the frame (this is something I didn't previously understand). Therefore, making both the heat maps and the cubelets will be a big focus for me moving forward. Lastly, I drafted the introduction and use case requirements sections of the design report. I also added relevant details about the ML architecture to the architecture, design requirements, design trade studies, and test, verification, and validation sections.
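To make the sliding-in-three-dimensions idea concrete, here is a minimal PyTorch sketch of a 3D CNN. The channel counts, the input shape, and the two-class output are illustrative assumptions for this sketch, not the tuned parameters from the paper:

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Toy 3D CNN: Conv3d kernels slide along depth, height, and width."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # 3x3x3 kernel
            nn.ReLU(),
            nn.MaxPool3d(2),                             # halves all 3 dims
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # -> (B, 16, 1, 1, 1)
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = Tiny3DCNN()
# Batch of 4 single-channel "cubes" of shape (depth=16, height=32, width=32).
out = model(torch.randn(4, 1, 16, 32, 32))
```

The real network will need its layer parameters retuned from the CT-image defaults, as described above, but the overall structure should look like this.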

My progress is behind. I was hoping to get the network training by now, but the input and label generation was much more complicated than I expected. However, as long as I can get the network reliably training by spring break, I think we are on a good track for leaving an ample amount of time for integration.

By the end of this week, I hope to get the network training. The bulk of that work will be generating the inputs and the targets.


Ayesha’s Status Report for 2/25

This week I worked mostly on continuing to set up a base for the web application. I created a Django app for our site and wrote some basic HTML and CSS files to set up a login page. This week, I focused more on laying out each page and outlining what needs to be done for each one, such as the login page, a map tracker page, a photo page, etc. I am also working on deciding how I want the user experience to flow through the website, such as what should be automatically loaded or redirected and what the user should have to navigate to themselves based on what they want. Next week, I will work more on implementing the actual functionality for each page. In addition, I have been working on the design review report, specifically the architecture, design requirements, design trade studies, testing, and project management sections. For the first four, I have been focusing on the front end and the specific implementation and design details for the web app. For the project management section, I am focusing on how we are splitting up our work and the timelines.


My progress is on schedule. Next week I plan to request a purchase for the Google Maps API and have a base site set up so that I can work on marker functionality and style tweaks. My goal is to have all of that done by the time my teammates are ready to integrate so that I don’t have to work on both the marker functionality and the integration in parallel.

Team Status Report for 2/18

A high-risk factor for this project is training the network to completion (a very time-consuming task) and then testing it with the radar data, only for it to not work due to differences between the training and test data. To mitigate this risk, Linsey spoke with researchers in Belgium this week to better understand the data we are training the network on. We learned that we must construct range-Doppler maps from our radar data in order to improve image resolution and successfully detect humans. By learning from these researchers, we can make our data better for the network and thus better for detecting humans. There aren't currently any contingency plans in place. Because Angie has already started collecting data using the radar and Linsey has confirmed the dataset, we will soon be able to compare our own range-Doppler maps with the dataset's maps to ensure a smooth integration process on this end.
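For reference, a range-Doppler map is typically built from one frame of radar samples with two FFTs: one across the fast-time samples within each chirp (giving range bins) and one across the chirps (giving Doppler bins). A rough NumPy sketch with illustrative sizes; our radar's actual frame layout and scaling may differ:

```python
import numpy as np

n_chirps, n_samples = 64, 128
# Stand-in for one radar frame: rows are chirps, columns are fast-time samples.
frame = np.random.randn(n_chirps, n_samples)

range_fft = np.fft.fft(frame, axis=1)        # fast-time axis -> range bins
doppler_fft = np.fft.fftshift(               # slow-time axis -> Doppler bins,
    np.fft.fft(range_fft, axis=0), axes=0    # shifted so zero Doppler is centered
)
rd_map = 20 * np.log10(np.abs(doppler_fft) + 1e-12)  # magnitude in dB
```

Comparing maps produced this way from our own radar against the dataset's maps is exactly the consistency check described above.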

We added a temperature sensor and speaker to our design. Since our use case is reaching areas where traditionally used infrared can't (i.e., fire SAR missions), it's extremely important that our drone attachment can withstand high temperatures, since fires can measure around 600 degrees Celsius. We know that the plastic chassis and radar will start deteriorating at 125 degrees Celsius. To stop this from happening, our temperature sensor will alert the user when the temperature reaches 100 degrees Celsius. This alert will be shown through our web application. On the victim side of our system, the speaker will emit a loud beeping noise. By making victims aware of the device's presence, they can be cued to wave their arms, which will help our system detect them more easily via the Doppler shift. The temperature warning system will make our device more user-friendly and help ensure its functionality. The beeping noise also helps our device function better by alerting victims that it's there. The cost of the temperature sensor and speaker is very low and will have no impact on our budget.

No changes have occurred to the schedule.

To develop our design, we employed the following engineering principles: usability, ethics, and safety.

Linsey’s Status Report for 2/18

This week I planned on developing the machine learning architecture. I started by loading up our chosen dataset and examining its structure. Upon this analysis, I found several confusing points that weren't answered in the dataset's readme file. I then emailed the authors of the paper "Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach" (the paper on which we are basing our approach) and have since been corresponding with them. They pointed out that the dataset didn't include range-Doppler signatures, which we absolutely have to train on to successfully detect moving humans, and they pointed me to another dataset which is much more suitable. The authors also attached 4 extremely helpful research papers that spoke more about drones and using mmWave radar to detect humans. I read and reviewed each of those and learned two very important things. First, due to the low resolution of radar data, it will be necessary to construct range-Doppler maps to gain more information and achieve higher resolution, both of which will in turn aid in detecting humans. Second, although I knew I wanted to implement a CNN architecture, one of the papers, "Radar-camera Fusion for Road Target Classification", pointed out a high-performing CNN architecture that I would like to implement.

Because I spent this week understanding and laying out the architecture, I wasn’t able to get to implementation. This puts me a week behind schedule. However, I think this time was very well spent, because my progress going forward now feels a lot more structured. Although I’m behind schedule, I think by implementing the network this week, our progress will still be on track.

By the end of this week, I would like to get the network training. Since I am using a CNN architecture on high dimensional training data, this process will take a long time, including tuning hyperparameters. Therefore, it’s very important I get this started early.

Before this project, I had only been exposed to radar briefly in 18-220. To learn more about radar and its collected data, I read the 4 research papers recommended by the authors of the aforementioned paper. For the ML architecture, I am a machine learning minor and have implemented my own CNNs in Intermediate Deep Learning (10-417).

Team Status Report for 2/11

The most significant risks that could jeopardize the success of the project are related to integration. The first is gathering meaningful images to perform the ML algorithms on. As of now, we have acquired the radar and plan to begin image capture within the next two weeks. With this comes the challenge of figuring out how to best position and use the radar so that the images it captures can be used with the ML algorithms we train. We will be using a dataset of stationary drone image captures to train our model, but until we begin image capture with the radar, the radar image quality remains a large risk that could delay the integration of the software with the hardware if the radar images are significantly different from the dataset images. These risks are being managed by starting the radar image capture as early as possible (i.e., within the next two weeks), since the ML training process will not be significantly far along before we start image capture. Therefore, we have allotted time to examine the radar captures together and ensure that they work with our dataset. In addition, we have looked into other radar image datasets and sources for such datasets in case we find that our dataset is drastically different from our radar images.

One change we made to the existing design of the system is that we narrowed our project scope down to fire search and rescue missions. While we were already planning on handling search and rescue missions that did not involve metal, since metal would interfere with the radar, we had not explicitly narrowed the scope further than that. We received feedback from our TA that our use case scope was not extremely clear in our presentation and that narrowing it would be very helpful in order to set clearer goals for ourselves and allow us to come up with a more specific testing plan. This change incurs no extra costs and allows us to create more specific plans going forward and narrow down our needs. In addition to this change, we also added the creation of a 3D-printed chassis to encapsulate our device and have it rest on the drone legs. We had not previously included this in our project spec, but we needed something that would safely keep our entire device together and allow it to attach to any drone that could hold its weight. This did not incur many extra costs, since Angie has experience with 3D printing and was confident she would be able to create this chassis with ease. We allotted one week to design and print the chassis to hold our radar and Raspberry Pi, which will occur once we acquire the Raspberry Pi, since we have already acquired the radar. This did not add extra time, since it can be done in parallel with many of the other tasks and does not have many dependencies.

As of now we are on schedule with our project, and plan to stay on track with our plans for the next few weeks.

Our project includes considerations for public health and safety concerns because of our use case. It is designed to help first responders stay safe by limiting the amount of time they are exposed in high-danger areas. It also focuses on improving the efficiency and cost of search and rescue missions by using an mmWave radar. Currently, infrared sensors are more commonly used, but they can provide unclear results due to the flames. Since our radar's waves would not be blocked by the fire, our project should allow for better human detection in fires and thus help save more people.