Team Status Report for 2/24/2024

Significant Risks and Contingency Plans

This week, we met with Andrew Jong to discuss what kind of drones would be useful for our project and what capabilities would be feasible to implement. After this, and after further discussions with Prof. Kim and Tamal, we decided that pivoting to a land-based autonomous vehicle instead of a drone would be more feasible, as it would be less risky and less expensive.

The most significant risks right now all involve the capabilities of the rover. It is imperative that we find a suitable rover with a software API that lets us navigate it autonomously. Additionally, the rover must be able to navigate uneven terrain effectively and accurately. There are also some risks around figuring out how to conduct our scenario testing for demo day, and how exactly a camera and laser mount would function. We are actively investigating many possible rovers and their capabilities. Our main prospect is the Waveshare WAVE ROVER, which is cheap, has a software API and GPIO pins, and offers an all-around versatile body. Accordingly, most of our risks and back-up plans revolve around learning how to use this rover and changing our implementation plans to fit it. While our general idea may not change, our implementation plans certainly will.

Our contingency plan, in the improbable case that we don’t find a suitable rover, is to buy a more expensive drone that has all of the capabilities we need to complete our project. However, it seems highly unlikely that we will use a drone, as a rover is easier to test with. Additionally, a rover would be more durable than a drone, since an accidental collision could break a drone. We will also look into potential issues with testing a rover on campus, and check whether any licenses or certifications are needed.

System Changes

Depending on which rover we decide to purchase, there are numerous system changes in terms of the API we would use to control the rover, how communication between the rover and the Django and CV servers would work, and what kind of path the rover would take. There are also some changes in our use case, as a rover gives us the flexibility to search uneven terrain and potentially avoid obstacles (we have not yet decided whether to implement this). However, much of the implementation of the web application and the distributed CV server should remain the same. All aspects of the design and system integration of the rover are a work in progress, and we are working hard to make sure this pivot goes smoothly and doesn’t hinder our progress. As mentioned above, learning to interface with the rover (such as the WAVE ROVER) will require implementation changes across all portions, and these changes will be the focus of our work moving forward.

Other Updates

Schedule updates are expected, but not finalized, to accommodate the delay in obtaining our mobile platform. We will attempt to mitigate this delay by ordering the parts prior to spring break and analyzing the documentation for these parts while they are being delivered. Also, now that we plan to use a WAVE ROVER instead of a drone, each of our individual components is expected to face some changes in integration and implementation.

Ronit’s Status Report for 2/24/2024

Tasks accomplished this week

This week, I primarily worked on the design review presentation and prepared myself to present our project design. I worked on the slides with my partners, which involved revisiting the flowchart, adding relevant diagrams, and curating content (7+ hours). I also practiced giving the presentation (2+ hours).

I also continued some work on the leader server of the distributed CV server. I implemented logic to break a video stream down into frames (4+ hours), and I wrote test cases to make sure this was working well with the load balancer (1+ hour). However, I still need to write code integrating the YOLOv5 algorithm. I did some research (1+ hour) on how to do so and whether alternative object detection algorithms might be better.

Progress

I am a little bit behind on my progress, as I hoped to have the object detection algorithm integrated by now. However, I will work next week to get back on track so that the server is complete before the design review report.

Deliverables for next week

There was a release of the YOLOv9 object detection algorithm this week; it is reported to be faster and more accurate, so I might look into whether integrating it would be feasible. I would like to complete full server testing and integrate the object detection module. I will also work on finishing up the design review report with my partners.

David’s Status Report for 2/24/2024

Accomplished Tasks

This week was the week for the Design Presentation. I also had an official meeting with the robotics/drone people at their office in Squirrel Hill. While I had hoped to meet with Prof. Basti, he unfortunately could not make it, and I instead met with one of his students, Andrew Jong. The main takeaways from that meeting were:

  • Due to how expensive the drones are, Andrew doubts that Prof. Basti would let us simply take them for our own usage. That said, if we were to assist with his fieldwork, that would be acceptable (although it is more likely that Andrew himself would fly the drone, with us assisting him and his team).
  • Andrew proposed a very interesting set of topics to work on. Notably, there is a project on “wildfire drones”, in which drones assist with wildfire rescue. The drone is designed to perform CV to find objects (potentially very difficult due to all the smoke), and also aims to plot what is seen on a map, so that we know 1) who/what needs rescuing, 2) how to escape, 3) the movement of the fire over time, and more.
  • The focus he proposed: their mapping currently plots only onto a 2D surface, leading to inaccurate maps (since there is no accounting for topography). Being able to “search and 3D-annotate” the surroundings would be extremely useful for them.
  • Closer to our project’s scope, and as a stretch goal, we could also develop our own perception network (the object detection portion) and sensor modules (the hardware gathering the data for object detection). We could put these together into a functioning “drone” that we demonstrate in simulation.

After reporting this discussion to Prof. Kim and Tamal, we had a meeting in which we decided to turn down Andrew’s recommendation and approach our original project with a new outlook. Rather than using a drone (which, while still viable at a high price, carried a much higher risk of breaking and failing), we would use a “rover-esque” platform. The core essence of the project stays the same; Search and Point would still be done, except this time by a rover running along the ground. Thus, a new round of research began on finding a rover with the same capabilities as needed before: a software API and the ability to carry some amount of weight. One benefit is that a rover carrying extra weight is much more manageable than a drone doing so. The main focus now is to find such a rover and re-shape our project to work on a rover instead. (See below for more.)

Progress

The pivot to a rover body puts me behind schedule. That said, if the rover does have a software API, this can relieve a lot of future issues and save time later. To make up for being behind on obtaining parts, with the help of TA Aden, I found that the optimal solution may be the Waveshare WAVE ROVER, which has nearly everything we are looking for (software API, GPIO pins, etc.). I still need to analyze how the WAVE ROVER’s controls work (as per my original task of “controlling the rover”), but settling on this rover body would bring us back on track.

Next Week’s Deliverables

Next week, I plan to have everything ordered and requested, particularly the rover. I also plan to analyze how ESP32 communication works with JSON commands and, generally speaking, how to control the WAVE ROVER. By our next meeting with Prof. Kim, we should have solidified how this project will be implemented moving forward (and have everything ordered/requested prior to spring break).

Nina’s Status Report For 2/24/2024

Accomplished Tasks

This week, my team and I worked on the design presentation and made changes to our use case requirements as we adjusted from a heavy-lifting drone that would drop aid packages to a drone with a laser attachment that would shine a spotlight on the people it identifies. We also met with some drone labs and Robotics Institute professors to help us finalize the type of drone we would use for our project (Crazyflies, Parrot drones, custom-built). Due to difficulties in acquiring a drone with all the necessary features, we have decided to pivot to a rover that would traverse a semi-flat landscape to search for humans in areas difficult for rescue workers to reach. We originally planned to display GPS information and live camera footage through the livestreaming and GPS-sharing features of a DJI drone app, but we are now looking into mounting a GoPro camera on the rover and using a transmitter to track its location instead.

Progress

The drastic change to a rover has upended my original plan of hosting the RTMP video link I had set up on the website. Now, I am considering embedding a YouTube livestream in the website, since a bare camera attachment cannot stream on its own and the platform would handle the livestreaming. In addition, although I have the live user location displayed through the Google Maps API, getting location information from an AirTag has proven difficult due to Apple’s privacy rules, which do not allow for easy sharing. I then looked into hosting the Find My application itself on the website, but Apple does not allow cross-platform sharing without additional security measures.

Next Week’s Deliverables

Next week, I plan to have the video sharing and live location sharing features completed and work on styling the website to improve user experience. In addition, I will be working on refining our use case and its requirements due to the change in method of transport for our search and detection application.

Nina’s Status Report For 2/17/2024

Accomplished Tasks

This week, I finished most of the base of the website: a home feed with an explanation of what the site does and some UX. I began implementing FastAPI and GeoDjango for the live GPS interface of the drone as well as the video streaming. So far, I’m able to get and display the GPS location of the current user on a live map display.

Progress

Unfortunately, due to the change in drone plans, I can no longer use the RTMP video link that would come with a DJI drone’s Fly app. I can still use FastAPI for streaming, but since the type of drone depends on what Professor Basti can provide, this is currently still in progress.

Next Week’s Deliverables

Next week, I will try to finish up the GPS map interface and get a tracker for the drone (possibly an Apple AirTag) in order to get live coordinates of something that isn’t just the current user’s laptop. In addition, once we meet with Professor Basti and confirm the type of drone we will receive, I will look into what sort of byte streaming I can use for the drone’s footage.

Team Status Report for 2/17/2024

Significant Risks and Contingency Plans

The most significant risks to our project right now involve the uncertainty around our drone. A large majority of our plans and back-up plans have been turned down for various reasons, creating a potential void of paths forward. Fortunately, last week we came across a very promising route: working with Prof. Basti Scherer. Prof. Basti has performed a wide variety of drone research, including in areas like search and rescue. Our primary hope now is that Prof. Basti is able to lend us drones and/or the software he uses to control them. Should he be unable to do so (unlikely, but possible, given his email), our back-up is that Prof. Basti gives us valuable information on what types of drones to use, along with other implementation-specific advice. Furthermore, we will switch the data transmission from the RTMP video link provided by our original drone, the DJI Mini 2 SE, to a video streaming method compatible with whatever drone Prof. Basti may lend us. This new link will connect the drone to our web application as well as the object detection server.

System Changes

Under heavy advice from Prof. Kim and Tamal, finding a drone with a software API has proven to be of the utmost importance. Going this route is challenging, but under the guidance of Prof. Basti, it now seems much more feasible. Having a software API means that the drone-controller controller (DCC) would no longer be needed at all. On the other hand, it would require learning the entirety of the software API once we get hold of it. The time and effort spent doing this replaces the time and effort that would have gone into building the DCC, so generally speaking, this cost can be mitigated well.

Other Updates

There have been no schedule changes nor other updates. Potential schedule changes may occur after our meeting with Prof. Basti next week.

Specified Needs Fulfilled Through Product Solution

There are a variety of considerations that must be made when developing our product. We identify these considerations by breaking them down into three broad groups. The first group (A) deals with considerations of public health, safety, or welfare. The second group (B) deals with social factors. The third group (C) involves economic considerations. A was written by David, B was written by Nina, and C was written by Ronit.

Public health, safety, or welfare considerations (A)

Search and Aid would have significant health benefits for society. Assisting first responders in finding and aiding those in need is critical when fast timing is a necessity. In situations where searching an area of land is extremely difficult or impossible, sending a drone to provide air coverage would be invaluable. A search and aid drone could literally save lives: suppose a stranded hiker was in desperate need of water. Our drone could scan through areas, searching for human life while carrying an aid package (like a water bottle). Upon finding the hiker, it would deliver the water and alert first responders to the hiker’s location. Essentially, the Search and Aid drone provides a bird’s-eye view of the land, allowing us to ensure greater health protection for individuals in need and a higher standard of safety for all.

Social factors (B)

In our scenario of a search and aid drone used by government rescue agencies, social concerns will be alleviated in areas such as search and aid during politically sensitive disasters. In wartime or other politically sensitive situations where people of different backgrounds need assistance, a search and aid drone is well suited to providing resources without bias. Impartial aid organizations, such as Doctors Without Borders, would be able to offer medical assistance in dangerous areas without being physically present. Furthermore, this would reduce concerns of additional aggravation between warring parties, as the aid organizations would be operating remotely and even anonymously.

Similarly, using a search and aid drone would help alleviate the concerns of the families and friends of emergency responders, who would no longer have to risk their lives in dangerous situations. In cases of wildfires or dangerous terrain, our search and aid drone would be able to survey the area of danger looking for those in need, while workers search and monitor safely along the perimeter. Traditionally, workers would need to search through the dark and traipse through unknown areas to look for people, potentially putting themselves in harm’s way. Now, with the help of the drone, this can be done entirely remotely, greatly reducing the worries of rescue workers’ loved ones.

Economic considerations (C)

There are some significant economic considerations that come with our search and aid drone application. One of the use case sub-requirements is cost effectiveness. Overall, using drones would be cheaper than relying on human searchers due to low long-term costs. Currently, many search and rescue missions are costly due to the large amount of human labour required. The drone application reduces these costs and the risks associated with manned rescue operations, as drones can navigate rescue areas without endangering or requiring humans. Through this, the economic burden on governmental rescue agencies is lessened due to lower operational costs. Additionally, using accurate and fast drones maximises the impact of rescue efforts, and using machine learning to optimise the search further amplifies this economic benefit.

Furthermore, there are some economic considerations around the production of such drones. Even though the initial fixed cost of making these drones would be high, they would be cost effective in the long term due to factors like high durability, minimising the need to replace them regularly. Considerations with respect to the consumers of these drone applications must be made as well: these drones should only be used by governmental rescue agencies and humanitarian organisations, as use by the general public might be dangerous. Finally, these drones would also help the general economy by creating jobs and boosting demand for specialised drones.

Ronit’s Status Report for 2/17/2024

Tasks accomplished this week

This week, I continued my work on the load balancing algorithm. This required me to write Go code for the “leader” server node as well as the computer vision nodes. The load balancer uses a simple round-robin algorithm, executed by the leader node. For now, I used a series of stock images as a placeholder to distribute images across the CV nodes (8+ hours). I also spent time writing unit tests to check whether load balancing was done correctly (2+ hours). Some time was also spent debugging (1+ hour).
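The round-robin selection at the heart of the leader node can be sketched in a few lines of Go. This is a minimal illustration of the technique, not our actual server code; the node addresses are hypothetical placeholders.

```go
package main

import "fmt"

// Balancer cycles through CV node addresses in round-robin order.
type Balancer struct {
	nodes []string
	next  int
}

func NewBalancer(nodes []string) *Balancer {
	return &Balancer{nodes: nodes}
}

// Next returns the address of the node that should receive the next image,
// wrapping back to the first node after the last one.
func (b *Balancer) Next() string {
	node := b.nodes[b.next]
	b.next = (b.next + 1) % len(b.nodes)
	return node
}

func main() {
	// Hypothetical node addresses standing in for the real CV nodes.
	b := NewBalancer([]string{"cv-node-0", "cv-node-1", "cv-node-2"})
	for i := 0; i < 4; i++ {
		fmt.Println(b.Next()) // cycles 0, 1, 2, then wraps to 0
	}
}
```

The real leader additionally has to hand each selected node an image payload and collect results, but the fairness property being unit-tested reduces to exactly this rotation.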

I also spent time searching for drones and talking with various robotics labs at CMU with my partners (2+ hours). We were able to get valuable feedback on what drones would be useful for us, as well as opportunities to borrow equipment.

Progress

I am happy with my progress. However, I would have liked to have researched how to implement the YOLO detection algorithm. I would say that I’m on track with my schedule, and spending time next week on the computer vision algorithm would allow me to complete the distributed CV server.

Deliverables for next week

For next week, I am going to work exclusively on the YOLO detection algorithm to implement on the CV nodes. I have some preliminary ideas on what to do for this. Additionally, I have to implement logic to break down the video into frames for processing, so I’m hoping to implement this too. Lastly, I want to write comprehensive tests that exercise the server completely.

David’s Status Report for 2/17/2024

Accomplished Tasks

This week was a week of research and searching. I continued to research extensively for the proper drone, fully exploring all possible approaches to our project – in particular, all the methods with which to control our drone (which is my focus of the project). The paths that I laid out were:

  • Drones with a software API. Primary options include Parrot drones (unfortunately discontinued, but purchasable off eBay), DJI Mavic drones (incredibly expensive and, according to Prof. Tamal, not purchasable by CMU), and Crazyflies (CMU-owned drones that are free to fly, but incredibly small and unable to carry any weight). There are also flight vehicles that run ArduPilot, but these are extremely expensive as well.
  • Drones without a software API whose control hardware can be purchased separately. ArduPilot provides a series of “Open/Closed Hardware” components that drone users can purchase to build their own drone. These components are not as expensive, but require the purchase of another compatible drone (or the entirety of a drone’s body). Self-construction of a drone is well outside our expertise and would likely be excessively complicated for our project.
  • Drones without a software API at all. This falls under our original plan, allowing us to widen our scope of purchasable drones. Controlling these drones would involve either building a controller for the drone’s own controller (as previously planned) or hacking the controller’s communication over a Wi-Fi signal. Both approaches have been heavily discouraged by Prof. Kim and Tamal.

I also reached out to many people regarding our drone project, notably Prof. Basti Scherer. Prof. Basti is apparently the drone professor of CMU, with all points of contact redirecting us to him. He has also performed drone research similar to our project’s scope. I visited his Squirrel Hill office and, despite waiting for over an hour, was unable to get an audience with him. Fortunately, he replied to our email and has offered to work with us – including lending and using his drones. This is a promising lead, and I will keep in close contact to ensure success on the drone-obtaining front.

Progress

My progress is unfortunately a bit behind schedule, though given that my entire section of the project is undergoing heavy reconsideration, I may be on schedule under a revised Gantt chart. Once I meet with Prof. Basti this coming week, I will learn exactly which drones we will be handling and how to control them. In other words, should this work out, extensive progress on my section will have been made this week.

Next Week’s Deliverables

Next week, I plan to meet with Prof. Basti and clearly delineate what we can and cannot do and use from his research facility. I should come away with a very good understanding of the drones he has, how he controls them, and potentially many useful points of information regarding our project (since he has done very similar research before). While this last week was a little bleak, the coming week is expected to be very fruitful.

Team Status Report for 2/10/2024

Significant Risks and Contingency Plans

The most significant risks right now all involve the capabilities of the drone. It is crucial that the drone we obtain is well suited to our project, as it may not be monetarily possible to purchase another one. Having the most apt drone for our situation will make everything moving forward much easier (and possible at all). We are investigating many possible types of drones and their capabilities, and we have contingency plans in case we cannot find the perfect drone. These plans include shifting the purpose of our project, or devising workarounds for whatever the drone lacks (for example, if the drone cannot carry much weight, then an aid package may not be the best approach, in which case we adapt towards a laser pointer). In addition, since our testing will be done on campus, we will also look this week into getting a license for our drone if necessary.

System Changes

Depending on which drone we are able to obtain in the end, there are foreseeable system changes. Also, Prof. Kim suggests we take on a Search and Point project instead of a Search and Aid project. Should we end up switching, it would require changes in all aspects – our use case and requirements would differ entirely, and our communication with the drone would need to include controlling a laser pointer. Relatedly, Prof. Kim also suggested we get a drone with an API, which would allow us to avoid building the drone-controller controller (and make controlling the drone much easier overall). These items are currently a work in progress and should be covered in more detail in the next status report.

Other Updates

There have been no schedule changes nor other updates.

David’s Status Report for 2/10/2024

Accomplished Tasks

This week was the week of the Proposal Presentation. Upon obtaining Prof. Kim’s feedback for our project, I looked at how to achieve the goal he requested: finding an appropriate drone for our project. As the main head of the drone-related tasks, I wanted to make sure I found the perfect drone – one capable of carrying a camera and some weight, and (as Prof. Kim emphasized) offering an API. I performed extensive research on this matter, finding a few candidates, but have not yet finalized a drone. There is still a rather large concern about finding a suitable drone with an API. I have reached out to other people who have flown drones before to gauge their recommendations. I also looked into obtaining clearance for flying the drone (if needed), and plan to contact those who may know more about this.

Prof. Kim also brought up the idea of replacing Search and Aid with Search and Point (with a laser pointer). While I have found a suitable method of implementation for Search and Aid, I still need to perform more research on how to perform Search and Point – the challenges lie in finding a drone with a suitable API within budget, along with other concerns of managing I/O on the drone itself. Research continues on these fronts. On an unrelated note, I also helped set up the website for our weekly reports.

Progress

My progress is still on schedule; this week is designated for parts research.

Next Week’s Deliverables

I hope to complete what I had planned for this week, which is to order the parts necessary for our project (in particular, the drone in question). Since this is a very important part of our project, I want to make absolutely sure it goes correctly.