David’s Status Report for 3/16/2024

Accomplished Tasks

This week was one of intense progress. Early in the week, I realized that hacking the web application was not a suitable approach, because the web app is hosted on the rover’s own WiFi; it would not be possible, or at least would be rather inconvenient, to mesh the rover’s WiFi with the school WiFi. I therefore went with the backup plan: communicate with the rover through the onboard Raspberry Pi and send the JSON commands over UART. This required reading Python documentation on how UART communication works and how to serialize the appropriate JSON commands. The code for this has been fleshed out nearly entirely; the main remaining confusion is the exact JSON command that should be serialized. The documentation is somewhat confusing and inconsistent, and sorting out exactly what is required is where I currently am. I am also helping my teammates work out the camera, which has hit some setup difficulties; there appear to be issues where the RPi’s OS version causes inconsistencies in downloaded packages.
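As a rough sketch of the UART approach, the Python side could look like the following, using the pyserial library. The serial device path, baud rate, and the command fields ("T", "L", "R") are assumptions from my reading of the documentation so far and still need to be confirmed.

import json
import serial  # pyserial

# Assumed UART device and baud rate for the RPi-to-rover link; both need to be
# verified against the rover's documentation.
ser = serial.Serial("/dev/serial0", baudrate=115200, timeout=1)

# Hypothetical command: "T" selects the command type, "L"/"R" set left/right
# wheel speeds. The exact JSON schema is what I am still sorting out.
cmd = {"T": 1, "L": 0.3, "R": 0.3}

ser.write((json.dumps(cmd) + "\n").encode("utf-8"))
print(ser.readline().decode(errors="ignore"))  # print any response from the rover
ser.close()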

As a side note, the rover works! It can be controlled with the web app, although the batteries we purchased were too large. I ordered smaller replacement batteries to resolve this issue.

Progress

My progress is now much more on track. Working out this control code and having functioning hardware were major successes. I am concerned about the camera’s status, which is falling behind, so I will devote effort to making that work as well.

Next Week’s Deliverables

Next week, I plan to have the rover successfully controlled programmatically; in other words, to sort out the exact JSON commands needed to move the rover. I also want an end-to-end version of the rover running, which, at this point, means getting the camera to work properly. This is the highest priority, and I plan to help make sure the camera functions, whether that means working out the bugs, installing a new OS, getting a new camera, or something else.

Team Status Report for 3/16/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is getting the PTZ Camera to work. Over the past week, we worked extensively on downloading libraries and debugging dependencies so that the PTZ Camera’s software API can display a video feed. We still have to get this functioning correctly and are working hard to do so. We are optimistic that we can get the camera working, as 3/4 of the libraries have been installed correctly, and we plan to install the last one by tomorrow to get a video feed. If the camera does not work correctly, we might consider changing the MCU to the NVIDIA Jetson Nano, which has many cheap, easy-to-use cameras available. However, this is a worst-case scenario and is unlikely to occur.

We also made progress on the rover, and the code to control its search pattern is complete; we will now focus on tuning the search pattern. As all of our individual components near the end of their implementation, a significant risk is the communication between components and how to use the WiFi module to relay information. A lot of online documentation exists on this, so we plan to research the best way to handle inter-component communication. Our TA Aden also recommended using scp to transfer information between the computer (communication hub) and the rover. If WiFi doesn’t work, our contingency plan is to use the Raspberry Pi’s built-in Bluetooth module to relay video data to a central hub (a computer) that handles communication. Again, however, this is an unlikely scenario, as preliminary research shows that WiFi communication should be feasible to implement.
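As a minimal sketch of the scp idea, the communication hub could pull a file off the rover's Raspberry Pi with something like the following; the hostname, username, and file paths are placeholders, not decided values.

import subprocess

# Placeholder remote location: assumes the rover's RPi is reachable over WiFi
# as pi@rover.local with key-based SSH already set up.
remote = "pi@rover.local:/home/pi/frames/latest.jpg"
local = "incoming/latest.jpg"

# scp copies the file from the rover's RPi down to the hub (this computer).
subprocess.run(["scp", remote, local], check=True)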

System Changes

As of right now, there are no major system changes from the last Team Status Report. The one decision was to switch from hacking the web app to using the RPi and UART communication to send JSON commands to the rover. Other than that, we still plan to implement the system with the PTZ Camera and WiFi communication, and would only pivot to the alternate technologies mentioned above if absolutely necessary. We will know whether such changes are needed as we work more on the rover’s communication protocol.

Other Updates

There have been no schedule changes or other updates. Potential schedule changes may occur after we work on the rover’s communication protocol next week. By the end of next week, we plan to have a working interim demo with communication between the rover and the server, in which the rover stops and turns toward a human detected during its search pattern.

Ronit’s Status Report for 3/16/2024

Tasks accomplished this week

This week, I primarily worked with Nina on getting the PTZ Camera working. I spent most of my time installing libraries, upgrading the Raspberry Pi OS from Buster to Bullseye, and debugging installation dependencies (6+ hours). We made significant progress and should be able to get the camera up and running by the weekend.

Furthermore, I wrote code to integrate the YOLOv5 algorithm into the CV server nodes as planned (2+ hours). This essentially means that everything in the distributed CV server is complete, with the remaining task being relaying information to the other components in the system. I also wrote a large number of unit tests to ensure the integrity of the system (6+ hours). With this done, I can fully devote my time to the PTZ Camera and communication between components.
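For context, the YOLOv5 integration inside a server node amounts to loading a pretrained model once and running it on each incoming frame. A minimal sketch using the public ultralytics/yolov5 hub model is below; the node/queue plumbing is omitted and the helper name is illustrative.

import torch

# Load a pretrained YOLOv5 model once per node (weights download on first run).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_people(frame):
    """Run YOLOv5 on one frame (numpy image or file path) and return person
    detections as (xmin, ymin, xmax, ymax, confidence) tuples."""
    results = model(frame)
    det = results.pandas().xyxy[0]              # detections for this single image
    people = det[det["name"] == "person"]
    return list(zip(people.xmin, people.ymin, people.xmax, people.ymax, people.confidence))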

Progress

In general, I still believe I am a little behind, as I hoped to have the camera up and running; however, we are extremely close to getting this done. I am going to work extensively this week to make sure that the PTZ Camera is working correctly, and that we are able to relay video data to the CV server from the Raspberry Pi.

Deliverables for next week

As mentioned above, I still need to work with Nina to get the PTZ Camera completely up and running. This should be straightforward now, as a significant amount of work has been done; only some basic libraries need to be installed. I also plan to work on the communication and integration of the camera and the distributed server next week, allowing us to have functional live computer vision in time for the interim demo.

Nina’s Status Report For 3/16/2024

Accomplished Tasks

This week, I received the Arducam with PTZ gimbal from inventory, along with a Raspberry Pi 4 to control it. I attached the Arducam to the PTZ gimbal using an acrylic sheet and screws to allow 180-degree panning. In addition, I wired the Arducam and gimbal to the RPi and set up the RPi itself. David helped me with the RPi setup process, as it was our first time using one and we were missing extra tools such as a monitor, keyboard, and wired mouse. To test the camera on the RPi desktop, I found some sample code online along with the corresponding libcamera packages used for the Arducam.

Progress

I spent the majority of my time setting up the Raspberry Pi, as there were issues connecting it to CMU-DEVICE. In addition, since the RPi wasn’t the latest model or updated to the latest OS, we were unable to download essential packages and dependencies. To fix this, I had to manually upgrade the OS from Buster to Bullseye, since the picamera2 library is not supported on Buster. There were many issues with installations (pip3 couldn’t be used to install since it was deprecated, the config file name in the RPi sources list was deprecated, there was a lack of swap space to install dependencies, and the camera’s outdated setup guide used deprecated commands in general). Because of these issues, I faced a lot of errors just running the code. After finally getting a preview window when running ‘libcamera-hello’ to show the camera’s perspective, it stopped working after the OS update. However, the code now runs without import errors, so I believe it is fixable after rebuilding and compiling the libcamera library from source.
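For reference, once the libcamera stack is rebuilt, the basic picamera2 sanity check I am aiming to get working on the RPi desktop is roughly the following; whether it runs cleanly still depends on the libcamera rebuild described above.

import time
from picamera2 import Picamera2, Preview

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start_preview(Preview.QTGL)   # opens a preview window on the RPi desktop
picam2.start()
time.sleep(5)                        # keep the preview up for a few seconds
picam2.capture_file("test.jpg")      # grab a still to confirm the sensor works
picam2.stop()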

Next Week’s Deliverables

Next week, I plan to get a working stream of the camera visible from the RPi desktop, attach the camera to the mount that will sit on top of the rover, and continue refining the code that moves the PTZ gimbal. In addition, I will help with the laser module feature that will be connected to the camera so that our spotlight can be implemented.

Team Status Report for 3/9/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is getting everything to work together. Over the last two weeks, we quickly readjusted our project to accommodate the fact that we are now using a rover instead of a drone. This meant developing new plans for our implementation and overall system so that our project maintains the same functionality as before. Now, the key risk is making sure that our new implementation strategies will work. To ensure this, we tried not to change what we had from before and then analyzed what differences arose now that we have a rover. This was easier to handle because we had focused on modularity in the past, meaning the CV servers and the website were not affected as heavily. Adding to this modularity, we introduced an information hub to help with information flow in our project and to provide a clear boundary marking where one module starts and another ends.

Regarding the rover, our main concern is establishing controlled communication with it. We are working with a pre-made web app to learn its existing methods of communication, and will then interface with or adapt these concepts. Should this method not work, we also plan to have a Raspberry Pi aboard the rover (for the camera video data), and we can send JSON commands to this Raspberry Pi instead. The JSON commands can then be transferred to the rover using UART communication.

System Changes

We have conducted a significant amount of research to figure out how communication with and functionality of the rover would work. We decided that the easiest and most practical way to communicate is through the ESP32 module on the rover. Additionally, using a Raspberry Pi module on the rover would allow us to control the laser pointer and gather video data from the camera. Apart from these inclusions, there have been no major system changes, and it seems like integration of all the components will be seamless through WiFi communication. There might be some changes in how the distributed CV server and the Django web server receive and send information; we will know of any changes once we work out the communication protocol of the WAVESHARE rover via the Raspberry Pi.

Other Updates

There have been no schedule changes or other updates. Potential schedule changes may occur after we work on the rover next week following its delivery, notably as we attempt to accomplish our goal (as requested by Prof. Kim) of having an end-to-end solution by our meeting on Wednesday. If this is not accomplished, the plan is to have something close to it by the end of the week.

Specified Needs Fulfilled Through Product Solution

We also make some further considerations that are important while developing our product. We break these considerations into three groups: the first (A) deals with global factors, the second (B) with cultural factors, and the third (C) with environmental considerations. A was written by Ronit, B by Nina, and C by David.

Consideration of global factors (A)

Our search and rescue rover heavily involves global factors. The need for efficient and cost-effective search and rescue operations is inherently a global concern, as disaster and war could affect any country in the world. In general, our rover would leave a positive impact on the global economy, as risks to human life are mitigated by removing the need for manual intervention in search and rescue operations. Furthermore, our rover caters to different terrains, making it usable in various geographical locations and scenarios.

The technologies our rover uses also reflect global factors. A web application that can be accessed by users anywhere in the world could allow real-time collaboration between international rescue agencies. Using WiFi to communicate between components of our system could allow us to incorporate satellite WiFi, letting the distributed server communicate with any rover anywhere in the world. Using a distributed system also improves the scalability of the product, as multiple rovers in multiple rescue missions could send their video data for analysis, improving the efficiency of rescue operations happening concurrently in different locations.

Cultural considerations (B)

During the development of our search and point rover, it’s important to be mindful of its use across different communities and cultures. Some features to consider would be adapting the rover’s interface to support different languages, communication styles, and symbols. In addition, the ability to customize and personalize the rover would allow users to tailor its interface to their cultural preferences. Ensuring that the rover does not impose on sacred cultural areas, being mindful of gender sensitivities, and respecting privacy are also vital. Additionally, incorporating feedback from different cultural communities into the rover’s design helps produce technology that respects the cultural and ethical values of diverse populations. By integrating these cultural considerations, we can create a search and point rover that is not only technologically advanced but also respectful of various cultural backgrounds.

Environmental considerations (C)

Our product was also developed with consideration of environmental factors, which concern the environment as it relates to living organisms and natural resources.

Search and Point would have essentially no environmental side effects. As the rover performs its tasks of searching and pointing the laser pointer, it does not leave behind any residual presence. The rover is battery-powered, so it has no harmful byproducts (unlike gasoline-powered motors). The batteries are also rechargeable, allowing for a renewable and safe way of maintaining the rover’s functionality. Because the distributed system revolves around a centralized information hub, the part of the project actually deployed into the environment is quite small; all the main computing power is offloaded to servers elsewhere. The only imprint our project leaves on the environment is the rover driving over the landscape, which is arguably negligible. The laser we chose is also relatively low-power, so there are no harmful side effects from using it.

Ronit’s Status Report for 3/9/2024

Tasks accomplished this week

This week, I primarily focused on writing the design report. I was responsible for completing the sections for our system architecture (2+ hours), design requirements (4+ hours), portions of trade studies (2+ hours), and testing and validation (3+ hours). I also took up responsibilities for making the diagrams in the report and general formatting (2+ hours).

Furthermore, I was able to implement the sequential YOLOv5 algorithm (3+ hours). This will serve as a baseline for speedup comparisons and also gave me significant insight into how to integrate it with the distributed server. The integration shouldn’t take long now, and then I can focus on writing test cases to verify the integrity of the distributed system.
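For reference, the sequential baseline is essentially the loop below, using the same public torch.hub YOLOv5 model; the video filename is a placeholder for whatever test footage we use.

import time
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

cap = cv2.VideoCapture("sample_search_footage.mp4")  # placeholder test video
frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR; YOLOv5 expects RGB
    results = model(rgb)
    frames += 1
cap.release()
print(f"{frames / (time.time() - start):.2f} frames/sec (sequential baseline)")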

Progress

I am still a little bit behind on my progress, as I had hoped to have all testing and verification for the distributed server finished. However, this is okay, as I still have some slack time left. Furthermore, finishing unit testing now will reduce the time needed to verify everything later. I will work next week to complete all testing and implementation so that I can focus on the rover.

Deliverables for next week

As mentioned above, I still need to integrate the YOLOv5 algorithm into the child server nodes. This should be straightforward now that the sequential algorithm is implemented. I also plan to finish writing more comprehensive unit tests to ensure that all parts of the distributed server work as expected. This will involve gathering a validation dataset, which I have to look into thoroughly.

David’s Status Report for 3/9/2024

Accomplished Tasks

These weeks covered the Design Review and Spring Break. A large chunk of my first week was spent writing portions of the design review. I was primarily in charge of the Design Implementation portion, along with Trade Studies. This took a rather long time because, due to the recent change in our plans (drone -> rover), I had to rework how exactly our project would function. The details are in the report, but essentially, our project now consists of a rover with a mounted camera/laser that will search and point as previously planned. The overall structure of our system now includes an information hub, which will help with information control, parsing, and propagation to the three other parts (the rover, the website, and the CV servers).

During Spring Break, I was mostly travelling. However, I devoted what time I could to looking into rover communication methods. Since this communication is the most critical piece for rover guidance, I looked into how the pre-made web application sends JSON commands to the rover. I am currently working on interfacing with and/or hacking this web application to send the commands that we want instead.
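My working hypothesis is that the web app simply issues HTTP requests carrying JSON commands to the ESP32’s web server, so replicating one of those requests from Python would look roughly like the sketch below. The rover’s IP address, the endpoint path, and the command fields are all assumptions to be confirmed by inspecting the app’s actual traffic.

import json
import requests

ROVER_IP = "192.168.4.1"            # assumed default IP on the rover's own WiFi AP
cmd = {"T": 1, "L": 0.3, "R": 0.3}  # hypothetical drive command; schema unconfirmed

# Assumed endpoint: the web app appears to pass the JSON command as a query parameter.
resp = requests.get(f"http://{ROVER_IP}/js", params={"json": json.dumps(cmd)}, timeout=2)
print(resp.status_code, resp.text)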

Progress

The original goal was to work out all the rover controls by this Wednesday. Prof. Kim had an even larger request of having a full end-to-end product working by our meeting on Wednesday. Our rover, which we ordered prior to Spring Break, was set to arrive over the break, but we have not had any notification of its arrival yet, which is concerning. Since I am in the process of working out rover control right now, I am somewhat on track, though depending on how much progress I make over the coming few days, I may end up slightly behind schedule. Meeting Prof. Kim’s request would put us ahead of schedule.

Next Week’s Deliverables

Next week, I plan to have the rover fully controllable programmatically. The stretch goal is to have an end-to-end version of the final rover running by our meeting with Prof. Kim on Wednesday. For me, these goals are essentially the same: due to the modularity of our system design, the interactions between the three components should be relatively small. Rover movement commands may need to be packaged and encoded when received from the CV servers, but that should be a small final step; getting the rover to move in a generally controlled manner is the main plan and goal.

Nina’s Status Report For 3/9/2024

Accomplished Tasks

For these two weeks, I was busy ordering parts (rover, batteries, camera, etc.). In addition, I worked mostly on the design review report and was tasked with the introduction, use-case requirements, part of the design requirements, revising the system implementation, project management, related work, and the summary. Much of the time was spent researching related works that would help support our proposal, as well as how we should format our system.

During spring break, I was away on a trip. I did pick up some inventory requests and made additional orders of parts that we needed. I also began looking into 3D printing the camera mount that Ronit and I will need to work on, and how to build a frame for the mount in AutoCAD.

Progress

Since our rover has yet to arrive, I will primarily work on designing the camera mount that will hoist the camera 2-3+ ft above the rover. Regardless of whether or not it ends up being 3D printed, I will try to have this done by the end of the week. This is doable without the rover, as we have the dimensions of both the rover and the camera. I’m also working on getting a live feed of the video from the ArduCam using its API together with OpenCV.
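A minimal sketch of the live-feed check with OpenCV is below; the device index assumes the ArduCam shows up as the default video device on the RPi, which still needs to be verified.

import cv2

cap = cv2.VideoCapture(0)  # assumes the ArduCam is exposed as /dev/video0
if not cap.isOpened():
    raise RuntimeError("Camera not detected; check the libcamera/V4L2 setup")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("ArduCam live feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()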

Next Week’s Deliverables

Next week, I plan to finish the camera mount and have it set up properly on the rover with the laser pointer incorporated as well. I will check the camera’s perspective and have the camera feed shown on the website prior to mounting. Then, I will work on adding the additional site features listed in the design review, such as an emergency override button to enable manual control of the rover.

Team Status Report for 2/24/2024

Significant Risks and Contingency Plans

This week, we met with Andrew Jong to discuss what kind of drones would be useful for our project, and what capabilities would be feasible to implement. After this, and with further discussions with Prof. Kim and Tamal, we decided that making a pivot to using a land-based autonomous vehicle instead of a drone would be more feasible to implement, as it would be less risky and expensive.

The most significant risks right now all involve the capabilities of the rover. It is imperative that we find a suitable rover with a software API for autonomous navigation. Additionally, the rover must be able to navigate uneven terrain effectively and accurately. There are also some risks in figuring out how to conduct our scenario testing for demo day, and how exactly a camera and laser mount would function. We are actively investigating many possible rovers and their capabilities. Our main prospect is the Waveshare WAVE ROVER, which is cheap, has a software API and GPIO pins, and has an all-around versatile body. Most of our remaining risks and backup plans come down to learning how to use this rover and changing our implementation plans to fit around it. While our general idea may not change, our implementation plans certainly will.

Our contingency plan, in the unlikely case that we don’t find a suitable rover, is to buy a more expensive drone that has all of the capabilities we need to complete our project. However, it seems highly unlikely that we will use a drone, as a rover is easier for testing purposes. Additionally, a rover would be more durable than a drone, since any accidental collision might break a drone. We will also look into potential issues with testing a rover on campus and check whether any licenses or certifications are needed.

System Changes

Depending on which rover we decide to purchase, there are numerous system changes in terms of the API we would use to control the rover, how communication between the rover and the Django and CV servers would work, and what kind of path the rover would take. There are also some changes to our use case, as a rover gives us the flexibility to search uneven terrain and potentially avoid obstacles (we have not decided whether to implement this). However, much of the implementation of the web application and the distributed CV server should remain the same. All aspects of the design and system integration of the rover are a work in progress, and we are working hard to make sure this pivot goes smoothly and doesn’t hinder our progress. As mentioned, learning to interface with the rover (such as the WAVE ROVER) will require implementation differences in all portions of the system, and these differences will be the focus of our work moving forward.

Other Updates

Schedule updates are expected, but not finalized, to accommodate the delay in obtaining our mobile platform. We will attempt to mitigate this delay by ordering the parts prior to spring break and then studying the documentation for these parts while they are being delivered. Also, now that we plan to use a WAVE ROVER instead of a drone, each of our individual components is expected to face some changes in integration and implementation.

Ronit’s Status Report for 2/24/2024

Tasks accomplished this week

This week, I primarily worked on the design review presentation and prepared myself to present our project design. I worked on the slides with my partners, which involved revisiting the flowchart, adding relevant diagrams, and curating content (7+ hours). I also practiced giving the presentation (2+ hours).

I also continued some work on the leader server of the distributed CV server. I implemented logic to break a video stream down into frames (4+ hours), and I wrote test cases to make sure this works well with the load balancer (1+ hour). However, I still need to write code to integrate the YOLOv5 algorithm. I did some research (1+ hour) on how to do so and on whether alternative object detection algorithms might be better.
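For reference, the frame-splitting logic on the leader is essentially the generator below; dispatch_to_worker stands in for the load balancer's real interface and is only illustrative.

import cv2

def split_into_frames(video_path, every_nth=1):
    """Yield (frame_index, frame) pairs from a video file for the load balancer."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            yield idx, frame
        idx += 1
    cap.release()

# Illustrative usage: hand each frame to the load balancer (placeholder function).
# for idx, frame in split_into_frames("test_clip.mp4"):
#     dispatch_to_worker(idx, frame)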

Progress

I am a little bit behind on my progress, as I hoped to have the object detection algorithm integrated by now. However, I will work next week to get back on track so that the server is complete before the design review report.

Deliverables for next week

There was a release of the YOLOv9 object detection algorithm this week, which is faster and more accurate, so I might look into whether integrating it would be feasible. I would like to complete full server testing and integrate the object detection module. I will also work on finishing the design review report with my partners.