Team Status Report for 4/27/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is determining what kind of search pattern to implement if the creeping line search remains inconsistent despite running the same rover commands. As mentioned in the previous status report, a somewhat randomized search would be a workable fallback and wouldn't compromise the time taken to detect the human. We simply must ensure that the rover doesn't move out of our scenario arena by fine-tuning the rover commands. Additionally, the accuracy of the object detection trades off against the latency of communication from the CV server to the rover. We will finalize this tradeoff by increasing the rate at which the CV server samples the video into frames and spawning fewer worker nodes, so the system can take in more frames without compromising latency. Finally, even though our scenario design is mostly finalized, we need to work out the best way to demonstrate our project. We will flesh these details out in the next week.
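To make the randomized-search fallback concrete, here is a minimal sketch of how bounded random waypoints could be generated so the rover stays inside the arena; the arena dimensions, the margin, and the function name are illustrative, not our final implementation:

```python
import random

def random_waypoints(n, width_m, height_m, margin_m=0.3, seed=None):
    """Generate n random waypoints inside a rectangular arena, keeping a
    safety margin from the walls so rover command inaccuracy is less
    likely to carry it out of bounds. All values are illustrative."""
    rng = random.Random(seed)
    lo_x, hi_x = margin_m, width_m - margin_m
    lo_y, hi_y = margin_m, height_m - margin_m
    return [(rng.uniform(lo_x, hi_x), rng.uniform(lo_y, hi_y))
            for _ in range(n)]
```

The margin parameter is what would absorb the movement inconsistency we observed: the less repeatable the rover's commands, the larger the margin would need to be.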

We had many unit tests during experimentation, as we designed our system to be very modular. These tests cover CV detection accuracy (feeding images to see if a human is detected), movement accuracy (running a preset path and measuring the distance offset), latency (timing CV processing and information transfer), laser accuracy (turning the laser on and measuring the offset from the intended position), and power consumption (measuring battery time until depletion). The overall system test involves running the rover along a search pattern, letting it detect the human, and then pointing the laser at them.
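As an example of how one of these modular tests can be structured, here is a hedged sketch of a detection-accuracy harness; the `detector` callable and the labeled-image format are placeholders, not our actual test code:

```python
def detection_accuracy(detector, labeled_images):
    """Fraction of labeled images where the detector's human/no-human
    verdict matches the label. labeled_images is a list of
    (image, human_present: bool) pairs; both names are illustrative."""
    correct = sum(1 for img, present in labeled_images
                  if detector(img) == present)
    return correct / len(labeled_images)
```

Because the system is modular, the same harness shape works whether `detector` wraps the full CV server or just the YOLOv5 call.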

We found through our tests that movement control was often erratic and not replicable, which led us to search for new search methods. Our latency proved to be small, but still large enough to cause errors such as failed target convergence, so we had to tune the rate at which the rover adjusts to deal with this. We will explore tradeoffs between laser accuracy and pointing speed. Laser pointing was accurate as long as the laser was indeed pointing forward, with accuracy worsening as target distance grew, which makes sense: a fixed angular error translates to a larger positional offset at longer range.

System Changes

Currently, our rover only turns about the x-axis. However, we decided to also move the PTZ camera so we can point the laser more accurately on the y-axis. This will allow us to account for people's varying heights and whether they're sitting or standing.

Apart from this, we might look into implementing obstacle detection with the ultrasonic sensor if time permits, though it is likely we will instead focus on testing our system intently. It is acceptable for the purposes of our project if this isn't implemented, as our use case requirement specifies semi-flat terrain with no obstacles.

Other Updates

For other updates, we are working on our final deliverables. Not much will change from our design report apart from the system changes above, and we will also work on designing a final poster that is informative and intriguing.

Ronit’s Status Report for 4/27/2024

Tasks accomplished this week

This week, I worked on the final presentation with my partners (6+ hours). This involved writing out the slides, getting images from our rover and from overall testing, and changing some of the block diagrams to reflect our new system design.

I also worked heavily on scenario testing with David and Nina (4+ hours). This involved running repeated tests of the search pattern with humans in different positions. During these scenario tests, we measured the offset between the laser and the human's position, latency, rover movement, and overall functionality.

I also implemented some logic to change the communication with the rover to include y-axis angle turns to point the laser at the person (2+ hours). This involved some more trigonometric calculations and changes to the file creation on the Raspberry Pi for rover movement. I also looked into whether we should implement obstacle detection using an ultrasonic sensor, even though it isn't part of our use case requirement (2+ hours). If such a program is feasible, we might implement it if we have enough time after testing.
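The y-axis turn reduces to simple trigonometry once the target's height and distance are known. A sketch, where the camera height and target height are illustrative assumptions rather than our measured values:

```python
import math

def tilt_angle_deg(target_height_m, camera_height_m, distance_m):
    """Vertical (y-axis) angle the PTZ camera/laser should tilt so the
    laser meets a target at the given height and ground distance.
    Positive means tilt up, negative means tilt down."""
    return math.degrees(math.atan2(target_height_m - camera_height_m,
                                   distance_m))
```

For example, a target 1 m higher than the laser at 1 m ground distance needs a 45-degree upward tilt; a target level with the laser needs none.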

Progress

I believe I am on track, and our testing results are successfully meeting a lot of our design goals. Slight improvements are needed in the system before the final demo, and I am confident we will achieve our goals.

Deliverables for next week

To wrap up the capstone, for next week, I will work intently on making sure our scenario test is well designed and demoed. I will also work on the poster and final report with my partners.

Ronit’s Status Report for 4/20/2024

Tasks accomplished this week

This week, I primarily worked on the rover side of things, where I worked with my partners on laser cutting parts to hold all of our components stably on the rover (4+ hours). This gave us stability on the rover during movement.

I also made modifications to the code for communicating with the rover (5+ hours). This involved researching an alternative to SSH for creating a file on the rover remotely. I also worked with David on what kind of file to create and how the rover should interpret it (1+ hour). I also performed significant testing of the CV server to make sure the program works through various edge cases, like the camera not being turned on and duplicated remote file creation (4+ hours).
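For illustration, a command file could be serialized and validated roughly like this; the JSON field names here are hypothetical and not the exact format the rover interprets:

```python
import json

def make_command(turn_deg, distance_m, laser_on):
    """Serialize one rover command as JSON. Field names are
    illustrative placeholders, not our rover's real file format."""
    return json.dumps({"turn_deg": turn_deg,
                       "distance_m": distance_m,
                       "laser": bool(laser_on)})

def parse_command(text):
    """Parse and sanity-check a command before the rover acts on it,
    guarding against corrupted or duplicated file writes."""
    cmd = json.loads(text)
    assert -180.0 <= cmd["turn_deg"] <= 180.0
    assert cmd["distance_m"] >= 0.0
    return cmd
```

Validating on the rover side is what makes edge cases like duplicated remote file creation safe: a stale or malformed file is rejected rather than executed.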

Progress

I believe I am still on track, and we have testing results that verify our design requirements. I am a little behind on working on the final slides, but that will be done over the course of the day.

Deliverables for next week

For next week, I will work on the final presentation and general testing of the rover. This heavily involves scenario design as we prepare for the final demo.

Team Status Report for 4/6/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is getting the entire rover together and assembling the various parts. A lot of our subsystems are being developed in parallel right now, so assembling them onto the rover will involve integrating these systems. Additionally, because we're running so many processes on the rover, we must determine whether the Raspberry Pi can handle executing multiple processes. This might require another Raspberry Pi, so we must figure out a way to power and mount it if this system change is needed. The contingency plan is to get a small portable power supply and strap it onto the rover in case the rover cannot power two Raspberry Pis. We might get one anyway so that no drive time is lost when using two Raspberry Pis.

There is also a risk that the rover cannot turn completely accurately. There is an inconsistency every time I run the same code, making perfect 90-degree turns impossible. This is definitely concerning, prompting investigation into a possible “randomized” search algorithm, or into sensors that would allow the rover to turn properly. The camera feed could perhaps serve as such a sensor.

System Changes

As of right now, there are no major system changes apart from the one discussed above. We have to figure out whether the Raspberry Pi can handle executing our multiple processes concurrently. If not, the system will change to use two Raspberry Pis and a portable power supply to power one of them.

Other Updates

For our interim demo, we created an updated Gantt chart to reflect our progress and our plans going forward. There has been no change in this schedule, and we plan on starting complete testing and validation of our rover soon.

Validation and Verification

We have been using a series of unit tests to test the functionality of our subsystems, as discussed in our individual status reports. To test the overall system, we are going to conduct a variety of scenario tests. We will visually monitor all of our subsystems, like the CV server and the website, to make sure everything is going smoothly. We will also visually inspect whether the rover consistently travels in a creeping line search pattern and whether it can accurately detect and point a laser pointer at a human. This will be done by making sure the rover doesn't miss a human and is able to stop, turn, and point the laser at the human accurate to ±1 foot. We will also measure the time taken for the entire human detection process using timestamps on each of our subsystems. This will verify whether we're meeting our 50 ms latency requirement.
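The timestamp-based latency measurement can be sketched as follows; the subsystem stage names are illustrative placeholders for wherever we actually record timestamps:

```python
def end_to_end_latency_ms(timestamps):
    """timestamps: dict mapping stage name -> epoch seconds recorded as
    one frame passes through the pipeline. Stage names here are
    placeholders. Returns per-hop and total latency in milliseconds."""
    order = ["capture", "cv_done", "command_sent", "rover_ack"]
    hops = {f"{a}->{b}": (timestamps[b] - timestamps[a]) * 1000.0
            for a, b in zip(order, order[1:])}
    hops["total"] = (timestamps[order[-1]] - timestamps[order[0]]) * 1000.0
    return hops
```

Comparing `total` against the 50 ms requirement per frame, and looking at which hop dominates, tells us where tuning effort should go.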

Ronit’s Status Report for 4/6/2024

Tasks accomplished this week

This week, I worked primarily on making sure everything was functional for our interim demo. This involved coding some logic to show the professor and TA that the CV server is actually able to detect humans. I did this primarily by implementing code that draws bounding boxes on the regions of the image detected as a person (4+ hours). This opens up the possibility of displaying this information on the website, which I will discuss thoroughly with Nina.
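Selecting which detections to box can be sketched like this; the row layout mirrors YOLOv5's (x1, y1, x2, y2, confidence, class) output, where COCO class 0 is 'person', but the function itself is an illustrative sketch rather than my actual code:

```python
def person_boxes(detections, conf_threshold=0.5):
    """Keep only confident person detections from YOLOv5-style rows of
    (x1, y1, x2, y2, conf, cls), returning integer pixel rectangles
    ready to be drawn. conf_threshold is an assumed value."""
    return [(int(x1), int(y1), int(x2), int(y2))
            for x1, y1, x2, y2, conf, cls in detections
            if int(cls) == 0 and conf >= conf_threshold]
```

The returned rectangles are what a drawing call (or the website overlay) would consume.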

I also implemented some control logic for the rover (9+ hours). This was implemented on the CV server and involved trigonometric calculations of how much the rover should turn. This required estimating the depth of the human (how far the person is from the camera), which I did by measuring the relative size of people as seen by the camera. This allows us to accurately turn the rover towards the person. However, I still need to account for latency between the CV server and the rover, which I will implement next week.
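A rough sketch of the turn-angle and relative-size depth math, assuming a pinhole camera model; the field-of-view values and the assumed person height are placeholders, not our calibrated numbers:

```python
import math

def pan_angle_deg(box_center_x, frame_width, hfov_deg=62.2):
    """Horizontal turn needed to center the detected person. 62.2 deg is
    the stock Raspberry Pi camera HFOV, used only as a placeholder for
    our PTZ camera's real value."""
    offset = box_center_x - frame_width / 2.0
    return math.degrees(math.atan(
        2.0 * offset / frame_width * math.tan(math.radians(hfov_deg / 2.0))))

def depth_from_box(box_height_px, frame_height, person_height_m=1.7,
                   vfov_deg=48.8):
    """Rough distance estimate from apparent size: the person's pixel
    height relative to the frame, given an assumed real height."""
    angular = box_height_px / frame_height * math.radians(vfov_deg)
    return person_height_m / (2.0 * math.tan(angular / 2.0))
```

A person centered in the frame needs no turn, and a smaller bounding box implies a larger distance, which is the relationship the rover's stop distance relies on.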

Progress

I believe I am still on track. Every component of the distributed CV server is implemented, and some design decisions need to be finalized, which should take only a couple of hours. This will allow me to focus more on scenario testing going forward.

Deliverables for next week

For next week, I hope to make decisions on how many worker nodes to spawn by performing speedup analysis. Additionally, by doing this, I hope to get a sense of the latency of communication between the CV server and the rover and include it within my control logic.

Verification and Validation

Throughout my progress developing the CV server, the laser pointer, and the communication protocols, I have been running unit tests to make sure the modules function as desired and meet our design requirements. I have tested the laser pointer activation thoroughly through visual inspection and execution of the server-to-rover communication protocol. I have also thoroughly tested the accuracy of the object detection algorithm by running it on stock images, attaining a top-1 accuracy of 98%, significantly greater than our design requirement of 90%.

I am also going to analyze the speedup achieved by the distributed CV server through unit testing of object detection on the video stream, which will allow me to determine the number of CV nodes to spawn. The speedup must be greater than 5x per our design requirements. I will also thoroughly test the accuracy of the laser's pointing control through scenario testing and tune this logic to achieve our target offsets. This will be done by manually measuring the distance between the laser and the human upon execution of the entire system.
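The speedup analysis itself is simple arithmetic over measured runtimes. A sketch, where the timing numbers in the test below are made up purely for illustration:

```python
def speedup_curve(frame_times_s):
    """frame_times_s: dict of worker count -> measured seconds to run
    object detection over the same test video. Returns speedup relative
    to a single worker."""
    base = frame_times_s[1]
    return {n: base / t for n, t in sorted(frame_times_s.items())}

def workers_for_target(frame_times_s, target=5.0):
    """Smallest measured worker count meeting the target speedup,
    or None if no configuration does."""
    for n, s in sorted(speedup_curve(frame_times_s).items()):
        if s >= target:
            return n
    return None
```

Running this over real measurements is how the node count satisfying our 5x requirement would be chosen.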

Ronit’s Status Report for 3/30/2024

Tasks accomplished this week

This week, I got the laser diode working by constructing a small circuit on the breadboard (3+ hours). This involved using the Raspberry Pi to turn the diode on or off, so I also had to write a small script to control the pins (3+ hours). With this done, we have achieved the laser pointing functionality of the rover.
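The pin-control logic can be sketched off-hardware by injecting the pin-write function (on the Pi this would wrap a GPIO library call); this mirrors, rather than reproduces, my actual script:

```python
import time

class Laser:
    """Minimal laser-diode driver sketch. pin_write is injected (e.g. a
    wrapper around a GPIO output call on the real Pi) so the logic can
    be unit-tested without hardware; class and method names are
    illustrative."""
    def __init__(self, pin_write):
        self._write = pin_write
        self.on = False

    def set(self, on):
        self._write(bool(on))
        self.on = bool(on)

    def pulse(self, seconds):
        """Turn the laser on for a fixed duration, then off."""
        self.set(True)
        time.sleep(seconds)
        self.set(False)
```

Keeping the GPIO call behind an injected function is also what let me test the server-to-rover activation path before the circuit was finished.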

In addition, as mentioned in last week's status report, I worked on communication from the distributed CV server to the rover (5+ hours). This involved researching the best way to send instructions, and I implemented a communication protocol over WiFi. I also started working on the control logic for how much the rover has to turn and where it should stop (2+ hours); however, this isn't fully implemented yet.

Progress

I believe I am on track now, as many of the important components of our system are now implemented; it’s just a matter of testing it out and refining certain components. I am going to work extensively next week with my partners to test out our overall system in time for the interim demo.

Deliverables for next week

For next week, I hope to implement and test out the control logic for the rover. This involves trigonometric calculations and measuring the latency of communication between the CV server and the rover.

Ronit’s Status Report for 3/23/2024

Tasks accomplished this week

This week, I primarily worked on getting the video feed from the PTZ camera on the Raspberry Pi. This task was crucial, as it handles communication between the rover and the distributed CV server. I spent most of my time researching and implementing logic for receiving, decoding, and interpreting the byte stream received from the Raspberry Pi (8+ hours). I also worked on feeding the video frames into YOLOv5 object detection (2+ hours).
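One common way to delimit frames in a byte stream is a length prefix; this sketch is illustrative and may differ from the actual stream format I implemented:

```python
import struct

HEADER = struct.Struct(">I")  # 4-byte big-endian frame length

def decode_frames(buffer):
    """Split a raw byte stream into length-prefixed frames. Returns the
    complete frames plus whatever partial tail must wait for more
    bytes from the socket."""
    frames, pos = [], 0
    while pos + HEADER.size <= len(buffer):
        (length,) = HEADER.unpack_from(buffer, pos)
        if pos + HEADER.size + length > len(buffer):
            break  # incomplete frame; keep the remainder buffered
        frames.append(buffer[pos + HEADER.size:pos + HEADER.size + length])
        pos += HEADER.size + length
    return frames, buffer[pos:]
```

Each complete frame would then be decoded into an image and handed to the YOLOv5 pipeline.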

Furthermore, I worked with Nina on the PTZ functionality (4+ hours). This involved debugging the stock code that came with the camera to ensure the PTZ motors can move. This brings us significantly closer to our goals with the rover, as we can now capture different angles.

Progress

I believe I am a little behind, as communication from the distributed CV server to the rover still must be worked out. However, this is not much of a concern, as I was able to figure out exactly how communication over WiFi can be implemented between the two components. I am going to work extensively this week to make sure the rover can receive instructions and execute them in a timely manner before the interim demo.

Deliverables for next week

As mentioned above, I am going to work extensively on communication from the CV server to the rover. Additionally, I am hoping to make headway on the trigonometric calculations (control logic) that must be performed on the server to ensure that the rover orients itself correctly if a human is detected.

Team Status Report for 3/16/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is getting the PTZ camera to work. This past week, we worked extensively on downloading libraries and debugging dependencies to get the PTZ camera's software API to display the video feed. We still have to get this functioning correctly and are working hard to do so. We are optimistic that we can get the camera functioning, as 3 of the 4 libraries have been installed correctly, and we plan on installing the last one by tomorrow to get the video feed. In the scenario where the camera doesn't work correctly, we might consider changing the MCU to the NVIDIA Jetson Nano, which has many cheap, easy-to-use cameras available. However, this is a worst-case scenario and is unlikely to occur.

We also made progress on the rover, and the code to control its search pattern is complete. We will now focus on tuning the search pattern. As all of our individual components near the end of their implementation, a significant risk is the communication between components and how to use the WiFi module to relay information. A lot of online documentation exists on this, so we plan on conducting research to identify the best way to handle inter-component communication. We also have a recommendation from our TA Aden to use scp to transfer information between the computer (communication hub) and the rover. If WiFi doesn't work, our contingency plan is to use the built-in Raspberry Pi Bluetooth module to relay video data to a central hub (computer) that handles communication. Again, however, this is an unlikely scenario, as preliminary research shows that WiFi communication should be feasible to implement.

System Changes

As of right now, there are no major system changes from the last team status report. We did decide to switch from hacking the web app to using the Raspberry Pi and UART communication to send JSON commands to the rover. Other than that, we still plan on implementing the system with the PTZ camera and WiFi communication, and would only consider pivoting to the alternate technologies mentioned above if absolutely necessary. We will know whether these changes are needed as we work more on the rover's communication protocol.

Other Updates

There have been no schedule changes nor any other updates. Potential schedule changes may occur after we work on the rover's communication protocol next week. By the end of next week, we plan on having a working interim demo with communication between the rover and the server, where the rover stops and turns towards a human detected during its search pattern.

Ronit’s Status Report for 3/16/2024

Tasks accomplished this week

This week, I primarily worked with Nina on getting the PTZ camera working. I spent most of my time installing libraries, upgrading the Raspberry Pi OS from Buster to Bullseye, and debugging installation dependencies (6+ hours). We made significant progress and should be able to get the camera up and running by the weekend.

Furthermore, I wrote code to integrate the YOLOv5 algorithm into the CV server nodes as planned (2+ hours). This essentially means that everything in the distributed CV server is complete, with the remaining task being relaying information to the different components in the system. I also wrote a lot of unit tests to ensure the integrity of the system (6+ hours). With this done, I can fully devote my time to the PTZ camera and communication between components.

Progress

In general, I still believe I am a little behind, as I had hoped to have the camera up and running; however, we are extremely close to getting this done. I am going to work extensively this week to make sure that the PTZ camera is working correctly and that we are able to relay video data from the Raspberry Pi to the CV server.

Deliverables for next week

As mentioned above, I still need to work with Nina to get the PTZ Camera completely up and running. This should be straightforward now, as a significant amount of work has been done; only some basic libraries need to be installed. I also plan on working on the communication and integration of the Camera and the Distributed server next week, allowing us to have functional live computer vision in time before the interim demo.

Team Status Report for 3/9/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is getting everything to work together. The last two weeks, we worked on quickly readjusting our project to accommodate the fact that we are now using a rover instead of a drone. This meant developing new plans for our implementation and our overall system so that our project could maintain the same functionality as before. Now, the key risk is making sure that our new implementation strategies will work. To ensure this, we tried not to change what we had from before, and then analyzed what differences arose now that we have a rover. This was easier to handle because we had focused on modularity in the past, meaning the CV servers and the website were not affected as heavily. We added to this modularity by introducing an information hub that helps with information flow in our project and also provides a clear boundary marking where one module starts and ends.

Regarding the rover, our main concern is having controlled communication with it. We are working with a pre-made web app to learn its existing communication methods, and will then interface with or adapt those mechanisms. Should this method not work, we also plan on having a Raspberry Pi aboard the rover (for the camera video data), and we can send JSON commands to this Raspberry Pi instead. The JSON commands can then be transferred to the rover using UART communication.

System Changes

We have conducted a significant amount of research to figure out how communication with and functionality of the rover would work. We decided that the easiest and most practical way to communicate would be through the ESP32 module on the rover. Additionally, a Raspberry Pi module on the rover would allow us to control the laser pointer and gather video data from the camera. Apart from these inclusions, there have been no major system changes, and it seems like integration of all the components will be seamless through WiFi communication. There might be some changes in how the distributed CV server and the Django web server receive and send information; we will know of any changes once we work out the communication protocol of the WAVESHARE rover via the Raspberry Pi.

Other Updates

There have been no schedule changes nor other updates. Potential schedule changes may occur after we work on the rover next week following its delivery, notably as we attempt to accomplish our goal (as requested by Prof. Kim) of having an end-to-end solution by our meeting on Wednesday. If this is not accomplished, the plan is to have something close to it by the end of the week.

Specified Needs Fulfilled Through Product Solution

We also make some further considerations that are important to think about while we develop our product. We break these considerations down into three groups: the first (A) deals with global factors, the second (B) with cultural factors, and the third (C) with environmental factors. A was written by Ronit, B by Nina, and C by David.

Consideration of global factors (A)

Our product, a search and rescue rover, heavily deals with global factors. The need for efficient and cost-effective search and rescue operations is inherently a global concern, as disasters and war could affect any country in the world. In general, our rover would have a positive impact on the global economy, as risks to human life are mitigated by removing the need for manual intervention in search and rescue operations. Furthermore, our rover caters to different terrains, making it usable in various geographical locations and scenarios.

The technologies our rover uses also reflect global factors. A web application accessible by users anywhere in the world could allow real-time collaboration between international rescue agencies. Using WiFi to communicate between the components in our system could let us expand the system to incorporate satellite WiFi, allowing the distributed server to communicate with any rover anywhere in the world. Using a distributed system also improves the scalability of the product, as multiple rovers in multiple rescue missions could send their video data for analysis, improving the efficiency of rescue operations happening concurrently in different locations.

Cultural considerations (B)

During the development of our search and point rover, it's important to be mindful of its use across different communities and cultures. Features to consider include adapting the rover's interface to support different languages, communication styles, and symbols. In addition, customization and personalization abilities would allow users to tailor the rover's interface to their cultural preferences. Ensuring that the rover does not impose on sacred cultural areas, being mindful of gender sensitivities, and respecting privacy are also vital. Additionally, incorporating feedback from different cultural communities into the rover's design yields technology that respects the cultural and ethical values of diverse populations. By integrating these cultural considerations, we can create a search and point rover that is not only technologically advanced but also respectful of various cultural backgrounds.

Environmental considerations (C)

Our product was also designed with consideration of environmental factors. Environmental factors concern the environment as it relates to living organisms and natural resources.

Search and Point would have essentially no environmental side effects. As the rover performs its tasks of searching and pointing the laser pointer, it does not leave behind any residual presence. The rover is battery-powered, so it has no harmful byproducts (unlike gasoline-powered motors). The batteries are also rechargeable, allowing for a renewable and safe way of maintaining the rover's functionality. The decentralized system revolving around a centralized information hub means that the part of the project actually deployed into the environment is quite small; the main computing power is offloaded to servers elsewhere. The only imprint our project leaves on the environment is the rover driving over the landscape, which is an arguably negligible source of environmental damage. The laser we chose is also relatively low power, meaning there are no harmful side effects from using it.