David’s Status Report for 4/27/2024

Accomplished Tasks

This week was the week of the Final Presentation. I was the one who presented, and the presentation went well, largely thanks to the preparation we did beforehand. I worked on putting the slides together and made sure to rehearse my delivery for the big finale!

For content, I worked hard on ensuring that the rover would work when put all together. This meant tuning the rover so that the laser pointing would be accurate. At first I ran into issues where, due to the latency of the CV, the rover was unable to converge on a correct point to aim the laser at; it would keep overshooting the correction. Resolving this meant making the rover adjust very slowly so that it would converge and fire with great accuracy! The tradeoff is that it is now perhaps overly slow, which is still under investigation.
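To make the fix concrete, here is a minimal sketch of the slow-adjustment idea (the gain and deadband are hypothetical values, not my actual tuned constants):

```python
# Hypothetical proportional controller for the laser-pointing correction.
# Because the CV feedback arrives with latency, a large step size keeps
# overshooting; a deliberately small gain converges slowly but reliably.
KP = 0.2            # small proportional gain (assumed value)
DEADBAND_DEG = 0.5  # close enough: stop adjusting and fire (assumed value)

def pan_step(error_deg: float) -> float:
    """Return the turn command (in degrees) for one adjustment cycle."""
    if abs(error_deg) < DEADBAND_DEG:
        return 0.0  # converged on the target
    return KP * error_deg

# A 10-degree initial error shrinks geometrically instead of oscillating.
error = 10.0
while abs(error) >= DEADBAND_DEG:
    error -= pan_step(error)
print(f"converged with {error:.2f} degrees of residual error")
```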

For the presentation, we left it at "x-axis" accuracy, but I have also been working on improving "y-axis" accuracy. This is similar to x-axis accuracy, except the tuning comes from tilting the camera up and down instead. Tuning for this still needs to be done.

Progress

My progress is on track, with the full end-to-end rover put together and very accurate! Now it comes down to tuning the rover to perform even more accurately by working on the vertical accuracy. This will involve many tests and small adjustments, but with the main infrastructure in place, it should be doable. There is also further consideration of the rover's "search" behavior, since the inability to replicate movements is making an exact creeping line search impossible.

Next Week’s Deliverables

Next week, I plan to have the rover completely accurate, just in time for the ultimate final demo! I will also work on deciding on a finalized search method. Lastly, I will work on putting together all the final documentation pieces.

Team Status Report for 4/27/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is determining what kind of search pattern to implement if creeping line search does not work despite running the same rover commands. However, as mentioned in the previous status report, a somewhat randomized search would be near-optimal and wouldn't compromise the time taken to detect the human. We simply must ensure that the rover doesn't move out of our scenario arena by fine-tuning the rover commands. Additionally, the accuracy of the object detection trades off against the latency of communication from the CV server to the rover. We will finalize this tradeoff by increasing the rate at which the CV server samples frames from the video and by spawning fewer worker nodes, so that the system can take in more frames without compromising on latency. Finally, even though our scenario design is mostly finalized, we need to work out the best way to demonstrate our project. We will flesh these details out in the next week.
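As a rough illustration of the sampling knob mentioned above, here is how the CV server might sample frames (the OpenCV capture and names are assumptions; the actual server code differs):

```python
import cv2  # OpenCV; assuming the server reads the rover's stream this way

SAMPLE_EVERY_N = 3  # hypothetical rate: process every third frame

def sampled_frames(stream_url: str):
    """Yield every Nth frame of the incoming video stream.

    Raising N lightens the CV load (lower latency per result) but means
    staler detections; lowering N does the reverse. Tuning this value,
    together with the worker-node count, is how we trade detection
    accuracy against latency.
    """
    cap = cv2.VideoCapture(stream_url)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if count % SAMPLE_EVERY_N == 0:
            yield frame
        count += 1
    cap.release()
```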

We ran many unit tests during experimentation, as we had designed our system to be very modular. These tests cover CV detection accuracy (feeding images to see if a human is detected), movement accuracy (running a preset path and measuring the distance offset), latency (timing the CV and the information transfer), laser accuracy (turning the laser on and measuring the offset from the intended position), and power consumption (measuring runtime until the battery dies). The overall system test involves running the rover along a search pattern, letting it detect the human, and then pointing the laser at them.
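For example, the CV detection test reduces to something like this sketch (`detect_person` is a hypothetical stand-in for our real model):

```python
def detection_accuracy(images, labels, detect_person) -> float:
    """Feed labeled test images to the detector and report the fraction
    classified correctly. `detect_person(image) -> bool` is a
    hypothetical stand-in for the real human-detection model."""
    correct = sum(
        detect_person(img) == has_person
        for img, has_person in zip(images, labels)
    )
    return correct / len(labels)

# Usage: assert detection_accuracy(test_imgs, test_labels, detect) >= 0.90
```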

We found through our tests that movement control was often erratic and not replicable, which led us to search for new searching methods. Our latency proved to be small, but still large enough to cause errors such as failed target convergence, so we had to tune the rate at which the rover adjusts to deal with this. We will explore tradeoffs between laser accuracy and pointing speed. The laser pointing was very accurate as long as the laser was indeed pointing forward, with accuracy worsening the farther away the target was; this makes sense, since a fixed angular error translates into a larger lateral offset at greater distances.
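The distance effect is just geometry: a fixed angular error produces a lateral offset of distance × tan(error). A quick worked example (the 1-degree error is illustrative, not a measured value):

```python
import math

ANGULAR_ERROR_DEG = 1.0  # hypothetical fixed pointing error

for distance_m in (1, 3, 5, 10):
    offset_cm = distance_m * math.tan(math.radians(ANGULAR_ERROR_DEG)) * 100
    print(f"{distance_m:>2} m away -> {offset_cm:.1f} cm off target")
# 1 m -> 1.7 cm, 3 m -> 5.2 cm, 5 m -> 8.7 cm, 10 m -> 17.5 cm
```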

System Changes

Currently, our rover only turns on the x-axis. However, we decided to also move the PTZ camera to point the laser more accurately on the y-axis. This will allow us to account for the various heights of people and whether they're sitting or standing.

Apart from this, we might look into implementing obstacle detection with the ultrasonic sensor if time permits, though it is more likely we will just focus on testing our system intently. It is okay for the purposes of our project if this isn't implemented, as our use case requirement specifies semi-flat terrain with no obstacles.

Other Updates

For other updates, we are working on our final deliverables. Not much will change from our design report apart from the system changes above, and we will also work on designing a final poster that is informative and intriguing.

Ronit’s Status Report for 4/27/2024

Tasks accomplished this week

This week, I worked on the final presentation with my partners (6+ hours). This involved writing out the slides, getting images from our rover and from overall testing, and changing some of the block diagrams to reflect our new system design.

I also worked heavily on scenario testing with David and Nina (4+ hours). This involved running repeated tests with humans in different positions along the search pattern. During these scenario tests, we measured the offset between the laser and the human's position, latency, rover movement, and overall functionality.

I also implemented some logic changing the communication with the rover to include y-axis angle turns for pointing the laser at the person (2+ hours). This involved some more trigonometric calculations and changes to the file creation on the Raspberry Pi for rover movement. I also looked into whether we should implement obstacle detection using an ultrasonic sensor, even though it isn't part of our use case requirement (2+ hours). If such a program proves feasible, we might implement it if we have enough time after testing.
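The y-axis turn uses the same pinhole-camera idea as the x-axis logic; a minimal sketch (the field of view and frame height are assumed placeholder values, not our camera's exact specs):

```python
import math

V_FOV_DEG = 48.8  # assumed vertical field of view
FRAME_H = 480     # assumed frame height in pixels

def tilt_angle_deg(person_center_y: float) -> float:
    """Degrees to tilt the PTZ camera so the laser tracks the person's
    vertical position (positive = tilt down). Each pixel of offset from
    the image centre maps to an angle via the focal length."""
    offset_px = person_center_y - FRAME_H / 2
    focal_px = (FRAME_H / 2) / math.tan(math.radians(V_FOV_DEG / 2))
    return math.degrees(math.atan2(offset_px, focal_px))

# A person detected 100 px below the image centre: tilt down ~10.7 degrees.
print(f"{tilt_angle_deg(FRAME_H / 2 + 100):.1f}")
```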

Progress

I believe I am on track, and our testing results are successfully meeting a lot of our design goals. Slight improvements are needed in the system before the final demo, and I am confident we will achieve our goals.

Deliverables for next week

To wrap up the capstone, for next week, I will work intently on making sure our scenario test is well designed and demoed. I will also work on the poster and final report with my partners.

Nina’s Status Report For 4/27/2024

Accomplished Tasks

This week, I worked on the slides for the final presentation and did 20+ rounds of testing on the full integration of our search and shine rover. Since we are merging all our subsystems into one unified project, we are working on solving the inaccuracies in centering the rover on its target as well as the long latency it takes to turn on the laser. While David and Ronit were refining their software to center the rover's laser on a newly detected person, I acted as the person the rover would center on and ran test trials in different positions, lighting, and angles. We are working on minimizing the inaccuracies that come with the edge cases in these sorts of situations. Furthermore, I added some frontend polish to the website to make it look nicer, along with additional information to help the rescue worker understand how to use the site for monitoring.

Progress

My progress is on track; however, I would like to add communication between my website and Ronit's object detection server in order to classify and count the people found. I am running into issues, though: constantly triggering the event that collects the outside data creates a large amount of overhead, which slows the web application down. I am still deciding whether or not to incorporate this feature.
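One mitigation I am considering is caching the server's response and refreshing it on a fixed interval, so page activity doesn't trigger a round trip every time. A rough sketch (the endpoint URL and interval are hypothetical):

```python
import time
import urllib.request

DETECTIONS_URL = "http://cv-server.local/detections"  # hypothetical endpoint
POLL_INTERVAL_S = 5.0                                 # hypothetical refresh rate
_cache = {"data": None, "fetched_at": 0.0}

def get_detections() -> bytes:
    """Return the latest detection data, hitting the CV server at most
    once per interval so the web app stays responsive."""
    now = time.monotonic()
    if _cache["data"] is None or now - _cache["fetched_at"] > POLL_INTERVAL_S:
        with urllib.request.urlopen(DETECTIONS_URL) as resp:
            _cache["data"] = resp.read()
        _cache["fetched_at"] = now
    return _cache["data"]
```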

Next Week’s Deliverables

Next week, I plan on largely working on the final documents with my team and continuing to optimize and refine the features of my web app. As a team, we will also continue to test and refine our search and shine system for the final demo.

David’s Status Report for 4/20/2024

Accomplished Tasks

These weeks were the weeks of Carnival and of putting things together. Over the last two weeks, I worked on putting all our separate components together, both in hardware and in software. I first worked with the team to assemble the hardware; this meant designing the wiring schematic to figure out which wires would be connected where, and linking together the rover, the PTZ motor (and camera), and the laser. I added a breadboard to help unify the wires, and also labelled the lengths of the wires (to determine which should be longer than others). Fortunately, the hardware unification was a success! All the components worked after combining them, which was amazing, as the whole rover was now functional with all the pieces on it. Several wood structures were laser cut to help provide structure and stability to the rover's components.

On the software end, I had to combine all the software together. This comprised unifying four disjoint files for the laser, movement, camera streaming, and PTZ movement into one big file. The movement code was also updated to properly take in inputs from the CV side and point towards the targeted person, as initially planned. All the code unification worked as well: the rover is now able to concurrently (using threads) stream, move, and point the laser, as well as read inputs from the CV side. Unfortunately, I have not resolved the issue of the rover being unable to do the same thing twice, so changes still need to be made to the search pattern.
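Structurally, the unified file ended up looking something like this sketch (the four functions are stand-ins for the real subsystem loops):

```python
import threading

def stream_camera():   ...  # placeholder: camera streaming loop
def drive_rover():     ...  # placeholder: movement loop
def read_cv_inputs():  ...  # placeholder: reads targets from the CV server
def control_laser():   ...  # placeholder: laser on/off and pointing

# Each subsystem gets its own thread so the rover can stream, move,
# listen to the CV side, and point the laser concurrently.
threads = [
    threading.Thread(target=fn, daemon=True)
    for fn in (stream_camera, drive_rover, read_cv_inputs, control_laser)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```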

In regards to new tools and new knowledge, I had to learn a lot for this project from all sorts of places. I was unfamiliar with Raspberry Pis, UART communication, and all the libraries needed for the code. To learn these, I read an extensive amount of documentation online, along with observing examples of what people did on forums. Typically, reading documentation provided the bulk of the understanding, while forum posts helped guide me along. I also asked my TA Aden for help, and he provided invaluable guidance, advice, and assistance throughout the whole process.

Progress

My progress is on track, with the full end-to-end rover now put together. Now it comes down to tuning the rover to perform as accurately as we had planned. This will involve many tests and small adjustments, but with the main infrastructure there, it should be doable. There can also be some work in "pretty-ifying" the rover as well.

Next Week’s Deliverables

Next week, I plan to have the rover fully functional *and* accurate, as well as present the Final Presentation. This means making sure that the rover is able to perform the given task of finding the person, and pointing at them with the laser with an acceptable accuracy. This is mostly fine-tuning, but is still critical nonetheless.

Ronit’s Status Report for 4/20/2024

Tasks accomplished this week

This week, I primarily worked on the rover side of things, where I worked with my partners on laser cutting parts to hold all of our components stable on the rover (4+ hours). This gave the rover stability during movement.

I also worked on making modifications to the code for communicating with the rover (5+ hours). This involved researching a way other than SSH to create a file on the rover remotely. I also worked with David on what kind of file to create and how the rover should interpret it (1+ hour). I also performed significant testing of the CV server to make sure the program works through various edge cases, like the camera not being turned on, duplication of remote file creation, et cetera (4+ hours).

Progress

I believe I am still on track, and we have testing results that verify our design requirements. I am a little behind on working on the final slides, but that will be done over the course of the day.

Deliverables for next week

For next week, I will work on the final presentation and general testing of the rover. This heavily involves scenario design as we prepare for the final demo.

Team Status Report for 4/20/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is improving the accuracy of our object detection for finding people. Recently, it has detected people as cats, dogs, and even a refrigerator, which means we need to refine the CV algorithm used on our camera feed. In addition, we still need to parallelize the rover's movements and the camera's PTZ function, since they cannot currently run concurrently. If this is unresolvable, we can use another RPi attached to a power bank to run the PTZ code separately from the rover's. This also brings up the concern of power consumption, as our rover has around 40 minutes of runtime. We might add an additional power source for the RPi to connect to should this not be enough for our demo. We also have spare batteries for the rover in case its overall runtime is too short.

System Changes

Currently, our rover is integrated with our camera: it can send information to the CV server and web application as the rover is moving, and it will automatically ping the laser to turn on once a person has been spotted. We are still working on stability issues and on making sure the camera holds a centered line of sight so we can maintain our search pattern even after it has found someone.

Also, our rover has been struggling to maintain consistent movement. The same code run twice may cause different behavior for seemingly unknown reasons, making a fixed search pattern erratic. As a result, one design change has been to adopt a more randomized search pattern that "embraces" this erraticness, instead of the rigid creeping line search pattern from before.
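A first sketch of what such a randomized step could look like (the arena size and leg lengths are placeholder values):

```python
import math
import random

ARENA_HALF_M = 5.0  # placeholder: arena is a 10 m square centred at the origin

def next_leg(x: float, y: float, heading_deg: float):
    """Pick a random turn and a short drive that stays inside the arena.
    Re-rolls until the projected endpoint is in bounds, embracing the
    rover's erratic turning instead of fighting it."""
    while True:
        turn = random.uniform(-90.0, 90.0)
        dist = random.uniform(0.5, 1.5)
        h = math.radians(heading_deg + turn)
        nx, ny = x + dist * math.cos(h), y + dist * math.sin(h)
        if abs(nx) < ARENA_HALF_M and abs(ny) < ARENA_HALF_M:
            return turn, dist

print(next_leg(0.0, 0.0, 90.0))  # e.g. (-37.2, 1.1)
```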

Other Updates

For our final demo, we are working on the final presentation slides as well as making sure there are no edge cases in our demo regarding turning and object detection. We are running simulations in Techspark using a preset search pattern and ensuring communication is smooth and fast between subsystems.

Nina’s Status Report For 4/20/2024

Accomplished Tasks

For this week, my team and I worked on integrating all of our subsystems. We began by moving all the code to the rover's RPi and meshing together the laser circuit and the I/O from the PTZ so that everything would be unified on a breadboard with shared ground and power. In addition, to mount the camera on the rover, I glued and stabilized pieces of wood with laser-cut holes to hold the camera in place. This was to ensure the camera would be at a tall enough height for the rover to "search" and find people with enough of their bodies visible.

For new tools and technologies learned, I had to learn the Raspberry Pi framework from scratch, as I had never worked with it before. Due to the older hardware we were working with, I had a lot of struggles with errors from deprecated libraries that were no longer maintained on certain RPi operating systems. This meant flashing multiple SD cards with different OSes while trying to resolve firmware dependencies and installation requirements.

To get past these obstacles, I primarily found watching YouTube videos of people working with RPis to be helpful, since they gave step-by-step explanations of how the hardware connects to the RPi and what the different ribbon cables are for. For problems with RPi-dependent libraries, and for getting the camera to work in general, I surveyed a variety of RPi forums and Stack Exchange threads, as many people were dealing with either the same issue or something close to mine. Many times, I needed to think of a novel way to communicate the video feed between the camera and my web application, as many online methods used modern picamera libraries that weren't compatible with our older Arducam. Although there was a steep learning curve, it was good to know that people online were dealing with similar camera-setup issues, and workarounds for the configurations were shared throughout forums everywhere.

Progress

My progress is slightly behind on my web application. I was originally researching how to configure an interface on the web application that would send keypresses from the PC to the RPi to register movements for the PTZ gimbal, since the PTZ movement library is only compatible with the RPi operating system; the keypresses would need to be sent remotely to the RPi via SSH or possibly even sockets. However, we recently decided against manual control of the PTZ by rescue workers, since we already planned on creating a script to move it autonomously, and adding manual movement could cause us to veer off course, as we don't have sensors to prevent the rover from running into obstacles. I hope to also include tracking people's locations; however, communication with the CV server has proven difficult, since I am broadcasting the camera stream to the server with no obvious channel for sending input back.

Next Week’s Deliverables

Next week, I plan to finish my web application and have it be deployed onto the AWS server while ensuring low latency communication with the camera feed. In addition, I will help with ensuring the rover is fully integrated as well as run tests to make sure we have hit all our design goals.

Team Status Report for 4/6/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is getting the entire rover together and assembling its various parts. A lot of our subsystems are being developed in parallel right now, so assembling them onto the rover will involve some integration work. Additionally, because we're running so many processes on the rover, we must figure out whether the Raspberry Pi can handle executing multiple processes. This might require using another Raspberry Pi, so we must figure out a way to power and mount it if this system change is needed. The contingency plan is to get a small portable power supply and strap it onto the rover in case the rover cannot power two Raspberry Pis. We might get one anyway so that no drive time is lost when using two Raspberry Pis.

There is also a risk that the rover cannot turn completely accurately. There is an inconsistency every time we run the same code, making perfect 90-degree turns impossible. This is definitely concerning, and it has prompted investigation into a possible "randomized" search algorithm, or into sensors that would let the rover correct its turns. The camera feed could perhaps serve as such a sensor.

System Changes

As of right now, there are no major system changes apart from the one discussed above. We have to figure out whether the Raspberry Pi can handle executing our multiple processes concurrently. If it cannot, there will be a system change to use two Raspberry Pis and a portable power supply to power the second one.

Other Updates

For our interim demo, we created an updated Gantt chart to reflect our progress and our plans going forward. There has been no change in this schedule, and we plan on starting complete testing and validation of our rover soon.

Validation and Verification

We have been using a series of unit tests to check the functionality of our subsystems, as discussed in our individual status reports. For system testing, we are going to conduct a variety of scenario tests. We will visually monitor all of our subsystems, like the CV server and the website, to make sure everything is going smoothly. We will also visually inspect whether the rover is able to consistently travel in a creeping line search pattern and whether it can accurately detect and point a laser pointer at a human. This will be done by making sure that the rover doesn't miss a human and that it is able to stop, turn, and point the laser at the human accurately to within ±1 foot. We will also measure the time taken for the entire human-detection process using timestamps on each of our subsystems. This will verify whether we're meeting our 50 ms latency requirement.
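Concretely, the timestamp measurement could look like the following sketch (the stage names are illustrative, and it assumes the subsystems' clocks are synchronized, e.g. via NTP):

```python
import time

log = []  # (frame_id, stage, unix_time) records gathered from all subsystems

def stamp(stage: str, frame_id: int) -> None:
    """Record when a given frame reached a given pipeline stage."""
    log.append((frame_id, stage, time.time()))

stamp("captured", 42)       # on the rover, when the frame is grabbed
stamp("detected", 42)       # on the CV server, after inference
stamp("laser_command", 42)  # back on the rover, when the command lands

times = {stage: t for fid, stage, t in log if fid == 42}
latency_ms = (times["laser_command"] - times["captured"]) * 1000
print(f"end-to-end latency: {latency_ms:.2f} ms (requirement: < 50 ms)")
```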

Ronit’s Status Report for 4/6/2024

Tasks accomplished this week

This week, I worked primarily on making sure everything was functional for our interim demo. This involved coding some logic to show the professor and TA that the CV server is actually able to detect humans. I did this primarily by implementing code that draws bounding boxes on the regions of the image detected as a person (4+ hours). This opens up the possibility of displaying this information on the website, which I will discuss thoroughly with Nina.

I also implemented some control logic for the rover (9+ hours). This was implemented on the CV server and involved trigonometric calculations of how much the rover should turn. It required estimating the depth of the human (how far the person is from the camera), which I did by measuring the relative size of people in the camera frame. This allows us to accurately turn the rover towards the person. However, I still need to account for the latency between the CV server and the rover, which I will implement next week.
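In outline, the calculation looks like the sketch below (the field of view, focal length, and assumed human height are placeholder values, not my calibrated ones):

```python
import math

H_FOV_DEG = 62.2       # placeholder horizontal field of view
FRAME_W = 640          # placeholder frame width in pixels
PERSON_HEIGHT_M = 1.7  # assumed average human height
FOCAL_PX = 500.0       # placeholder focal length in pixels

def turn_angle_deg(box_center_x: float) -> float:
    """Degrees the rover should turn to face the detected person,
    from the bounding box's horizontal offset from the image centre."""
    offset_px = box_center_x - FRAME_W / 2
    focal_px = (FRAME_W / 2) / math.tan(math.radians(H_FOV_DEG / 2))
    return math.degrees(math.atan2(offset_px, focal_px))

def depth_m(box_height_px: float) -> float:
    """Estimate distance by similar triangles: a person of known height
    appears fewer pixels tall the farther away they are."""
    return PERSON_HEIGHT_M * FOCAL_PX / box_height_px

# Person right of centre and ~200 px tall: turn ~16.8 degrees, ~4.3 m away.
print(turn_angle_deg(480), depth_m(200))
```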

Progress

I believe I am still on track. Every component of the distributed CV server is implemented, and some design decisions need to be finalised, which should be completed within a couple of hours. This will allow me to focus more on scenario testing going forward.

Deliverables for next week

For next week, I hope to make decisions on how many worker nodes to spawn by performing speedup analysis. Additionally, by doing this, I hope to get a sense of the latency of communication between the CV server and the rover and include it within my control logic.

Verification and Validation

Throughout my progress in developing the CV server, the laser pointer, and the communication protocols, I have been running unit tests to make sure the modules function as desired and meet our design requirements. I have thoroughly tested the laser pointer activation through visual inspection and execution of the server-to-rover communication protocol. I have also thoroughly tested the accuracy of the object detection algorithm by running it on stock images, attaining a TOP-1 accuracy of 98%, significantly greater than our design requirement of 90%.

I am also going to analyse the speedup achieved by the distributed CV server through unit tests that execute object detection on the video stream, which will let me determine the number of CV nodes to spawn. The speedup must be greater than 5x per our design requirements. I will also thoroughly test the accuracy of the laser's pointing control through scenario testing and tune the logic to achieve our target offsets. This will be done by manually measuring the distance between the laser dot and the human upon execution of the entire system.
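The speedup analysis itself amounts to a simple benchmark, sketched below (`run_cv` and `test_frames` are hypothetical stand-ins for spawning the real worker nodes over real captured frames):

```python
import time

def throughput_fps(process_frames, frames, n_workers: int) -> float:
    """Run detection over a fixed batch of frames with a given worker
    count and return frames per second."""
    start = time.perf_counter()
    process_frames(frames, n_workers)
    return len(frames) / (time.perf_counter() - start)

# Sweep worker counts and compare against the single-node baseline:
# baseline = throughput_fps(run_cv, test_frames, 1)
# for n in (2, 4, 8):
#     speedup = throughput_fps(run_cv, test_frames, n) / baseline
#     print(f"{n} workers: {speedup:.1f}x (requirement: > 5x)")
```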