Nina’s Status Report For 4/27/2024

Accomplished Tasks

This week, I worked on the slides for the final presentation and ran 20+ rounds of testing on the full integration of our search and shine rover. Since we are merging all our subsystems into one unified project, we are working on fixing inaccuracies in centering the rover so that it faces a target squarely, as well as the long latency in turning on the laser. While David and Ronit refined their software to center the rover's laser on a newly detected person, I acted as the person the rover would center on and ran test trials at different positions, lighting conditions, and angles. We are working to minimize the inaccuracies that come with edge cases in these sorts of situations. I also polished the website's frontend and added information to help a rescue worker understand how to use the site for monitoring.

Progress

My progress is on track; however, I would like to add communication between my website and Ronit's object detection server in order to classify and count the people found. I am running into issues because constantly triggering an event to collect the outside data creates a large amount of overhead, which slows down the web application. I am still deciding whether or not to incorporate this feature.
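
One option I am considering for cutting this overhead is to poll Ronit's server from a single background thread and cache the latest counts, so page requests read the cache instead of triggering a fetch each time. Below is a minimal sketch of that idea, assuming a simple HTTP endpoint; the URL and response shape are placeholders, not our actual API.

```python
import threading
import time

import requests  # third-party HTTP client

DETECTION_URL = "http://cv-server.local:8000/detections"  # placeholder endpoint
POLL_INTERVAL_S = 2.0

latest_counts = {"people": 0}   # cache shared with the web handlers
cache_lock = threading.Lock()

def poll_detection_server():
    """Fetch detection counts on a fixed interval instead of per page load."""
    while True:
        try:
            resp = requests.get(DETECTION_URL, timeout=1.0)
            counts = resp.json()  # assumed shape, e.g. {"people": 3}
            with cache_lock:
                latest_counts.update(counts)
        except requests.RequestException:
            pass  # keep serving the last good value if the server is busy
        time.sleep(POLL_INTERVAL_S)

def get_people_count():
    """Called by the web app's views; never blocks on the detection server."""
    with cache_lock:
        return latest_counts["people"]

threading.Thread(target=poll_detection_server, daemon=True).start()
```

This keeps the web app responsive even if the detection server is slow, at the cost of counts being up to one poll interval stale.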

Next Week’s Deliverables

Next week, I plan to work largely on the final documents with my team and to continue optimizing and refining the features of my web app. As a team, we will also continue to test and refine our search and shine system for the final demo.

Team Status Report for 4/20/2024

Significant Risks and Contingency Plans

The most significant risk to our project right now is the accuracy of our object detection in finding people. Recently, it has classified people as cats, dogs, and even a refrigerator, which means we need to refine the CV algorithm used on our camera feed. In addition, we still need to parallelize the rover's movements with the camera's PTZ function, since they currently cannot run concurrently. If this proves unresolvable, we can attach another RPi to a power bank and run the PTZ code separately from the rover's. This raises a power consumption concern, as our rover has around 40 minutes of runtime; we might add an additional power source for the RPi should this not be enough for our demo. We also have spare batteries for the rover in case its overall runtime is too short.
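
If the parallelization stays on one RPi, the most direct approach may be to run the drive loop and the PTZ sweep in separate threads. This is only a sketch of that idea; drive_step() and ptz_step() are hypothetical stand-ins for our actual movement and gimbal routines.

```python
import threading
import time

def drive_step():
    """Placeholder for one increment of the rover's search-pattern movement."""
    time.sleep(0.1)

def ptz_step():
    """Placeholder for one increment of the camera's pan/tilt sweep."""
    time.sleep(0.1)

def run_loop(step_fn, stop_event, period_s):
    # Each subsystem advances on its own cadence until shutdown is signaled.
    while not stop_event.is_set():
        step_fn()
        time.sleep(period_s)

stop = threading.Event()
threads = [
    threading.Thread(target=run_loop, args=(drive_step, stop, 0.2)),
    threading.Thread(target=run_loop, args=(ptz_step, stop, 0.5)),
]
for t in threads:
    t.start()

time.sleep(10)   # search for a while, then stop both loops together
stop.set()
for t in threads:
    t.join()
```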

System Changes

Currently, our rover is integrated with our camera: it can send information to the CV server and web application while the rover is moving, and it automatically signals the laser to turn on once a person has been spotted. We are still working on stability issues and on keeping the camera centered along a consistent line of direction so we can maintain our search pattern even after it has found someone.
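
For context, turning the laser on from the RPi amounts to driving a GPIO pin high when a detection message arrives. Below is a minimal sketch of that idea; the pin number and callback names are assumptions, not our actual wiring or code.

```python
import RPi.GPIO as GPIO  # available on Raspberry Pi OS

LASER_PIN = 18  # hypothetical BCM pin; our wiring may differ

GPIO.setmode(GPIO.BCM)
GPIO.setup(LASER_PIN, GPIO.OUT, initial=GPIO.LOW)

def on_person_detected():
    """Called when the CV server reports a person in frame."""
    GPIO.output(LASER_PIN, GPIO.HIGH)  # shine the spotlight

def on_person_lost():
    GPIO.output(LASER_PIN, GPIO.LOW)
```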

Our rover has also been struggling to maintain consistent movement: the same code run twice may produce different behavior for reasons we have not pinned down, which makes a fixed search pattern erratic. As a result, we changed the design to a more randomized search pattern that "embraces" this erratic behavior, instead of the rigid creeping-line search pattern from before (a sketch of the idea follows).
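
The randomized pattern can be as simple as alternating a random-length forward run with a random turn. The sketch below illustrates the idea; forward() and turn() are stand-ins for our rover's actual drive commands.

```python
import random
import time

def forward(duration_s):
    """Placeholder: drive straight for duration_s seconds."""
    time.sleep(duration_s)

def turn(angle_deg):
    """Placeholder: rotate in place by angle_deg degrees."""
    time.sleep(abs(angle_deg) / 90.0)

def randomized_search(total_time_s=120):
    # Random walk: short straight runs joined by random turns, so the
    # rover's inconsistency no longer breaks an exact geometric pattern.
    deadline = time.monotonic() + total_time_s
    while time.monotonic() < deadline:
        forward(random.uniform(1.0, 3.0))
        turn(random.uniform(-120, 120))

randomized_search()
```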

Other Updates

For our final demo, we are working on the final presentation slides as well as making sure the demo handles edge cases in turning and object detection. We are running simulations in Techspark using a preset search pattern and verifying that communication between subsystems is smooth and fast.

Nina’s Status Report For 4/20/2024

Accomplished Tasks

This week, my team and I worked on integrating all of our subsystems. We began by moving all the code to the rover's RPi and combining the laser circuit with the I/O from the PTZ so that everything sits on one breadboard with shared ground and power. In addition, to mount the camera on the rover, I glued and stabilized pieces of wood with laser-cut holes to hold the camera in place. This ensures the camera sits high enough for the rover to "search" and find people whose bodies are visible in frame.

As for new tools and technologies, I had to learn the Raspberry Pi ecosystem from scratch, as I had never worked with it before. Because of the older hardware we were using, I struggled with errors from deprecated libraries that were no longer maintained on certain RPi operating systems. This meant flashing multiple SD cards with different OS's while trying to resolve firmware dependencies and installation requirements. To get past these obstacles, I mainly found YouTube videos of people working with RPis helpful, since they gave step-by-step explanations of how the hardware connects to the RPi and what the different ribbon cables are for. For problems with RPi-dependent libraries and getting the camera working in general, I surveyed a variety of RPi forums and Stack Exchange threads, where many people were dealing with either the same issue or something close to mine. Often I needed to find a novel way to communicate the video feed between the camera and my web application, since many online methods relied on modern picamera libraries that were not compatible with our older Arducam. Although there was a steep learning curve, it was reassuring that people online were hitting similar camera setup issues, and workarounds for the configurations were shared across forums everywhere.

Progress

My progress on the web application is slightly behind, since I was originally researching how to build an interface that would send keypresses from the PC to the RPi to drive the PTZ gimbal; the PTZ movement library only runs on the RPi operating system, so keypresses would have to be forwarded remotely over SSH or possibly sockets. However, we recently decided against manual control of the PTZ by rescue workers: we already planned a script to move it autonomously, and manual movement could veer us off course since we don't have sensors to keep the rover from running into obstacles. I also hope to include tracking of people's locations, but communication with the CV server has proven difficult since I am broadcasting the camera stream to the server with no obvious channel for sending data back.

Next Week’s Deliverables

Next week, I plan to finish my web application and deploy it to the AWS server while ensuring low-latency communication with the camera feed. In addition, I will help ensure the rover is fully integrated and run tests to confirm we have hit all our design goals.

Nina’s Status Report For 4/6/2024

Accomplished Tasks

This week, I got a TCP stream running on my personal computer, rather than only having the camera stream show on the Raspberry Pi desktop via a libcamera-vid script. I followed the picamera2 documentation to capture the video stream and point it to a web browser. I then embedded this stream link into my web application with an iframe to introduce formal monitoring of where our rover will be searching. Furthermore, I was able to attach a separate imx219 camera to our rover using the ribbon cable while keeping the I2C connections for the PTZ gimbal. This way, we still have an ongoing camera feed but can also move the camera for 180-degree panning.
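
For anyone reproducing this: the picamera2 documentation has an example along these lines for serving H.264 over a TCP socket. The sketch below follows that pattern from memory (the port, bitrate, and resolution are arbitrary choices), so treat it as a starting point rather than our exact script.

```python
import socket
import time

from picamera2 import Picamera2
from picamera2.encoders import H264Encoder
from picamera2.outputs import FileOutput

picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration({"size": (1280, 720)}))
encoder = H264Encoder(bitrate=4_000_000)

# Wait for one viewer to connect, then pipe encoded frames to it.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", 10001))   # arbitrary port
    sock.listen()
    conn, addr = sock.accept()
    encoder.output = FileOutput(conn.makefile("wb"))
    picam2.start_encoder(encoder)
    picam2.start()
    time.sleep(60)                  # stream for a fixed window
    picam2.stop()
    picam2.stop_encoder()
    conn.close()
```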

Progress

Currently, I am adding features to my web application, such as keypresses that allow manual movement of the camera's PTZ gimbal; however, I am struggling with cross-device communication, since the keypresses must register on the Raspberry Pi desktop. I am also trying to make moving and streaming from the camera work concurrently. Right now the two features are mutually exclusive, since both require the camera. I plan either to use threading, spawning two threads so the camera can do both (see the sketch below), or to add a second Raspberry Pi for one of the features with a power bank to power it separately.
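
A rough sketch of the two-thread idea, assuming a lock serializes access to the camera; stream_frame() and move_gimbal() are hypothetical stand-ins for our streaming and PTZ calls.

```python
import threading
import time

camera_lock = threading.Lock()

def stream_frame():
    """Placeholder: grab and send one frame to the web app."""
    time.sleep(0.03)

def move_gimbal():
    """Placeholder: apply one queued pan/tilt command."""
    time.sleep(0.05)

def streamer():
    while True:
        with camera_lock:   # only one thread touches the camera at a time
            stream_frame()

def mover():
    while True:
        with camera_lock:
            move_gimbal()
        time.sleep(0.2)     # gimbal commands are much less frequent

threading.Thread(target=streamer, daemon=True).start()
threading.Thread(target=mover, daemon=True).start()
time.sleep(30)              # let both run for the test window
```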

Next Week’s Deliverables

Next week, I will work on the camera mount design, since we finally have the camera working. After inquiring about 3D printing and seeing how expensive it may be, I will likely laser-cut a mount for the camera, laser, and gimbal combo to attach to the rover. Additionally, I will work on the frontend of the web application and host it on an AWS server, where I will continue working on the site's security.

Verification

Since I will be working on verifying the camera stream latency as well as the security of the monitoring site (making it resistant to attacks), I will check the real-time delivery of stream data to my web application and ensure it is under 50 ms, to minimize the delay in communicating a person's location to the object detection server. To do so, I will use ping, a simple command-line tool that sends a data packet to a destination and measures the round-trip time (RTT), and verify that the path from the RPi's live camera feed to the stream on my website stays under 50 ms. Furthermore, I will check the security of the website using vulnerability scanning tools that look for insecurities such as cross-site scripting, SQL injection, command injection, path traversal, and insecure server configuration; specifically, I will use Acunetix, a website scanning tool that performs these tests, to formally check the site. In addition, to keep unwanted users from accessing the site, I will use Google OAuth so that only authenticated users (rescue workers) can get in. I will have friends test or "break" the site by performing a series of GET and POST requests to see whether they can access any website data as an unauthorized user. To prevent this, I will introduce cross-site request forgery tokens to ensure data cannot be improperly accessed.
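
To log RTTs during test runs rather than reading ping's output by hand, the check can be scripted. This is a small sketch that shells out to ping and parses the average RTT; the hostname is a placeholder for our RPi's address, and the flag shown is for Linux/macOS.

```python
import re
import subprocess

def avg_rtt_ms(host: str, count: int = 10) -> float:
    """Run ping and parse the average round-trip time in milliseconds."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],   # Linux/macOS flag; Windows uses -n
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 1.2/3.4/5.6/0.7 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if match is None:
        raise ValueError("could not parse ping output")
    return float(match.group(1))

print(avg_rtt_ms("raspberrypi.local"))  # placeholder hostname
```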

Nina’s Status Report For 3/30/2024

Accomplished Tasks

This week, I finally got the camera up and running with a stable, high-definition video stream on the Raspberry Pi desktop. It took a lot of firmware updates as well as trying out different pieces of hardware, such as the ribbon cable used to send camera data to the RPi. I also resolved the external trigger issue that caused the stream to go stale by increasing the camera's timeout in the config file. Finally, I borrowed another team's PTZ camera, since they had ordered the same one from inventory, and realized our camera itself was the problem; I ultimately requested a new one from inventory, albeit without the pan-tilt-zoom feature. I hope to attach the new camera module to the top of the old PTZ camera so we can keep the panning and tilting feature without ordering another PTZ camera and spending our budget.

Progress

Currently, I am working on using OpenCV to open the CSI camera and stream it on my personal computer. This would create a central communication hub where the video stream can be extracted for our monitoring web application and used for real-time object detection. This has proven somewhat difficult, as the library our camera uses doesn't currently work with OpenCV, so I may need to try another method of streaming the footage on my computer, or else stream the video from the RPi and send it to my computer. Either way, I will work on reducing latency between our independent components.
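
If we go the stream-from-the-RPi route, OpenCV's FFmpeg backend can usually open a raw H.264 TCP stream directly by URL. A minimal sketch of the receiving side is below; the hostname and port are placeholders matching a libcamera-vid-style TCP stream, not a verified setup.

```python
import cv2  # pip install opencv-python

# Placeholder address for the RPi's TCP H.264 stream.
cap = cv2.VideoCapture("tcp://raspberrypi.local:10001", cv2.CAP_FFMPEG)
if not cap.isOpened():
    raise RuntimeError("could not open the RPi stream")

while True:
    ok, frame = cap.read()      # decoded BGR frame, ready for detection
    if not ok:
        break
    cv2.imshow("rover feed", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```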

Next Week’s Deliverables

Next week, I will work on the video communication between Ronit's object detection server and my web application, since the video is used in both of our subsystems. Then, I will add more features to my web application, which was previously delayed due to the new camera issues caused by our conversion from drone to rover.

Nina’s Status Report For 3/23/2024

Accomplished Tasks

This week, I worked on the Arducam and PTZ gimbal, trying to get a camera stream to display on the Raspberry Pi desktop. Due to issues with the old RPi 4 OS installation, I wiped the SD card, did a clean install of the Bullseye OS, and installed the OpenCV and libcamera libraries. Thankfully, the PTZ gimbal interface now shows up, and I can use keypresses to pan and tilt the camera.

Progress

Although the camera can pan 180 degrees and tilt, the camera stream is not showing up; only a black preview window appears. I may be dealing with a firmware error that prevents the camera feed from displaying, due to the deprecated libcamera-apps library that has now become rpicam-apps. Even if I wanted to install rpicam-apps, it is not compatible with an RPi 4 and would require ordering an RPi 5. On the other hand, it could be a hardware issue from a damaged ribbon cable, in which case I will request a new one from Quinn and reattach it. I've been struggling to stay on track because of all these unforeseen camera problems and the lack of documentation and tech support online. If this camera doesn't work, I may request a different one or buy a new one altogether that doesn't require a Raspberry Pi.

Next Week’s Deliverables

Next week, I plan to get a video stream working, whether from the Arducam or a new camera altogether, to integrate with Ronit's object detection software. In addition, I will make sure it's integrated into my web application for monitoring.

Nina’s Status Report For 3/16/2024

Accomplished Tasks

This week, I received the Arducam with PTZ gimbal from inventory, along with a Raspberry Pi 4 to control it. I attached the Arducam to the PTZ gimbal using an acrylic sheet and screws to get 180-degree panning. In addition, I wired the Arducam and gimbal to the RPi and set up the RPi itself. David helped me with the RPi setup process, as it was our first time using one and we were missing tools such as a monitor, keyboard, and wired mouse. To test the camera on the RPi desktop, I found some sample code online along with the corresponding libcamera packages for the Arducam.

Progress

I spent a majority of my time setting up the Raspberry Pi, as there were issues connecting it to CMU-DEVICE. In addition, since the RPi wasn't the latest model or updated to the latest OS, we were unable to download essential packages and dependencies. To fix this, I had to manually upgrade the OS from Buster to Bullseye, since the picamera2 library is not supported on Buster. There were many installation issues (pip3 installs failing because of deprecation, a deprecated config file name in the RPi sources list, a lack of swap space for installing dependencies, and deprecated commands in general from the camera's outdated setup guide). Because of these issues, I faced a lot of errors just running the code. After I finally got a preview window from 'libcamera-hello' showing the camera's perspective, it stopped working after updating the OS. However, the code now runs without import errors, so I believe it's fixable by rebuilding and compiling the libcamera library from source.

Next Week’s Deliverables

Next week, I plan to get a working camera stream visible on the RPi desktop, attach the camera to the mount that will sit on top of the rover, and continue refining the code that moves the PTZ gimbal. In addition, I will help with the laser module that will be connected to the camera to implement our spotlight.

Nina’s Status Report For 3/9/2024

Accomplished Tasks

For these two weeks, I was busy ordering parts (rover, batteries, camera, etc.). In addition, I worked mostly on the design review report, where I was tasked with the introduction, use-case requirements, part of the design requirements, revising the system implementation, project management, related work, and the summary. Much of my time went into researching related work to support our proposal and into deciding how we should structure our system.

During spring break, I was away on a trip, but I did pick up some inventory requests and placed additional orders for parts we needed. I also began looking into the 3D-printed camera mount that Ronit and I will need to work on and how to build a frame for the mount in AutoCAD.

Progress

Since our rover has yet to arrive, I will work primarily on designing the camera mount that will hoist the camera 2-3+ ft above the rover. Regardless of whether it ends up being 3D printed, I will try to have this done by the end of the week. This is doable without the rover, as we have the dimensions of both the rover and the camera. I'm also working on getting a live video feed from the Arducam using its API, which works with OpenCV.

Next Week’s Deliverables

Next week, I plan to finish the camera mount and set it up properly on the rover with the laser pointer incorporated as well. I will check the camera's perspective and have the camera feed shown on the website before it is mounted. Then, I will add more of the features listed in the design review to the site, such as an emergency override button that enables manual control of the rover.

Nina’s Status Report For 2/24/2024

Accomplished Tasks

This week, my team and I worked on the design presentation and changed our use-case requirements as we adjusted from a heavy-lifting drone that would drop aid packages to a drone with a laser attachment that would shine a spotlight on the people it identifies. We also met with some drone labs and Robotics Institute professors to help us finalize the type of drone we would use for our project (Crazyflies, Parrot drones, custom-built). Due to difficulties in acquiring a drone with all the necessary features, we have decided to pivot to a rover that would traverse a semi-flat landscape to identify humans in areas difficult for rescue workers to reach. We originally planned to display GPS information and live camera footage through the livestreaming and GPS-sharing features of a DJI drone app, but are now looking into mounting a GoPro camera on the rover and using a transmitter to track its location instead.

Progress

The drastic change to a rover has upended my original plan of hosting the RTMP video link I had set up on the website. Now, I am considering embedding a YouTube link in the website to use the platform's livestreaming, since a bare camera attachment provides no streaming of its own. In addition, although I have the live user location displayed through the Google Maps API, getting information from an AirTag has proven difficult because Apple's privacy rules don't allow easy sharing. I then looked into hosting the Find My iPhone application itself on the website, but Apple does not allow cross-platform sharing without certain security measures.

Next Week’s Deliverables

Next week, I plan to complete the video-sharing and live-location-sharing features and work on styling the website to improve the user experience. In addition, I will refine our use case and its requirements given the change in the method of transport for our search and detection application.

Nina’s Status Report For 2/17/2024

Accomplished Tasks

This week, I finished most of the base of the website: a home feed explaining what the site does, plus some UX work. I began implementing FastAPI and GeoDjango for the drone's live GPS interface as well as the video streaming. So far, I can get and display the GPS location of the current user on a live map.
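
As a sketch of where the FastAPI side is headed, the map display can poll a small endpoint that serves the latest coordinates. Everything here (route names, the in-memory store, the default coordinates) is illustrative rather than our actual code.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Position(BaseModel):
    lat: float
    lon: float

# Latest known position, updated by whatever tracker we end up using.
current = Position(lat=40.4433, lon=-79.9436)  # placeholder coordinates

@app.post("/position")
def update_position(pos: Position):
    """The tracker pushes new coordinates here."""
    global current
    current = pos
    return {"ok": True}

@app.get("/position")
def get_position() -> Position:
    """The map frontend polls this to move its marker."""
    return current
```

This would run with `uvicorn app:app`, with the map page fetching /position on an interval.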

Progress

Unfortunately, due to the change in drone plans, I can no longer use the RTMP video link that would come with a DJI drone's Fly app. I can still use FastAPI for streaming, but since the type of drone depends on what Professor Basti can procure, this is still in progress.

Next Week’s Deliverables

Next week, I will try to finish the GPS map interface and get a tracker for the drone (possibly an Apple AirTag) so we can get live coordinates of something other than the current user's laptop. In addition, once we meet with Professor Basti and confirm which drone we will receive, I will look into what sort of byte streaming I can use for the drone's footage.