David’s Status Report for 4/6/2024

Accomplished Tasks

This week was the week of the Interim Demo. For it, I managed to present the biggest success I have had in this project: getting the rover to move under my command, through my own code! It took many hours of work, running into trouble after trouble. The inconsistent documentation certainly did not help, since the JSON commands were unclear. Even worse, though, was the number of Raspberry Pi errors. My original Raspberry Pi had a broken TX pin that prevented the JSON commands from being sent over UART; this was only debugged with TA Aden’s help (THANK YOU!). Another RPi I tried struggled to connect to the Internet despite appearing to be connected. Fortunately, the last RPi I tried (the FIFTH one) worked, and I was able to control the rover through it. As a reminder, this means I can now control the rover from a computer connected to CMU wifi, effectively allowing the system communication to all come together.
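The exact JSON schema was one of the unclear parts of the documentation, so the sketch below is only illustrative: the field names ("cmd", "left", "right"), the port name, and the baud rate are placeholders, not the rover's real protocol. It shows the general shape of framing a drive command and sending it over UART:

```python
import json

# Hypothetical schema: the rover documentation's JSON fields were unclear,
# so "cmd", "left", and "right" here are placeholders, not the real protocol.
def build_move_command(left: float, right: float) -> bytes:
    """Encode a differential-drive command as a newline-terminated JSON frame."""
    frame = {"cmd": "move", "left": left, "right": right}
    return (json.dumps(frame) + "\n").encode("ascii")

payload = build_move_command(0.5, 0.5)
print(payload)

# Actually writing the frame over UART would use pyserial; sketched but not
# executed here, since the device name and baud rate depend on the RPi wiring:
#
#   import serial
#   with serial.Serial("/dev/serial0", 115200, timeout=1) as port:
#       port.write(payload)
```

Newline-terminating each frame gives the rover firmware an easy delimiter when reading from the serial buffer.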

To verify that the rover movement system works correctly, I implemented benchmark testing to better understand how much power must be applied for the rover to move as directed (e.g., how much power is needed to turn left). I also tested whether the rover can drive straight. This involves driving the rover along predetermined simple paths (such as a rectangle) and checking whether it returns to its original position facing the right way.
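The return-to-start check can be sanity-checked in simulation before a hardware run. This sketch dead-reckons the rover's pose from a list of (distance, turn) commands; the motion model and the tolerances are assumptions for illustration, not measured values:

```python
import math

def simulate_path(commands):
    """Dead-reckon pose from (distance_m, turn_deg) pairs: drive forward,
    then rotate counterclockwise in place. An assumed motion model --
    on hardware these numbers come from the actual benchmark runs."""
    x, y, heading = 0.0, 0.0, 0.0
    for distance, turn in commands:
        x += distance * math.cos(math.radians(heading))
        y += distance * math.sin(math.radians(heading))
        heading = (heading + turn) % 360.0
    return x, y, heading

def returns_to_start(commands, tol_m=0.1, tol_deg=5.0):
    """True if the path closes: near the origin, facing the original way."""
    x, y, heading = simulate_path(commands)
    angle_err = min(heading, 360.0 - heading)
    return math.hypot(x, y) <= tol_m and angle_err <= tol_deg

# A 2 m x 1 m rectangle with four left turns should close on itself.
rectangle = [(2.0, 90.0), (1.0, 90.0), (2.0, 90.0), (1.0, 90.0)]
print(returns_to_start(rectangle))      # → True
print(returns_to_start([(2.0, 90.0)]))  # → False
```

On the real rover, the drift between the simulated closure and where the rover actually stops is exactly the error the power benchmarks are meant to reduce.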

See the rover move!

Progress

My progress is quite on track, though the end-to-end demonstration has not been put together yet. We had ideally wanted this working a week or so ago, so all my focus will be on putting everything together, including system communication and the infrastructure on the rover.

Next Week’s Deliverables

Next week, I plan to have communication working between the CV servers and the rover control system. This involves investigating the threading capabilities of the RPi, along with figuring out how to translate a CV-detected person into directional commands. Working these out would finalize communication across the whole system, enabling everything to be put together.
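One simple starting point for the person-to-command translation (a hypothetical mapping, since our real command set is still being worked out) is to compare the detection's horizontal center against the frame center:

```python
def person_to_command(bbox_center_x: float, frame_width: float,
                      deadband: float = 0.1) -> str:
    """Map a detection's horizontal center to a directional command.

    Hypothetical mapping: a person within `deadband` of the frame center
    means drive forward; otherwise turn toward them.
    """
    half = frame_width / 2.0
    offset = (bbox_center_x - half) / half  # -1 (left edge) .. +1 (right edge)
    if offset < -deadband:
        return "turn_left"
    if offset > deadband:
        return "turn_right"
    return "forward"

print(person_to_command(320, 640))  # centered  → "forward"
print(person_to_command(50, 640))   # far left  → "turn_left"
```

The deadband keeps the rover from oscillating left/right when the person is roughly straight ahead.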

Nina’s Status Report for 4/6/2024

Accomplished Tasks

This week, I got a TCP stream running on my personal computer, rather than just having the camera stream show on the Raspberry Pi desktop via a libcamera-vid script. I did so using the picamera2 documentation, which allowed me to capture the video stream and point it to a web browser. I then embedded this stream link into my website application using an iframe, introducing formal monitoring of where our rover will be searching. Furthermore, I was able to attach a separate IMX219 camera to our rover using the ribbon cable while keeping the I2C connections for the PTZ gimbal. This way, we still have an ongoing camera feed but also have the ability to move the camera around for 180-degree panning.

Progress

Currently, I am working on introducing more features to my web application, such as keypresses that allow manual movement of the camera’s PTZ gimbal. However, I am struggling with communication across devices, since the keypresses must register on the Raspberry Pi desktop. I am also trying to implement the ability to move and stream from the camera concurrently. Right now, the two features are mutually exclusive, since both require the camera. I am planning either to use threading, spawning two threads so both features can share the camera, or to add a second Raspberry Pi for one of the features, powered separately by a power bank.
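The threading option can be prototyped without the camera hardware at all. In this sketch the camera is a stand-in class (not picamera2), and a threading.Lock serializes access so the stream worker and the gimbal worker never touch the shared camera at the same time:

```python
import threading
import time

class SharedCamera:
    """Stand-in for the real camera: records whether two threads ever
    used it simultaneously. The Lock serializes access."""
    def __init__(self):
        self.lock = threading.Lock()
        self.in_use = False
        self.overlap_detected = False

    def use(self, duration=0.002):
        with self.lock:
            if self.in_use:
                self.overlap_detected = True
            self.in_use = True
            time.sleep(duration)  # simulate a capture or a gimbal move
            self.in_use = False

camera = SharedCamera()

def stream_worker():
    for _ in range(20):
        camera.use()

def gimbal_worker():
    for _ in range(20):
        camera.use()

threads = [threading.Thread(target=stream_worker),
           threading.Thread(target=gimbal_worker)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("overlap:", camera.overlap_detected)  # → overlap: False
```

The trade-off versus a second Raspberry Pi: the lock means the two features still take turns with the camera, just automatically and at a fine grain, whereas a second board would give true simultaneous use.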

Next Week’s Deliverables

Next week, I will be working on the camera mount design, since we finally have the camera working. After inquiring about 3D printing and seeing how expensive it could be, I will likely laser-cut a mount for the camera, laser, and gimbal combo to attach to the rover. Additionally, I will work on the frontend portion of the web application and get it hosted on an AWS server, where I will further work on the security of the website.

Verification

Since I will be working on verifying the camera stream latency as well as the security of the monitoring site, I will check the real-time delivery of stream data to my website application and ensure it stays under 50 ms, to minimize the delay between where a person is and the object detection server. To do so, I will use ping, a simple command-line tool that sends a data packet to a destination and measures the round-trip time (RTT). Through this method, I will verify that the live camera feed from the RPi to the stream on my website stays under 50 ms.

For security, I will check the website with vulnerability-scanning tools that look for insecurities such as cross-site scripting, SQL injection, command injection, path traversal, and insecure server configuration. I will use Acunetix, a website-scanning tool that performs these tests, to formally check the site. In addition, to prevent unwanted users from accessing the site, I will use Google OAuth so that only authenticated users (rescue workers) can get in. I will ask my friends to test or “break” the site by performing a series of GET and POST requests to see whether they can access any website data without authorization. To prevent this, I will introduce cross-site request forgery (CSRF) tokens to ensure data cannot be improperly accessed.
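The RTT measurement can also be scripted rather than read off ping output by hand. This sketch times a TCP round trip against a throwaway local echo server; pointed at the real RPi stream host instead (the localhost address here is a placeholder), the same timing loop would be compared against the 50 ms target:

```python
import socket
import threading
import time

# Throwaway echo server: answers one connection, echoing back what it receives.
def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(64))

server = socket.socket()
server.bind(("127.0.0.1", 0))  # bind any free port; a stand-in for the RPi
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Time one probe's round trip, analogous to a single ping.
start = time.perf_counter()
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"probe")
    client.recv(64)
rtt_ms = (time.perf_counter() - start) * 1000.0
server.close()
print(f"RTT: {rtt_ms:.2f} ms")
```

In practice the probe should be repeated and averaged, since single RTT samples over wifi vary a lot; ping does this automatically with its summary statistics.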