Tzen-Chuen’s Status Report for 3/22

Ideally I would have spent this week finishing the integration of the UI, Raspberry Pi, and physical components, but I’ve been stuck on getting the main camera program working. The issue is that the camera appears to be plugged in but is not detected by the computer. This is strange, as it was working just fine before and we were able to take pictures with it.

Aside from the camera, much of this week’s work went to the ethics assignments, which were an unexpected time sink. My immediate priority is to solve the camera connection issue as soon as possible, then get back to the other parts of the project where I’m needed, like the UI.

Progress is currently behind schedule because of the camera hiccups and the ethics work, but with a bit of extra elbow grease next week we should be back on track. (Hopefully.)

Eric’s Status Report for 3/22/25

This week, I intended to continue testing the end-to-end data upload flow from the Raspberry Pi to Supabase. However, testing was temporarily blocked by dependency issues on the RPi, which prevented full integration with the latest version of the upload scripts. In the meantime, I focused on other tasks to keep making progress. I refactored the system architecture to use Supabase database event triggers instead of HTTP POST-based Edge Functions: since the RPi inserts data directly into the possible_matches table, event triggers are the better fit. I also worked with Richard on more details of how the parts will communicate, to minimize issues we’ll face during integration, and added functionality on the database side to send active alerts to the RPi.
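
As a rough illustration of the new flow, here is a minimal sketch of the RPi-side insert that fires the database event trigger. It assumes the supabase-py client, and the column names are placeholders rather than our final schema; the trigger itself lives in Postgres and runs server-side once the row lands.

  # Minimal sketch: RPi-side insert into possible_matches (supabase-py).
  # Column names are illustrative guesses, not the final schema.
  from supabase import create_client

  SUPABASE_URL = "https://<project>.supabase.co"  # placeholder
  SUPABASE_KEY = "<key>"  # placeholder

  client = create_client(SUPABASE_URL, SUPABASE_KEY)

  def report_possible_match(plate_text, confidence, image_url=None):
      # The Postgres event trigger on possible_matches takes over
      # server-side after this insert; no HTTP Edge Function call needed.
      client.table("possible_matches").insert({
          "plate_text": plate_text,
          "confidence": confidence,
          "image_url": image_url,
      }).execute()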

Although testing on the RPi is slightly behind schedule due to the dependency issues, we plan to resolve them as soon as possible and begin end-to-end testing. Next week, I aim to begin integrating the SageMaker API with the Supabase code and start full testing.

Team Status Report for 3/15

From the feedback we received on our design report, our current risks are working out the cloud implementation in enough detail that we have something to integrate with the edge compute part, and testing the camera as soon as possible to see if it matches what we need. To that end, we now have a barebones and minimal but still usable cloud implementation for our case, and Tzen-Chuen will be hooking up the camera in the next couple of days.

Currently, there are no changes to the existing design. We forecast changes next week, however, as the major components should be integrated and we will begin testing our MVP, likely learning what could be improved. The current schedule is MVP testing next week, then working on new or replacement components and making the requisite changes the week after that.

Right now, the locally run code is mostly finished, and it can be found here. We have also made large strides on the cloud side of things, with the databases set up.

Richard’s Status Report for 3/15

This week I focused on the integration between the Raspberry Pi and the cloud. I worked on the database update code, where a Python function retrieves the latest version of the Amber Alert database from Supabase and updates the locally saved database with it. If the Raspberry Pi does not have an internet connection, it simply continues using the locally saved version. I also wrote the code to check for matches in the database: a loop compares the OCRed text with the database, and if a match is found, the original image is uploaded to a Supabase bucket and an entry is made in a “possible matches” table for further verification by the cloud models. We will later integrate Eric’s Edge Function to move more processing to the cloud. The updated code can be found here.
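
As a minimal sketch of the refresh-with-fallback behavior: this assumes supabase-py, a hypothetical amber_alerts table, and a JSON file as the local cache, so the names here are illustrative rather than the actual code.

  import json
  from pathlib import Path

  LOCAL_DB = Path("data/amber_alerts.json")  # illustrative cache location

  def refresh_alert_database(client):
      # Pull the latest Amber Alert rows and cache them locally;
      # fall back to the cached copy if there is no connection.
      try:
          rows = client.table("amber_alerts").select("*").execute().data
          LOCAL_DB.write_text(json.dumps(rows))
          return rows
      except Exception:
          if LOCAL_DB.exists():
              return json.loads(LOCAL_DB.read_text())
          return []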

My progress is on schedule. By next week, I hope to finalize the locally run code and work on the edge function that should run when a match is uploaded to corroborate with larger models.

Tzen-Chuen’s Status Report for 3/15

This week was dedicated to coding all the systems together. I worked on adapting the OCR models into functions that can be called from a main routine on the Raspberry Pi. The Supabase project is created and linked to the Pi as well, and work has begun on the front-end implementation.

In addition to adapting the OCR models, I also began writing the code for the camera to take images in our program, using the picamera2 library. An unforeseen obstacle is the sheer depth of configuration available in the camera settings. I still need to work through what the optimal settings are for us, and whether there should be some form of dynamic camera configuration for different conditions.
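
For reference, a minimal still-capture sketch with picamera2. The resolution and the commented-out controls are placeholder values while the optimal settings are still being worked out, not final choices.

  from picamera2 import Picamera2

  picam2 = Picamera2()
  config = picam2.create_still_configuration(main={"size": (1920, 1080)})
  picam2.configure(config)
  picam2.start()

  # Dynamic, per-condition tuning would go through set_controls();
  # the numbers here are illustrative, not our chosen settings.
  # picam2.set_controls({"ExposureTime": 10000, "AnalogueGain": 1.0})

  picam2.capture_file("frame.jpg")
  picam2.stop()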

The next steps are to determine the best way to host the UI frontend and begin significant work on that portion. Next week will be spent working on the UI and testing our current code on the Pi and camera. Progress is on schedule; no concerns on my end.

Eric’s Status Report for 3/15/25

After receiving feedback on the design report, we decided to focus more on the cloud and front-end parts of the pipeline this week. I focused on building the cloud backend pipeline for the CALL ALPR dashcam system, implementing a very minimal version of the serverless backend using Supabase Edge Functions and PostgreSQL. Specifically, I created the possible_matches table to store incoming data from the Raspberry Pi, including plate text, GPS location, image URL, and timestamps. I also worked on an Edge Function that receives HTTP POST requests from the RPi, parses the incoming data, and inserts it into the database, as well as the starter code for calling the Edge Function that will do ML inferencing. At this point, my progress is still on schedule: the cloud backend MVP is something we can test and try to integrate with the RPi. In the coming week, I plan to work with Richard to see if we can start getting actual data from the RPi to the cloud. I’ll also add functionality to the front end to handle permissions for the respective users.
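
As a rough sketch of the RPi side of that HTTP POST flow: the function name, URL, and payload fields below are assumptions based on the possible_matches schema described above, not the final interface.

  import requests

  EDGE_FN_URL = "https://<project>.supabase.co/functions/v1/report-match"  # placeholder
  ANON_KEY = "<anon-key>"  # placeholder

  payload = {
      "plate_text": "ABC1234",  # example values only
      "gps": {"lat": 40.4433, "lon": -79.9436},
      "image_url": "https://<bucket>/match.jpg",
      "timestamp": "2025-03-15T12:00:00Z",
  }

  resp = requests.post(
      EDGE_FN_URL,
      json=payload,
      headers={"Authorization": f"Bearer {ANON_KEY}"},
      timeout=10,
  )
  resp.raise_for_status()  # surface upload failures on the RPi side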

Team Status Report for 3/8/25

The most significant risk remains unchanged: a delayed MVP. We are mitigating it by ordering spare components from ECE inventory to substitute for our desired components if needed. In terms of contingency plans, there are none, as the MVP is a crucial step that cannot be circumvented in any way. However, we are testing the parts of our design as they are finished, so we are confident that they will work as we assemble them into an MVP.

We made a few changes to the system design. We updated the cloud reliability target from 95% to 97% to reduce downtime risks and ensure timely database lookups for license plate matching. While AWS’s baseline uptime guarantee is closer to 95%, published statistics for real-world server uptime on AWS make 97% realistic, and the change shouldn’t affect our costs. We also refined the edge-to-cloud processing pipeline to improve accuracy and efficiency: both high- and low-confidence detections are sent to the cloud, but low-confidence results also include an image for additional verification using more complex models. This change ensures that uncertain detections receive extra processing while still keeping the system responsive and scalable.
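
A minimal sketch of that confidence-based routing, assuming a single tunable threshold; the cutoff value and field names are illustrative and would be tuned during testing.

  CONF_THRESHOLD = 0.85  # assumed cutoff, to be tuned during testing

  def build_report(plate_text, confidence, image_path):
      # High-confidence detections send text-only metadata; low-confidence
      # ones attach the image for re-verification by larger cloud models.
      report = {"plate_text": plate_text, "confidence": confidence}
      if confidence < CONF_THRESHOLD:
          report["image_path"] = image_path
      return report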

These changes will not significantly alter the current schedule; if anything, relaxing the accuracy demanded of the edge model will make training easier and potentially quicker.

In addition, we have written the code for running the ML models on the Raspberry Pi, and it can be found here.

Part A (Richard):

Our design should make the world a safer place with regard to child kidnapping. Our device, if deployed at scale, will be able to locate the cars of suspected kidnappers quickly and effectively using other cars on the road, allowing law enforcement to act as fast as possible. While we currently only plan on using the device with Amber Alerts, a US system, the design should largely work in other countries: the car and license plate detection models are not trained on the cars and plates of any specific country, and PaddleOCR supports over 80 languages if needed for foreign plates. This means that if other countries have a system similar to Amber Alerts, they can use our design as well. Our device may also motivate countries that do not have a similar system to start their own, in order to use our design and better find suspected kidnappers in their country.

Part B (Tzen-Chuen):

CALL sits at a conflicting cross-section of cultural values. Generally, our device seeks to protect children, a universal human priority. It accomplishes this through a distributed surveillance network, akin to the saying “it takes a village to raise a child.” By enabling a safer, more vigilant nation, we act in consideration of a value shared across global cultures.

In terms of traditional “American values,” CALL presents a privacy problem. While privacy is not explicitly a constitutional right, it is implied in the 4th Amendment, and a widespread surveillance network is bound to raise concerns among the general public. We attempt to mitigate this concern by sending only license plate matches that reach a certain confidence level to the cloud, and never to the end users. This way we balance the shared cultural understanding of child protection with the American tradition of privacy.

Part C (Eric):

Our solution minimizes environmental impact by leveraging edge computing, which reduces reliance on energy-intensive cloud processing, lowering power consumption and data transmission demands (how much is offloaded depends on the confidence of the edge model’s output). The system runs on a vehicle’s 12V power source, eliminating the need for extra batteries and reducing electronic waste. Additionally, its modular design makes it easy to repair and update, extending its lifespan compared to full replacements. These considerations ensure efficient operation while reducing the system’s environmental footprint.

Richard’s Status Report for 3/8/25

This week I worked on deploying the ML models to the Raspberry Pi. This consisted of setting up the Python environment, converting the Jupyter notebook into a standard Python file, and setting up the file structure the Raspberry Pi will use. Since the notebook displays the bounding boxes and images when inferencing, I removed that code during the conversion for faster performance, since the end user will not see it anyway. I tested this implementation with a sample image that had two license plates in plain view, the same image used when testing the Jupyter notebook in Google Colab. The program ran in just over 23 seconds, which should be plenty fast for our 40-second timing requirement. The models I used were the NCNN models with no quantization, so this number can easily be lowered further if needed. The code can be found here. When setting up the file system, I put the pictures and models into their own folders to easily switch between models and test images. Last week I worked on the design report, where I focused on the system implementation as well as the design trade studies.
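
For reference, a sketch of how the timing could be measured around the inference path. It assumes the detection models were exported to NCNN via Ultralytics; the model and image paths are placeholders matching the folder layout described above, not the exact filenames.

  import time
  from ultralytics import YOLO

  # NCNN export directories (e.g. produced by `yolo export format=ncnn`);
  # names are placeholders, not our exact files.
  car_model = YOLO("models/car_ncnn_model")
  plate_model = YOLO("models/plate_ncnn_model")

  start = time.perf_counter()
  car_results = car_model("pictures/sample.jpg", verbose=False)
  plate_results = plate_model("pictures/sample.jpg", verbose=False)
  elapsed = time.perf_counter() - start
  print(f"Inference took {elapsed:.1f}s (requirement: under 40 s)")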

My progress is on schedule. By next week I hope to finalize the MVP of the dashcam side of things and shift focus to setting up the cloud.

Tzen-Chuen’s Status Report for 3/8

This week I configured the Raspberry Pi. This included installing a new headless OS for improved performance and configuring the IPv4 address to allow remote SSH. This lets us set the Raspberry Pi up on CMU WiFi and SSH into it to program it from wherever we are. The GitHub repo is now also installed on it, and we are progressing towards basic integration.

I didn’t get to creating a table and linking Supabase to a user interface yet, but I have been looking into setting it up on a GitHub domain. On the hardware side, instead of zero cameras we now have two, plus a cable, so we can test the feedback from the other, similar group suggesting we may not have the requisite resolution.

Another significant part of the work this past week was the design report. I handled the Introduction and Project management sections, along with the Testing and Verification sections. While the design report seemed simple on its surface, actually putting every idea down in writing had a very clarifying effect on the overall direction of the project.

Progress is on schedule, and now that all major components are here, it should go even more smoothly.

Eric’s Status Report for 3/8/25

This week, I mainly focused on refining and updating the design report, specifically working on the Use-Case Requirements, Architecture and/or Principle of Operation, and Design Requirements sections. Some specific changes I made are:

  • Architecture and/or Principle of Operation: I refined the block diagram and system interactions, ensuring that the data flow from image capture → edge processing → cloud verification → database matching was clearly structured. I also improved the documentation/planning of how the edge processor filters and sends high- and low-confidence detections to the cloud, reducing unnecessary bandwidth use.
  • Design Requirements: The biggest change since the Design presentation was updating the cloud reliability target. After reviewing existing cloud service reliability standards, I adjusted the uptime requirement from 95% to 97% to strike a balance between AWS’s guaranteed availability and real-world reliability needs. This change ensures that the system remains operational and responsive in high-demand scenarios, reducing the likelihood of missed detections due to cloud downtime; the quick arithmetic below shows what the tighter target means in allowed downtime.
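
In terms of allowed downtime per 30-day month (720 hours), the two targets work out to:

  (1 - 0.95) \times 720\,\mathrm{h} = 36\,\mathrm{h/month} \qquad \text{vs.} \qquad (1 - 0.97) \times 720\,\mathrm{h} = 21.6\,\mathrm{h/month}

so the tighter target cuts the permissible monthly downtime from 36 hours to about 21.6 hours.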

I also worked with Richard to further define how the cloud infrastructure handles license plate matching and how that would be implemented, specifically using Supabase and AWS SageMaker. My progress is on schedule, and we have begun testing timing on the RPi. Next week I plan to continue working with Richard on testing the models on the RPi, and hopefully begin testing with images from the camera module.