Vicky’s Status Report for 3/22

Personal Accomplishments

  • Dash Cam Bringup & Testing:
    • Worked with Andy to connect Blues Notecard to Notehub
    • Worked with Christine to send HTTP GET requests to central server
      • Debugging why parsing the server response as JSON fails; the current workaround is to receive the body as text/plain and decode it manually (see the sketch after this list)
    • Finalized Adafruit Ultimate GPS as the separate GPS module
    • Wrote launcher script to automatically run camera recording and detection at power on
    • Debugging the end-to-end pipeline
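
For reference, a minimal sketch of that text/plain workaround, assuming the note-python library; the route name "central-server" and the "/watchlist" path are hypothetical, not our actual configuration:

```python
# Sketch: issue a web.get through the Notecard and decode the response body
# manually. The route name "central-server" and the "/watchlist" path are
# hypothetical placeholders.
import base64
import json

import notecard
import serial

port = serial.Serial("/dev/ttyACM0", 9600)
card = notecard.OpenSerial(port)

rsp = card.Transaction({
    "req": "web.get",
    "route": "central-server",  # Notehub proxy route (hypothetical name)
    "name": "/watchlist",       # path appended to the route URL (illustrative)
})

# Workaround: if the response did not come back as parsed JSON in "body",
# take the raw payload as text and decode the JSON ourselves.
if "body" in rsp:
    data = rsp["body"]
else:
    data = json.loads(base64.b64decode(rsp.get("payload", "")).decode("utf-8"))
print(data)
```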

Progress

My progress is on schedule.

Schedule

  • Order GPS module
  • Central server integration and testing
  • Opt-in switch module bringup and testing
  • End-to-end pipeline debugging

Christine’s Status Report for 3/22

✅ Progress This Week

  • Finished the initial implementation of the match upload layer (PR Link)
    This layer handles the ingestion, processing, and storage of detected license plate matches. The flow is as follows:

    • Dash cams upload images via pre-signed S3 URLs

    • An S3 event triggers a Lambda function to:

      • Extract GPS and timestamp metadata from image EXIF

      • Store the metadata in the match_logs table in DynamoDB

    • Architecture: S3 → S3 Event → Lambda → DynamoDB (match_logs); a sketch of the Lambda step appears at the end of this section

  • Documented the full flow in the PR description, covering:

    • Pre-signed URL generation

    • Upload handling via S3 and Lambda

    • Metadata extraction and logging

  • All changes are fully covered by tests

This gives a scalable and decoupled foundation for matching plates to images and metadata.
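
To make the Lambda step concrete, here is a minimal Python sketch. It is illustrative only: our functions are Jest-tested (so the real code differs), and apart from the match_logs table name, the field names and key schema are assumptions.

```python
# Sketch: S3-event-triggered Lambda that extracts EXIF metadata and logs the
# match in DynamoDB. Illustrative only; field names and key schema are assumed.
import io

import boto3
from PIL import ExifTags, Image

s3 = boto3.client("s3")
match_logs = boto3.resource("dynamodb").Table("match_logs")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the uploaded image and read its EXIF block.
        obj = s3.get_object(Bucket=bucket, Key=key)
        image = Image.open(io.BytesIO(obj["Body"].read()))
        exif = {ExifTags.TAGS.get(t, t): v for t, v in (image._getexif() or {}).items()}

        # Store GPS and timestamp metadata alongside the image key.
        match_logs.put_item(Item={
            "image_key": key,                               # assumed partition key
            "timestamp": str(exif.get("DateTimeOriginal")),
            "gps": str(exif.get("GPSInfo")),                # raw EXIF GPS tags
        })
```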

🚧 What I’m Working on Next Week

  • Dash Cam Integration

    • Begin integrating the new match upload layer with the dash cam system

    • Revisit existing endpoints (/detections, S3 upload) to determine required changes

    • Prioritize approaches that minimize friction for the dash cam side

    • Coordinate with others for schema consistency across services

  • Observability Improvements

    • Expand CloudWatch logging:

      • Currently tracks S3 and Lambda activity

      • Will add API Gateway request logs, performance metrics, and error tracking

    • Goal: improve visibility into API usage, failure rates, and overall latency

📌 Overall Status

Currently on track. The core functionality for match upload is complete and tested. Integration with the dash cam system will be the main challenge next week.

 

Christine’s Status Report for 3/15

Progress This Week

  • Completed Watchlist Management Layer (PR)
    Implemented an AWS Lambda function for managing the global watchlist, integrating DynamoDB for storage and API Gateway for HTTP access. The system now supports adding, retrieving, and removing plates via secure API endpoints. All changes are covered by manual tests and automated Jest tests with CI/CD.

  • Redesigned API Interface Based on Design Report Feedback
    We adopted the “tip line” approach instead of a full web app and redesigned the API interface to expose a POST endpoint for law enforcement agencies to add plates. See the team status report for detailed changes.

  • Refactored for API-First Approach (PR)
    Removed the web app dependency (no more officer accounts) and restructured the watchlist management endpoints (/plates) to work with API keys. /detections now only checks for plate matches without officer tracking, and /plates requires an API key for modifications (a sketch of the API-key gate follows below). The refactor simplifies external service integrations and is fully covered by automated tests.
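
As a rough illustration of the /plates API-key gate, a minimal Python sketch (the actual handlers are Node.js with Jest tests; the header name, key storage, and table layout here are assumptions):

```python
# Sketch: API-key-gated POST /plates handler behind API Gateway. Illustrative
# only: header name, environment variable, and table schema are assumptions.
import json
import os

import boto3

watchlist = boto3.resource("dynamodb").Table("watchlist")  # assumed table name

def handler(event, context):
    # Reject watchlist modifications that do not carry a valid API key.
    headers = event.get("headers") or {}
    if headers.get("x-api-key") != os.environ["PLATES_API_KEY"]:
        return {"statusCode": 403, "body": json.dumps({"error": "invalid API key"})}

    plate = json.loads(event["body"])["plate"]
    watchlist.put_item(Item={"plate": plate})
    return {"statusCode": 201, "body": json.dumps({"added": plate})}
```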

In Progress

  • Match Upload Implementation (Draft PR)
    Implemented a Lambda function for extracting metadata, logging matches, and storing images in S3. Integrated DynamoDB for match tracking. Currently testing and debugging before deployment.

Overall Status

  • The project is on track with API refinements complete and match upload nearing finalization. Here’s a list of API endpoints ready for integration.

Team Status Report for 3/15

Risks

  • Communication between the dash cam and the central server will be challenging: we still need to determine the protocol between them and how the dash cam locates and uploads images. Since we’re no longer building a web app, Andy can focus more on integration, and we plan to implement this in blocks to manage complexity and reduce risk.
  • How images and metadata are stored on the dash cam also needs more discussion of tradeoffs. One idea is to cache the license plate numbers we have recently sent so the dash cam does not re-send them within a short window (sketched below). The benefit would be less noise for end users, but it adds computational overhead on the Raspberry Pi and risks low-confidence images being sent only once, which may not be enough for human verification. We also need to decide whether temporary images should be held in RAM and sent directly to the server, or always written to the SD card first and uploaded from there.
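
To make that tradeoff concrete, one possible shape of the on-device de-duplication is sketched below; the 10-minute window and in-memory dict are illustrative assumptions, not decisions we have made:

```python
# Sketch of the idea under discussion: remember recently sent plates so the
# dash cam does not re-upload the same plate within a short window.
# The 10-minute window and in-memory cache are illustrative assumptions.
import time

RESEND_WINDOW_S = 600  # assumed: skip re-sending a plate seen within 10 minutes
_last_sent: dict[str, float] = {}

def should_send(plate: str) -> bool:
    now = time.time()
    last = _last_sent.get(plate)
    if last is not None and now - last < RESEND_WINDOW_S:
        return False  # recently sent; skip to reduce duplicate alerts downstream
    _last_sent[plate] = now
    return True
```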

Changes

  • Based on feedback from our design report, we decided to switch to an API-first approach instead of a full web app, as it better aligns with our system’s intended use. Since law enforcement agencies likely already have their own software, exposing an API enables seamless integration rather than requiring them to adopt a separate interface. Moving forward, we plan to expose a single POST endpoint for adding plates to the watchlist. To ensure security, external services will be required to use an API key to access the endpoint, and we will implement audit logging to track who added what. If a match is detected, we will log the event as if notifying a tip line—without actually sending notifications. This keeps the system lightweight and secure while still preserving important tracking data.
  • Based on dash cam bring-up experiments, we found that writing raw footage to the SD card is the performance bottleneck, limiting the frames per second the dash cam can capture. To address this, we decided to store raw footage at 480p while still running inference on higher-resolution (2304×1296) frames.

Schedule

Vicky’s Status Report for 3/15

Personal Accomplishments

  • Dash Cam Bringup & Testing:
    • Set up a Python virtual environment to run the detection and OCR models with streaming Camera Module 3 input
    • Deployed and benchmarked the ML license plate detection model on the RPi 5 with streaming camera input: the PyTorch format achieves around 370 ms latency per frame, the ONNX format around 350 ms, and the NCNN format around 160 ms
    • Deployed and benchmarked the ML license plate OCR model on the RPi 5: the Paddle format achieves around 120 ms latency per frame with streaming camera input, while the ONNX format could not be deployed successfully
    • Wrote the end-to-end pipeline, with the main thread recording footage in one-minute 480p (20 fps) clips and the inference thread running on 2304×1296 frames and storing cropped license plate results at around 3 fps (sketched below)
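
A minimal sketch of that two-thread structure, assuming picamera2 on the RPi 5 (the resolutions and clip length come from this report; the detect() hook and file naming are illustrative):

```python
# Sketch of the two-thread pipeline: the main thread records one-minute 480p
# clips while an inference thread samples full-resolution frames at ~3 fps.
# Assumes picamera2; detect() stands in for the detection + OCR models.
import threading
import time

from picamera2 import Picamera2
from picamera2.encoders import H264Encoder
from picamera2.outputs import FileOutput

def detect(frame):
    """Placeholder for the license plate detection + OCR stage."""

picam2 = Picamera2()
config = picam2.create_video_configuration(
    main={"size": (2304, 1296)},      # high-res stream fed to inference
    lores={"size": (854, 480)},       # 480p stream written to the SD card
    encode="lores",                   # record only the low-res stream
    controls={"FrameRate": 20},       # 20 fps capture
)
picam2.configure(config)
picam2.start()

def inference_loop():
    while True:
        frame = picam2.capture_array("main")  # 2304x1296 frame
        detect(frame)
        time.sleep(1 / 3)                     # ~3 fps inference budget

threading.Thread(target=inference_loop, daemon=True).start()

encoder = H264Encoder()
while True:
    # Main thread: record footage in one-minute clips.
    picam2.start_encoder(encoder, FileOutput(f"clip_{int(time.time())}.h264"))
    time.sleep(60)
    picam2.stop_encoder()
```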

Progress

My progress is on schedule.

Schedule

  • GPS module bringup and testing
  • Network module bringup and testing
  • Opt-in switch module bringup and testing
  • End-to-end pipeline debugging

Andy’s Status Report for 3/15

This week, I focused on planning power testing, discussing the web application, and beginning the Blues module bring-up by testing it on the RPi4.

To ensure a stable power supply for the RPi5, I developed a power testing plan. I researched various Uninterruptible Power Supply (UPS) HATs compatible with the RPi5 and placed an order. While waiting for delivery, I started planning the circuit testing process to ensure the module functions correctly and continues supplying power even when its main power source is turned off.

On the software side, I discussed with my team and Tamal the idea of not implementing a web portal but instead opting for a flexible and easier-to-implement API for user interaction.

Additionally, I began bringing up the Blues module, starting with initial tests on the RPi4 to verify communication and connectivity. I booted up the RPi4 and started integrating the Blues cellular chip with it.

Progress:

Overall, progress is on track. I have shifted my focus from the web app to assisting Vicky and Christine with dashcam and server implementation.

Schedule:

  • Test the UPS module (if it arrives)

  • Bring up the Blues module and test GPS

  • Test uploading and receiving data from the server using the Blues chip on the RPi

Christine’s Status Report for 3/1

Progress

  • Design Report Completed: Finalized and refined the system design after conducting additional AWS research. Key architectural changes were made based on our findings (see the team status report for details).
  • Updated Central Server Block Diagram: The design now reflects the latest refinements, ensuring clarity in system architecture. 
  • CI/CD Pipeline Implemented: Initially, I was setting up AWS services manually, but as the project grew, this became unsustainable. To streamline deployment, I set up a repository (GitHub Repo) with CI/CD (PR), which now automates testing and deployment. The folder structure is modular and designed for maintainability.
  • Watchlist Query Layer Implemented: A basic version (PR1 + PR2) is now complete and fully tested with manual tests and automated Jest tests, though it currently operates without message queuing. The system generates and returns a pre-signed URL for secure data access (a sketch of pre-signed URL generation follows below).
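
A minimal sketch of the pre-signed URL step, assuming boto3 (the bucket name, key, and expiry are illustrative):

```python
# Sketch: generate a short-lived pre-signed S3 URL so a client can access a
# single object without AWS credentials. Bucket, key, and expiry are assumed.
import boto3

s3 = boto3.client("s3")

def presigned_url(key: str, expires_s: int = 300) -> str:
    # URL grants temporary GET access to one object only.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "platepatrol-data", "Key": key},  # assumed bucket name
        ExpiresIn=expires_s,
    )

print(presigned_url("matches/example.jpg"))
```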

In Progress

  • Web App Access Layer (Watchlist Management): Currently working on implementing this to allow the web app backend to post updates to the global watchlist. I’ve completed a basic implementation and tested it manually (draft PR).  I plan to add a Jest test suite for automated testing. My goal is to wrap this up as soon as possible so the web app can begin integrating with it.

Next Steps

  • Set Up Web App Backend Architecture: Set up a basic backend folder structure (controller, model, db, env) and implement a basic authentication endpoint to enable user login. Once completed, Andy will be able to take over further web app backend tasks.

Overall Status

  • The project remains on track. While the workload is demanding, the transition from manual AWS setup to automated CI/CD has significantly improved efficiency. Now, my focus is on completing the web app integration to ensure smooth interaction with the backend.

Vicky’s Status Report for 3/1

Personal Accomplishments

  • Design Report:
    • Wrote and edited the design report
  • ML License Plate OCR:
    • Cleaned up the platesmania.com dataset with a script plus manual inspection to improve training quality
    • Benchmarked a variety of OCR models and selected the en_PP-OCRv3_rec model for its ease of integration with Python and lower likelihood of overfitting (93% accuracy on a platesmania.com 80% synthetic + 20% real-world license plate dataset, 85% accuracy on a platesmania.com 100% real-world license plate dataset); a usage sketch follows after this list
  • ML End-To-End:
    • Designed, implemented, and tested the end-to-end script, achieving 81% end-to-end accuracy
  • Dash Cam Bringup & Testing:
    • Collaborated with Andy to bring up and test the RPi 5 board and Camera Module 3
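
A minimal sketch of running the selected recognition model, assuming the paddleocr package (the image path is illustrative):

```python
# Sketch: recognition-only OCR on a pre-cropped plate image with PaddleOCR
# (the package downloads en_PP-OCRv3 weights for lang="en" by default).
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en", use_angle_cls=False)

# det=False: skip text detection, since the plate is already cropped.
result = ocr.ocr("cropped_plate.jpg", det=False, cls=False)
for line in result:
    for text, confidence in line:
        print(text, confidence)
```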

Progress

My progress is on schedule.

Schedule

  • Single-board computer bringup and testing
  • Camera module bringup and testing
  • GPS module bringup and testing
  • Network module bringup and testing

Andy’s Status Report for 3/1

This week, I focused on solving the dash cam circuit issue, worked on RPi5 and Camera Module 3 bring-up, started the web app backend implementation, and worked on the design report.

Since the RPi5 must shut down gracefully when the car’s engine is turned off, ensuring a stable interim power supply is crucial. I compared several available modules based on output voltage, run-time capacity, size constraints, and ease of integration, with the goal of finding a solution that meets both our technical and cost requirements.

I collaborated with Vicky to bring up the RPi5 and Camera Module 3, verifying that the drivers and interfaces worked correctly on our target hardware. We confirmed the camera works by taking test images and checking that they meet the resolution requirement, and we started thinking about how to implement loop recording and ALPR invocation on the RPi camera.

On the software side, I began implementing the backend for our web application.

Lastly, I spent time writing and refining our design report.

Progress:

Overall, progress is on time, but the web app backend implementation could move a little faster, as a lot of functionality is still unimplemented. Luckily, next week is spring break, and I plan to make some progress during the week. Overall, I am on schedule.

Schedule:

  1. Finalize the UPS module choice and purchase it
  2. Test the UPS module
  3. Finish deploying ML on the edge
  4. Blues module bring-up and GPS test
  5. Web app backend

Team Status Report for 3/1

Risks

  • We have identified a new risk related to our power source. During our weekly meeting, we discussed using a UPS to address the circuit issue in the dash cam system. We need to finalize which UPS solution to adopt, since it must be compatible with the RPi5 and ideally support power input from a car’s 12V cigarette lighter. We also decided that, as a fallback, we could mitigate sudden power loss by installing a manual switch to perform a clean shutdown. This ensures the RPi can complete its shutdown process and minimize the risk of data corruption.

Changes

  • We re-evaluated the scope of our ALPR accuracy. Previously, we considered accuracy on a per-frame (per-image) basis, but through experimentation, we realized that assessing accuracy per car provides a more practical measurement. If an average of three images is captured for a single vehicle, we consider it a missed detection only when all three images fail to detect the plate correctly (a small sketch of this metric follows after this list). Shifting to a “per-car” approach more accurately reflects real-world conditions and helps us measure performance in a way that aligns better with our actual use case.
  • We updated the use-case requirement for ALPR accuracy from 90% to 80%. Based on experiments and a literature review, we find 80% to be a more realistic end-to-end accuracy requirement on real-world U.S. datasets, given license plates’ varied and complicated designs.
  • We modified the image upload flow in the Central Server to use direct S3 uploads instead of routing through API Gateway. Initially, the system validated and forwarded image uploads via API Gateway and AWS Lambda before storing them in S3. However, this approach introduced inefficiencies due to API Gateway’s payload limits, added latency, and increased costs. To optimize performance, we implemented pre-signed S3 URLs, allowing dash cams to upload images directly to S3. When a license plate is detected, the watchlist query layer checks against a DynamoDB-stored watchlist; if a match is found, an AWS Lambda function generates a pre-signed URL and sends it to Blues NoteHub for direct upload. Once the upload completes, an S3 event triggers another Lambda function, logging the match in RDS and sending alerts via SNS. This change removes API Gateway as a bottleneck, improves system efficiency, reduces costs, and enhances security by restricting upload access to authorized clients with temporary URLs.
  • We updated the Web App access layer in the Central Server by replacing an EC2-based backend with AWS API Gateway and Lambda functions. Initially, we assumed an EC2 instance would be more cost-effective, but upon further analysis, we found that law enforcement officers access the system from multiple geographic locations, requiring a more scalable solution. API Gateway now serves as the system’s entry point, with Lambda functions dynamically handling officer requests, interacting with DynamoDB for watchlist storage and RDS for match history retrieval. This adjustment improves scalability by automatically adjusting to traffic loads without manual intervention. While API Gateway introduces some cost, it remains minimal, estimated at $3.50 per million requests, and offers greater flexibility compared to EC2. Given that officer interactions are a small portion of total traffic, this approach balances scalability and cost-effectiveness while ensuring efficient and reliable access to the system.
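
To make the per-car accuracy metric above concrete, a small sketch (the data shapes are illustrative):

```python
# Sketch of the per-car metric: a car counts as detected if ANY of its
# captured frames reads the plate correctly. Data shapes are illustrative.

def per_car_accuracy(readings: dict[str, list[str]], truth: dict[str, str]) -> float:
    """readings: car id -> OCR output from each captured frame of that car."""
    hits = sum(
        any(r == truth[car] for r in frames)
        for car, frames in readings.items()
    )
    return hits / len(readings)

# Example: ~3 frames per car; car "A" is read correctly on its third frame,
# car "B" never is, so per-car accuracy is 1/2 even though only 1/6 frames hit.
readings = {"A": ["4BC123", "ABC128", "ABC123"], "B": ["XYZ999", "XY2999", "XY7999"]}
truth = {"A": "ABC123", "B": "XYZ789"}
print(per_car_accuracy(readings, truth))  # 0.5
```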

Schedule

Special Questions

A was written by Andy, B was written by Vicky, and C was written by Christine.

  • Global Factors: Our crowdsourced ALPR (Automatic License Plate Recognition) system is designed with global factors in mind to ensure its effectiveness and ethical use across different regions. One of the primary considerations is legal and regulatory compliance. Laws governing ALPR technology vary significantly across countries and even within regions. To address these concerns, we are implementing privacy-focused features, such as not storing license plate information and enforcing a data retention policy, ensuring compliance with different legal frameworks. Another key global factor is technological accessibility and infrastructure. The system needs to function reliably in various environments, from developed cities to areas with limited connectivity. To accommodate this, we are designing edge-processing capabilities so that the dash cameras can locally process images and only upload essential metadata, reducing bandwidth usage. Additionally, the license plate recognition system is trained on real-world data and optimized to work in diverse conditions, from bright urban environments to low-light or rainy conditions in rural settings.
  • Cultural Factors: On one hand, PlatePatrol harnesses community empowerment and civic engagement by transforming everyday dash cams into a crowdsourced public safety network, resonating with America’s tradition of grassroots participation and innovation. On the other hand, the continuous collection and processing of vehicle data raise deep-seated privacy concerns and fears of surveillance in a society that values individual freedom, while historical issues of biased enforcement heighten anxieties about disproportionate targeting of minority and economically disadvantaged communities.
  • Environmental Factors: The PlatePatrol system promotes environmental sustainability by leveraging existing dash cams instead of deploying new fixed surveillance cameras, reducing hardware waste and energy consumption. It utilizes AWS Lambda for serverless computing, ensuring resources are used only when needed, minimizing energy waste. Furthermore, by digitizing and streamlining law enforcement workflows, PlatePatrol reduces reliance on paper-based documentation, cutting down waste and promoting an eco-friendly approach to public safety operations.