Eric’s Status Report for 4/12/25

This week, I collaborated with Richard to research how to implement AWS SageMaker for our cloud-based license plate inference system. We worked through the required setup steps in the AWS environment and configured it to run as a serverless inference endpoint. I also researched Amazon Rekognition, which Bhavik suggested, as a possible alternative to SageMaker; it would simplify deployment by relying on pre-built services. I tested Rekognition's OCR capabilities on license plate images and evaluated its strengths and limitations for our use case. The results indicated that Rekognition provides general text detection but lacks license plate localization and precision control, so its output would include a lot of junk text alongside the desired license plate readings. Still, that junk text is relatively simple to filter out, so we will meet to discuss which solution to adopt.
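
As an illustration of that filtering step, here is a minimal sketch using Rekognition's DetectText API via boto3; the plate pattern and confidence threshold are placeholder assumptions, not values we have settled on:

    import re
    import boto3

    # Placeholder pattern: 5-8 uppercase letters/digits (real plate formats vary by state).
    PLATE_RE = re.compile(r"^[A-Z0-9]{5,8}$")

    def plate_candidates(image_bytes, min_conf=90.0):
        """Run Rekognition text detection and keep only plate-like strings."""
        client = boto3.client("rekognition")
        resp = client.detect_text(Image={"Bytes": image_bytes})
        candidates = []
        for det in resp["TextDetections"]:
            text = det["DetectedText"].replace(" ", "").upper()
            # Keep word-level detections that clear the confidence bar and look like a plate.
            if det["Type"] == "WORD" and det["Confidence"] >= min_conf and PLATE_RE.match(text):
                candidates.append((text, det["Confidence"]))
        return candidates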


At this stage, my progress is on schedule, and we already have a working MVP, since the cloud inference step is not part of it. We plan to integrate the cloud solution and the replacement camera next week, and to test with a portable battery once the camera is connected. We have already completed verification for most subsystems, including the edge image-capture and processing pipeline. The next verification task is therefore the cloud inference integration, which we began exploring this week through SageMaker and Rekognition.

Eric’s Status Report for 3/29/25

This week, I worked on adding permission handling to the front end to support different user roles, but there are still bugs with the login settings that need to be resolved. This is less of a priority for next week's demo, though. I also worked with Richard to continue debugging the Camera Module 3, and we concluded that the Raspberry Pi is not detecting the camera because of a hardware fault in the camera module itself. We decided to use a simple USB-connected webcam for the demo. Additionally, Richard and I tested sending matches to the Supabase database from the RPi and confirmed that matches are successfully added to the tables. Progress is a bit behind schedule because of the broken camera; debugging the Camera Module 3 took much more time than expected since it was not a software issue. Next week, I plan to fix the login bugs, get replacement cameras, and continue expanding the Supabase integration with SageMaker, which is not part of the MVP.
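
For reference, the RPi-side upload we tested boils down to a single Supabase insert. Below is a minimal sketch using the supabase-py client; the URL, key, and column names are placeholders for our actual configuration:

    from supabase import create_client

    # Placeholder credentials; the real values live in our environment config.
    SUPABASE_URL = "https://<project>.supabase.co"
    SUPABASE_KEY = "<service-key>"

    supabase = create_client(SUPABASE_URL, SUPABASE_KEY)

    def upload_match(plate_text, lat, lon, image_url, captured_at):
        """Insert one detection into the possible_matches table."""
        row = {
            "plate_text": plate_text,
            "latitude": lat,
            "longitude": lon,
            "image_url": image_url,
            "captured_at": captured_at,
        }
        return supabase.table("possible_matches").insert(row).execute()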

Eric’s Status Report for 3/22/25

This week, I intended to continue testing the end-to-end data upload flow from the Raspberry Pi to Supabase. However, testing was temporarily blocked by dependency issues on the RPi, which prevented full integration with the latest version of the upload scripts. In the meantime, I focused on other tasks to keep making progress. I refactored the system architecture to use Supabase database event triggers instead of HTTP POST-based Edge Functions: since the RPi now inserts data directly into the possible_matches table, event triggers are the better fit. I also worked with Richard to pin down more details of how the components will communicate, to minimize the issues we'll face during integration, and added functionality on the database side to send active alerts to the RPi.
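
On the RPi side, pulling the active alerts can be as simple as a filtered select. Here is a rough sketch with supabase-py; the table and column names are assumptions for illustration only:

    from supabase import create_client

    supabase = create_client("https://<project>.supabase.co", "<anon-key>")

    def fetch_active_alerts():
        """Fetch the set of plates currently under an active alert for local matching."""
        resp = (
            supabase.table("alerts")        # table/column names are illustrative
            .select("plate_text")
            .eq("active", True)
            .execute()
        )
        return {row["plate_text"] for row in resp.data}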

Although testing on the RPi is slightly behind schedule due to the dependency issues, we plan to resolve them as soon as possible and begin end-to-end testing. Next week, I aim to begin integrating the SageMaker API with the Supabase code and start full testing.

Eric’s Status Report for 3/15/25

After receiving feedback on the design report, we decided to focus more on the cloud and front-end parts of the pipeline this week. I focused on building the cloud backend pipeline for the CALL ALPR dashcam system, implementing a minimal version of the serverless backend using Supabase Edge Functions and PostgreSQL. Specifically, I created the possible_matches table to store incoming data from the Raspberry Pi, including plate text, GPS location, image URL, and timestamps. I also worked on an Edge Function that receives HTTP POST requests from the RPi, parses the incoming data, and inserts it into the database, as well as starter code for calling the Edge Function that will do ML inferencing. At this point, my progress is still on schedule: the cloud backend MVP is now something we can test and try to integrate with the RPi. In the coming week, I plan to work with Richard to see if we can start getting actual data from the RPi to the cloud. I'll also add functionality to the front end to handle permissions for the respective users.
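
To show the shape of that flow, here is a hedged sketch of the RPi-side request to the Edge Function; the function name, payload fields, and auth header are assumptions based on Supabase's usual conventions, not our final design:

    import requests

    # Hypothetical function name; deployed Supabase Edge Functions live under
    # https://<project>.supabase.co/functions/v1/<name>.
    EDGE_FUNCTION_URL = "https://<project>.supabase.co/functions/v1/ingest-match"

    def post_detection(plate_text, lat, lon, image_url, timestamp, anon_key):
        """Send one detection from the RPi to the Edge Function as JSON."""
        payload = {
            "plate_text": plate_text,
            "gps": {"lat": lat, "lon": lon},
            "image_url": image_url,
            "timestamp": timestamp,
        }
        resp = requests.post(
            EDGE_FUNCTION_URL,
            json=payload,
            headers={"Authorization": f"Bearer {anon_key}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()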


Eric’s Status Report for 3/8/25

This week, I mainly focused on refining and updating the design report, specifically working on the Use-Case Requirements, Architecture and/or Principle of Operation, and Design Requirements sections. Some specific changes I made are:

  • Architecture and/or Principle of Operation: I refined the block diagram and system interactions, ensuring that the data flow from image capture → edge processing → cloud verification → database matching was clearly structured. I also improved the documentation and planning of how the edge processor filters detections and sends high- and low-confidence ones to the cloud, reducing unnecessary bandwidth use (a sketch of this filtering policy follows this list).
  • Design Requirements: The biggest change since the design presentation was updating the cloud reliability target. After reviewing existing cloud service reliability standards, I adjusted the uptime requirement from 95% to 97% (97% uptime allows roughly 22 hours of downtime per month) to strike a balance between AWS's guaranteed availability and real-world reliability needs. This change helps ensure that the system remains operational and responsive in high-demand scenarios, reducing the likelihood of missed detections due to cloud downtime.
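
The sketch below shows one possible reading of that edge-side filtering policy; the thresholds and routing rules are placeholders pending our accuracy testing:

    # Placeholder thresholds; the real cutoffs will come from OCR accuracy testing.
    HIGH_CONF = 0.90
    LOW_CONF = 0.50

    def route_detection(confidence):
        """Decide how the edge processor handles one OCR detection."""
        if confidence >= HIGH_CONF:
            return "send_text_only"    # confident read: the plate string alone suffices
        if confidence >= LOW_CONF:
            return "send_with_image"   # borderline read: let the cloud re-verify the crop
        return "drop"                  # likely noise: not worth the bandwidth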

I also worked with Richard to further define how the cloud infrastructure handles license plate matching and how that would be implemented, specifically using Supabase and AWS SageMaker. My progress is on schedule, and we have begun testing timing on the RPi. Next week I plan to continue working with Richard on testing the models on the RPi, and hopefully begin testing with images from the camera module.

Eric’s Status Report for 2/22/25

This week, I spent a lot of time working on the design review presentation and practicing, since I presented on Wednesday. This involved research related to the AMBER Alert use case, specifically for our timing requirements, since we wanted them to be based on the expected situation. I found that the 60-second requirement was sufficient for the average lane-change frequency on the highway (2.71 mi) but not for the worst-case merging scenario (20 seconds), so I updated the requirements accordingly. I also continued testing PaddleOCR to explore how it performs under more extreme weather conditions, and worked with Richard to set up the basic YOLOv11-to-PaddleOCR pipeline, where YOLOv11 crops the image down to the plate and PaddleOCR runs character recognition on the crop.
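
Conceptually, the pipeline is close to the sketch below, assuming the ultralytics and paddleocr Python packages; the weights file name is a placeholder for our fine-tuned detector, and PaddleOCR's result format may differ slightly across versions:

    import cv2
    from paddleocr import PaddleOCR
    from ultralytics import YOLO

    detector = YOLO("plate_detector.pt")  # placeholder for our fine-tuned YOLOv11 weights
    ocr = PaddleOCR(use_angle_cls=True, lang="en")

    def read_plates(image_path):
        """Detect plates with YOLOv11, then run PaddleOCR on each cropped plate."""
        image = cv2.imread(image_path)
        readings = []
        for box in detector(image)[0].boxes.xyxy:
            x1, y1, x2, y2 = map(int, box.tolist())
            crop = image[y1:y2, x1:x2]
            result = ocr.ocr(crop, cls=True)
            if result and result[0]:
                # Each entry is [bounding box, (text, confidence)].
                for _, (text, conf) in result[0]:
                    readings.append((text, conf))
        return readings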


My progress is on schedule. Next week, I plan to continue testing the PaddleOCR and YOLOv11 integration and explore methods to increase performance. I plan to use larger datasets to see how the overall pipeline performs, and to begin checking the inference time.


Eric’s Status Report for 2/15/25

This week, I focused on researching and testing OCR models for license plate recognition. I experimented with PaddleOCR and EasyOCR, since multiple users reported that Tesseract OCR does not perform well on license plates. I tested PaddleOCR's and EasyOCR's performance on license plates with different orientations and angles. To ensure accurate comparisons, I set up a structured testing workflow and collected sample images from various scenarios. After testing, I found that PaddleOCR consistently outperformed EasyOCR on rotated or slanted plates, so I decided to move forward with PaddleOCR as the primary OCR engine for the project. I also started looking into ways to eliminate detected text that isn't part of the license plate number.
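
The testing workflow amounted to running both engines over the same labeled samples and comparing their best guesses. A simplified sketch, with hypothetical file names and ground-truth strings:

    import cv2
    import easyocr
    from paddleocr import PaddleOCR

    # Hypothetical test set: image file -> ground-truth plate string.
    SAMPLES = {"plate_straight.jpg": "ABC1234", "plate_slanted.jpg": "XYZ5678"}

    paddle = PaddleOCR(use_angle_cls=True, lang="en")
    easy = easyocr.Reader(["en"])

    def best_text(results):
        """Return the highest-confidence string from (text, confidence) pairs."""
        return max(results, key=lambda r: r[1])[0] if results else ""

    for path, truth in SAMPLES.items():
        img = cv2.imread(path)
        p_out = paddle.ocr(img, cls=True)
        p_texts = [(t, c) for _, (t, c) in (p_out[0] or [])]
        e_texts = [(t, c) for _, t, c in easy.readtext(img)]
        print(f"{path} truth={truth} "
              f"paddle={best_text(p_texts)} easyocr={best_text(e_texts)}")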

My progress is on schedule. Next week, I plan to work on integrating PaddleOCR with the YOLOv11 model, and figure out what changes are needed for it to run on the Raspberry Pi. If necessary, I will experiment with different PaddleOCR configurations to further refine accuracy and speed.

The image below shows PaddleOCR’s results on an example plate:

[Image: PaddleOCR detection results on an example license plate]

Eric’s Status Report for 2/8/25

This week, I worked on the proposal presentation slides, conducted background research on license plate recognition methods, and explored available recognition models like OpenALPR, EasyOCR, and YOLO. I also examined competitors, including Genetec, PlateSmart Mobile Defender, and Nvidia Metropolis. I experimented with available online solutions and found that current methods usually narrow the image down to the license plate in several steps before running OCR: for example, they locate the car in the image, then the license plate, and only then run character recognition. In my research, I also discovered that OpenALPR, although free, has not been updated in 5-7 years and seems to perform relatively poorly compared to more modern alternatives.

My progress is on schedule, and next week I plan to work on the design proposal, research available and relevant datasets, and try the baseline YOLOv11 without fine-tuning to see whether license plates are already one of the classes in its training set and how it performs. I will also research preprocessing techniques to improve recognition accuracy under varying conditions such as lighting and motion blur.
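
Checking whether the baseline model already knows about plates is nearly a one-liner with the ultralytics package; COCO's 80 classes do not include a license-plate class, so this check should confirm whether fine-tuning is needed:

    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")  # baseline YOLOv11 nano weights (COCO-trained)

    # model.names maps class index -> class name for the pretrained model.
    print(model.names)
    print("plate class present:",
          any("plate" in name for name in model.names.values()))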