Team Status Report for 4/19

As noted last week, the camera we ordered still has not arrived. While concerning, our MVP camera will still serve us well: even though it lacks IR capability and a high-resolution sensor, it is more than good enough to demonstrate the functionality of the project as a whole.

There are no changes to the overall design of the project. However, we have made a slight change: we are using AWS Rekognition rather than SageMaker for verifying low-confidence matches. We made this switch because Rekognition provides a purpose-built text-detection (OCR) API, which is our main use of the cloud model, and it is also easier to work with. The cost of this change is minimal, since we did not spend time or resources on the SageMaker approach beyond basic testing. As for the schedule, we are currently testing both the ML pipeline and the camera: ML tests are ongoing, and camera tests take place tomorrow before the final presentation slides are assembled.

Richard’s Status Report for 4/19

This week I worked with Eric on integrating AWS Rekognition into our pipeline. Rekognition is called by a Supabase function whenever a low-confidence match is uploaded to our database. We decided on Rekognition over SageMaker because Rekognition is better suited to our OCR use case, and because we noticed that larger models for cropping cars and license plates had sharply diminishing returns and were not worth the extra computation in the cloud. To accommodate this change, the Raspberry Pi now uploads a cropped image of the license plate in addition to the original image. We have also laid out a testing strategy and have begun testing multiple parts of our device, such as the power supply and cooling solution. Since our camera has been delayed numerous times now, we are testing with the webcam we are currently using, which unfortunately does not have the resolution or IR capability needed for good results at long distances or at night.
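
As a rough illustration of the new Raspberry Pi-side step, the sketch below crops the highest-confidence plate detection from a captured frame before both the crop and the original image are uploaded; the weights path, file names, and the omitted upload call are placeholders rather than our exact code.

    # Illustrative sketch only: crop the best plate detection before upload.
    # "plate_detector.pt" and the file names are placeholders, not our actual paths.
    import cv2
    from ultralytics import YOLO

    plate_model = YOLO("plate_detector.pt")  # assumed fine-tuned YOLO11 weights

    def crop_best_plate(image_path: str, crop_path: str) -> bool:
        """Save a crop of the highest-confidence plate detection; return True if one exists."""
        frame = cv2.imread(image_path)
        result = plate_model(frame)[0]
        if len(result.boxes) == 0:
            return False
        best = max(result.boxes, key=lambda b: float(b.conf))
        x1, y1, x2, y2 = map(int, best.xyxy[0].tolist())
        cv2.imwrite(crop_path, frame[y1:y2, x1:x2])
        return True

    if crop_best_plate("capture.jpg", "capture_plate.jpg"):
        # Both capture.jpg and capture_plate.jpg are then uploaded to Supabase Storage
        # (upload call omitted); a low-confidence match triggers the Rekognition function.
        pass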

My progress is on schedule. By next week, I hope to have finished testing and have our final presentation slides ready to present on time.

As I have implemented my part of the project, I have had to learn a lot about training ML models for computer vision and setting up a database. To learn how to train the models, I watched videos and studied the sample code provided by the makers of the model I decided to use, YOLO11. I chose this model for its widespread support and ease of use, so I was able to fine-tune it for detecting license plates relatively quickly. For setting up the database, I read the documentation provided by Supabase and used tools that integrate with Supabase and set up parts of the database for me, specifically Lovable, which we used to build the front-end website.
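
For context, a minimal sketch of the kind of fine-tuning run this involves is below, using the Ultralytics API; the dataset config name and the training settings are illustrative assumptions, not our exact values.

    # Minimal fine-tuning sketch with the Ultralytics API. "license_plates.yaml"
    # and the epoch/image-size values are placeholders, not our exact settings.
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")          # start from pretrained YOLO11 nano weights
    model.train(
        data="license_plates.yaml",     # dataset config: train/val paths and class names
        epochs=50,
        imgsz=640,
    )
    metrics = model.val()               # evaluate on the validation split
    model.export(format="onnx")         # optional: export for deployment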

Tzen-Chuen’s Status Report for 4/19

The final week is coming up, and it is more of the same: putting the finishing touches on our system. On my end this means in-person testing of the camera, so I have created Python scripts for on-boot recording and image capture. The major threat to the overall success of the project is the replacement camera not arriving; it should have arrived last week, yet it is still not here. Contingencies include using our MVP camera, which has no IR capability, or using the Arducam.
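
For reference, a simplified sketch of what such an on-boot capture script can look like is below, assuming the USB webcam and OpenCV; the real scripts may differ (for example, using picamera2 once the Pi camera works), and the interval and resolution are placeholder values.

    # Simplified on-boot capture sketch for a USB webcam: record video continuously
    # and save a still image every few seconds. Values are placeholders.
    import time
    import cv2

    CAPTURE_INTERVAL_S = 5                  # assumed time between saved stills
    cap = cv2.VideoCapture(0)               # first video device (the webcam)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter("drive.mp4", fourcc, 20.0, (640, 480))

    last_still = 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        writer.write(frame)
        now = time.time()
        if now - last_still >= CAPTURE_INTERVAL_S:
            cv2.imwrite(f"still_{int(now)}.jpg", frame)
            last_still = now

A script like this can be started at boot with a systemd service or a crontab @reboot entry.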

Tomorrow we will go out into the real world and get our tests done, along with working on the presentation. Progress is on track; aside from the camera, there are no concerns.

What I've needed to learn for this capstone project is the art of device selection and a shift in mentality toward starting work first and asking questions later. Once I recognized that the only way to solidify the knowledge gained from reading articles on the internet is to simply start working on something, things became much clearer.

Eric’s Status Report for 4/19/25

This week, I worked with Richard to begin integrating AWS Rekognition into our project via Supabase Edge Functions. Specifically, we worked on a function that, when confidence from the RPi is low, takes an image URL as input, runs it through Rekognition's text detection API, and returns the detected text with confidence scores. For this, we use an image cropped down to the license plate so that Rekognition does not return unrelated text from elsewhere in the frame. We also worked on devising a testing strategy for our system given the unexpected delay in receiving the replacement camera, which should have been here more than a week ago. The temporary camera we are currently using does not meet our resolution or lens requirements, so we have to adjust and still gather data with it to check the performance of the system given a limited camera. The goal is to extract as much useful validation data as possible while awaiting the final hardware; things like timing, power, and reliability should not be affected by the camera.
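
Since the real function runs as a Supabase Edge Function, the Python/boto3 sketch below is only an equivalent illustration of the Rekognition call it makes; the region and the way the image URL is fetched are assumptions.

    # Illustrative Python/boto3 equivalent of the Edge Function's Rekognition call.
    # The region and the image fetch are assumptions, not our deployed code.
    import boto3
    import requests

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    def detect_plate_text(image_url: str) -> list[tuple[str, float]]:
        """Fetch the cropped plate image and return (text, confidence) pairs."""
        image_bytes = requests.get(image_url, timeout=10).content
        response = rekognition.detect_text(Image={"Bytes": image_bytes})
        return [
            (d["DetectedText"], d["Confidence"])
            for d in response["TextDetections"]
            if d["Type"] == "LINE"
        ]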

Our progress is mostly on schedule, though partially impacted by the hardware delay, and we are still going ahead with testing what we can using the temporary camera. Next week I plan to work on the final presentation slides, test with the replacement camera as soon as possible if it arrives, and, once that testing is finished, finalize and test the AWS Rekognition pipeline.

To implement key parts of our project, I had to gain familiarity with several new tools. This included working with YOLOv11 for object detection and PaddleOCR for text recognition. I also learned how to use Supabase Edge Functions to run serverless backend logic and securely integrate them with AWS Rekognition for cloud-based OCR fallback. I relied on informal learning strategies such as reading documentation, watching short tutorials, and testing on my own.
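
As a small example of the PaddleOCR side, a sketch of how a cropped plate image can be run through it (using the PaddleOCR 2.x API) is below; the image path is a placeholder.

    # PaddleOCR sketch (2.x API): recognize text on a cropped plate image.
    # "capture_plate.jpg" is a placeholder path.
    from paddleocr import PaddleOCR

    ocr = PaddleOCR(use_angle_cls=True, lang="en")   # loads detection + recognition models
    result = ocr.ocr("capture_plate.jpg", cls=True)

    for line in result[0] or []:                     # detections for the first (only) image
        box, (text, confidence) = line
        print(text, confidence)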

Team Status Report for 4/12

During demos, we verified that we already have a working, functional MVP. The remaining risk is that we need to verify that the replacement camera we bought works properly with the system, and testing for this will begin as soon as the replacement arrives. However, this is a small risk, as the previous camera already functioned and our backup is sufficient for demonstrating functionality.

There are no changes to the design, as we are in the final stages of the project. Only testing and full SageMaker integration remain.

On the validation side, we plan to test the fully integrated system once we receive and install the replacement camera. We’ll run tests using real-world driving conditions and a portable battery setup to simulate actual usage. We will also test in various lighting conditions. 

More specifically, while we have not run comprehensive tests yet, our initial testing of the timing requirement and of database matching is meeting our requirements of 40 seconds and 100% accuracy, respectively. To test these more comprehensively, we will run 30 simulated matches with different images to make sure all of them meet the timing and match requirements. Once we receive the camera we will use in our final implementation, we will take images at varying distances and in varying weather/lighting conditions, and test the precision and recall of our whole pipeline. These images will also be fed into platerecognizer.com, a commercial service, to determine whether it is our models or the camera that needs improvement. The details of these tests are the same as what we have in the design report. Finally, we will either run the system in an actual car, or take a video of a drive with the camera and feed that video into the pipeline to simulate normal use, making sure it detects all the matches in the video.
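
A rough sketch of what the harness for those 30 simulated matches could look like is below; timings and match accuracy are checked against the 40-second and 100% requirements, and the pipeline callable is a placeholder for our actual end-to-end code.

    # Hypothetical harness for the simulated-match tests. The `pipeline` callable
    # stands in for our real capture -> detect -> OCR -> database-match code.
    import time
    from typing import Callable, Sequence

    TIMING_LIMIT_S = 40   # per-match timing requirement

    def run_simulated_matches(
        pipeline: Callable[[str], str],
        image_paths: Sequence[str],
        expected_plates: Sequence[str],
    ) -> None:
        timings, correct = [], 0
        for path, expected in zip(image_paths, expected_plates):
            start = time.monotonic()
            detected = pipeline(path)
            timings.append(time.monotonic() - start)
            correct += int(detected == expected)
        print(f"max latency: {max(timings):.1f}s (limit {TIMING_LIMIT_S}s)")
        print(f"matches: {correct}/{len(expected_plates)} correct")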

In the last two weeks, we finalized our MVP and are working on additional features as well as testing.

Richard's Status Report for 4/12

This week, I worked with Eric on the verification of low-confidence matches through the AWS SageMaker platform. This included setting up the AWS service and environment and deploying the models to the cloud. During this process, we also learned about Rekognition, an AWS service specifically for image recognition, so we looked into it as a potentially better option. Last week, I worked on polishing the website so that the interface was clean throughout and bug-free, especially for the interim demo. I also implemented and debugged the code for the GPS in our pipeline, and that aspect is now working smoothly for us.
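
Our GPS code itself is not reproduced here, but as a hedged illustration, reading a fix from a serial NMEA GPS module might look like the sketch below; the port, baud rate, and sentence type are assumptions and may not match our hardware.

    # Illustration only: read latitude/longitude from a serial NMEA GPS with pynmea2.
    # Port, baud rate, and sentence type are assumptions.
    import serial
    import pynmea2

    def read_fix(port: str = "/dev/ttyUSB0", baud: int = 9600):
        with serial.Serial(port, baud, timeout=1) as gps:
            while True:
                line = gps.readline().decode("ascii", errors="ignore").strip()
                if line.startswith(("$GPRMC", "$GNRMC")):
                    msg = pynmea2.parse(line)
                    if msg.status == "A":        # "A" means an active/valid fix
                        return msg.latitude, msg.longitude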

My progress has been on schedule. By next week, I hope to have made a decision and implemented the cloud verification system, and run tests on our entire pipeline.

Tzen-Chuen’s Status Report for 4/12

With Carnival and demo day behind us, we are in the final stretch. There are no major threats to the overall success of the project anymore, as we have our MVP and are now only iterating and testing full capabilities. Feedback from demo day was encouraging: it was all positives and suggestions on how to proceed for the final showcase.

This week I focused on the final steps of making our system into a true dashcam, meaning the adapters and code for video capture. I also drafted a testing plan, which includes taking our system into a vehicle and capturing footage for the showcase. The next step is actually getting that footage, which will happen next week when the newly ordered parts are picked up. Once I have that footage, along with images at the distances specified in the design document, I will run our model on them and measure/validate performance.
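
As a rough sketch of how that captured footage could be replayed through our detector, the snippet below samples about one frame per second and counts plate detections; the weights path and video name are placeholders.

    # Replay sketch: run the plate detector over recorded drive footage,
    # sampling roughly one frame per second. Paths are placeholders.
    import cv2
    from ultralytics import YOLO

    model = YOLO("plate_detector.pt")
    video = cv2.VideoCapture("test_drive.mp4")
    fps = int(video.get(cv2.CAP_PROP_FPS)) or 30

    frame_idx, detections = 0, 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if frame_idx % fps == 0:                 # about one frame per second
            result = model(frame)[0]
            detections += len(result.boxes)
        frame_idx += 1

    print(f"plate detections across sampled frames: {detections}")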

Progress is on track, and once again there are no concerns.

Eric’s Status Report for 4/12/25

This week, I collaborated with Richard to research how to implement AWS SageMaker for our cloud-based license plate inference system. We worked through some of the required setup steps in the AWS environment and figured out how to ensure that it works as a serverless inference endpoint. I also did additional research into Amazon Rekognition as a possible alternative to running our setup with SageMaker, which Bhavik mentioned as an option; this would simplify deployment by using pre-built services. I thoroughly tested Rekognition's OCR capabilities on license plate images and evaluated its strengths and limitations in the context of our use case. The results indicated that Rekognition provides general text detection but lacks license plate localization and precision control, which means we would end up with a lot of junk text in addition to the desired license plate readings. Still, it is relatively simple to filter that out, so we will meet to discuss which solution to go with.
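
To give a sense of the filtering mentioned above, a sketch is below that keeps only plate-like strings from Rekognition's output; the regex and confidence threshold are illustrative assumptions, not our final rules.

    # Sketch: keep only plate-like LINE detections from Rekognition's response.
    # The pattern and threshold are assumptions, not our final filtering rules.
    import re

    PLATE_PATTERN = re.compile(r"^[A-Z0-9]{5,8}$")   # 5-8 alphanumerics, no spaces
    MIN_CONFIDENCE = 90.0

    def filter_plate_candidates(text_detections):
        """text_detections: items from response["TextDetections"]."""
        candidates = []
        for det in text_detections:
            text = det["DetectedText"].replace(" ", "").upper()
            if det["Type"] == "LINE" and det["Confidence"] >= MIN_CONFIDENCE and PLATE_PATTERN.match(text):
                candidates.append((text, det["Confidence"]))
        return candidates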

 

At this stage, my progress is on schedule, and we already have a working system for the MVP, since the cloud step is not part of it. We plan to work on integrating the cloud solution and the replacement camera next week, and to test with a portable battery once we get the camera connected. We have already completed verification for most subsystems, including the edge image capture and processing pipeline. Our current MVP is functional and does not yet include the cloud inference step, so the next verification task focuses on the cloud inference integration, which we began exploring this week through SageMaker and Rekognition.

Team Status Report for 3/29

The biggest risk currently is that the main camera module we purchased appears to be broken, which means we need to get a replacement working as soon as possible. Our backup camera is detected, but it is behaving very strangely, taking extremely blurry photos of a single solid color, and the images are completely unusable. We are currently planning to use a simple USB webcam temporarily for the demo while we get a working replacement.

We have made no changes to our design and nothing major to our schedule, since we expect that getting the camera working in our code will be relatively quick once we have a working one. The main change to our schedule was working around the faulty cameras, but once that problem is addressed, we should be okay.

Outside of the camera issues, we have gotten the rest of the pipeline operational and tested, meaning it is ready for the demo and MVP.

Richard's Status Report for 3/29

This week, I worked on debugging the camera with Eric and verifying the functionality of everything else. We tried many troubleshooting methods on both our Camera Module 3 and the backup Arducam camera. However, no matter what we did, we could not get the Camera Module 3 to be detected and take pictures. The backup camera is detected, but it only takes extremely blurry pictures in odd colors, such as blue and pink, despite facing a white wall. Ultimately, we decided to move forward with a USB webcam Tzen-Chuen already had for the demos. Outside of the camera, nearly everything else was tested to see if it ran smoothly. The whole pipeline, from image capture (simulated with a downloaded image file on the Raspberry Pi) to a verified match, was tested and worked within our timing requirements. In addition, I fixed some bugs in our website, such as images not displaying properly in fullscreen. The link is the same as in the previous week.

Outside of the camera issues, my progress has been on schedule. Since we don't think integrating the camera into our pipeline will be much of an issue once we have a working one, we don't expect this to cause much of a delay. By next week, I hope to add a working camera to our pipeline and then work on the SageMaker implementation with Eric.