Team Status Report for 4/12

During demos, we verified that we have a working MVP. The remaining risk is confirming that the replacement camera we bought works properly with the system; testing will begin as soon as the replacement arrives. This is a small risk, however, since the previous camera already functioned and our backup is sufficient for demonstrating functionality.

There are no changes to the design, as we are in the final stages of the project. Only testing and full SageMaker integration remain.

On the validation side, we plan to test the fully integrated system once we receive and install the replacement camera. We will run tests under real-world driving conditions, using a portable battery setup to simulate actual usage, and across various lighting conditions.

More specifically, while we have not yet run comprehensive tests, our initial tests of the timing requirement and the database matching both meet our targets of 40 seconds and 100% accuracy, respectively. To test these more comprehensively, we will run 30 simulated matches with different images to make sure all of them meet the timing and match requirements. Once we receive the camera we will use in our final implementation, we will take images at varying distances and under different weather and lighting conditions, and test the precision and recall of the whole pipeline. These images will also be run through platerecognizer.com, a commercial model, to see whether it is our models or the camera that needs improvement. The details of these tests are the same as in the design report. Finally, we will either run the system in an actual car, or record a drive with the camera and feed the video into the pipeline to simulate normal use and make sure it detects all the matches in the video.
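
As a rough sketch of what the 30-match harness could look like, assuming a run_pipeline entry point that takes an image path and returns the recognized plate (the function name and test list are placeholders, not our final code):

```python
import time

# Hypothetical entry point into our pipeline: takes an image path and
# returns the recognized plate string (or None). Placeholder for illustration.
from pipeline import run_pipeline

# (image_path, expected_plate) pairs for the 30 simulated matches.
TEST_CASES = [
    ("images/match_01.jpg", "ABC1234"),
    # ... 29 more cases
]

def run_timing_and_accuracy_tests(cases, time_limit_s=40.0):
    failures = []
    for path, expected in cases:
        start = time.monotonic()
        result = run_pipeline(path)
        elapsed = time.monotonic() - start
        # A case passes only if it is both fast enough and an exact match.
        if elapsed > time_limit_s or result != expected:
            failures.append((path, result, expected, round(elapsed, 1)))
    print(f"{len(cases) - len(failures)}/{len(cases)} cases passed "
          f"(exact match and <= {time_limit_s}s required)")
    return failures
```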

In the last two weeks, we finalized our MVP and are working on additional features as well as testing.

Richard’s Status Report for 4/12

This week, I worked with Eric on the verification of low-confidence matches through the AWS SageMaker platform. This included setting up the AWS service and environment and deploying the models to the cloud. During this process, we also learned about Rekognition, an AWS service built specifically for image recognition, so we looked into it as a potentially better option. Last week, I polished the website so that the interface is clean throughout and bug-free, especially for the interim demo. I also implemented and debugged the GPS code in our pipeline, and that aspect is now working smoothly.
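
For reference, a minimal sketch of the serverless deployment using the SageMaker Python SDK, assuming a PyTorch model artifact already uploaded to S3 (the S3 path, role ARN, and inference.py handler below are placeholders, not our actual values):

```python
from sagemaker.pytorch import PyTorchModel
from sagemaker.serverless import ServerlessInferenceConfig

# Placeholder artifact location, IAM role, and handler script.
model = PyTorchModel(
    model_data="s3://our-bucket/plate-model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    framework_version="2.0",
    py_version="py310",
    entry_point="inference.py",
)

# A serverless endpoint is billed per invocation, with no always-on instance.
predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=4096,
        max_concurrency=5,
    )
)
```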

My progress has been on schedule. By next week, I hope to have decided between SageMaker and Rekognition, implemented the cloud verification system, and run tests on our entire pipeline.

Tzen-Chuen’s Status Report for 4/12

Post-Carnival and demo day, we are in the final stretch. There are no major threats to the overall success of the project anymore, as we have our MVP and are now only iterating and testing full capabilities. Feedback from demo day was encouraging: it was all positive, with suggestions on how to proceed for the final showcase.

This week I focused on the final steps of making our system a true dashcam, meaning the adapters and code for video capture. I also drafted a testing plan, which includes taking our system into a vehicle and capturing footage for the showcase. The next step is actually getting that footage, which will happen next week when the newly ordered parts are picked up. Once I have that footage, along with images at the distances specified in the design document, I will run our model on them and measure and validate performance. A sketch of the capture code is below.
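
A minimal sketch of the video-capture loop with OpenCV, assuming a camera on device 0 and mp4 output (both illustrative settings, not our final configuration):

```python
import cv2

cap = cv2.VideoCapture(0)  # device 0 is illustrative; ours may differ
fps = 30.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("drive.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (width, height))

try:
    while True:
        ok, frame = cap.read()
        if not ok:  # camera disconnected or stream ended
            break
        out.write(frame)
finally:
    cap.release()
    out.release()
```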

Progress is on track, and once again there are no concerns.

Eric’s Status Report for 4/12

This week, I collaborated with Richard to research how to implement AWS SageMaker for our cloud-based license plate inference system. We worked through some of the required setup steps in the AWS environment and figured out how to run it as a serverless inference endpoint. I also did additional research into Amazon Rekognition as a possible alternative to SageMaker, which Bhavik mentioned as an option; it would simplify deployment by using pre-built services. I thoroughly tested Rekognition’s OCR capabilities on license plate images and evaluated its strengths and limitations for our use case. The results indicated that Rekognition provides general text detection but lacks license plate localization and precision control, which means we would get a lot of junk text in addition to the desired license plate readings. Still, it is relatively simple to filter that out, so we will meet to discuss which solution to use.
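
To illustrate the kind of filtering we have in mind, here is a sketch using Rekognition’s detect_text with a plate-shaped regex (the pattern and confidence threshold are hypothetical, not tuned values):

```python
import re
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical plate-shaped pattern; real plate formats vary by state.
PLATE_RE = re.compile(r"^[A-Z0-9]{5,8}$")

def detect_plate_candidates(image_bytes, min_confidence=90.0):
    resp = rekognition.detect_text(Image={"Bytes": image_bytes})
    candidates = []
    for det in resp["TextDetections"]:
        # Keep WORD-level tokens; LINE detections merge junk text together.
        if det["Type"] != "WORD" or det["Confidence"] < min_confidence:
            continue
        text = det["DetectedText"].replace(" ", "").upper()
        if PLATE_RE.fullmatch(text):
            candidates.append((text, det["Confidence"]))
    return candidates
```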

At this stage, my progress is on schedule, and since the cloud step isn’t part of the MVP, we already have a working system. We plan to integrate the cloud solution and the replacement camera next week, and to test with a portable battery once the camera is connected. We have already completed verification for most subsystems, including the edge image capture and processing pipeline. The next verification task is therefore the cloud inference integration, which we began exploring this week through SageMaker and Rekognition.

Team Status Report for 3/29

The biggest risk currently is that the main camera module we purchased appears to be broken, which means we need to get a replacement working as soon as possible. Our backup camera is detected, but it behaves strangely, taking extremely blurry photos of a single solid color; the images are completely unusable. We currently plan to use a simple USB webcam temporarily for the demo while we get a working replacement.

We have made no changes to our design, and nothing major to our schedule, since we expect that getting the camera working in our code will be relatively quick once we have a functioning one. The main schedule change was working around the faulty cameras, but once that problem is addressed, we should be back on track.

Outside of the camera issues, we have gotten the rest of the pipeline operational and tested, meaning it is ready for the demo and MVP.

Richard’s Status Report for 3/29

This week, I worked on debugging the camera with Eric and verifying the functionality of everything else. We tried many troubleshooting methods on both our Camera Module 3 and the backup Arducam camera. However, no matter what we did, we could not get the Camera Module 3 to be detected and take pictures. The backup camera is detected, but it only takes extremely blurry pictures in odd colors, such as blue and pink, despite facing a white wall. Ultimately, we decided to move forward with a USB webcam Tzen-Chuen already had for the demo. Outside of the camera, nearly everything else was tested and ran smoothly. The whole pipeline, from image capture (simulated with a downloaded image file on the Raspberry Pi) to verification as a match, was tested and worked within our timing requirements. In addition, I fixed some bugs in our website, such as images not displaying properly in fullscreen. The link is the same as the previous week.
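
Swapping the USB webcam into the pipeline should be a small change; a sketch of the single-frame capture with OpenCV (device index and output path are illustrative):

```python
import cv2

def capture_image(path="capture.jpg", device=0):
    # A USB webcam typically enumerates as /dev/video0 on the Pi.
    cap = cv2.VideoCapture(device)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("webcam returned no frame")
        cv2.imwrite(path, frame)
    finally:
        cap.release()
    return path
```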

Outside of the camera issues, my progress has been on schedule. Since we don’t think integrating the camera into our pipeline will be much of an issue once we have a working one, we don’t expect much delay. By next week, I hope to add a working camera to our pipeline and then work on the SageMaker implementation with Eric.

Eric’s Status Report for 3/29

This week, I worked on adding permission handling to the front end for different users, but there are still bugs with the login settings that need to be resolved. This is less of a priority for the demo next week, though. I also worked with Richard to continue debugging the Camera Module 3, and we concluded that the Raspberry Pi is not detecting the camera due to a hardware issue with the module itself. We decided to go with a simple USB-connected webcam for the demo. Additionally, Richard and I tested sending matches to the Supabase database from the RPi and confirmed that matches are successfully added to the tables. Progress is a bit behind schedule due to the broken camera, and debugging the Camera Module 3 took much more time than expected since it wasn’t a software issue. Next week, I plan to fix the login bugs, get replacement cameras, and continue expanding the Supabase integration with SageMaker, which isn’t part of the MVP.

Tzen-Chuen’s Status Report for 3/29

With the demo on Monday, we have all systems working right now besides the camera, which is alarming to say the least. The camera we originally intended to use, the Camera Module 3, was almost fully programmed, with pictures being taken and everything. Our backup camera does work, but the problem is that it is a much more advanced camera that needs a lot of configuration.

So right now we are working on our third alternative, a USB webcam acting as a substitute, which should need much less configuration than the Arducam backup. While things are hairy, we are on track to have a successful demo.

Team Status Report for 3/22

A significant risk that could jeopardize the success of the project is currently the camera and the Pi. We were able to take images, testing baseline functionality, but when we moved our setup, the camera stopped being detected. Though this can probably be resolved, the impact on our timeline will still be felt. We need to diagnose soon whether the camera, Pi, or cable is the issue, and order replacements from inventory or the web.

A minor change was made to the system design by replacing the HTTP POST Supabase edge function with event triggers. This change was made because the Raspberry Pi already inserts data directly into the possible_matches table, so keeping the edge-function method would add complexity compared to simply triggering on those inserts.
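
For context, the Pi-side insert that the event trigger now fires on looks roughly like this with supabase-py (the column names are illustrative guesses at our schema, not the actual table definition):

```python
import os
from supabase import create_client

# Credentials are read from the Pi's environment.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Hypothetical columns for a possible_matches row; our schema may differ.
row = {
    "plate": "ABC1234",
    "confidence": 0.87,
    "image_url": "https://example.com/capture.jpg",
}

# Direct insert; the database trigger on possible_matches now handles
# the follow-up work the HTTP POST edge function used to do.
supabase.table("possible_matches").insert(row).execute()
```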

Right now, our locally run code is finalized outside of the camera problems and can be found here. We also have a working prototype of the website for law enforcement to view matches, which can also be found here.

Richard’s Status Report for 3/22

This week I focused on finalizing the locally run code. I added the queue functionality: if a match cannot be sent up to the database for any reason, such as a loss of internet connection, the data is saved locally and retried later. I have also written a main function that takes a picture and sends matches every 40 seconds, checks for database updates every 200 seconds, and retries failed entries to the database every 200 seconds; a condensed sketch is below. Due to some problems we have with initializing the camera, that is the only aspect of the locally run code not working right now. The code can be found here. In addition to the locally run code, I have made a prototype of the front-end website using Lovable for law enforcement to look for matches. They can see the currently active AMBER Alerts, possible matches sent to the database, and verified matches, where the larger models on the cloud confirm the results of the possible matches. It also has useful filtering options, such as filtering by confidence level or license plate number. The website can be found here.
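
A condensed sketch of that queue-and-retry structure (the file name and helper functions are placeholders; the intervals match the 40-second and 200-second values above):

```python
import json
import time
from pathlib import Path

QUEUE_FILE = Path("pending_matches.json")  # local fallback queue (placeholder name)

def save_locally(match):
    # Append a match that failed to upload so it can be retried later.
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    pending.append(match)
    QUEUE_FILE.write_text(json.dumps(pending))

def retry_pending(send):
    # Re-send queued matches; keep only the ones that fail again.
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    QUEUE_FILE.write_text(json.dumps([m for m in pending if not send(m)]))

def main_loop(capture_and_send, check_db_updates, send):
    # capture_and_send is expected to call save_locally() when an upload fails.
    last_db_check = last_retry = time.monotonic()
    while True:
        capture_and_send()          # every 40 s: take a picture, send matches
        now = time.monotonic()
        if now - last_db_check >= 200:
            check_db_updates()      # every 200 s: pull database updates
            last_db_check = now
        if now - last_retry >= 200:
            retry_pending(send)     # every 200 s: retry queued failures
            last_retry = now
        time.sleep(40)
```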

My progress is on schedule. By next week, I hope to implement, with Eric, the verification step that promotes possible matches into verified matches, and to have an MVP.