Team status report for 4/12

During demos, we verified that we already have a working MVP. The main remaining risk is confirming that the replacement camera we purchased works properly with the system; testing will begin as soon as it arrives. This is a small risk, however, since the previous camera already functioned correctly and our backup is sufficient for demonstrating functionality.

There are no changes to the design, as we are in the final stages of the project. Only testing and full SageMaker integration remain.

On the validation side, we plan to test the fully integrated system once we receive and install the replacement camera. We'll run tests under real-world driving conditions with a portable battery setup to simulate actual usage, and we will also test under various lighting conditions.

More specifically, while we have not run comprehensive tests yet, our initial tests of the timing requirement and the database matching are both meeting our targets of 40 seconds and 100% accuracy, respectively. To test these more comprehensively, we will run 30 simulated matches with different images to make sure all of them stay within the timing and match requirements. Once we receive the camera we will use in our final implementation, we will take images at varying distances and under different weather and lighting conditions, and test the precision and recall of the whole pipeline. These images will also be run through platerecognizer.com, a commercial service, to determine whether it is our models or the camera that needs improvement. The details of these tests are the same as in the design report. Finally, we will either run the system in an actual car, or record a video of a drive with the camera and feed it into the pipeline to simulate normal use, making sure it detects every match in the video.
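As a rough sketch, the harness for those 30 simulated matches could look like the following, assuming a hypothetical run_pipeline() entry point that returns the matched plate string; here the test image file names double as ground-truth labels.

```python
import time
from pathlib import Path

# Hypothetical entry point into our pipeline: takes an image path and
# returns the matched plate string (or None if no match was found).
from pipeline import run_pipeline

TIMING_LIMIT_S = 40.0            # end-to-end latency requirement
TEST_DIR = Path("test_images")   # 30 images, each named for its ground-truth plate

def run_simulated_matches() -> None:
    images = sorted(TEST_DIR.glob("*.jpg"))
    failures = []
    for path in images:
        expected = path.stem.upper()          # e.g. "abc1234.jpg" -> "ABC1234"
        start = time.monotonic()
        result = run_pipeline(path)
        elapsed = time.monotonic() - start

        if elapsed > TIMING_LIMIT_S:
            failures.append(f"{path.name}: {elapsed:.1f}s exceeds the {TIMING_LIMIT_S:.0f}s limit")
        if result != expected:
            failures.append(f"{path.name}: matched {result!r}, expected {expected!r}")

    print(f"{len(images)} images tested, {len(failures)} failures")
    for failure in failures:
        print("FAIL:", failure)

if __name__ == "__main__":
    run_simulated_matches()
```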

In the last two weeks, we finalized our MVP and are now working on additional features and testing.

Richard’s Status Report for 4/12

This week, I worked with Eric on the verification of low-confidence matches through the AWS SageMaker platform. This included setting up the AWS service and environment and deploying the models to the cloud. During this process, we learned about Rekognition, an AWS service built specifically for image recognition, so we also looked into it as a possibly better option. Last week, I polished the website so that the interface was clean and bug-free throughout, especially for the interim demo. I also implemented and debugged the GPS code in our pipeline, and that aspect is now working smoothly.
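For reference, deploying a model as a SageMaker serverless inference endpoint with the SageMaker Python SDK looks roughly like the sketch below; the S3 artifact path, container image, IAM role, and endpoint name are placeholders, not our actual identifiers.

```python
import sagemaker
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

session = sagemaker.Session()

# Placeholder values: the S3 model artifact, container image, and IAM role
# are specific to our AWS account and are not the real identifiers.
model = Model(
    model_data="s3://our-bucket/models/plate-model.tar.gz",
    image_uri="<inference-container-image-uri>",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    sagemaker_session=session,
)

# Serverless inference: no instances to manage; AWS scales the endpoint
# (including down to zero) based on traffic.
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,
    max_concurrency=5,
)

predictor = model.deploy(
    serverless_inference_config=serverless_config,
    endpoint_name="plate-verification-endpoint",
)
```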

My progress has been on schedule. By next week, I hope to have decided between the two services, implemented the cloud verification system, and run tests on our entire pipeline.

Tzen-Chuen’s Status Report for 4/12

Post-Carnival and demo day, we are in the final stretch. There are no major threats to the overall success of the project, as we have our MVP and are now only iterating on and testing full capabilities. Feedback from demo day was encouraging: it was all positive comments and suggestions on how to proceed for the final showcase.

This week I focused on the final steps of turning our system into a true dashcam, meaning the adapters and code for video capture. I also drafted a testing plan, which includes taking our system into a vehicle and capturing footage for the showcase. Next is actually getting that footage, which will happen next week once the newly ordered parts are picked up. After capturing that footage, along with images at the distances specified in the design document, I will run our model on them and measure and validate performance.
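A minimal sketch of the video-capture side, assuming the camera enumerates as an ordinary USB/V4L2 device that OpenCV can open; the device index, resolution, and codec here are assumptions rather than the final configuration.

```python
import cv2

# Assumptions: the replacement camera shows up as device 0 and supports
# 1080p; MJPG in an AVI container keeps encoding load light on the device.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

fourcc = cv2.VideoWriter_fourcc(*"MJPG")
writer = cv2.VideoWriter("drive_capture.avi", fourcc, 30.0, (1920, 1080))

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break                  # camera disconnected or read error
        writer.write(frame)        # append the frame to the dashcam file
except KeyboardInterrupt:
    pass                           # stop recording with Ctrl-C
finally:
    cap.release()
    writer.release()
```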

Progress is on track, and once again there are no concerns.

Eric’s Status Report for 4/12/25

This week, I collaborated with Richard to research how to implement AWS SageMaker for our cloud-based license plate inference system. We worked through some of the required setup steps in the AWS environment and figured out how to run it as a serverless inference endpoint. I also did additional research into Amazon Rekognition, which Bhavik mentioned as a possible alternative to running our setup on SageMaker; it would simplify deployment by using pre-built services. I thoroughly tested Rekognition’s OCR capabilities on license plate images and evaluated its strengths and limitations in the context of our use case. The results indicated that Rekognition provides general text detection but lacks license plate localization and precision control, which means we would get a lot of junk text in addition to the desired license plate readings. Still, it is relatively simple to filter that out, so we will meet to discuss which solution to adopt.
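To illustrate that filtering step, here is a rough sketch of how plate-like strings could be pulled out of Rekognition's general text output; the regex is a deliberate simplification, since real plate formats vary by state.

```python
import re
import boto3

rekognition = boto3.client("rekognition")

# Simplified pattern for plate-like strings (roughly 4-8 alphanumerics with
# an optional space or dash); real plate formats vary, so this is an assumption.
PLATE_RE = re.compile(r"^[A-Z0-9]{2,4}[- ]?[A-Z0-9]{2,4}$")

def detect_plate_candidates(image_path: str, min_confidence: float = 90.0) -> list[str]:
    with open(image_path, "rb") as f:
        response = rekognition.detect_text(Image={"Bytes": f.read()})

    candidates = []
    for detection in response["TextDetections"]:
        # Rekognition returns both LINE and WORD detections; a LINE is
        # closest to a full plate reading, so we keep only those.
        if detection["Type"] != "LINE":
            continue
        if detection["Confidence"] < min_confidence:
            continue
        text = detection["DetectedText"].upper().strip()
        if PLATE_RE.match(text):
            candidates.append(text)
    return candidates
```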

At this stage, my progress is on schedule, and we already have a working MVP since the cloud step isn’t part of it. We plan to integrate the cloud solution and the replacement camera next week, and to test with a portable battery once the camera is connected. We’ve already completed verification for most subsystems, including the edge image capture and processing pipeline. Since our current MVP does not yet include the cloud inference step, the next verification task is the cloud inference integration, which we began exploring this week through SageMaker and Rekognition.
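For that upcoming verification, invoking a deployed serverless endpoint from the edge device could look roughly like this; the endpoint name and payload format are assumptions carried over from the deployment sketch above, and the real contract depends on our inference container.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def verify_low_confidence_match(image_bytes: bytes) -> dict:
    # Endpoint name and response schema are placeholders matching the
    # deployment sketch above, not a finalized interface.
    response = runtime.invoke_endpoint(
        EndpointName="plate-verification-endpoint",
        ContentType="application/x-image",
        Body=image_bytes,
    )
    return json.loads(response["Body"].read())
```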