Tzen-Chuen’s Status Report for 4/12

Post Carnival and demo day, we are in the final stretch. There are no major threats to the overall success of the project anymore: we have our MVP and are now iterating and testing its full capabilities. Feedback from demo day was encouraging, consisting entirely of positive comments and suggestions on how to proceed for the final showcase.

This week I focused on the final steps in turning our system into a true dashcam, namely the adapters and code for video capture. I also drafted a testing plan, which includes taking our system into a vehicle and capturing footage for the showcase. The next step is actually getting that footage, which will happen next week once the newly ordered parts are picked up. After capturing that footage and images at the distances specified in the design document, I will run our model on them and measure and validate performance.
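
As a rough illustration of the video capture piece, here is a minimal sketch of the kind of capture loop involved, assuming OpenCV with a camera exposed at index 0; the filename, duration, and codec are placeholders rather than our actual code:

```python
# Minimal dashcam capture sketch (assumes OpenCV and a camera at index 0;
# path, duration, and fps are illustrative placeholders).
import time
import cv2

def record_clip(path="clip.mp4", seconds=30, fps=30):
    cap = cv2.VideoCapture(0)                        # open the attached camera
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    end = time.time() + seconds
    while time.time() < end:
        ok, frame = cap.read()                       # grab one frame
        if not ok:
            break
        writer.write(frame)                          # append frame to the clip
    cap.release()
    writer.release()

if __name__ == "__main__":
    record_clip()
```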

Progress is on track, and once again there are no concerns.

Eric’s Status Report for 4/12/25

This week, I collaborated with Richard to research how to implement AWS SageMaker for our cloud-based license plate inference system. We worked through some of the required setup steps in the AWS environment and figured out how to configure it as a serverless inference endpoint. I also did additional research into Amazon Rekognition as a possible alternative to running our setup with SageMaker, which Bhavik mentioned as an option; it would simplify deployment by using pre-built services. I thoroughly tested Rekognition's OCR capabilities on license plate images and evaluated its strengths and limitations in the context of our use case. The results indicated that Rekognition provides general text detection but lacks license plate localization and precision control, which means we would end up with a lot of junk text in addition to the desired license plate readings. Still, that junk text is relatively simple to filter out, so we will meet to decide which solution to move forward with.
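
To illustrate the filtering idea, here is a rough sketch of the Rekognition OCR experiment using boto3's detect_text with a plate-shaped regex filter; the pattern and confidence threshold are illustrative assumptions, not our finalized logic:

```python
# Sketch of Rekognition text detection with plate-like filtering.
# The regex and the 90% confidence cutoff are assumptions for illustration.
import re
import boto3

rekognition = boto3.client("rekognition")
PLATE_RE = re.compile(r"^[A-Z0-9]{2,3}[- ]?[A-Z0-9]{3,4}$")  # rough US-plate shape

def read_plates(image_path):
    with open(image_path, "rb") as f:
        resp = rekognition.detect_text(Image={"Bytes": f.read()})
    candidates = []
    for det in resp["TextDetections"]:
        if det["Type"] != "LINE":                    # skip per-word duplicates
            continue
        text = det["DetectedText"].upper().strip()
        if det["Confidence"] > 90 and PLATE_RE.match(text):
            candidates.append(text)                  # keep only plate-shaped lines
    return candidates
```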

At this stage, my progress is on schedule. Our current MVP is functional, but it does not yet include the cloud inference step, so that is where the remaining work is focused. We plan to integrate the cloud solution and the replacement camera next week, and to test with a portable battery once the camera is connected. We have already completed verification for most subsystems, including the edge image capture and processing pipeline, so the next verification task is the cloud inference integration, which we began exploring this week through SageMaker and Rekognition.
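
For the SageMaker path, the edge device would call the serverless endpoint roughly along these lines; the endpoint name, content type, and payload format are placeholders until we settle the model contract:

```python
# Sketch of invoking a SageMaker serverless inference endpoint from the edge.
# "lpr-serverless-endpoint" and the content type are assumed placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

def infer_plate(image_bytes, endpoint_name="lpr-serverless-endpoint"):
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/x-image",           # assumed payload format
        Body=image_bytes,
    )
    return resp["Body"].read()                       # raw model output (e.g. JSON)
```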