Team Status Report for 4/26

We finally have a decent replacement camera: not the Camera Module 3, which still has not arrived, but an Arducam IMX519 Raspberry Pi camera that meets all of our requirements except IR functionality, which won’t affect our demo. At this point, the biggest risk is the time crunch to finish all deliverables on time.

No changes were made to the existing design of the system other than the use of this new camera. The swap was needed since our Raspberry Pi Camera Module 3 has not arrived and our demo and final deliverables are coming up. The costs are minimal, since the camera slots easily into our system pipeline and was inexpensive.

The next steps are to integrate the new camera and fully test its capabilities. Beyond that, work remains on our poster and the other final deliverables.

Testing we have done:

Unit tests:

  • Cooling and power supply
    • We ran the Raspberry Pi for 55 minutes with the full implementation running
    • The temperature stayed below 75°C, avoiding thermal throttling
    • The power supply kept our device stable the whole time
    • Both met our requirements
  • ML Models
    • We evaluated the model pipeline on ideal-quality images taken at different distances (5m, 10m) and under different weather/lighting conditions (day, night, rain, night+rain)
    • We measured 98.6% precision and 79.1% recall, meeting our 90% precision requirement but not our 85% recall requirement
    • We will adjust the confidence cutoff to trade some precision for recall and meet both requirements (see the threshold-sweep sketch after this list)
  • GPS Accuracy:
    • We took GPS readings at 10 outdoor locations whose true coordinates we know (e.g., the CMU flagpole) and compared the readings against those coordinates
    • All readings were within 31m, well inside our 200m requirement
  • Edge-Cloud Communication:
    • We ran 15 simulated matches on the Raspberry Pi; these should show up properly in Supabase, and 15 corresponding updates should be added to the edge database
    • This should happen 100% of the time, which is what we observed in our testing
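To pick that cutoff, we can sweep candidate confidence thresholds over the evaluation set and keep the lowest one that satisfies both targets. A minimal sketch, assuming we have already collected each detection’s confidence and whether it was correct; the threshold grid and helper names are illustrative:

```python
# Minimal threshold-sweep sketch; assumes predictions were collected as
# (confidence, is_correct) pairs against ground truth. Names are illustrative.

def precision_recall_at(predictions: list[tuple[float, bool]],
                        num_ground_truth: int,
                        cutoff: float) -> tuple[float, float]:
    kept = [correct for conf, correct in predictions if conf >= cutoff]
    true_pos = sum(kept)
    precision = true_pos / len(kept) if kept else 0.0
    recall = true_pos / num_ground_truth if num_ground_truth else 0.0
    return precision, recall

def pick_cutoff(predictions: list[tuple[float, bool]],
                num_ground_truth: int,
                min_precision: float = 0.90,
                min_recall: float = 0.85):
    """Return the lowest cutoff meeting both targets, or None if none does."""
    for cutoff in [i / 100 for i in range(5, 100, 5)]:
        p, r = precision_recall_at(predictions, num_ground_truth, cutoff)
        if p >= min_precision and r >= min_recall:
            return cutoff, p, r
    return None
```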

 

System tests:

  • Timing requirement:
    • We ran the Raspberry Pi for 55 minutes with the full implementation
    • Every captured image was checked within our 40-second requirement
  • Full system performance test:
    • We evaluated the model pipeline on images at different distances (5m, 10m) and weather/lighting conditions (day, night, rain, night+rain), taken at our camera’s quality (the old webcam at the time)
    • We observed 64.7% precision and 23.6% recall, well below our 90% and 85% targets respectively
    • We now have a new camera to test, which should give much better results

 

Tzen-Chuen’s Status Report for 4/26

Last report!

After testing with our stand-in webcam last week, we now need to test the newly arrived Arducam camera. This past week hasn’t seen much work on my end, as I’ve been swamped with other coursework, but since our system is effectively complete, this was not a big setback.

On top of new camera work, there’s also final deliverable work to be done such as the poster and demo preparation. We’ll need to collect a good representative video to showcase our device working.

Progress is on track.

Richard’s Status Report for 4/26

This week I worked with Eric and Tzen-Chuen on the final presentation. I focused on adding the testing metrics/results and some of the design tradeoffs, as well as polishing the presentation throughout. I was the presenter this time, so I also spent time practicing my delivery.

In addition, we have finished testing multiple aspects of our system, most notably the webcam and our ML models. Our inference results with the ML models were quite poor with the webcam, but very good on images of ideal quality (iPhone 13 camera). That told us quantitatively that the webcam was unsuitable, so instead of waiting for the Raspberry Pi Camera Module 3, I ordered a replacement camera: the Arducam IMX519 Raspberry Pi camera. This camera has a higher-resolution 16MP sensor than the Camera Module 3, has autofocus, and has a sufficient FOV, but lacks IR capabilities, since I was not able to find an IR Raspberry Pi camera with a fast enough delivery time. The camera arrived a few days later; I confirmed that it works and updated our capture code to use it.
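For reference, the capture-side change is small; a minimal sketch of a still capture with the Picamera2 library, assuming the Arducam/libcamera driver is installed (the resolution and output path are placeholders, not necessarily what our code uses):

```python
# Minimal Picamera2 still-capture sketch for the Arducam IMX519; assumes the
# Arducam/libcamera driver is installed. Resolution and path are placeholders.
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_still_configuration(main={"size": (4656, 3496)})  # IMX519 full 16MP frame
picam2.configure(config)
picam2.start()
picam2.autofocus_cycle()            # the IMX519 supports autofocus through libcamera
picam2.capture_file("capture.jpg")  # save a still for the detection pipeline
picam2.stop()
```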

My progress is on schedule. By next week I hope to have completed all the deliverables for this capstone project, as well as to finish testing this backup camera, which we will use as our main camera going forward.

Eric’s Status Report for 4/26/25

This week, I worked on preparing the slides for our final presentation with Richard and Tzen-Chuen, mainly focusing on testing and verification. I also worked with Richard and Tzen-Chuen on testing the system with the replacement webcam we have been using temporarily. I tested the ML models with Richard on both ideal (phone) quality and webcam photos, and the results show that the performance bottleneck was the poor webcam quality, since performance on ideal photos was high but extremely poor on the webcam photos.

Yesterday, we received a new camera, which was the missing piece of hardware and the biggest risk factor. It’s an Arducam model, not the Camera Module 3 (which still hasn’t arrived), and it doesn’t have IR capabilities, but it meets our other requirements for the camera, so it won’t affect our demo and is a suitable replacement. The new camera provides significantly better image quality and focus control, and initial tests suggest it will help us better meet our design specifications for license plate capture under real-world conditions.

We are currently roughly on schedule. The delay in receiving the Camera Module 3 slowed our testing down a bit, but we verified functionality and ran tests for the other components while waiting, so we stayed mostly on track. Final testing and data collection can now proceed with the correct hardware. By next week, we should be done with capstone deliverables and testing.

Team Status Report for 4/19

As noted last week, the camera we ordered still hasn’t arrived. While concerning, our MVP camera will still serve us well: even though it doesn’t have IR capabilities or a very high-resolution sensor, it is more than good enough to demonstrate the functionality of the project as a whole.

There are no changes to the overall design of the project. However, we have made a slight change: using AWS Rekognition rather than SageMaker for verifying low-confidence matches. We made this switch because Rekognition offers built-in text detection (OCR), which is our main use case for the cloud model, and it is easier to work with. The cost this incurs is minimal, since we did not spend time or resources on the SageMaker approach beyond basic testing. As for schedule, we are currently testing both the ML pipeline and the camera, with ML tests ongoing and camera tests taking place tomorrow before the slides are assembled.

Richard’s Status Report for 4/19

This week I worked with Eric on integrating AWS Rekognition into our pipeline. AWS Rekognition will be called by a Supabase function whenever a low-confidence match is uploaded to our database. We decided on Rekognition over SageMaker since Rekognition is better suited to our OCR use case, and we noticed that larger models for cropping cars and license plates had strongly diminishing returns and were not worth the extra computation in the cloud. To accommodate this change, the Raspberry Pi now also uploads a cropped image of the license plate alongside the original image. We have also laid out a testing strategy and have begun testing multiple parts of our device, such as the power supply and cooling solution. Since our camera has been delayed numerous times now, we are testing with the webcam we have been using, which unfortunately does not have the resolution or IR capabilities for good results at long distances or at night.
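As a rough illustration of that verification step, here is a sketch of calling Rekognition’s text detection on the cropped plate image with boto3; the region, confidence threshold, and function name are illustrative assumptions, not our actual deployment:

```python
# Hypothetical sketch of the cloud verification call; assumes boto3 credentials
# are configured. Region, confidence threshold, and function name are illustrative.
import boto3

def verify_plate_text(plate_jpeg_bytes: bytes) -> list[str]:
    """Run Rekognition text detection on a cropped license-plate image."""
    client = boto3.client("rekognition", region_name="us-east-1")
    response = client.detect_text(Image={"Bytes": plate_jpeg_bytes})
    # Keep whole-line detections with reasonably high confidence.
    return [
        det["DetectedText"]
        for det in response["TextDetections"]
        if det["Type"] == "LINE" and det["Confidence"] >= 80.0
    ]
```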

My progress is on schedule. By next week, I hope to have finished testing and have our final presentation slides ready to present on time.

As I have implemented my parts of the project, I have had to learn a lot about training ML models for computer vision and setting up a database. To learn how to train the models, I watched videos and studied the sample code provided by the makers of the model I decided to use, YOLO11. I chose this model for its widespread support and ease of use, so I was able to fine-tune it for detecting license plates relatively quickly. For setting up the database, I read the documentation provided by Supabase and used tools that integrate with Supabase and set up parts of the database for me, specifically Lovable, which we used to build the front-end website.
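For context, the Ultralytics fine-tuning workflow described above looks roughly like the sketch below; the dataset config and hyperparameters are placeholders, not our actual training setup:

```python
# Illustrative YOLO11 fine-tuning sketch with the Ultralytics package; the dataset
# config and hyperparameters are placeholders rather than our actual settings.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # start from a pretrained YOLO11 checkpoint
model.train(
    data="license_plates.yaml",  # dataset config listing train/val images and the plate class
    epochs=50,
    imgsz=640,
)
metrics = model.val()                                  # precision/recall on the validation split
results = model.predict("test_image.jpg", conf=0.25)   # detect plates in a sample image
```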

Tzen-Chuen’s Status Report for 4/19

The final week is coming up, so it’s more of the same: putting the finishing touches on our system. On my end this means in-person testing of the camera. To that end, I’ve created Python scripts for on-boot recording and image capture (a rough sketch of the capture loop follows below). The major threat to the overall success of the project is the replacement camera not arriving; it should have arrived last week, yet it is still not here. Contingencies include using our MVP camera with no IR capability, or using the Arducam.
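A rough sketch of what such a capture script can look like, assuming a USB webcam read through OpenCV and a systemd unit or cron @reboot entry to launch it at boot; the output directory and interval are placeholders:

```python
# Illustrative on-boot capture loop; assumes a USB webcam accessible via OpenCV
# and that the script is launched at boot. Output directory and interval are placeholders.
import time
from pathlib import Path

import cv2

OUTPUT_DIR = Path("/home/pi/captures")
CAPTURE_INTERVAL_S = 2.0

def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    cam = cv2.VideoCapture(0)  # first attached camera
    try:
        while True:
            ok, frame = cam.read()
            if ok:
                cv2.imwrite(str(OUTPUT_DIR / f"{int(time.time())}.jpg"), frame)
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        cam.release()

if __name__ == "__main__":
    main()
```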

Tomorrow we will go out into the real world and get our tests done, along with presentation work. Progress is on track; aside from the camera, there are no concerns.

What I’ve needed to learn for this capstone project is the art of device selection, along with a shift in mentality toward starting work first and asking questions later. Once I recognized that the only way to solidify knowledge gained from reading articles online is to simply start building something, things became much clearer.

Team Status Report for 4/12

During demos, we verified that we already have a working and functional MVP. The remaining risk is that we still need to verify that the replacement camera we bought works with the system properly; testing for this will begin as soon as the replacement arrives. However, this is a small risk, as the previous camera already functioned and our backup is sufficient for demonstrating functionality.

There are no changes to the design, as we are in the final stages of the project. Only testing and full SageMaker integration remain.

On the validation side, we plan to test the fully integrated system once we receive and install the replacement camera. We’ll run tests using real-world driving conditions and a portable battery setup to simulate actual usage. We will also test in various lighting conditions. 

More specifically, while we have not run comprehensive tests yet, our initial testing of the timing requirement and database matching meets our requirements of 40 seconds and 100% accuracy, respectively. To test these more comprehensively, we will run 30 simulated matches with different images to make sure all are within the timing and matching requirements. Once we receive the camera we will use in our final implementation, we will take images at varying distances and under varying weather/lighting conditions, and test the precision and recall of the whole pipeline. These images will also be run through platerecognizer.com, a commercial model, to see whether it is our models that need improvement or the camera. The details of these tests are the same as in the design report. Finally, we will either run this system in an actual car, or take a video of a drive with the camera and feed that video into the pipeline to simulate normal use, making sure it detects all the matches in the video.
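A sketch of what the simulated-match harness can look like, assuming the supabase-py client; run_pipeline, the table name, and the column names are hypothetical stand-ins for our actual pipeline entry point and schema:

```python
# Hypothetical simulated-match harness; run_pipeline and the table/column names
# are stand-ins for the real pipeline entry point and database schema.
import time
from typing import Callable

from supabase import create_client

SUPABASE_URL = "https://example.supabase.co"  # placeholder
SUPABASE_KEY = "service-role-key"             # placeholder

def run_simulated_matches(run_pipeline: Callable[[str], None],
                          cases: list[tuple[str, str]]) -> None:
    """cases is a list of (image_path, expected_plate) pairs."""
    client = create_client(SUPABASE_URL, SUPABASE_KEY)
    for image_path, plate in cases:
        start = time.time()
        run_pipeline(image_path)  # hypothetical: capture -> detect -> OCR -> upload
        rows = client.table("matches").select("*").eq("plate", plate).execute().data
        elapsed = time.time() - start
        assert rows, f"{plate} never appeared in the database"
        assert elapsed <= 40.0, f"{plate} took {elapsed:.1f}s, over the 40s requirement"
```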

In the last two weeks, we finalized our MVP and have been working on additional features as well as testing.

Richard’s Status Report for 4/12

This week, I worked with Eric on the verification of low-confidence matches through the AWS SageMaker platform. This included setting up the AWS service and environment and deploying the models to the cloud. During this process, we also learned about Rekognition, an AWS service specifically for image analysis, so we looked into it as a possibly better option. Last week, I worked on polishing the website so that the interface was clean throughout and bug-free, especially for the interim demo. I also implemented and debugged the code for the GPS in our pipeline, and that aspect is now working smoothly.
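For context, a minimal sketch of the kind of GPS read the pipeline performs, assuming a serial NMEA receiver and the pynmea2 library; the device path and baud rate are placeholders for whatever the actual module uses:

```python
# Illustrative GPS fix read; assumes a serial NMEA receiver and pynmea2.
# Device path and baud rate are placeholders.
import serial
import pynmea2

def read_fix(port: str = "/dev/ttyUSB0", baud: int = 9600):
    """Return (latitude, longitude) from the first GGA sentence reporting a fix."""
    with serial.Serial(port, baud, timeout=1.0) as ser:
        for _ in range(200):  # allow the receiver a few seconds to produce a fix
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if line.startswith("$GPGGA") or line.startswith("$GNGGA"):
                msg = pynmea2.parse(line)
                if int(msg.gps_qual or 0) > 0:  # 0 means no fix yet
                    return msg.latitude, msg.longitude
    return None
```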

My progress has been on schedule. By next week, I hope to have made a decision on and implemented the cloud verification system, and to have run tests on our entire pipeline.

Tzen-Chuen’s Status Report for 4/12

Post-Carnival and post-demo-day, we are in the final stretch. There are no major threats to the overall success of the project anymore, as we have our MVP and are now only iterating and testing full capabilities. Feedback from demo day was encouraging, consisting of positive comments and suggestions on how to proceed for the final showcase.

This week I focused on the final steps of making our system into a true dashcam, meaning the adapters and code for video capture. I also drafted a testing plan, which includes taking our system into a vehicle and capturing footage for the showcase. Next is actually getting that footage, which will happen next week when the newly ordered parts are picked up. Once we have that footage and images at the distances specified in the design document, I will run our model on them and measure/validate performance.

Progress is on track, and once again there are no concerns.