This week I worked with Eric on integrating AWS Rekognition into our pipeline. Rekognition will be called by a Supabase function whenever a low-confidence match is uploaded to our database. We chose Rekognition over SageMaker because Rekognition is better suited to our OCR use case, and because we found that larger models for cropping cars and license plates gave sharply diminishing returns and were not worth the extra computation in the cloud. To accommodate this change, the Raspberry Pi now uploads a cropped image of the license plate alongside the original image. We have also laid out a testing strategy and have begun testing several parts of our device, such as the power supply and the cooling solution. Since our camera has been delayed numerous times now, we are testing with the webcam we have on hand, which unfortunately lacks the resolution and IR capability needed for good results at long range or at night.
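As a rough sketch of the OCR step described above: once the cropped plate image is in storage, the backend can call Rekognition's `DetectText` API and keep only the highest-confidence line. The bucket name, confidence threshold, and function names here are my own placeholders, not our actual deployment; the parsing helper is separated out so it can be exercised without AWS credentials.

```python
def best_plate_text(response, min_confidence=90.0):
    """Pick the highest-confidence LINE detection from a DetectText response."""
    lines = [d for d in response.get("TextDetections", [])
             if d["Type"] == "LINE" and d["Confidence"] >= min_confidence]
    if not lines:
        return None
    best = max(lines, key=lambda d: d["Confidence"])
    return best["DetectedText"]

def read_plate(bucket, key):
    """Run Rekognition OCR on a cropped plate image already stored in S3."""
    import boto3  # imported lazily so the parsing helper works without the AWS SDK
    client = boto3.client("rekognition")
    response = client.detect_text(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return best_plate_text(response)
```

In practice this logic would live inside the Supabase function that fires on the low-confidence upload; the split between a pure parsing function and the network call keeps the confidence-threshold behavior easy to test.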
My progress is on schedule. By next week, I hope to have finished testing and to have our final presentation slides ready on time.
While implementing my project, I have had to learn a lot about training ML models for computer vision and about setting up a database. To learn how to train the models, I watched videos and studied the sample code provided by the makers of the model I decided to use, YOLO11. I chose this model for its widespread support and ease of use, which let me fine-tune it for detecting license plates relatively quickly. For setting up the database, I read the documentation provided by Supabase and used tools that integrate with Supabase and scaffold parts of the database for me, specifically Lovable, which we also used to build the front-end website.
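The fine-tuning step mentioned above can be sketched with the Ultralytics API. The dataset paths, class name, and epoch count below are illustrative assumptions, not our actual training configuration; the config-building helper is kept pure so it can be checked without downloading weights.

```python
def plate_dataset_config(root, train_dir="images/train", val_dir="images/val"):
    """Build an Ultralytics data config dict for a one-class plate dataset."""
    return {
        "path": root,          # dataset root directory
        "train": train_dir,    # training images, relative to root
        "val": val_dir,        # validation images, relative to root
        "names": {0: "license_plate"},  # single detection class
    }

def finetune(config_path="plates.yaml", epochs=50):
    """Fine-tune a pretrained YOLO11 nano model on the plate dataset."""
    from ultralytics import YOLO  # lazy import: only needed when training
    model = YOLO("yolo11n.pt")    # start from pretrained COCO weights
    model.train(data=config_path, epochs=epochs, imgsz=640)
    return model
```

Starting from pretrained weights is what makes the fine-tune fast: only the single license-plate class needs to be learned, so a small model and a modest number of epochs go a long way.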