This week, I worked with Richard to begin integrating AWS Rekognition into our project via Supabase Edge Functions. Specifically, we worked on a function that runs when the RPi's confidence is low: it takes an image URL as input, processes the image through Rekognition's text detection API, and returns the detected text along with confidence scores. We feed it an image cropped down to the license plate so that Rekognition does not return unrelated text from elsewhere in the frame. We also devised a testing strategy for our system given the unexpected delay in receiving the replacement camera, which should have arrived more than a week ago. The temporary camera we are currently using does not meet our resolution or lens requirements, so we have to adapt and still collect data with it to gauge system performance under a limited camera. The goal is to extract as much useful validation data as possible while awaiting the final hardware; metrics such as timing, power, and reliability should be unaffected by the camera.
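As a rough sketch of the post-processing side of this function: the field names below mirror Rekognition's standard DetectText response shape, but the helper name, the default threshold, and the mocked response are hypothetical, and the real edge function may structure this differently.

```typescript
// Shape mirroring entries in Rekognition's DetectText response
// (TextDetections array); only the fields we use are modeled here.
interface TextDetection {
  DetectedText: string;
  Type: "LINE" | "WORD";
  Confidence: number;
}

// Keep only LINE detections above a confidence threshold and join them
// into a single plate string; report the weakest line's confidence.
// Name and default threshold are assumptions for illustration.
function extractPlateText(
  detections: TextDetection[],
  minConfidence = 80,
): { text: string; confidence: number } | null {
  const lines = detections.filter(
    (d) => d.Type === "LINE" && d.Confidence >= minConfidence,
  );
  if (lines.length === 0) return null;
  return {
    text: lines.map((d) => d.DetectedText).join(" "),
    confidence: Math.min(...lines.map((d) => d.Confidence)),
  };
}

// Mocked Rekognition response for a cropped plate image:
const mock: TextDetection[] = [
  { DetectedText: "ABC 1234", Type: "LINE", Confidence: 97.2 },
  { DetectedText: "ABC", Type: "WORD", Confidence: 97.5 },
  { DetectedText: "1234", Type: "WORD", Confidence: 96.9 },
];
console.log(extractPlateText(mock)); // → { text: "ABC 1234", confidence: 97.2 }
```

Because the image is pre-cropped to the plate, filtering to LINE-type detections above a threshold is usually enough to discard stray text, and the returned confidence lets the caller decide whether to trust the result over the RPi's own reading.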
Our progress is mostly on schedule, though partially impacted by the hardware delay, and we are still moving ahead with whatever testing the temporary camera allows. Next week I plan to work on the final presentation slides, begin testing as soon as possible if the replacement camera arrives, and finalize and test the AWS Rekognition pipeline once that testing is finished.