Currently, the individual components (the OCR, the YOLOv8 model, and the camera) appear to be functioning adequately. However, the camera is running slower than initially anticipated, which poses a risk that the full pipeline will not run in real time once integrated. In the upcoming week, we aim to test this hypothesis and adjust our models or equipment as necessary.
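To test this, a rough timing sketch along the following lines could separate camera latency from model latency. It assumes the camera enumerates as a standard video device at index 0 and that "best.pt" is our fine-tuned weights file; both names are placeholders, not final choices.

```python
# Rough timing sketch to separate camera capture latency from model latency.
# Assumes the camera appears as device index 0 and "best.pt" is our
# fine-tuned YOLOv8 weights file (both placeholders).
import time

import cv2
from ultralytics import YOLO

model = YOLO("best.pt")
cap = cv2.VideoCapture(0)

capture_times, infer_times = [], []
for _ in range(50):
    t0 = time.perf_counter()
    ok, frame = cap.read()
    t1 = time.perf_counter()
    if not ok:
        break
    model(frame, verbose=False)
    t2 = time.perf_counter()
    capture_times.append(t1 - t0)
    infer_times.append(t2 - t1)

cap.release()
if capture_times:
    print(f"avg capture:   {1000 * sum(capture_times) / len(capture_times):.1f} ms")
    print(f"avg inference: {1000 * sum(infer_times) / len(infer_times):.1f} ms")
```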
Opalina’s Status Report 3/22
This week, I managed to fine-tune the YOLO model while working out the details of the OpenCV, YOLOv8, and OCR integration. During the coming week, I hope to get the ML system fully functional and at least partially integrated, ready to run on the Pi.
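As a rough sketch of the integration we have in mind (not the final implementation), each frame would be grabbed with OpenCV, passed through YOLOv8 detection, and any detected sign region handed to OCR. The "best.pt" weights file and the use of pytesseract below are assumptions for illustration.

```python
# Sketch of the intended frame pipeline: OpenCV capture -> YOLOv8 detection
# -> OCR on each detected region. "best.pt" and pytesseract are assumptions.
import cv2
import pytesseract
from ultralytics import YOLO

model = YOLO("best.pt")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])      # bounding box corners
        crop = frame[y1:y2, x1:x2]                   # detected sign region
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray).strip()
        if text:
            print(results.names[int(box.cls)], "->", text)

cap.release()
```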
Opalina’s Status Report 3/15
The YOLO model is fully trained and functional on large airport datasets (with a variety of images). OCR is proving a little more difficult to integrate into the software subsystem, but I hope to have that figured out by the end of this week. The only potential issue we see right now is the model not running fast enough on the Pi, as local tests suggest it may be slower than anticipated.
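If the Pi does turn out to be too slow, one option we may explore is exporting the trained weights to a lighter runtime through the ultralytics export interface. The sketch below assumes the "best.pt" weights file, NCNN as the target format, and a reduced 320-pixel input size; none of these are final choices.

```python
# Possible speed-up path if YOLOv8 is too slow on the Pi: export the trained
# weights to NCNN, which generally runs well on ARM CPUs. "best.pt", the
# 320-pixel input size, and the sample image are placeholders.
from ultralytics import YOLO

model = YOLO("best.pt")
model.export(format="ncnn", imgsz=320)

# The exported model directory can be loaded back through the same interface.
ncnn_model = YOLO("best_ncnn_model")
results = ncnn_model("sample_frame.jpg")  # placeholder test image
```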
Team Status Report 3/8
As a team, we are continuing to work on our individual subsystems and are making sufficient progress. Currently, the most significant challenge we face is the lack of documentation for the eYs3D camera we are using, which could make it harder to integrate with the Raspberry Pi and the ML models. Furthermore, the addition of OCR means that more time needs to be allocated to the ML component of the device.
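As a first integration check while the documentation gap is sorted out, we may simply probe whether the eYs3D camera enumerates as a standard UVC/V4L2 device that plain OpenCV can read; the device indices in the sketch below are guesses, not values from the vendor documentation.

```python
# Probe the first few video device indices to see whether the eYs3D camera
# is readable through plain OpenCV capture. Index values are guesses.
import cv2

for index in range(4):
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    if ok:
        print(f"index {index}: got frame of shape {frame.shape}")
    else:
        print(f"index {index}: no frame")
    cap.release()
```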
Opalina’s Status Report 2/22
This week I dove deeper into finalizing and testing YOLOv8 for object detection. I researched ways to fine-tune the model to fit our purposes while reducing the potential for error caused by crowds, shaky cameras, advertisements, and similar distractions.
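A minimal sketch of what such a fine-tuning run might look like is below, with augmentations meant to approximate camera shake and cluttered backgrounds; the dataset config name and hyperparameter values are placeholders, not final choices.

```python
# Sketch of a YOLOv8 fine-tuning run with augmentations chosen to mimic
# crowded scenes and camera shake. "airport_signs.yaml" and all values
# are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from a pretrained nano checkpoint

model.train(
    data="airport_signs.yaml",  # assumed dataset config (train/val paths, classes)
    epochs=50,
    imgsz=640,
    degrees=5.0,    # small rotations to approximate a shaky camera
    translate=0.2,  # shifts so partially occluded signs still appear
    scale=0.5,      # scale jitter for signs at varying distances
    mosaic=1.0,     # mosaic augmentation adds background clutter (crowds, ads)
)
```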