Team Status Report 3/22

Currently, the individual components seem to be functioning adequately, including the OCR, the YOLOv8 model, and the camera. However, the camera is running slower than initially anticipated, posing a risk that the model will not run properly during integration. In the upcoming week, we aim to test this hypothesis and make changes to our models or equipment as necessary.
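One way to test the hypothesis is a quick capture-rate measurement like the sketch below, assuming the camera is exposed through OpenCV's VideoCapture; the device index and the 100-frame sample size are placeholders, not our actual setup.

```python
import time

import cv2

# Hypothetical check: measure the raw capture frame rate from the camera.
# Device index 0 and the 100-frame sample are assumptions for illustration.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Camera did not open")

num_frames = 100
captured = 0
start = time.time()
while captured < num_frames:
    ok, frame = cap.read()
    if ok:
        captured += 1
elapsed = time.time() - start
cap.release()

print(f"Capture rate: {captured / elapsed:.1f} FPS over {captured} frames")
```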

Opalina’s Status Report 3/22

This week, I fine-tuned the YOLO model and worked out how the OpenCV, YOLOv8, and OCR pieces will fit together. During the coming week, I hope to get the ML system fully functioning and at least partially integrated, ready to run on the Pi.
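A rough sketch of how the pieces might fit together is below, assuming the ultralytics YOLOv8 API and pytesseract for OCR; the weights path, camera index, and confidence threshold are placeholders rather than the final integration.

```python
import cv2
import pytesseract
from ultralytics import YOLO

# Assumed weights path and camera index; placeholders for illustration.
model = YOLO("runs/detect/train/weights/best.pt")
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    # Run YOLOv8 on the frame, then OCR each detected sign region.
    results = model(frame, conf=0.5)
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        crop = frame[y1:y2, x1:x2]
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray).strip()
        label = model.names[int(box.cls[0])]
        print(f"{label}: {text}")

cap.release()
```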

Daniel’s Status Report 3/15

Over the week, I focused on getting OpenCV to work. Now that I'm being sent the YOLO script for sign detection, I'll integrate it into what I have and produce an output that the LLM models can translate. Hopefully this will be done over the next few days, Tuesday at the latest.
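As a rough sketch of what that hand-off could look like, the snippet below turns detections into a plain-text prompt an LLM could translate; the (label, text, x_center) detection format and the prompt wording are assumptions, not the final interface.

```python
# Hypothetical hand-off: convert detection tuples into a text prompt
# an LLM could turn into a navigation instruction.
# The (label, text, x_center) detection format is an assumption.

def detections_to_prompt(detections, frame_width):
    lines = []
    for label, text, x_center in detections:
        side = "left" if x_center < frame_width / 2 else "right"
        desc = f"{label} sign reading '{text}'" if text else f"{label} sign"
        lines.append(f"- {desc} on the {side} of the frame")
    return "Signs detected:\n" + "\n".join(lines)

example = [("arrow", "Gates A1-A10", 200), ("exit", "", 900)]
print(detections_to_prompt(example, frame_width=1280))
```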

Team Status Report 3/15

Our most significant risk is the RPi not being powerful enough to handle the camera input, object detection processing, and output handling. The risk is that there will be latency issues, and the contingency plan as of now is to switch to offline LLM models to reduce the load on the RPi. Currently there have been no changes to the existing design of the system and no changes to the schedule; as of right now, we are on track.

Krrish’s Status Report 3/15

I finally got the camera working on the Raspberry Pi; however, it doesn't seem very stable. The AI hat might also not be compatible with it, and the depth sensing is not great. I've been looking at alternatives like the Raspberry Pi camera and the Intel RealSense camera. I will have to rework the CAD files according to the new camera.
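If we do evaluate the Intel RealSense, a quick depth sanity check along these lines (using the pyrealsense2 library; the stream resolution and sample pixel are assumptions) would tell us whether its depth sensing is usable for our purposes.

```python
import pyrealsense2 as rs

# Assumed 640x480 depth stream at 30 FPS; the sample pixel is the frame center.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    if depth:
        # Distance in meters at the center of the frame.
        print(f"Center depth: {depth.get_distance(320, 240):.2f} m")
finally:
    pipeline.stop()
```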

Opalina’s Status Report 3/15

The YOLO model is fully trained and functional on large airport datasets (with a variety of images). OCR is proving a little more difficult to integrate into the software subsystem, but I hope to have that figured out by the end of this week. The only potential issue we see right now is the model not running fast enough on the Pi, as local tests suggest it might be slower than anticipated.
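A simple per-frame inference benchmark like the sketch below (assuming the ultralytics API, a placeholder weights path, and a blank 640x640 frame in place of real camera input) is one way to compare laptop and Pi timings before integration.

```python
import time

import numpy as np
from ultralytics import YOLO

# Placeholder weights path; a blank 640x640 frame stands in for camera input.
model = YOLO("runs/detect/train/weights/best.pt")
frame = np.zeros((640, 640, 3), dtype=np.uint8)

# Warm-up run, then time repeated inferences.
model(frame, verbose=False)
runs = 20
start = time.time()
for _ in range(runs):
    model(frame, verbose=False)
avg_ms = (time.time() - start) / runs * 1000
print(f"Average inference time: {avg_ms:.1f} ms/frame")
```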

Daniel’s Status Report 3/8

I spent some time finishing up the LLM tweaks I mentioned in my last status report. I also started working with OpenCV for the object detection part of the project, since that is a hefty workload that will need multiple people to get working. So far, getting OpenCV to work has been difficult, but I should make solid headway over the next few days.

Team Status Report 3/8

As a team, we are continuing to work on our individual subsystems and we are making sufficient progress. Currently, the most significant challenge we are facing is the lack of documentation on the eYs3D camera we are using, which could make it more difficult to integrate it with the Raspberry Pi and the ML models. Furthermore, the addition of OCR means that more time needs to be allocated for the ML component of the device.

Opalina’s Status Report 3/8

Over the last two weeks, I began training YOLOv8 on one of the online airport datasets. I also realized the need for Optical Character Recognition (to interpret words and numbers in addition to arrows) and delved into ways to implement and integrate it into the software subsystem. By next week, I hope to have a functional YOLO model for our purposes and robust implementation plans for OpenCV and OCR.
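For reference, training with the ultralytics package looks roughly like the sketch below; the dataset YAML path, epoch count, and image size are placeholder values rather than our actual configuration.

```python
from ultralytics import YOLO

# Start from the pretrained nano checkpoint; the dataset YAML path,
# epoch count, and image size are placeholder values.
model = YOLO("yolov8n.pt")
model.train(data="airport_signs.yaml", epochs=50, imgsz=640)

# Evaluate on the validation split and export for deployment.
metrics = model.val()
model.export(format="onnx")
```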