Daniel’s Status Report 3/8

Spent some time finishing up the LLM tweaks I mentioned in my last status report. I also started working with OpenCV to help with the object detection part of the project, since that part carries a hefty workload and will need multiple people to get it working. So far, getting OpenCV to work has been difficult, but I should make solid headway over the next few days.
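As a record of where I am, here is a minimal sketch of the kind of capture-and-preprocess loop I have been experimenting with; the camera index, frame size, and preview window are placeholder choices, not our final pipeline.

```python
import cv2

# Open the default camera (index 0 is an assumption; the eYs3D camera may
# enumerate differently on the Raspberry Pi).
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open camera")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Downscale the frame before handing it to the detection model,
    # since the Pi has limited compute.
    small = cv2.resize(frame, (640, 480))
    cv2.imshow("preview", small)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```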

Team Status Report 3/8

As a team, we are continuing to work on our individual subsystems and are making sufficient progress. Currently, the most significant challenge we face is the lack of documentation for the eYs3D camera we are using, which could make it harder to integrate with the Raspberry Pi and the ML models. Furthermore, the addition of OCR means more time needs to be allocated to the ML component of the device.

Opalina’s Status Report 3/8

Over the last two weeks, I began training YOLOv8 on one of the online airport datasets. I also realized the need for Optical Character Recognition (to interpret words and numbers in addition to arrows) and delved into ways to implement and integrate it into the software subsystem. By next week, I hope to have a functional YOLO model for our purposes and robust implementation plans for OpenCV and OCR.
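As a rough sketch of what the training setup might look like with the ultralytics package (the dataset YAML name, model size, and epoch count below are placeholders, not our actual configuration):

```python
from ultralytics import YOLO

# Start from a pretrained nano checkpoint (model size is a placeholder;
# a larger variant may be chosen depending on performance on the Pi).
model = YOLO("yolov8n.pt")

# "airport_signs.yaml" is a hypothetical dataset config pointing at the
# online airport dataset; epochs and imgsz are placeholder values.
model.train(data="airport_signs.yaml", epochs=50, imgsz=640)

# Quick sanity check on a sample image after training.
results = model("sample_gate_sign.jpg")
results[0].show()
```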

Daniel’s Status Report 2/22

In addition to preparing the slides and presentation for the Design Report, I have been working on the audio feedback part of the system. The audio integration setup is done, and I have been implementing VOSK and Google TTS and testing them myself. I’m in the process of creating a simple script that repeats whatever I say back to me, which I’ll be able to present to my group at our meeting on Monday.
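A minimal sketch of that echo script, assuming the vosk, sounddevice, and gTTS packages and a downloaded VOSK model directory; the model path and mpg123 playback are placeholder choices, not final decisions:

```python
import json
import os
import queue

import sounddevice as sd
from vosk import Model, KaldiRecognizer
from gtts import gTTS

q = queue.Queue()

def callback(indata, frames, time, status):
    # Push raw microphone bytes into a queue for the recognizer.
    q.put(bytes(indata))

# "model" is a placeholder path to a downloaded VOSK model directory.
model = Model("model")
rec = KaldiRecognizer(model, 16000)

with sd.RawInputStream(samplerate=16000, blocksize=8000, dtype="int16",
                       channels=1, callback=callback):
    while True:
        data = q.get()
        if rec.AcceptWaveform(data):
            text = json.loads(rec.Result()).get("text", "")
            if text:
                # Speak the recognized text back with Google TTS.
                gTTS(text).save("reply.mp3")
                # Playback via mpg123 is an assumption; any player works.
                os.system("mpg123 reply.mp3")
```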

Team Status Report 2/22

We have decided on the form factor of the device: a waist-mounted camera system. We have started training the ML model and working on the CAD of the physical product and the UI components. Everyone is working individually right now, with a plan to integrate soon.

Opalina’s Status Report 2/22

This week I dove deeper into finalizing and testing YOLOv8 for object detection. I researched ways to fine-tune the model to fit our purposes while reducing potential sources of error such as crowds, shaky camera footage, and advertisements.
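Alongside fine-tuning, raising the inference confidence threshold is one simple knob for suppressing spurious detections; a rough sketch with the ultralytics API is below, where the checkpoint name, image name, and 0.5 threshold are placeholders to be tuned against our test footage.

```python
from ultralytics import YOLO

# "best.pt" is a placeholder for the fine-tuned checkpoint.
model = YOLO("best.pt")

# A higher confidence threshold is one way to suppress spurious detections
# from crowds, motion blur, and advertisements; 0.5 is a placeholder value.
results = model("terminal_frame.jpg", conf=0.5)

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    print(cls_name, float(box.conf))
```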

Krrish’s Status Report 2/15

I’ve ordered the remaining hardware components, including the AI accelerator board and micro SD card. I also collected the camera and Raspberry Pi. The camera is operational, and I successfully obtained an RGB video stream. However, I’m encountering issues with retrieving depth data and selecting different modes. While I found several Python wrappers for the camera, installation has been challenging. Additionally, I discovered a ROS package that may be necessary to access all its features. I plan to explore these options further in the coming week.
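Since the Python wrappers have not installed cleanly yet, one quick diagnostic I may try is probing the video device indices with OpenCV to see which streams the camera exposes; the idea that depth might show up as a separate video node is my assumption, not something the camera’s documentation confirms.

```python
import cv2

# Probe the first few video device indices to see which streams the eYs3D
# camera exposes. Whether depth appears as a separate device is an
# assumption to be verified against the vendor SDK or the ROS package.
for idx in range(4):
    cap = cv2.VideoCapture(idx)
    if not cap.isOpened():
        continue
    ok, frame = cap.read()
    if ok:
        h, w = frame.shape[:2]
        print(f"device {idx}: {w}x{h}, dtype={frame.dtype}")
    cap.release()
```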

Daniel’s Status Report 2/15

After dividing the work, it was agreed that I would handle the quantitative requirements, testing, verification, validation, and the implementation plan. So far, most of my time has been spent on the implementation plan. I created a list of items we will need to buy for the project and detailed a plan for implementing the software side. I’m currently leaning toward using YOLO, Google TTS, VOSK, and Python threading.
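As a rough sketch of the threading structure I’m leaning toward (the worker names and queue-based handoff are illustrative assumptions, not a settled design):

```python
import queue
import threading
import time

# Detections flow from the vision thread to the audio thread through a queue,
# so TTS playback never blocks frame processing. All names are placeholders.
detections = queue.Queue()

def vision_worker():
    while True:
        # Placeholder for: grab a frame, run YOLO, push any useful result.
        detections.put("gate B12 ahead")
        time.sleep(1.0)

def audio_worker():
    while True:
        message = detections.get()
        # Placeholder for: synthesize with Google TTS and play it back.
        print("speaking:", message)

threading.Thread(target=vision_worker, daemon=True).start()
threading.Thread(target=audio_worker, daemon=True).start()
time.sleep(5)  # let the demo run briefly
```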