Daniel’s Status Report 3/8

Spent some time finishing up the LLM tweaks I mentioned in my last status report. I also started working with OpenCV on the object-detection part of the project, since that piece is a hefty workload and will need multiple people to get it working. Getting OpenCV set up has been difficult so far, but I should make solid headway over the next few days.
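Whichever detection framework we end up on, we'll need to compare overlapping bounding boxes (e.g., to merge or filter duplicate detections). As a sketch of that bookkeeping, here's a small intersection-over-union helper; the corner-style `(x1, y1, x2, y2)` box format is an assumption for illustration, since OpenCV and YOLO tooling often use `(x, y, w, h)` instead.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).

    NOTE: the corner format is an assumption for this sketch; convert
    from (x, y, w, h) first if that's what the detector returns.
    """
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Clamp at zero so non-overlapping boxes don't produce negative area.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```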

Daniel’s Status Report 2/22

Along with preparing the slides and presentation for the Design Report, I have been working on the audio-feedback part of the system. The audio integration setup is done, and I've been implementing VOSK and Google TTS and testing them myself. I'm in the process of writing a simple script that repeats whatever I say back to me, which I'll be able to demo to my group at our meeting on Monday.
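The echo script is roughly shaped like the sketch below: VOSK transcribes microphone audio, and Google TTS (via the `gTTS` package) synthesizes the transcript back out. The model path, audio parameters, and output filename are assumptions, and playing the saved file back is left to an audio player; only the JSON-parsing helper runs without the third-party packages installed.

```python
import json


def transcript_from_result(result_json):
    """Pull the recognized text out of a VOSK result string.

    VOSK's Result()/FinalResult() return a JSON string with a "text"
    field; this helper parses it so the echo loop stays readable.
    """
    return json.loads(result_json).get("text", "")


def echo_loop():
    # Third-party pieces (pip install vosk gTTS pyaudio) plus a
    # downloaded VOSK model; the "model" folder path is an assumption.
    import pyaudio
    from gtts import gTTS
    from vosk import KaldiRecognizer, Model

    model = Model("model")
    rec = KaldiRecognizer(model, 16000)

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
                     input=True, frames_per_buffer=4000)
    while True:
        data = stream.read(4000)
        if rec.AcceptWaveform(data):
            text = transcript_from_result(rec.Result())
            if text:
                # Synthesize the heard phrase; playback of the file
                # is handled separately.
                gTTS(text=text, lang="en").save("echo.mp3")


if __name__ == "__main__":
    echo_loop()
```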

Daniel’s Status Report 2/15

After dividing the work, we agreed that I would handle the quantitative requirements, testing, verification, validation, and the implementation plan. So far, most of my time has been spent on the implementation plan: I created a list of items we would need to buy for the project and detailed how we would implement it on the software side. We're currently leaning towards YOLO, Google TTS, VOSK, and Python threading.
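To sanity-check the threading part of the plan, here's a minimal sketch of how the detection and audio stages could hand work off through a queue so speech output never blocks detection. The stub `detect_objects` and `speak` functions are placeholders I made up for illustration, standing in for the real YOLO and Google TTS calls.

```python
import queue
import threading


def detect_objects(frame):
    # Placeholder for the real YOLO inference call (assumption).
    return [f"object in {frame}"]


def speak(label):
    # Placeholder for the real Google TTS call (assumption).
    print(f"(speaking) {label}")


def detection_worker(frames, announcements):
    # Producer: run detection on each frame, push labels to the queue.
    for frame in frames:
        for label in detect_objects(frame):
            announcements.put(label)
    announcements.put(None)  # sentinel: no more work


def audio_worker(announcements, spoken):
    # Consumer: speak labels as they arrive on the other thread.
    while True:
        label = announcements.get()
        if label is None:
            break
        speak(label)
        spoken.append(label)


def run_pipeline(frames):
    announcements = queue.Queue()
    spoken = []
    t_detect = threading.Thread(target=detection_worker,
                                args=(frames, announcements))
    t_audio = threading.Thread(target=audio_worker,
                               args=(announcements, spoken))
    t_detect.start()
    t_audio.start()
    t_detect.join()
    t_audio.join()
    return spoken


run_pipeline(["frame1", "frame2"])
```

With one producer and one consumer on a FIFO queue, announcements come out in detection order, which is the behavior we'd want for the real feedback loop.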

Daniel Kim’s Status Report 2/8

After receiving feedback on our project during the presentation, we've agreed to meet next week to discuss what we have researched and discovered. As promised in the slides, I started reviewing existing AI object-detection models; so far the YOLOv8 model looks the most promising. I've also learned that at least 10k images would be needed, so that's something to keep in mind moving forward.