Daniel’s Status Report 2/22

In addition to preparing the slides and presentation for the Design Report, I have been working on the audio feedback part of the system. The audio integration setup is done, and I've been implementing and testing VOSK and Google TTS myself. I'm in the process of creating a simple script that just repeats whatever I say back to me, which I'll be able to present to my group at our meeting on Monday.
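A minimal sketch of the echo script's core logic, assuming VOSK final results arrive as JSON strings (as `vosk.KaldiRecognizer.Result()` returns them) and that playback is handled by a TTS callback; the microphone capture and the actual gTTS call are stubbed out here:

```python
# Hedged sketch of the "repeat what I say" script. Only the text-handling
# core is shown; audio capture and real TTS playback are stand-ins.
import json


def extract_text(vosk_result: str) -> str:
    """Pull the recognized phrase out of a VOSK final-result JSON string."""
    return json.loads(vosk_result).get("text", "")


def echo(vosk_result: str, speak) -> str:
    """Repeat the recognized phrase by passing it to a TTS callback."""
    text = extract_text(vosk_result)
    if text:
        speak(text)
    return text


if __name__ == "__main__":
    # In the real script, the JSON comes from vosk.KaldiRecognizer and
    # `speak` would wrap gTTS (gtts.gTTS(text).save(...)) plus playback.
    spoken = []
    echo('{"text": "find gate twelve"}', spoken.append)
    print(spoken)  # ['find gate twelve']
```

Keeping the recognition-to-speech hand-off behind a plain callback makes the loop easy to test without a microphone or speakers.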

Team Status Report 02/22

We have decided on the form factor of the device: a waist-mounted camera system. We have started on training the ML model, the CAD of the physical product, and the UI components. Everyone is working individually right now, with a plan to integrate soon.

Opalina’s Status Report 2/22

This week I dove deeper into finalizing and testing YOLO v8 for object detection. I researched ways to fine-tune the model to fit our purposes while eliminating potential sources of error such as crowds, shaky cameras, and advertisements.
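One way this kind of error suppression typically shows up downstream of the model is a post-detection filter that discards non-sign classes and low-confidence boxes. The class names and threshold below are illustrative assumptions, not our finalized values:

```python
# Hedged sketch: filter raw detections to signage only, dropping the
# low-confidence hits that crowds, ads, and camera shake tend to produce.
# SIGN_CLASSES and MIN_CONFIDENCE are hypothetical placeholders.
from dataclasses import dataclass

SIGN_CLASSES = {"gate_sign", "exit_sign", "restroom_sign"}  # assumed labels
MIN_CONFIDENCE = 0.6  # assumed threshold for suppressing noisy frames


@dataclass
class Detection:
    label: str
    confidence: float


def keep_signs(detections):
    """Keep only sign-class detections at or above the confidence floor."""
    return [d for d in detections
            if d.label in SIGN_CLASSES and d.confidence >= MIN_CONFIDENCE]
```

For example, a frame containing a confident gate sign, a low-confidence exit sign, and a person would be reduced to just the gate sign before any audio feedback is generated.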

Krrish’s Status Report 02/15

I’ve ordered the remaining hardware components, including the AI accelerator board and micro SD card. I also collected the camera and Raspberry Pi. The camera is operational, and I successfully obtained an RGB video stream. However, I’m encountering issues with retrieving depth data and selecting different modes. While I found several Python wrappers for the camera, installation has been challenging. Additionally, I discovered a ROS package that may be necessary to access all of its features. I plan to explore these options further in the coming week.

Daniel’s Status Report 2/15

After dividing the work, it was agreed that I would work on the quantitative requirements, testing, verification, validation, and the implementation plan. So far, most of my time has been spent on the implementation plan. I created a list of items that we would need to buy for our project, and detailed a plan for implementing the project on the software side. We are currently leaning toward using YOLO, Google TTS, VOSK, and Python threading.
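A minimal sketch of how the threading piece of that plan could fit together: one thread produces detection labels (a stand-in for the YOLO loop) and another consumes them for audio output (a stand-in for Google TTS), linked by a queue. All names here are illustrative, not a committed design:

```python
# Hedged sketch: detection-producer / audio-consumer threads joined by a
# FIFO queue, the layout our implementation plan is leaning toward.
import queue
import threading


def detection_worker(labels, out_q):
    """Stand-in for the YOLO loop: push each detected label to the queue."""
    for label in labels:
        out_q.put(label)
    out_q.put(None)  # sentinel: no more detections


def audio_worker(out_q, spoken):
    """Stand-in for TTS playback: drain labels until the sentinel arrives."""
    while True:
        label = out_q.get()
        if label is None:
            break
        spoken.append(label)  # real version: synthesize speech here


def run_pipeline(labels):
    """Run both workers concurrently and return the 'spoken' labels in order."""
    q = queue.Queue()
    spoken = []
    producer = threading.Thread(target=detection_worker, args=(labels, q))
    consumer = threading.Thread(target=audio_worker, args=(q, spoken))
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return spoken
```

The queue decouples the two rates, so a slow TTS utterance never blocks the camera loop, and the sentinel gives the consumer a clean shutdown path.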

Team Status Report 2/15

In terms of safety, GateGuard mitigates the risks of getting lost and of accidentally trespassing in restricted areas. Traditional methods, such as relying on assistance from airport staff, can be inconsistent and unreliable. In terms of social factors, we are making sure that GateGuard is as inconspicuous as possible for visual appeal. We are also making sure that the device promotes independent mobility, which can inspire confidence in our users. In terms of economics, we have to make sure that the device provides users a cost-effective alternative to other methods, such as hiring a travel assistant.

Opalina’s Status Report 2/15

  • I downloaded and tested YOLO v5 and OpenCV on my local machine, and researched ways to adjust and train the model to suit our purpose.
  • Tested the camera and its field of view, and used printed airport signs to test clarity and ease of detection. In these tests, I found that the user’s waist would be an appropriate location for the camera, providing ease of use as well as wide coverage.
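A quick geometric check of the coverage behind that waist-mount finding: the horizontal width a camera sees at distance d is 2·d·tan(FOV/2). The ~70° horizontal field of view used in the example is an assumed figure, not a measured spec of our camera:

```python
# Hedged sketch: back-of-envelope coverage from camera FOV. The 70-degree
# FOV below is an assumption for illustration, not our camera's datasheet value.
import math


def coverage_width(distance_m, fov_deg):
    """Horizontal scene width (meters) visible at a distance for a given FOV."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)


# With an assumed ~70 degree horizontal FOV, at 3 m the camera spans
# roughly 4.2 m of hallway, wide enough to catch overhead and wall signs.
```

This kind of estimate also helps explain the waist placement: a stable, forward-facing mount keeps that full width usable instead of losing it to tilt.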

Daniel Kim’s Status Report 2/8

After receiving feedback on our project during the presentation, we’ve agreed to meet next week to discuss what we have researched and discovered. As promised in the slides, I started reviewing existing AI object detection models. So far, the YOLO v8 model seems the most impressive. I’ve learned that at least 10k images would be needed for training, so this is something to keep in mind moving forward.

Opalina’s Status Report 2/8

I presented the project proposal, received feedback, and edited the scope of the project to ensure feasibility. Additionally, I researched OpenCV and methods to train our model to ensure accuracy and usability of the finished product.