Krrish’s Status Report 04/12

I assembled the cameras and the Raspberry Pi into their 3D printed housing. I have stereo depth working and am now integrating it with the YOLO model. I’m having trouble getting the model to compile so it can run on the accelerator: the toolchain requires installing a lot of drivers on an x86 machine, which is proving difficult since I don’t have one and need to go to campus. The software is also prone to a lot of issues, which I am in the process of working through.
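For reference, here is a minimal sketch of the kind of stereo depth pipeline I have running, using OpenCV’s semi-global block matching. The camera indices, focal length, and baseline are placeholders, and it assumes the two feeds are already rectified; the actual calibration values come from our setup.

```python
# Minimal stereo depth sketch: two rectified camera feeds -> disparity -> depth.
# Device indices and calibration constants below are placeholder assumptions.
import cv2
import numpy as np

left_cap = cv2.VideoCapture(0)   # left camera (index is an assumption)
right_cap = cv2.VideoCapture(1)  # right camera (index is an assumption)

# Semi-global block matching; numDisparities and blockSize need tuning per rig
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)

FOCAL_PX = 700.0    # placeholder focal length in pixels (from calibration)
BASELINE_M = 0.06   # placeholder baseline between cameras in meters

ok_l, left = left_cap.read()
ok_r, right = right_cap.read()
if ok_l and ok_r:
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    # SGBM returns fixed-point disparity scaled by 16
    disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0
    # depth = focal_length * baseline / disparity, guarding zero disparity
    with np.errstate(divide="ignore"):
        depth_m = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, 0)

left_cap.release()
right_cap.release()
```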

Krrish’s Status Report 03/15

I finally got the camera working on the Raspberry Pi; however, it doesn’t seem very stable. The AI HAT might also not be compatible with it, and the depth sensing is not great. I’ve been looking at alternatives such as the Raspberry Pi camera and the Intel RealSense camera. I will have to rework the CAD files to fit whichever new camera we choose.

Krrish’s Status Report 02/15

I’ve ordered the remaining hardware components, including the AI accelerator board and micro SD card. I also collected the camera and Raspberry Pi. The camera is operational, and I successfully obtained an RGB video stream. However, I’m encountering issues with retrieving depth data and selecting different modes. While I found several Python wrappers for the camera, installation has been challenging. Additionally, I discovered a ROS package that may be necessary to access all its features. I plan to explore these options further in the coming week.
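As a quick sanity check that the RGB stream works independently of the wrappers, something like the following has been enough; the device index is an assumption, and depth data and mode selection will likely still need the vendor SDK or the ROS driver rather than this generic interface.

```python
# Sanity-check the RGB stream over a generic V4L2/UVC interface.
# Depth frames and mode switching are NOT exposed this way.
import cv2

cap = cv2.VideoCapture(0)  # device index is an assumption
ok, frame = cap.read()
if ok:
    print("RGB frame:", frame.shape)  # e.g. (480, 640, 3)
else:
    print("No frame; camera may need its vendor driver or a different index")
cap.release()
```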

Krrish’s Status Report 02/08

I followed up with the disabilities department at Pitt about connecting us with students who could inform our research for the project. I also reached out to the Pennsylvania Association for the Blind. Our hope is to understand how people with visual impairments currently navigate airports so we can build a device that integrates with their lifestyle. We also want to understand what the hardest part of their journey is and how we can make it easier.