Nathan’s Status Report for 2/24/24

For the first half of this week, from Sunday to Wednesday morning, I was practicing my Design Proposal presentation. I wrote down meaningful talking points I wanted to hit on each slide, and then I made sure my slide transitions were smooth, without pausing or awkwardness. In addition, I practiced my timing in front of a live audience (my roommates). For the latter half of the week, I dove into the Luxonis DepthAI documentation to figure out which frameworks and functions would be useful for our application. I read numerous examples from Luxonis' depthai-experiments repo to find the relevant depth processing we need. Alongside this experimentation, I worked through the nuances of the dependencies needed to install the required packages. Currently, I'm facing an issue where I am unable to perform RGB color camera capture: applications crash unless they use only the MonoCamera node, which is odd. I've tried troubleshooting package versions and I'm still investigating this issue. The photos below show the depth application examples I got working that do not involve RGB color capture.

In addition, I made a repo to get started on a hello_world.py file, following Luxonis' tutorial that walks you through creating a pipeline and then starting image capture and image processing. The GitHub link to this repo is https://github.com/njzhu/HomeRover

My progress is slightly behind because of this small issue with the RGB camera, but once I figure it out, I hope to understand the examples and apply depth capture to objects in the rover's vicinity. Since the examples are made by Luxonis, they should be extremely helpful and informative.

In the next week, I hope to get my hello_world.py up and running successfully and to perform basic depth capture using the stereo depth perception technology on the Luxonis camera.
