Nathan’s Status Report for 2/24/24

For the first half of this week, from Sunday to Wednesday morning, I practiced my Design Proposal presentation. I wrote down the key talking points I wanted to hit on each slide and worked on making the slide transitions smooth, without pauses or awkwardness. I also practiced my timing in front of a live audience (my roommates). For the latter half of the week, I dove into the Luxonis DepthAI documentation to figure out which frameworks and functions would be useful for our application. I read numerous examples from Luxonis' depthai-experiments repo to find the depth processing relevant to our needs. Alongside this experimentation, I worked through the nuances of the dependencies needed to install the required packages. Currently, I'm facing an issue where I am unable to perform RGB color camera capture; applications only run without crashing when I use the MonoCamera nodes, which is odd. I've tried troubleshooting package versions and am still investigating the issue. The photos below show the depth application examples I got working that do not involve RGB color capture.
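
As a reference for the kind of depth processing those examples do, a stripped-down mono-camera stereo depth pipeline looks roughly like the sketch below. This is based on my reading of the depthai v2 API and the depthai-experiments code, not our final implementation.

```python
# Sketch of a MonoCamera-only stereo depth pipeline (no RGB capture involved),
# modeled on the style of the Luxonis depthai-experiments examples.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

# Left and right mono cameras feed the stereo depth node
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Stream the depth map back to the host over XLink
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getFrame()  # uint16 depth values in millimeters
        cv2.imshow("depth", cv2.convertScaleAbs(frame, alpha=0.03))
        if cv2.waitKey(1) == ord('q'):
            break
```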

In addition, I created a repo and started on a hello_world.py file, following the Luxonis tutorial that walks through creating a pipeline and then starting image capture and image processing. The GitHub link to this repo is https://github.com/njzhu/HomeRover
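
For my own notes, the pipeline that tutorial builds is along these lines: a ColorCamera node whose preview stream is sent back to the host. The sketch below is my rough reading of that structure, and it exercises exactly the RGB path that is currently crashing on my setup.

```python
# Rough sketch of a hello_world-style RGB preview pipeline (the path that currently
# fails for me); names follow the depthai v2 API as I understand it.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

cam_rgb = pipeline.create(dai.node.ColorCamera)
cam_rgb.setPreviewSize(300, 300)   # small preview; also the NN input size in the tutorial
cam_rgb.setInterleaved(False)

xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam_rgb.preview.link(xout_rgb.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb")
    while True:
        frame = q_rgb.get().getCvFrame()
        cv2.imshow("rgb preview", frame)
        if cv2.waitKey(1) == ord('q'):
            break
```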

My progress is slightly behind because of this issue with the RGB camera, but once I figure it out, I hope to work through the examples and apply depth capture to objects in the rover's vicinity. Since the examples are made by Luxonis, they should be extremely helpful and informative.

In the next week, I hope to get my hello_world.py up and running successfully and to perform basic depth capture using the stereo depth perception on the Luxonis camera.

Nathan’s Status Report for 2/17/24

For the first half of this week, I handled purchase and rental requests for our project's equipment through ECE Receiving. We initially put in a request for the Intel RealSense L515, but in order to let other teams use that equipment in the interest of fairness, we ended up purchasing the OAK-D Short Range camera, which arrived yesterday. Before the new camera came, I spent most of my time doing in-depth research and making an onboarding plan for the camera, which included gathering setup instructions and specific tutorials and examples for getting started with depth perception. In addition, I started researching how to translate our camera output (coordinates, depth) into kinematic motion and instructions to the arm, which involved preliminary research on kinematics. The second task I did while my teammates were doing CAD design was to start putting together our design review presentation. I incorporated ideas and drew inspiration from previous projects when determining what to include.

Once the camera came, I started the onboarding process, installing packages and getting acquainted with the software. Luxonis provides depthai-viewer, a GUI for viewing the camera's output, which comes with a preinstalled neural network. The output is shown below:
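
Separately from the viewer, a quick sanity check I can run from Python to confirm the host actually sees the OAK-D is something like the sketch below. The calls are from the depthai v2 API as I understand it; exact attribute names may differ slightly between package versions.

```python
# Quick check that the depthai install can see the OAK-D over USB (a sketch).
import depthai as dai

devices = dai.Device.getAllAvailableDevices()
if not devices:
    print("No OAK device found - check the USB connection")
for info in devices:
    print(f"Found device {info.getMxId()} ({info.state})")

with dai.Device() as device:
    # Lists which camera sockets (RGB, left, right) the device reports
    print("Connected cameras:", device.getConnectedCameras())
```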

Since the camera arrived only recently and I spent the majority of my time on the presentation, I am a little behind on experimenting with it. To catch up to the project schedule, now that the presentation is out of the way, I should be able to dedicate most of my time to working with the camera.

Next week, I hope to work with Hayden to establish the embedded software side of the kinematics. In addition, I hope to be able to output coordinates and a depth value from the camera and to finalize the nominal interface between the camera and the robotic arm.
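
To make that nominal interface concrete for myself, I am imagining something like the placeholder message below passed from the camera code to the arm code. The names and fields here are hypothetical and will certainly change once Hayden and I work out the kinematics side.

```python
# Hypothetical sketch of the camera-to-arm interface; fields and names are placeholders.
from dataclasses import dataclass

@dataclass
class CameraTarget:
    u: int           # pixel column of the detected object
    v: int           # pixel row of the detected object
    depth_mm: float  # depth at (u, v) from the stereo depth map

def to_arm_command(target: CameraTarget) -> dict:
    """Placeholder for the translation step we still need to define with the arm side."""
    return {"x_px": target.u, "y_px": target.v, "z_mm": target.depth_mm}
```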

Nathan’s Status Report for 2/10/24

At the beginning of this week, the majority of my work went into preparing the proposal presentation slides and meeting with Hayden to finalize the presentation and offer feedback. Personally, I made the use case and problem description slides, created the Gantt chart, and entered the tasks we need to accomplish into Asana, our project-management tool of choice.

After the presentation and after reviewing our TA feedback, I started to research the required technologies for my end – the depth camera or a LiDAR camera. Upon discussing with the team, we decided to acquire one of each, since the ECE inventory already has a LiDAR camera that we can access without cutting into our budget. I filled out a request with ECE Receiving, and I hope to receive the camera sometime next week so I can start playing with it. In the meantime, I am researching how to interface with the camera and any supporting technologies it needs to function properly. Since my area of focus is mostly on software, I am also starting to research how to translate what the camera sees and the information it gathers into the kinematic motion of the arm. This is still very much a work in progress.
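
As part of that preliminary kinematics reading, the simplest illustrative case is inverse kinematics for a two-link planar arm. The sketch below is just that textbook case; the link lengths and the arm model are placeholders and not our actual design.

```python
# Illustrative two-link planar inverse kinematics (textbook example, not our arm).
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Return (shoulder, elbow) joint angles in radians placing the end effector at (x, y)."""
    d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(d) > 1:
        raise ValueError("Target is out of reach for this arm")
    elbow = math.acos(d)  # one of the two possible elbow solutions
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```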

Currently, my progress is on schedule, since I am waiting for the equipment to arrive before I can dig into experimentation and research in earnest.

In the next week, I hope to receive the materials and start experimenting with them. I also hope to write some basic code for the camera and the RPi and perform initial setup for both, so we have a good foundation for further experimentation in the coming weeks.