Cynthia’s Status Report 4/12

Accomplishments this week:
This week I spent most of my time debugging the fine-tuning code in an attempt to make our system faster. The changes did not perform as expected, so further debugging will be needed, but the current model still works well, just with some lag. I also worked with Kaya to integrate wall detection with our code and send the correct response to the user.

Reflection on schedule:
We are on schedule, but because of the lag, our project will likely have lower accuracy than our design requirement specifies.

Plans for next week:
Testing and verification, further debugging, and starting our final report if we have time.

Verification:
I will focus on hazard and stair detection testing.
I will test the model (after removing the frame display, which has been slowing the program down) by analyzing the distance/location accuracy of detected objects, whether hazards vs. non-hazards are consistently identified or not identified as expected, and, with Maya, the overall latency of the system from detection to user response. I will perform the same analysis for the stairs hazard, with the added measurement of how accurately the stairs class is classified. Note that I will not be testing the accuracy of specific object classifications, because the response to an object that poses a hazard does not depend on what the specific object is, but on its overall position and size.
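As a rough illustration of the latency test, here is a minimal headless timing sketch; the weight path, camera index, and trigger_haptics() helper are placeholder assumptions, not our actual code:

    # Sketch: time detection-to-response latency with no frame display.
    import time

    import cv2
    from ultralytics import YOLO

    def trigger_haptics(results):
        """Placeholder for the real haptic-response call."""
        pass

    model = YOLO("finetuned_yolov8.pt")  # placeholder weight path
    cap = cv2.VideoCapture(0)            # placeholder camera index

    latencies = []
    while len(latencies) < 100:
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        results = model(frame, verbose=False)  # inference only, no cv2.imshow
        trigger_haptics(results)
        latencies.append(time.perf_counter() - start)

    cap.release()
    print(f"mean latency: {sum(latencies) / len(latencies):.3f} s (target < 1 s)")
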
For hazard detection, I will perform an equal number of tests on large indoor items (such as tables and chairs), smaller items that should be detected (such as a laptop), and insignificant objects (such as flooring changes) to ensure false positives are not occurring. I will record true positives, false positives, and false negatives (missed hazards), aiming for a true positive rate of at least 90% and a false positive rate of no more than 10% across these tests. I will also measure the latency from visual detection to haptic response with Maya, expecting a response time of less than 1 second for real-time feedback.
For stair detection, I will perform tests on different staircases, single-step elevation changes, and flat surfaces used as negative controls (to ensure stairs are not falsely detected). Each group will be tested under varied indoor lighting and angles. The stair classification model will be evaluated on binary detection (stair vs. not stair). I aim to achieve at least 90% stair detection accuracy and 84% accuracy in distinguishing stairs from walls and other obstacles.
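For bookkeeping across both test groups, a small helper like the following could compute the rates we are targeting (the counts shown are made-up placeholders, not results):

    # Sketch: compute the rates we are targeting from recorded test counts.
    def rates(true_pos, false_pos, false_neg, true_neg):
        """True positive rate (recall) and false positive rate."""
        tpr = true_pos / (true_pos + false_neg)
        fpr = false_pos / (false_pos + true_neg)
        return tpr, fpr

    # Placeholder counts, not measured results:
    tpr, fpr = rates(true_pos=27, false_pos=2, false_neg=3, true_neg=28)
    print(f"TPR: {tpr:.0%} (target >= 90%), FPR: {fpr:.0%} (target <= 10%)")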

Cynthia’s Status Report 3/29

Accomplishments this week:
Finished debugging and integrated my fine-tuned YOLOv8 model, which now classifies stairs along with objects (image included below; note that our Jetson has to stay plugged in for now, so we couldn't bring it to actual stairs, and I emulated stairs with the cart in the image).  Helped Kaya fix pyrealsense a little; once it worked, I used his code that gets distances at grid points so that the code that runs the model and creates bounding boxes now also gets the distance to the center point of each object.  I also worked with Maya to get the haptics working from the Jetson.  Lastly, I wrote the code that integrates/triggers the haptics and decides what action to suggest, based on the objects detected, their location, and their distance; this currently works, but the suggested actions are not all correct yet.
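For illustration, the center-point distance lookup is roughly the following; this is a minimal sketch assuming a working pyrealsense2 pipeline, and the weight path is a placeholder, not our actual code:

    # Sketch: depth at each detected object's bounding-box center.
    import numpy as np
    import pyrealsense2 as rs
    from ultralytics import YOLO

    model = YOLO("finetuned_yolov8.pt")  # placeholder weight path

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth)  # default depth profile
    config.enable_stream(rs.stream.color)  # default RGB profile
    pipeline.start(config)
    align = rs.align(rs.stream.color)  # line depth pixels up with the RGB image

    try:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color = np.asanyarray(frames.get_color_frame().get_data())

        for box in model(color, verbose=False)[0].boxes:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
            dist = depth_frame.get_distance(cx, cy)  # meters at the center
            print(f"class {int(box.cls)}: {dist:.2f} m away")
    finally:
        pipeline.stop()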

Reflection on schedule:
I accomplished a lot this week and caught up to where we planned to be.

Plans for next week:
Work with Kaya to get wall detection working.  Fix the recommended-action decision-making code.

Cynthia’s Status Report 3/22

Accomplishments this week:
After the correct versions of the libraries we needed were installed (after a lot of trouble and many hours spent on this), I worked with Kaya to get the pre-trained ML model working on the live RGB stream (see photos below).  I additionally had to change the code I previously wrote to work around pyrealsense2, which we decided does not work on the Jetson and which we had been depending on for its depth stream.  Recently, I started working with Kaya to get depth data and figure out how to use that data in my Python scripts instead of just reading it as terminal command output.
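One stopgap idea is to wrap a command-line depth utility and parse its output from Python; this is only a sketch, and rs_distance here is a hypothetical binary that prints one distance reading per line, not an actual librealsense tool:

    # Sketch: read depth values from a CLI tool's stdout in Python.
    # "rs_distance" is a hypothetical binary printing one float per line.
    import subprocess

    proc = subprocess.Popen(["./rs_distance"], stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        try:
            meters = float(line.strip())
        except ValueError:
            continue  # skip non-numeric status lines
        print(f"distance: {meters:.2f} m")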

Reflection on schedule:
I think I am slightly behind our schedule for my portion, but that is because we switched to using the camera on the Jetson earlier than planned: I am not able to use the library I need on the desktops and cannot use the camera on my laptop, so I was only able to start testing my code one day ago and have not included stairs in our model yet.  Additionally, I was sick (and still am) and was able to work less than planned.  Overall, because of the rearranging, we are still on schedule as a group, but I need to continue making good progress on our model moving forward.

Plans for next week:
Write code to fine-tune the model on a dataset of stairs, and work further on getting distance measurements without pyrealsense2.
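For reference, the fine-tuning step with Ultralytics looks roughly like this; the dataset YAML, epoch count, and image size are placeholder assumptions, not settled choices:

    # Sketch: fine-tune YOLOv8 on a stairs dataset (placeholder settings).
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # start from pre-trained weights
    model.train(
        data="stairs.yaml",  # dataset config listing images, labels, classes
        epochs=50,           # placeholder; tune based on validation results
        imgsz=640,
    )
    metrics = model.val()    # evaluate on the validation split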

Cynthia’s Status Report 3/8

Accomplishments:

This week I set up the L515 camera and obtained the depth stream along with the RGB stream. This took longer than expected because of compatibility issues among my computer's OS version, the SDK version, and the camera, which is outdated. A picture of the obtained streams is attached below. Additionally, I spent time improving our design report draft and adding more diagrams.
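For context, grabbing both streams from the SDK's Python bindings looks roughly like the following; this is a minimal sketch with default stream profiles, not our exact setup code:

    # Sketch: open the L515's depth and RGB streams with pyrealsense2.
    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth)  # default depth profile
    config.enable_stream(rs.stream.color)  # default RGB profile
    pipeline.start(config)

    try:
        frames = pipeline.wait_for_frames()
        depth = np.asanyarray(frames.get_depth_frame().get_data())
        color = np.asanyarray(frames.get_color_frame().get_data())
        print(depth.shape, color.shape)  # confirm both streams arrive
    finally:
        pipeline.stop()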

Progress:

We are on schedule now that we have finished the design report and have both our Jetson and L515 camera set up.

Future deliverables:

The week following spring break, I plan on writing code with the RealSense Software Development Kit to obtain a constant stream of the distances from our camera to a grid of points I specify in the frame, and to start writing the depth-filtering code.
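As a rough sketch of that grid idea (the grid spacing and loop structure are assumptions about the plan, not finished code):

    # Sketch: stream distances at a fixed grid of pixels in the depth frame.
    import pyrealsense2 as rs

    GRID_STEP = 80  # placeholder spacing between sampled pixels

    pipeline = rs.pipeline()
    pipeline.start()  # default configuration includes a depth stream

    try:
        while True:
            depth_frame = pipeline.wait_for_frames().get_depth_frame()
            if not depth_frame:
                continue
            w, h = depth_frame.get_width(), depth_frame.get_height()
            grid = {
                (x, y): depth_frame.get_distance(x, y)  # meters at (x, y)
                for y in range(GRID_STEP // 2, h, GRID_STEP)
                for x in range(GRID_STEP // 2, w, GRID_STEP)
            }
            print(grid)
    finally:
        pipeline.stop()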

Cynthia’s Status Report 2/22

Accomplishments:

This week we finished a draft of our written design review. The content I wrote specifically included the design requirements, the software diagram and its description, and the software system implementation. I redesigned our software system as I did more research and identified possible risks with our previous plan, and I wrote some code to start obtaining depth maps.

Progress:

We are ahead of schedule with the written design report, since we will be finished with our draft by tomorrow, the 23rd. We are on schedule with the start of our project and initialization, but we risk falling behind this week or next if our tasks take longer than expected or we encounter errors with our technology. We got a later start than expected with the hardware because it took longer to receive a few necessary parts we ordered, such as the microSD card.

Future deliverables:

This week we will be editing our draft of the design proposal. Hopefully by next week we will have our devices initialized and will be able to successfully obtain data from our L515 camera, start on the code to obtain and interpret depth mappings, and begin working past the Jetson initialization.

Cynthia’s Status Report for 2/15

Accomplishments:

This week, I worked mainly on our Design Review presentation.  I worked on the Use Case Requirements, Technical Challenges, and Implementation Plan.  Specifically, I researched the differences between using CV for obstacle detection versus creating a simple algorithm that uses the depth and distance detection of the L515 LiDAR camera for object classification, and I added this comparison to our presentation.  Additionally, I created the Top Level Design flow chart and the diagram for the cane.  I am also preparing to present our slides on Monday.  Lastly, I researched the main libraries I will use to implement the object detection and gather the needed distance information: OpenCV, the TensorFlow object detection API, RealsenseCamera, and pyRealSense2.

Progress:

Our project is on schedule.  We are meeting on Sunday to start initialization.

Future Deliverables:

I will present this week, then we will be working on the written report for the Design Review.  Additionally, we should have our Jetson Nano initialized by the end of the week and possibly pseudocode for the CV.

Cynthia’s Status Report for 2/8

This week my two main focuses were the quantitative aspects of the proposal presentation slides and research into the possible microcontrollers we could use.  For the presentation, I mainly worked on the quantitative Use-Case Requirements, Solution Approach, and Metrics sections, researching what we should base values such as detection range on.  For the microcontrollers, I compared the Raspberry Pi 4, Raspberry Pi 5, NVIDIA Jetson Orin Nano, NVIDIA Jetson Xavier, NVIDIA Jetson Nano 4GB, and NVIDIA Jetson Nano 2GB, and determined that the NVIDIA Jetson Orin Nano is ideal for our project given our performance, weight, power supply, and power efficiency requirements.  We then received this microcontroller from the inventory.

I am on schedule, since our main plan this week was to obtain the individual parts needed for our project.

In the next week I hope to learn how the data from the LiDAR camera is formatted and determine the libraries I will use to process and analyze the photo stream.  I additionally plan to start researching existing object detection code and drafting general pseudocode for my algorithm.