Cynthia’s Status Report 4/26

Accomplishments this week:
Worked on debugging, testing, and the final presentation/report, and implemented one last feature: detecting a step down.

Reflection on schedule:
On schedule!

Plans for next week:
Work on the poster, the final report, and our final category of testing (differentiation and latency tests).

Cynthia’s Status Report 4/19

Accomplishments this week:
I retrained our object detection model with changed fine-tuning parameters to improve performance, such as a higher starting learning rate for the learning rate scheduler and settings that lower memory usage. I also applied more complex data transformations to augment part of our dataset so the model performs better under different indoor lighting, editing properties like saturation and shadows and adding rotations/flips. Additionally, I debugged with my teammates, helped Maya with woodworking, started testing with Kaya, and started on our final documentation.
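
For reference, here is a minimal sketch of this kind of retraining run, assuming the Ultralytics YOLOv8 training API; the dataset file name and the exact hyperparameter values are illustrative placeholders, not our final configuration.

```python
# Sketch of a fine-tuning run with a raised starting learning rate, a
# smaller batch to lower memory usage, and lighting/orientation augmentation.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from a pretrained checkpoint

model.train(
    data="hazards.yaml",  # hypothetical dataset config (objects + stairs)
    epochs=50,
    imgsz=640,
    batch=8,        # smaller batch size to lower memory usage
    lr0=0.01,       # increased starting learning rate for the scheduler
    lrf=0.01,       # final learning rate fraction used by the scheduler
    hsv_s=0.7,      # saturation jitter for varied indoor lighting
    hsv_v=0.4,      # brightness jitter (approximates shadow differences)
    degrees=10.0,   # small random rotations
    fliplr=0.5,     # horizontal flips
    flipud=0.1,     # occasional vertical flips
)
```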

Reflection on schedule:
On schedule!

Plans for next week:
Finish testing and the poster.

New tools/knowledge:
As I worked on our project, the main knowledge I had to learn was deep learning techniques for fine-tuning and improving model performance, along with how to integrate our peripherals. The learning strategies I used were applying knowledge from the deep learning class I am currently taking and going through forum posts, such as Stack Overflow threads and YOLO support posts about fine-tuning problems similar to ours. Additionally, I learned how to efficiently go through technologies’ documentation and support websites to learn integration techniques for technology I had not used before.

Team Status Report 4/19

Risks:
Lower accuracy than anticipated, but no large risks!

Changes:
Moved to a fine-tuned model with an augmented portion of the dataset and different training parameters.

Cynthia’s Status Report 4/12

Accomplishments this week:
This week I spent most of my time debugging the fine-tuning code meant to make our system faster; it ended up not performing as expected, so further debugging will be needed (the current model still works well, just slightly laggy). I also worked with Kaya to integrate wall detection with our code and get the correct response sent to the user.

Reflection on schedule:
We are on schedule, but because of the lag, our project will likely have lower accuracy than our design requirement specifies.

Plans for next week:
Testing and verification, further debugging, and starting our final report if we have time.

Verification:
I will focus on hazard and stair detection testing.
I will test the model (after removing the display of the frames, which has been making the program slower) by analyzing the distance/location accuracy of detected objects, whether hazards vs. non-hazards are consistently identified or ignored as expected, and the overall latency of the system from detection to user response with Maya. I will perform the same analysis for the stairs hazard, with the addition of measuring how accurate classification of the stairs class is. Note that I will not be testing the accuracy of specific object classifications, because the response to an object that poses a hazard does not depend on what specific object it is, but on its overall position and size.
For hazard detection, I will perform an equal number of tests on large indoor items (such as tables and chairs), smaller items that should be detected (such as a laptop), and insignificant objects (such as flooring changes) to ensure false positives do not occur. I will record true positives, false positives, and false negatives (missed hazards), aiming for at least a 90% true positive rate and no more than a 10% false positive rate across these tests. I will also measure the latency from visual detection to haptic response with Maya, expecting a response time of less than 1 second for real-time feedback.
For stair detection, I will run tests on different staircases, single-step elevation changes, and flat surfaces used as negative controls (to ensure stairs are not falsely detected). Each group will be tested under varied indoor lighting and angles. The stair classification model will be evaluated on binary detection (stair vs. not stair). I aim to achieve at least 90% stair detection accuracy and 84% accuracy in distinguishing stairs from walls and other obstacles.
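
To make these pass/fail criteria concrete, below is a small sketch of how the trial logs could be tallied against the targets above; the record format and the sample entries are hypothetical.

```python
# Hypothetical tally of hazard-detection trials. Each trial records whether
# a hazard was present, whether one was detected, and the measured
# detection-to-haptic latency (None when nothing fired).
trials = [
    {"hazard_present": True,  "detected": True,  "latency_s": 0.6},
    {"hazard_present": True,  "detected": False, "latency_s": None},
    {"hazard_present": False, "detected": False, "latency_s": None},
    # ... one entry per test run
]

tp = sum(t["hazard_present"] and t["detected"] for t in trials)
fn = sum(t["hazard_present"] and not t["detected"] for t in trials)
fp = sum(not t["hazard_present"] and t["detected"] for t in trials)
tn = sum(not t["hazard_present"] and not t["detected"] for t in trials)

tpr = tp / (tp + fn) if (tp + fn) else 0.0
fpr = fp / (fp + tn) if (fp + tn) else 0.0
latencies = [t["latency_s"] for t in trials if t["latency_s"] is not None]

print(f"True positive rate: {tpr:.0%} (target: at least 90%)")
print(f"False positive rate: {fpr:.0%} (target: at most 10%)")
if latencies:
    print(f"Mean latency: {sum(latencies) / len(latencies):.2f} s (target: under 1 s)")
```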

Team Status Report 3/29

Risks:
The only risk is that the depth stream from the LiDAR camera sometimes gives us wrong data or fails in random holes of the frame, reporting objects as 0 meters away. This will likely lower our accuracy, but hopefully will not interfere with too much.
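
A simple mitigation is to treat 0-meter readings as invalid before using a frame; here is a minimal sketch, assuming the depth frame is available as a NumPy array of distances in meters.

```python
import numpy as np
from typing import Optional

def valid_min_distance(depth: np.ndarray) -> Optional[float]:
    """Closest distance in the frame, ignoring the 0 m hole readings."""
    valid = depth[depth > 0]  # drop the invalid zero-depth holes
    return float(valid.min()) if valid.size else None
```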

Changes:
We changed our plan back to the original plan of using pyrealsense2.

Cynthia’s Status Report 3/29

Accomplishments this week:
Finished debugging and integrated my fine-tuned YOLOv8 model, which now classifies stairs along with objects (image included below; note that our Jetson has to be plugged in now, so we could not bring it to actual stairs, but I emulated stairs with the cart in the image). I helped Kaya fix pyrealsense a little; once it worked, I used his code that gets distances from grid points so that the code that runs the model and creates bounding boxes now gets the distance to the center point of each object too. I also worked with Maya to get the haptics working from the Jetson. Lastly, I wrote the code to integrate/trigger the haptics and decide what action to suggest with the object detection model, based on the objects detected, their locations, and their distances. This currently works, but the suggested actions are not all correct yet.
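
As a rough sketch of that integration step, assuming Ultralytics detection results and a pyrealsense2 depth frame; the distance thresholds and the trigger_haptics helper are hypothetical placeholders rather than our actual decision logic.

```python
def trigger_haptics(action):
    """Placeholder for the haptics driver; prints instead of pulsing motors."""
    print("haptic cue:", action)

def respond_to_detections(results, depth_frame):
    """For each detected object, read depth at the box center, pick an action."""
    for box in results[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        dist = depth_frame.get_distance(cx, cy)  # pyrealsense2 depth, meters
        if dist == 0:
            continue  # skip invalid zero-depth readings
        cls_name = results[0].names[int(box.cls)]
        if cls_name == "stairs" and dist < 2.0:
            trigger_haptics("stairs_ahead")
        elif dist < 1.0:
            # a close hazard: suggest moving away from the side it occupies
            side = "move_left" if cx > depth_frame.get_width() // 2 else "move_right"
            trigger_haptics(side)
```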

Reflection on schedule:
I believe I did a lot this week and caught up to what we planned.

Plans for next week:
Work with Kaya to get the wall detection working. Fix the recommended-action decision-making code.

Cynthia’s Status Report 3/22

Accomplishments this week:
I worked with Kaya, after the correct versions of the libraries we needed were finally installed (after a lot of trouble and many hours spent on this), to get the pre-trained ML model working on the live RGB stream (see photos below). I additionally had to change the code I previously wrote to work around pyrealsense2, which we decided does not work on the Jetson and which we had been depending on for its depth stream. Recently, I started working with Kaya to get depth data and figure out how to use that data in my Python scripts instead of just getting it as terminal command output.
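
The inference loop for this looks roughly like the sketch below; OpenCV's capture is shown as a stand-in video source (since pyrealsense2 was not usable on the Jetson at this point), and the device index and model weights are illustrative.

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained detection model
cap = cv2.VideoCapture(0)   # stand-in live RGB source

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)         # run detection on the current frame
    annotated = results[0].plot()  # draw the predicted bounding boxes
    cv2.imshow("detections", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```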

Reflection on schedule:
I think I am slightly behind where our schedule put my portion, but that is because we switched to using the camera on the Jetson earlier than planned: I am not able to use the library I need on the desktops and cannot use the camera on my laptop, so I was only able to start testing my code a day ago and have not included stairs in our model yet. Additionally, I was sick (and still am) and was able to work less than planned. Overall, because of the rearranging, we are still on schedule as a group, but I need to continue making good progress on our model moving forward.

Plans for next week:
Write code to train the model on a dataset that incorporates stairs, and work further on getting distance measurements without pyrealsense2.

Cynthia’s Status Report

Accomplishments this week: I worked with my team to get the RealSense SDK visualizer showing the RGB and depth streams on the Jetson, and I wrote the code to get the same streams through a Python script instead of the visualizer.
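
For reference, the core of such a script looks roughly like this, assuming pyrealsense2 imports correctly on the Jetson; the resolutions and framerates shown are illustrative.

```python
import numpy as np
import pyrealsense2 as rs

# Configure and start both streams from the camera.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    depth = np.asanyarray(depth_frame.get_data())  # 16-bit depth image
    color = np.asanyarray(color_frame.get_data())  # BGR color image
finally:
    pipeline.stop()
```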

Reflection on schedule: We are a little behind schedule for the software, partly because we are doing things in a different order than on our schedule. We ended up doing the integration of the LiDAR camera with the Jetson this week, which we got working, but the library versions we need to run our Python code still need work: the visualizer displays the data stream on the Jetson, but the Python code to view the stream does not run yet because of the libraries.

Plans for next week: Over the next week I will be catching up on my object detection code by writing the code to obtain bounding boxes and getting it to work on the Jetson Nano.

Cynthia’s Status Report 3/8

Accomplishments:

This week I set up the L515 camera and obtained the depth stream along with the RGB stream. This took longer than expected because of compatibility issues between my computer's software version, the SDK version, and the outdated camera. A picture of the obtained streams is attached below. Additionally, I spent time improving our design report draft and adding more diagrams.

Progress:

We are on schedule now that we have finished the design report and have both our Jetson and L515 camera set up.

Future deliverables:

The week following spring break, I plan on writing code with the RealSense Software Development Kit to obtain a constant stream of distances from our camera to a grid of points I specify in the frame, and to start writing the depth filtering code.
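
A minimal sketch of that grid sampling, assuming a pyrealsense2 pipeline is already delivering depth frames; the 4x4 grid size is an arbitrary example.

```python
def grid_distances(depth_frame, rows=4, cols=4):
    """Sample the depth frame at an evenly spaced grid of points (meters)."""
    w, h = depth_frame.get_width(), depth_frame.get_height()
    grid = []
    for r in range(rows):
        for c in range(cols):
            x = int((c + 0.5) * w / cols)  # horizontal center of grid cell
            y = int((r + 0.5) * h / rows)  # vertical center of grid cell
            grid.append(((x, y), depth_frame.get_distance(x, y)))
    return grid
```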

Cynthia’s Status Report 2/22

Accomplishments:

This week we finished a draft of our written design review. The content I wrote specifically was the design requirements, the software diagram and its description, and the software system implementation. I redesigned our software system as I did more research and identified possible risks with our previous plan, and I wrote some code to start obtaining depth maps.

Progress:

We are ahead of schedule with the written design report because we will be finished with our draft by tomorrow, the 23rd. We are on schedule with the start of our project and initialization, but risk falling behind this week or next if our tasks take longer than expected or we encounter errors with our technology. We had a later start than expected with our technology because it took longer than planned to receive a few necessary parts we ordered, such as the microSD card.

Future deliverables:

This week we will be editing our draft of the design proposal, and hopefully by next week we will have our devices initialized, be able to successfully obtain data from our L515 camera, start on the code to obtain and interpret depth mappings, and work further past the Jetson initialization.