This week, we made an important modification to our project as a team: after speaking with Professor Mukherjee, we decided to use an iRobot as our base rather than building a robot from scratch.


Mae and I then refined the pipeline for our software system based on feedback from the TAs and the professor. We planned our overall software system diagram, taking into account the added complexity of obstacle avoidance. This new requirement raised several questions we talked through: how to detect obstacles, for example by training the ML model on other types of trash; where to scan for obstacles in the robot's field of vision; and how to keep tracking a detected bottle once it is labeled so that we don't confuse it with an obstacle. We also brainstormed how to calculate the angle between the robot and the object, and I looked into some resources that explain it.
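Since the angle calculation is still at the brainstorming stage, here is a minimal sketch of one common approach: under a pinhole camera model, the horizontal angle to an object follows from its pixel offset from the image center and the camera's horizontal field of view. The FOV and resolution values below are assumptions for illustration, not our final camera settings.

```python
import math

HFOV_DEG = 69.0    # assumed horizontal field of view (degrees); check camera intrinsics
IMAGE_WIDTH = 640  # assumed stream width in pixels

def angle_to_object(pixel_x: float, image_width: int = IMAGE_WIDTH,
                    hfov_deg: float = HFOV_DEG) -> float:
    """Approximate horizontal angle (degrees) from the camera's optical
    axis to an object centered at pixel_x, using tan(theta) = offset / f."""
    # Focal length in pixels, derived from the field of view.
    focal_px = (image_width / 2) / math.tan(math.radians(hfov_deg / 2))
    offset = pixel_x - image_width / 2
    return math.degrees(math.atan2(offset, focal_px))

# An object at the image center is straight ahead:
print(angle_to_object(320))            # 0.0
# An object at the right edge sits at half the field of view:
print(round(angle_to_object(640), 1))  # 34.5
```

The detected bottle's bounding-box center would supply `pixel_x`, and the resulting angle becomes the turn command for the iRobot base.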


I spent time this week setting up the Nvidia Jetson and running the RealSense SDK on it. I got scripts working that log the camera feed and display depth points using the pyrealsense2 library, and used the RealSense Viewer to survey the parameters available for the camera and how I should modify them to best fit our use case. One specific parameter I intend to change is the focus range, since we will need to maximize detection accuracy within the 5-foot radius. I also started experimenting with OpenCV tracking algorithms in order to implement our obstacle avoidance algorithm. To test tracking an object, I used the CSRT tracker (a Discriminative Correlation Filter with Channel and Spatial Reliability), chosen because it is more accurate than the alternatives. We'll be able to use either this tracker or another one chosen for speed to track an identified object, since the tracker takes a bounding box as input and our model will output a selected bounding box.

I believe I am still a little behind schedule, since I had issues setting up the Jetson and there is no power in Hammerschlag this weekend. Next week I intend to combine the CV tracking algorithm with the RealSense depth points in order to get distance readings for the tracked object. I will also work on extracting more data from the RealSense to implement our obstacle avoidance algorithm, where we'd need to identify RealSense depth readings that don't correspond to our tracked target.
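A rough sketch of how that combination could look, assuming we have a depth array (aligned to the color frame, in meters) and the tracker's bounding box; the function name and the 1-meter obstacle threshold are placeholders, not settled design decisions:

```python
import numpy as np

def target_distance_and_obstacles(depth_m, bbox, obstacle_thresh_m=1.0):
    """depth_m: HxW array of depth readings in meters (0 = no reading).
    bbox: (x, y, w, h) bounding box from the tracker.
    Returns the median depth inside the box (distance to the tracked
    bottle) and a mask of close readings outside the box that may be
    obstacles."""
    x, y, w, h = bbox
    inside = np.zeros(depth_m.shape, dtype=bool)
    inside[y:y + h, x:x + w] = True
    # Median is more robust than mean against depth dropouts at edges.
    target = depth_m[inside & (depth_m > 0)]
    dist = float(np.median(target)) if target.size else None
    # Anything close to the robot that is NOT the tracked target is a
    # candidate obstacle for the avoidance algorithm.
    obstacles = (~inside) & (depth_m > 0) & (depth_m < obstacle_thresh_m)
    return dist, obstacles

depth = np.full((4, 4), 2.0)   # toy depth frame: everything 2 m away
depth[3, 3] = 0.5              # one close reading outside the target box
dist, obs = target_distance_and_obstacles(depth, (0, 0, 2, 2))
print(dist)       # 2.0
print(obs.sum())  # 1
```

In the real pipeline, `depth_m` would come from a pyrealsense2 depth frame aligned to the color stream, and `bbox` from the tracker's `update()` call.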



