Status Update: Week 7

Joseph:

This week I worked with Karen to fine-tune the robot's movements. We got the turning mechanism very accurate, but we are now finding an issue with the distance-traveled mechanism: while the robot travels the correct distance, the turns it makes to take the pictures cause it to slip backward about an inch. We will work on fixing this next week. This week I was also able to get file sharing working, so I can now remotely access files on the Pi. Next week I will help Karen further fine-tune the movements and work on the ultrasonic sensors.

Karen:

This week I finalized the bot's movements. With the new space we were given, we were able to conduct more precise unit testing: using masking tape, we measured out a grid to test for accuracy. The final result is that our bot comes within half an inch of its destination. For now I think we will stop fine-tuning here and move on to integrating the sensors. We have also attached the camera and found example code that we plan to use as guidance.

For next week I plan to incorporate the camera so that when the bot goes to the three points of interest it will take three still pictures for full coverage. I also want to build the circuit for the sensors so we can start testing code for obstacle avoidance.
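
A minimal sketch of the three-shot capture plan: at each point of interest, rotate the bot to three headings 120 degrees apart so the stills cover a full 360 degrees. The function and file names here are illustrative placeholders, not our actual camera code.

```python
# Hypothetical helpers for the three-picture full-coverage plan.
# Assumes evenly spaced headings (360 / 3 = 120 degrees apart).

def capture_headings(start_heading_deg, shots=3):
    """Return evenly spaced headings (degrees) for full coverage."""
    step = 360 / shots
    return [(start_heading_deg + i * step) % 360 for i in range(shots)]

def shot_filenames(point_id, headings):
    """One still per heading, named by point of interest and heading."""
    return ["poi{}_h{:03.0f}.jpg".format(point_id, h) for h in headings]

headings = capture_headings(90)   # bot arrives at the point facing 90 degrees
print(headings)                   # [90.0, 210.0, 330.0]
print(shot_filenames(1, headings))
```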

Manini:

This week we prepared for our demo on Wednesday afternoon. For the demo we presented the Roomba's basic movement to three points entered through the UI, and the baseline model running on AWS GPU resources with test images. This week I was also able to speak with Nikhil, who helped me identify a way to create a COCO dataset with just the human class, so that this newly created data can be used to train our own Faster R-CNN model.

This weekend and next week I hope to fine-tune the model and have our own version up and running by the end of the upcoming week. We also need to test the model's performance and see how it works on real-world images. Finally, we need to start planning and writing the script that sends the images taken by the camera module to AWS, since the model's inference will be completed on AWS.
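
A hedged sketch of trimming a COCO-format annotation dict down to just the "person" class so the filtered set can be used for training. It assumes the standard COCO JSON layout (top-level images/annotations/categories lists); it is not the exact approach Nikhil described, just one plausible version.

```python
# Keep only person annotations, the images that contain them, and the
# person category entry. Assumes standard COCO-style dictionaries.

def keep_person_only(coco):
    person_ids = {c["id"] for c in coco["categories"] if c["name"] == "person"}
    anns = [a for a in coco["annotations"] if a["category_id"] in person_ids]
    image_ids = {a["image_id"] for a in anns}
    return {
        "images": [im for im in coco["images"] if im["id"] in image_ids],
        "annotations": anns,
        "categories": [c for c in coco["categories"] if c["id"] in person_ids],
    }

# Tiny made-up example, not real COCO data:
sample = {
    "images": [{"id": 1}, {"id": 2}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 1},   # person
        {"id": 11, "image_id": 2, "category_id": 18},  # dog
    ],
    "categories": [{"id": 1, "name": "person"}, {"id": 18, "name": "dog"}],
}
filtered = keep_person_only(sample)
print(len(filtered["annotations"]))  # 1
```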

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risks currently are ensuring that the trained model identifies only human objects and making good headway on obstacle avoidance. We are working extra hours outside of class, and with regard to the model training, we have spoken with Nikhil, who helped us a great deal.
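
For the obstacle-avoidance work, a minimal sketch of the distance check we would test, assuming an HC-SR04-style ultrasonic sensor: the echo pulse width gives the round-trip time, so distance is (time × speed of sound) / 2. The 20 cm stop threshold is a placeholder, not a tuned value.

```python
# Convert an ultrasonic echo pulse width to obstacle distance.
# Assumes HC-SR04-style timing; we have not built the circuit yet.

SPEED_OF_SOUND_CM_S = 34300  # approximate, at room temperature

def echo_to_distance_cm(echo_pulse_s):
    """Echo pulse width (seconds) -> obstacle distance (cm), halving the round trip."""
    return echo_pulse_s * SPEED_OF_SOUND_CM_S / 2

def should_stop(echo_pulse_s, threshold_cm=20):
    """Placeholder avoidance rule: stop when an obstacle is closer than the threshold."""
    return echo_to_distance_cm(echo_pulse_s) < threshold_cm

print(echo_to_distance_cm(0.001))  # ~17 cm away
print(should_stop(0.001))          # True: inside the 20 cm threshold
```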

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

