Status Update: Week 10

Joseph:

This week I integrated my UI with Manini’s model, so the UI can now send images to the cloud where inference runs. I also cleaned up the UI code to make it more robust, worked with Karen on further fine-tuning the robot’s movements, and worked out how we are going to approach demo day. Finally, I contributed to the poster and the presentation slides.

Next week the goal is to test everything. There are a few known movement bugs that should be relatively easy to fix, and the hope is that we find all of them before demo day.

Karen:

This week I worked on fine-tuning the bot’s movement and handling edge cases. There are still a few more to deal with; ideally, by next week we will have addressed all of them. We also created a 4×4 grid in one of the rooms we have been using and measured the accuracy of the movement so far. For the final demo we anticipate running the bot on a 10×10 grid, so we realized we will need to leave enough time for setup beforehand, as well as have enough obstacles on hand to show obstacle avoidance. The second half of the week was spent on the final presentation slides, since our presentation is on Monday.
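The accuracy measurement described above can be scripted. A minimal sketch (the function name, cell size, and sample data are all hypothetical, not our actual test harness) that computes the mean position error between the grid cells the bot was commanded to reach and where it actually stopped:

```python
import math

def mean_position_error(commanded, actual, cell_size_m=0.5):
    """Mean Euclidean error (meters) between commanded and actual
    grid positions, given (row, col) pairs and an assumed cell size."""
    assert len(commanded) == len(actual) and commanded
    total = 0.0
    for (cr, cc), (ar, ac) in zip(commanded, actual):
        dx = (ar - cr) * cell_size_m
        dy = (ac - cc) * cell_size_m
        total += math.hypot(dx, dy)
    return total / len(commanded)

# Example: four commanded cells on the 4x4 grid; the bot drifted one
# cell short on the third run.
commanded = [(0, 0), (1, 2), (3, 3), (2, 1)]
actual    = [(0, 0), (1, 2), (3, 2), (2, 1)]
print(mean_position_error(commanded, actual))  # 0.125
```

Averaging over repeated runs like this gives a single number to compare before and after each round of movement fine-tuning.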

For next week, I would like to have handled all the edge cases and tested the integrated system before the demo. I would also like to test the system with more than three people, so we can be sure we are ready for variations in the environment.

Manini:

This week I completed and tested my pipeline script. The script uses scp to copy the images saved by Joseph’s UI to the AWS instance, runs inference there, and pushes the detection images back to the local computer. We also had our demo on Wednesday. On Friday, Joseph and I worked on integrating the Roomba UI/script with the model script. This weekend we will be working on the project poster and the final presentation slides.
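The pipeline described above can be sketched roughly as follows. All hostnames, paths, and script names here are hypothetical placeholders, not the actual instance details; the sketch just shows the three-step shape of the script (upload images, run inference remotely, pull the detection images back) using subprocess:

```python
import subprocess

HOST = "ubuntu@ec2-example.amazonaws.com"  # hypothetical AWS instance

def pipeline_commands(local_dir="captures", remote_dir="~/images",
                      results_dir="detections"):
    """Build the three commands: upload the UI's saved images, run
    inference on the instance, and download the detection images."""
    return [
        ["scp", "-r", local_dir, f"{HOST}:{remote_dir}"],
        ["ssh", HOST, f"python3 run_inference.py --input {remote_dir}"],
        ["scp", "-r", f"{HOST}:{remote_dir}/out", results_dir],
    ]

def run_pipeline():
    for cmd in pipeline_commands():
        subprocess.run(cmd, check=True)  # stop on the first failure
```

Keeping the commands in a list like this makes the pipeline easy to test and to rerun step by step when debugging the robot-to-cloud link.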

Next week we will meet to test the integration and to identify edge/trouble cases for the model. Based on those results, I will decide whether to train the model for a few more epochs, given the budget, or simply tune the bounding-box confidence threshold. I will also modify the inference script to produce a count of humans to supplement the detection images.
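The human count mentioned above could be as simple as filtering the model's detections by class label and confidence. A minimal sketch, where the detection tuple format, the "person" label, and the default threshold are assumptions rather than the actual model's output format:

```python
def count_humans(detections, conf_threshold=0.5):
    """Count detections labeled 'person' whose confidence clears the
    threshold; each detection is a (label, confidence, bbox) tuple."""
    return sum(1 for label, conf, _bbox in detections
               if label == "person" and conf >= conf_threshold)

# Example detections from one frame (hypothetical values).
frame = [
    ("person", 0.91, (10, 20, 50, 120)),
    ("person", 0.42, (60, 25, 95, 118)),   # below threshold, not counted
    ("chair",  0.88, (100, 40, 160, 110)),
]
print(count_humans(frame))  # 1
```

Raising or lowering `conf_threshold` is the same knob as the bounding-box threshold tuning mentioned above, so the count and the detection images can be adjusted together.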

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

As of right now, movement is still a bit of an issue, but we hope that further fine-tuning will fix everything. We have a list of known bugs, and it is now just a matter of sitting down and fixing them. Another risk is the communication between the robot and the cloud, which is relatively untested right now; we have plans to test it fully.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
