Status Update: Week 9

Joseph:

This week I worked with Karen on obstacle detection and avoidance. We got detection working, then built a basic avoidance routine that can handle obstacles roughly 1 ft by 1 ft. The bot successfully avoids the obstacle and continues along its original path. We also worked on building a box to keep all of the peripherals secured on the bot.
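Roughly, the detour looks something like the sketch below; the Robot interface (turn/drive methods) is a stand-in for illustration, not our actual driver code, and the clearance value is a placeholder:

OBSTACLE_SIZE_FT = 1.0      # rough footprint of the obstacles we tested with
CLEARANCE_FT = 0.5          # extra berth so the Roomba's body clears the box

def avoid_and_resume(bot, forward_ft=1.0):
    """Sidestep an obstacle directly ahead, then rejoin the original heading."""
    detour = OBSTACLE_SIZE_FT + CLEARANCE_FT

    # 1. Turn away from the obstacle and move far enough to clear it sideways.
    bot.turn_degrees(90)
    bot.drive_ft(detour)

    # 2. Drive forward past the obstacle along a parallel line.
    bot.turn_degrees(-90)
    bot.drive_ft(forward_ft + OBSTACLE_SIZE_FT)

    # 3. Slide back onto the original path and restore the original heading.
    bot.turn_degrees(-90)
    bot.drive_ft(detour)
    bot.turn_degrees(90)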

Next week the goal is to create a smarter avoidance system and account for the edge cases (e.g., what happens when the bot needs to avoid an obstacle but is at the edge of the room), and to further fine-tune the movement. I would also like to integrate the UI with Manini's model by then.

Karen:

This week I helped Joseph with obstacle detection and avoidance. We borrowed breadboards and built the small circuit needed to attach the sensors to the Pi. From there we moved on to basic detection and fine-tuned the best stopping distance. After this, we worked on avoidance, which was more difficult because it entailed not only avoiding the object completely but also continuing on the original path. This brought up many edge cases, such as when the avoidance causes the bot to go past the point of interest. Overall, we completed basic detection and avoidance, but we are currently making sure we have covered all the cases so that there are no surprises. Afterwards, we built a box to hold the peripherals (the Pi, circuit, camera, and sensors) on top of the Roomba. This gives the camera a higher vantage point and secures the components so they do not slide around when the bot moves.
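As a rough illustration of the detection step, assuming an HC-SR04-style ultrasonic sensor wired to the Pi through the breadboard circuit (the pin numbers and stopping distance below are placeholders, not our tuned values):

import time
import RPi.GPIO as GPIO

TRIG_PIN = 23          # hypothetical GPIO pins
ECHO_PIN = 24
STOP_DISTANCE_CM = 30  # the tuned stopping distance goes here

def read_distance_cm():
    """Trigger one ultrasonic ping and return the measured distance in cm."""
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)            # 10 microsecond trigger pulse
    GPIO.output(TRIG_PIN, False)

    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        pulse_end = time.time()

    # Sound travels ~34300 cm/s; divide by 2 for the round trip.
    return (pulse_end - pulse_start) * 34300 / 2

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

if read_distance_cm() < STOP_DISTANCE_CM:
    print("Obstacle detected -- stop and begin the avoidance routine")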

Next week, I would like to start working on integration. We have the different subsystems working, but we need to make sure the entire pipeline is put together. This includes uploading the pictures taken by the bot to the S3 bucket so that the model can run and produce results. After integration is taken care of, I want to go back to fine-tuning the different parts.

Manini:

This week I completed re-training our Faster R-CNN model. I tested the model's performance; the model identifies only humans in images. After a few iterations of testing, I noticed that the model would often group multiple humans into one large bounding box. The individual humans were still being identified, but the group was being identified as well. One solution that could alleviate this problem was removing the largest anchor box dimension. However, I realized this would not work well for our use case: since our robot takes images from a close angle, the larger anchor box is needed to identify humans in close proximity. The second solution was to increase the classification threshold to 0.75 (originally 0.5). This worked well because it removed the large group detections and also removed erroneous classifications of chair legs and other objects as humans.
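As an illustration of that post-processing step, assuming torchvision-style detection outputs (a dict of boxes, labels, and scores; the person label index depends on the label map):

SCORE_THRESHOLD = 0.75   # raised from 0.5 to drop group boxes and false positives
PERSON_LABEL = 1         # class index for "person" in the assumed label map

def filter_detections(output):
    """Keep only person detections whose confidence clears the threshold."""
    keep = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if label == PERSON_LABEL and score >= SCORE_THRESHOLD:
            keep.append((box, float(score)))
    return keep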

This week I also wrote a script to pull images from S3 buckets, connect to an EC2 instance, and dump the resulting images into different S3 buckets. I used boto3 for this script. It completes the pipeline between the hardware and software. This weekend I will be testing the pipeline script to make sure it actually works, and fine-tuning the model if necessary.
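A minimal sketch of that pipeline with boto3 is below; the bucket names, prefix, and run_model() helper are placeholders, and the real script also handles the EC2 connection:

import boto3

INPUT_BUCKET = "bot-images"        # hypothetical bucket names
OUTPUT_BUCKET = "bot-results"

s3 = boto3.client("s3")

def run_model(image_path):
    """Placeholder for the Faster R-CNN inference step."""
    return image_path

def process_new_images(prefix=""):
    """Download each input image, run the model, and upload the result."""
    listing = s3.list_objects_v2(Bucket=INPUT_BUCKET, Prefix=prefix)
    for obj in listing.get("Contents", []):
        key = obj["Key"]
        local_path = "/tmp/" + key.replace("/", "_")
        s3.download_file(INPUT_BUCKET, key, local_path)

        result_path = run_model(local_path)

        s3.upload_file(result_path, OUTPUT_BUCKET, key)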

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

At this point the most significant risk is localization near the edges of the grid. We have movement between different points working; however, when the bot moves toward the edges we have a hard time keeping track of the bounds. To address this, we will expand our localization scheme to handle these edge cases.
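One possible piece of that expansion is to clamp every target cell to the known grid bounds before issuing a move; the grid dimensions below are placeholders:

GRID_WIDTH = 10
GRID_HEIGHT = 10

def clamp_to_grid(x, y):
    """Keep a target cell inside the grid so the bot never tracks an out-of-bounds pose."""
    x = max(0, min(x, GRID_WIDTH - 1))
    y = max(0, min(y, GRID_HEIGHT - 1))
    return x, y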

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have had to put some constraints on which points of interest can be picked on the UI. This is because of the size of the Roomba: it is larger than we thought, so when it avoids obstacles we have to give it a wider berth than originally planned. This requires the points of interest to be farther apart from one another so that we do not miss them. This is a minor change, so no other changes are needed.
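On the UI side, this can be enforced with a simple spacing check like the sketch below; the minimum spacing value is illustrative, not our final number:

import math

MIN_POI_SPACING_FT = 2.0   # driven by the Roomba's avoidance berth

def poi_allowed(new_point, existing_points):
    """Return True if the new point keeps the required spacing from all selected points."""
    for px, py in existing_points:
        if math.hypot(new_point[0] - px, new_point[1] - py) < MIN_POI_SPACING_FT:
            return False
    return True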
