Status Update: Week 11

Joseph:

This week I worked with Manini to complete the connection between the UI and the AWS instance. The UI can now send images to the cloud, receive the results, and display them for users to see. I also worked with Karen to resolve a few movement bugs, such as what to do when an obstacle is at a point of interest. Finally, I made the UI more robust against bugs and worked on the poster and presentation.

Next week I plan to keep working right up until the demo and also to work on the report.

Karen:

This week I worked on fixing the last of the edge cases in the bot's movement. This included handling the situation where an obstacle sits on top of a point of interest: the protocol now stops short of the point, takes the pictures there, and adjusts the following waypoint so that the remaining distance is correct. I also handled the case where avoiding an obstacle would force the bot off the grid. We also tested the integration of all the subsystems so that every part runs off of one laptop from one script. This week I also worked on the final presentation slides.
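
For illustration, here is a simplified sketch of the kind of waypoint adjustment described above, written in Python. It assumes straight-line legs between grid points; the function name, distances, and the collinearity assumption are hypothetical and not our actual implementation.

    # Hypothetical illustration: if the sensor forces a stop `shortfall_mm`
    # before a blocked point of interest, take the pictures where we are and
    # fold the shortfall into the next leg. This assumes the next leg
    # continues straight through the blocked point, which is a simplification.
    def adjust_next_leg(planned_legs_mm, blocked_index, shortfall_mm):
        legs = list(planned_legs_mm)
        if blocked_index + 1 < len(legs):
            legs[blocked_index + 1] += shortfall_mm
        return legs

    # Example: three 1000 mm legs, stopped 200 mm short of point 1.
    print(adjust_next_leg([1000, 1000, 1000], 1, 200))  # -> [1000, 1000, 1200]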

Next week I plan on doing last-minute testing before the demo and then working on the final report.

Manini:

This week I ran more evaluation tests on my model and worked with Joseph to fix the integration issues between my pipeline script and his UI script. We also transferred all the code and scripts for the robot and UI onto my laptop since the demo will be running on my laptop. This week we also finished the project poster and had our final presentation on Monday.

This weekend we will finish integration and testing. Karen and Joseph will also finish the edge-case handling, and we will run multiple end-to-end tests to ensure that our project is demo-ready for Monday.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

None

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

None

Status Update: Week 10

Joseph:

This week I worked on integrating my UI with Manini's model so that it can now send images to the cloud where inference happens. I also cleaned up the UI code to make it more robust. I worked with Karen on further fine-tuning the robot's movements and on planning how we will approach demo day. I also worked on the poster and the presentation slides.

Next week the goal is to test everything. There are a few known bugs with movement that can be fixed relatively easily and the hope is that we find all of them before the day of the demo.

Karen:

This week I worked on fine-tuning the movement of the bot and dealing with edge cases. There are still a few more to handle, so ideally by next week we will have addressed all of them. We also laid out a 4 x 4 grid in one of the rooms we have been using and measured the accuracy of the movement so far. For the final demo we anticipate running the bot on a 10 x 10 grid, so we will have to ensure that we have enough time for setup beforehand, as well as enough obstacles ready to show obstacle avoidance. The second half of the week was spent working on the final presentation slides since our presentation is on Monday.

For next week, I would like to have hit all the edge cases and tested the integrated system before the demo. I would also like to test the system with more than 3 people so that we can ensure we are ready for variations in the environment.

Manini:

This week I completed and tested my pipeline script. The script scp's the images saved from Joseph's UI to the AWS instance, runs inference, and pushes the detection images back to the local computer. This week we also had our demo on Wednesday. On Friday, Joseph and I worked to integrate the Roomba UI script with the model script. This weekend we will be working on the project poster and the final presentation slides.
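
For reference, a minimal sketch of what this push/run/pull loop could look like from the laptop side, assuming key-based SSH access to the instance. The hostname, directory paths, and remote script name below are placeholders, not our actual values.

    import subprocess

    # Placeholder locations; the real host and paths differ.
    REMOTE = "ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com"
    LOCAL_CAPTURE_DIR = "captures/"
    REMOTE_INPUT_DIR = "~/sos_bot/input/"
    REMOTE_OUTPUT_DIR = "~/sos_bot/output/"
    LOCAL_RESULT_DIR = "detections/"

    def push_images():
        """Copy freshly captured images up to the EC2 instance."""
        subprocess.run(["scp", "-r", LOCAL_CAPTURE_DIR,
                        f"{REMOTE}:{REMOTE_INPUT_DIR}"], check=True)

    def run_inference():
        """Run the (placeholder) inference script remotely over SSH."""
        subprocess.run(["ssh", REMOTE, "python3 ~/sos_bot/run_inference.py"],
                       check=True)

    def pull_detections():
        """Copy the annotated detection images back for the UI to display."""
        subprocess.run(["scp", "-r", f"{REMOTE}:{REMOTE_OUTPUT_DIR}",
                        LOCAL_RESULT_DIR], check=True)

    if __name__ == "__main__":
        push_images()
        run_inference()
        pull_detections()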

Next week we will be meeting to test the integration and to identify edge/trouble cases for the model. From these results, I will determine whether I should train the model for a few more epochs, given the budget, or simply adjust the bounding box threshold. I will also be modifying the inference script to produce a count of humans to supplement the detection images.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

As of right now, movement is still a bit of an issue, but hopefully after further fine-tuning we will be able to fix everything. We have a list of known bugs, and it is now just a matter of sitting down and fixing them. Another issue we are facing is the communication between the robot and the cloud. This is relatively untested right now, but we have plans to test it fully.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Status Update: Week 9

Joseph:

This week I worked with Karen on obstacle detection and avoidance, and we were able to get it working. We then implemented a basic avoidance routine and were able to avoid obstacles of size 1 ft by 1 ft; the bot successfully avoided the obstacle and continued down its original path. We also worked on creating a box on top of the bot to keep all of the peripherals secured.

Next week the goal is to create a smarter avoidance system, account for the edge cases (what happens when the bot needs to avoid an obstacle but is at the edge of the room, etc.), and further fine-tune the movement. I would also like to have the UI integrated with Manini's model by then as well.

Karen:

This week I helped Joseph with obstacle detection and avoidance. We were able to borrow breadboards and build the small circuit needed to attach the sensors to the Pi. From there we moved on to basic detection and fine-tuning the best stopping distance. After this, we worked on avoidance, which was more difficult because it entailed not only avoiding the object completely but also continuing on the original path. This also brought up many edge cases, such as when the avoidance caused the bot to go past the point of interest. Overall we were able to get basic detection and avoidance completed, but we are currently trying to make sure that we have taken care of all cases so that there are no surprises. After this, we made a box to hold the peripherals (the Pi, circuit, camera, and sensors) on top of the Roomba. This was made to give the camera a higher position as well as to secure the components so that they do not slide around when the bot moves.
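
For reference, a minimal sketch of reading an HC-SR04-style ultrasonic sensor from the Pi with RPi.GPIO. The pin numbers and the stopping distance are illustrative assumptions, not our actual wiring or tuned values.

    import time
    import RPi.GPIO as GPIO

    TRIG = 23               # assumed trigger pin (BCM numbering)
    ECHO = 24               # assumed echo pin
    STOP_DISTANCE_CM = 30   # illustrative stopping distance

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def read_distance_cm():
        """Trigger one ultrasonic ping and return the distance in cm."""
        GPIO.output(TRIG, True)
        time.sleep(0.00001)          # 10 microsecond trigger pulse
        GPIO.output(TRIG, False)

        start = end = time.time()
        while GPIO.input(ECHO) == 0:
            start = time.time()
        while GPIO.input(ECHO) == 1:
            end = time.time()

        # Sound travels ~34300 cm/s; halve for the round trip.
        return (end - start) * 34300 / 2

    def obstacle_ahead():
        return read_distance_cm() < STOP_DISTANCE_CM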

Next week, I would like to start working on integration. We have the different subsystems working, but we need to make sure the entire pipeline is put together. This includes connecting the pictures taken by the bot to the S3 bucket so that the model can run and produce results. After integration is taken care of, I want to go back to fine-tuning the different parts.

Manini:

This week I completed re-training our Faster R-CNN model. I tested the model's performance, and the model now identifies only humans in images. After a few iterations of testing, I noticed that the model would often group multiple humans into one large bounding box: the individual humans were being identified, but the group was being identified as well. One possible fix was removing the largest anchor box dimension, but I realized that would not work well for our use case; since our robot takes images from a close angle, the larger anchor box is needed to identify humans in close proximity. The second solution was to increase the classification threshold to 0.75 (originally 0.5). This worked well because it removed the large group detection and also removed erroneous classifications of chair legs and other objects as humans.
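
A minimal sketch of this kind of score-threshold post-processing, assuming the inference code returns per-detection (box, label, score) tuples; the names and data layout are hypothetical.

    SCORE_THRESHOLD = 0.75  # raised from 0.5 to suppress grouped/erroneous boxes

    def filter_detections(detections, threshold=SCORE_THRESHOLD):
        """Keep only human detections whose confidence exceeds the threshold."""
        return [
            (box, label, score)
            for box, label, score in detections
            if label == "person" and score >= threshold
        ]

    def count_humans(detections):
        """A simple human count that could supplement the detection images."""
        return len(filter_detections(detections))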

This week I also wrote a script to pull images from S3 buckets, connect to an EC2 instance, and dump the resulting images into different S3 buckets. I used boto3 for this script, which completes the pipeline between the hardware and software. This weekend I will be testing the pipeline script to make sure it actually works, and fine-tuning the model if necessary.
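
A minimal boto3 sketch of the bucket side of that pipeline; the bucket names and key layout are assumptions for illustration, not our actual configuration.

    import boto3

    INPUT_BUCKET = "sos-bot-captures"     # placeholder bucket names
    OUTPUT_BUCKET = "sos-bot-detections"

    s3 = boto3.client("s3")

    def download_captures(local_dir="input/"):
        """Pull every captured image from the input bucket onto the instance."""
        resp = s3.list_objects_v2(Bucket=INPUT_BUCKET)
        for obj in resp.get("Contents", []):
            key = obj["Key"]
            s3.download_file(INPUT_BUCKET, key, local_dir + key.split("/")[-1])

    def upload_detections(local_paths):
        """Push the annotated detection images to the output bucket."""
        for path in local_paths:
            s3.upload_file(path, OUTPUT_BUCKET, path.split("/")[-1])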

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

At this point the most important risk is localization at the edges of the grid. We have movement between the different points down; however, when the bot moves to the edges we have a hard time keeping track of the bounds. To take care of this, we will have to expand our localization scheme to handle these edge cases.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have had to put some constraints on which points of interest can be picked on the UI. This is because of the size of the Roomba: it is larger than we thought, so when it avoids obstacles we have to give it a wider berth than originally planned. This in turn requires that the points of interest be farther apart from each other so that we do not miss them. This is a minor change, so no other changes need to be made.

Status Update: Week 8

Joseph:

This week I worked with Karen to fine-tune the robot's movements as well as implement picture taking. We are able to take multiple pictures at each interest point, although we are finding that they do not give full coverage of the room we are in. I was also able to implement software that takes the captured pictures and displays them on the UI.

Next week I plan on implementing the sensors as well as working with Karen to finish obstacle avoidance. I will also work with Manini to integrate the model with the UI when it is ready.

Karen:

As explained above, this week I focused on the image-taking portion of the bot. We were able to connect the camera and incorporate the commands needed to take a still picture. Originally, we had specified that 3 images would be taken; however, it seems that this does not provide full coverage, so I decreased the turn angle and am now taking six images of the room. The captured images are saved to the Raspberry Pi desktop, allowing the remote UI to access the files. The next step regarding the images is to mount a pole of some sort and place the camera module on it, so that the images are taken from a higher vantage point to account for varying human heights.
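
A minimal sketch of the capture loop, assuming the picamera library; the save path and the turn_fn rotation helper are placeholders standing in for our actual movement code.

    import time
    from picamera import PiCamera

    NUM_IMAGES = 6                  # six shots per point of interest
    TURN_ANGLE = 360 / NUM_IMAGES   # 60 degrees between shots

    camera = PiCamera()

    def capture_at_point(point_id, turn_fn):
        """Take NUM_IMAGES still pictures at a point of interest.

        turn_fn(angle) is a placeholder for the movement code that rotates
        the Roomba in place by the given angle.
        """
        for i in range(NUM_IMAGES):
            camera.capture(f"/home/pi/Desktop/point{point_id}_img{i}.jpg")
            turn_fn(TURN_ANGLE)
            time.sleep(1)  # let the bot settle before the next shot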

Next week, I would like to help Joseph incorporate the sensors on the bot and test the obstacle detection script. Ideally we would like to get obstacle detection completed by next week so that the following week we can fine-tune obstacle avoidance.

Manini:

This week I downloaded all of the required COCO data (train, validation) to the EC2 instance. I also modified the script to identify only the human and background classes (binary classification), and I was able to train the model for one epoch. This weekend I will complete the training for all epochs, evaluate the performance of the model, and retrain the model with the bias.

One issue I ran into this week was the memory capacity of my EC2 instance and the compute time required to retrain the entire model. In order to train the model with 8 GPUs and have enough memory per GPU for the given model and data size, I would need to upgrade to a P3 instance. However, this machine is very expensive, so I am currently doing some calculations to find a way to serve our needs while staying within budget.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk currently is ensuring that the trained model only identifies human objects and that we make good headway on obstacle avoidance. We are working extra hours outside of class, and with regards to the model training, we have spoken to Nikhil, who helped us out a lot with this. Another risk is that the multiple pictures we take do not give us full coverage. This can be fixed by taking more pictures, or at least enough that no human is missed.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There have been no changes since the last update.

Status Update: Week 7

Joseph:

This week I worked with Karen to fine-tune the robot's movements. We were able to get a very accurate turning mechanism down, but we are now finding that there may be some issue with the distance-traveled mechanism: while the robot travels the correct distance, the turns it makes to take the pictures cause it to move backward an inch. We will work to fix this next week. This week I was also able to get file sharing working, so I am now able to remotely access files on the Pi. Next week I will help Karen further fine-tune movements and work on the ultrasonic sensors.

Karen:

This week I finalized the bot's movements. With the new space that we were given, we were able to conduct more precise unit testing. Using normal masking tape, we measured out a grid to test for accuracy; the final result is that our bot comes within half an inch of the final destination. For now I think that we will stop fine-tuning here and move on to integrating the sensors. We have also attached the camera and found example code which we plan to use as guidance.

For next week I plan to incorporate the camera so that when the bot goes to the three points of interest it will take 3 still pictures for full coverage. I also want to build the circuit for the sensors so we can start testing code for obstacle avoidance.

Manini:

This week we prepared for our demo on Wednesday afternoon. For our demo we presented the Roomba's basic movement to three points entered through the UI, and the baseline model running on AWS GPU resources on test images. This week I was also able to speak with Nikhil, who helped me identify a way to create a COCO dataset with just the human class so that this newly created data could be used to train our own Faster R-CNN model.

This weekend and next week I hope to fine-tune the model and have our own version up and running by the end of the upcoming week. We also need to test the performance of the model and see how it works with real-world images. Finally, we need to start planning and writing the script to connect the images taken from the camera module to AWS, since the model's inference will be completed on AWS.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk currently is ensuring that the trained model only identifies human objects and that we make good headway on obstacle avoidance. We are working extra hours outside of class, and with regards to the model training, we have spoken to Nikhil, who helped us out a lot with this.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?


Status Update: Week 6

Joseph:

This week I was able to make a connection between the UI and the robot. I can send coordinates to the Raspberry Pi via a script that issues commands over SSH: I run the movement script with the coordinates as inputs, and the bot moves accordingly.
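
A minimal sketch of how the clicked coordinates could be forwarded to the Pi over SSH; the hostname and script path are placeholders, not our actual values.

    import subprocess

    PI_HOST = "pi@sosbot.local"               # placeholder host
    MOVE_SCRIPT = "/home/pi/sos_bot/move.py"  # placeholder script path

    def send_coordinates(points):
        """Run the movement script on the Pi with the grid points as arguments."""
        args = " ".join(f"{x},{y}" for x, y in points)
        subprocess.run(["ssh", PI_HOST, f"python3 {MOVE_SCRIPT} {args}"],
                       check=True)

    # Example: send three points of interest selected on the UI grid.
    send_coordinates([(1, 2), (3, 3), (0, 4)])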

Next week I hope to have some form of picture transfer from the robot to the laptop working. I will also work with Karen to fine-tune movements as well as install the ultrasonic sensors for obstacle avoidance.

Karen:

This week was mostly spent fine-tuning the movement of the bot again. I have noticed that there seems to be an accumulation of error in the angle when we use the turn_angle function from the pycreate module. Because of this, usually our first two coordinates are accurate, but when the bot re-orients itself, it seems to overshoot. The offset is not consistent, so we cannot adjust for it directly. I am now thinking of creating my own turn_angle function that will instead use time to control the angle of rotation; hopefully this will give us more accurate results. However, because we have given ourselves a margin for error in the accuracy of movement, I do not anticipate this being a serious issue. I have also essentially finalized my portion of the movement script with localization and will hand it off to Joseph for him to incorporate the sensors.
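
A rough sketch of what a time-based turn could look like using pycreate2's drive_direct; the serial port, wheel speed, and degrees-per-second constant are assumptions that would have to be calibrated on our bot.

    import time
    from pycreate2 import Create2

    bot = Create2("/dev/ttyUSB0")   # serial port is an assumption
    bot.start()
    bot.safe()

    WHEEL_SPEED = 100           # mm/s, illustrative
    DEG_PER_SEC_AT_SPEED = 30   # would be measured empirically

    def timed_turn(angle_deg):
        """Rotate in place for a duration proportional to the requested angle."""
        duration = abs(angle_deg) / DEG_PER_SEC_AT_SPEED
        if angle_deg > 0:   # counter-clockwise: right wheel forward, left back
            bot.drive_direct(WHEEL_SPEED, -WHEEL_SPEED)
        else:
            bot.drive_direct(-WHEEL_SPEED, WHEEL_SPEED)
        time.sleep(duration)
        bot.drive_direct(0, 0)  # stop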

This week we will be doing our demo. By Wednesday I hope to have the UI completely connected to our script so that when points are clicked on the GUI, they are passed to the command line and the movement script runs. Then the bot will go to those points and rotate in such a way that, if a camera were attached, a picture could be taken. I hope to finalize movement and help Joseph with the sensors this week as well.

Manini:

This week I pivoted from the CPU version of the Faster R-CNN model to a GPU-based version. I was running into too many issues with the integration setup and module builds, and I decided that the time I was spending on getting that version of the code to work was not worth it. I therefore launched an AWS instance with GPU capabilities and a Deep Learning AMI. I finally got the new GPU-based Faster R-CNN up and running and ran multiple tests using both VGG16 and ResNet101 architectures. I then had to re-add the bias to the new model and figure out how to switch the model from multi-way classification to binary classification. Over the weekend I will be training this new modified model so that we can hopefully use it for the demo. Now that the inference will be completed on AWS, we will also need to create a script to send the images from the Pi to AWS. This pivot put me slightly behind schedule, but I hope I can get back on track in the upcoming week by putting in more hours outside of class.

This weekend I also helped Karen with the bot's movement. We ran multiple tests and recognized a fundamental inconsistency in the Roomba's angle turns. In order to achieve accuracy with the bot's basic movements, we will need to use a different approach (time-based turning) to get the Roomba to turn the amount we would like.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The current risk is that we are slightly behind schedule. We do have a slack week that we are currently using up, but it would be preferable if we could catch back up to schedule in case any other issues arise. Our contingency plan is to meet more outside of class to work on the project.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Instead of running inference on the local laptop, we are going to send the images to AWS, as we were unable to get the CPU version of the model working properly. This requires us to write a script to send and receive images from the cloud, but overall it is not a huge change.


Status Update: Week 5

Joseph:

This week I worked on the user interface for the SOS_bot. I was able to create a grid-like system where users can click where they want the bot to go. The grid system collects the coordinates of where users click and, once the wifi is set up, will send that information to the bot. The UI also has start and stop buttons and text that tells users which points the bot has already visited. Visited points turn green, and users can click on those points to see the images the bot has taken.
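
As a sketch only (the UI framework is assumed here to be Tkinter), a clickable grid that records coordinates might look like the following; cell size and grid dimensions are illustrative.

    import tkinter as tk

    CELL = 60     # pixel size of one grid cell (illustrative)
    GRID_N = 4    # 4 x 4 grid like the one in our test room

    clicked_points = []

    def on_click(event):
        """Convert a canvas click into grid coordinates and record it."""
        col, row = event.x // CELL, event.y // CELL
        clicked_points.append((col, row))
        # Mark the selected cell; visited points would later be recolored green.
        canvas.create_rectangle(col * CELL, row * CELL,
                                (col + 1) * CELL, (row + 1) * CELL,
                                fill="yellow")

    root = tk.Tk()
    canvas = tk.Canvas(root, width=GRID_N * CELL, height=GRID_N * CELL, bg="white")
    canvas.pack()

    # Draw the grid lines.
    for i in range(GRID_N + 1):
        canvas.create_line(0, i * CELL, GRID_N * CELL, i * CELL)
        canvas.create_line(i * CELL, 0, i * CELL, GRID_N * CELL)

    canvas.bind("<Button-1>", on_click)
    tk.Button(root, text="Start", command=lambda: print(clicked_points)).pack()
    root.mainloop()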

Next week I hope to have the UI communicate with the Raspberry Pi and fix any bugs that may arise. I also hope to have the ultrasonic sensors working; I have already looked up a number of resources on how to appropriately wire and code the sensors and will be working on actually building the circuit.

Karen:

This week was spent setting up the wifi to connect the Raspberry Pi to CMU's local network. Because this required registering the device, we had to wait on IT to accept it and get it running, which caused a bit of a delay. Once that was done, I was able to set up SSH onto the Pi with PuTTY. The rest of the week was spent setting up the serial connection between the Pi and the Roomba and fine-tuning the Roomba's movement. I also started to create the more complex script that adjusts for interrupts from the sensors. The basic movement did not really depend on localization, but with the sensors in place we need to check our location in the room once an obstacle has been avoided and then reroute. This is much more complex because we have not set a specific width/size for the obstacles, so the bot has to keep retrying, which causes a lot of variability in its final position. If this becomes too complex, we may need to fix the size of the obstacles that we will use.

We also started to lay out the grid that the Roomba will have to move around on the floor of our cubicle. Most likely we will have to request more space, as our cubicle is pretty confined.

By next week I hope to have the movement of the bot fine-tuned so that we can measure its offset from the actual point and adjust from that. I would also like to continue working on the script that adjusts for the sensor input and bring it close to completion. This might require more research on typical localization methods.


Manini:

This week I got the Faster R-CNN model environment set up. I read through the model functions and identified where the biasing should occur. There are two networks in Faster R-CNN: the RPN (Region Proposal Network) and the classification/bounding-box network. I determined that the biasing will occur only in the classification portion. Therefore, after fully understanding the model, I added the bias towards false positives by adding a constant to the cross entropy loss function to penalize false negatives. The new model is ready to be trained and then validated/tested. Hopefully, this weekend I can begin training the biased model to see what further modifications are needed to achieve our overall goal.
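
One common way to realize this kind of penalty is class weighting in the cross entropy loss; below is a minimal PyTorch sketch with an illustrative weight, not our actual constant or training code.

    import torch
    import torch.nn as nn

    # Index 0 = background, index 1 = human. Weighting the human class more
    # heavily penalizes missed humans (false negatives), biasing the
    # classifier toward predicting the human class.
    FN_PENALTY = 3.0  # illustrative; the real constant would be tuned

    class_weights = torch.tensor([1.0, FN_PENALTY])
    biased_ce = nn.CrossEntropyLoss(weight=class_weights)

    # logits: (N, 2) classification scores, labels: (N,) ground-truth classes
    logits = torch.randn(8, 2)
    labels = torch.randint(0, 2, (8,))
    loss = biased_ce(logits, labels)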


TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The current risks that may jeopardize the success of the project are getting the model to work and any unforeseen setup issues that may occur. The model still needs to be trained and tested; we do not know if it meets the accuracy goals we have set, and tuning it might take even more time. We did not anticipate that setting up the wifi on the Pi would take this long, and while we were able to find other things to do in the meantime, more issues like this could delay the project significantly.

Our contingency plan for the model is to train it and see how it performs. If it does not meet our standards, we will try to bias it further and tune the hyperparameters to reach our goal. As for the setup concerns, we should not have anything major to set up outside of the Pi, and we still have a week of slack left in case things go wrong.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There have not been any design changes to the system.


Status Update: Week 4

Joseph:

On Monday I worked on finalizing the Design Review Document. Afterwards I helped Manini evaluate two different GitHub implementations of Faster R-CNN. Then, since path planning is still being worked on and it would be inefficient to pass the Raspberry Pi back and forth between Karen and me, I shifted the schedule and began working on the user interface for the SOS bot. I have the design laid out and will be coding it in the weeks to come.

After spring break, I hope to have the user interface completed or near completion. It will be able to upload maps and points of interest to the SOS bot as well as display the bounding boxes on the images that the model evaluates. After the user interface is done, I will go back to working on the object detection portion of the model.


Karen:

This week was spent juggling the Design Review Document and the path planning algorithm. The majority of Monday was spent finalizing the document so that we could submit it by the midnight deadline. In terms of the script, I had originally started to implement basic movement by creating my own baseline functions, which would require a lot of math. However, after researching, I found some libraries that help with this, such as PyCreate. I am currently integrating these functions into my script so I do not have to fine-tune the movement of the bot as much in order to avoid accumulating error.
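
For reference, a minimal sketch of the kind of baseline movement PyCreate (pycreate2) enables; the serial port, speed, and the speed-times-time distance approach are assumptions for illustration only.

    import time
    from pycreate2 import Create2

    bot = Create2("/dev/ttyUSB0")   # serial port is an assumption
    bot.start()
    bot.safe()

    SPEED_MM_S = 150  # illustrative cruising speed

    def drive_straight(distance_mm):
        """Drive forward a given distance by commanding both wheels for speed * time."""
        bot.drive_direct(SPEED_MM_S, SPEED_MM_S)
        time.sleep(distance_mm / SPEED_MM_S)
        bot.drive_direct(0, 0)

    drive_straight(1000)  # move ahead one meter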

When we come back from Spring Break I would like to completely finish the path planning script and start unit testing with the bot. This will include moving to a single point of interest and then to a series of points, as will happen in the demo. I will also start helping Joseph with obstacle avoidance, as I have started researching how to implement it with the sensors we have now received.


Manini:

This week I was able to find a Faster R-CNN implementation that supports CPU testing and have been working on getting it running on my laptop. I ran into multiple issues with integration, so a majority of my time has been spent debugging and figuring out the setup script included in the repo. A large portion of my time on Monday was spent finishing up the design report.

In two weeks (the week after Spring break) the Faster R-CNN model will have baseline results, and the first experiment for biasing the model should be ready. That way, I can use the week to train the model with the COCO sub-dataset.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The biggest risk right now is still staying on schedule. Since Spring Break is next week, we will have a one-week gap before we can get baseline results for our model and begin retraining with the biasing cost function.


  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The user interface is being worked on earlier and the object detection is being pushed back, because path planning is still being worked on and it would be difficult to test both path planning and obstacle detection at the same time.
