Status Update: Week 6

Joseph:

This week I was able to make a connection between the UI and the robot. I can send coordinates to the Raspberry Pi via a script that issues commands over SSH; I run the movement script with the coordinates as inputs and the bot moves accordingly.
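For reference, the laptop-to-Pi handoff is currently just a wrapper around SSH, roughly like the sketch below (a minimal version; the host address and the move.py script name are placeholders rather than our actual paths):

```python
import subprocess

def send_coordinates(x, y, host="pi@192.168.0.10"):  # placeholder address
    """Kick off the movement script on the Pi over SSH with target coords."""
    cmd = ["ssh", host, f"python3 move.py {x} {y}"]  # move.py is a placeholder
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"SSH command failed: {result.stderr}")
    return result.stdout
```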

Next week I hope to have some form of picture transfer from the robot to the laptop working. I will also work with Karen to fine-tune the movements and install the ultrasonic sensors for obstacle avoidance.

Karen:

This week was mostly spent fine-tuning the movement of the bot again. I have noticed that error seems to accumulate in the angle when we use the turn_angle function from the pycreate module. Because of this, our first two coordinates are usually accurate, but when the bot re-orients itself it tends to overshoot. The offset is not consistent, so we cannot simply compensate for it. I am now thinking of writing my own turn_angle function that uses time to control the angle of rotation, which will hopefully give us more accurate results. However, because we have given ourselves a margin of error in movement accuracy, I do not anticipate this being a critical issue. I have also essentially finalized my portion of the movement script with localization and will hand it off to Joseph so he can incorporate the sensors.
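The time-based replacement I have in mind would look something like the sketch below. This is only a sketch: the wheel speed, the Create 2 wheel-base constant, and the assumption that pycreate2's drive_direct takes (right, left) velocities in mm/s would all need to be verified and calibrated on the actual bot.

```python
import math
import time

WHEEL_BASE_MM = 235.0   # approximate Create 2 wheel spacing; needs calibration
TURN_SPEED = 100        # wheel speed in mm/s (assumed value)

def timed_turn(bot, angle_deg):
    """Rotate in place for a computed duration instead of trusting turn_angle."""
    # Spinning the wheels in opposite directions at v mm/s turns the bot
    # at roughly 2*v / wheel_base radians per second.
    deg_per_sec = math.degrees(2 * TURN_SPEED / WHEEL_BASE_MM)
    duration = abs(angle_deg) / deg_per_sec
    if angle_deg > 0:                                # counter-clockwise
        bot.drive_direct(TURN_SPEED, -TURN_SPEED)    # (right, left) velocities
    else:                                            # clockwise
        bot.drive_direct(-TURN_SPEED, TURN_SPEED)
    time.sleep(duration)
    bot.drive_stop()
```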

This week we will be doing our demo. By Wednesday I hope to have the UI fully connected to our script, so that clicking points on the GUI passes them to the command line and runs the movement script. The bot will then visit those points and rotate so that, if a camera were attached, a picture could be taken. I also hope to finalize movement and help Joseph with the sensors this week.

Manini:

This week I pivoted from the CPU version of the Faster RCNN model to a GPU-based version. I was running into too many issues with the integration setup and module builds, and I decided that the time I was spending on getting that version of the code to work was not worth it. I therefore launched an AWS instance with GPU capabilities and a Deep Learning AMI. I got the new GPU-based Faster RCNN up and running and ran multiple tests using both VGG16 and ResNet101 architectures. I then had to re-add the bias with the new model and figure out how to switch the model from multi-way classification to binary classification. Over the weekend I will be training this modified model so that we can hopefully use it for the demo. Now that inference will be done on AWS, we will also need a script to send the images from the Pi to AWS (see the sketch below). This pivot put me slightly behind schedule, but I hope to get back on track in the upcoming week by putting in more hours outside of class.
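One way the Pi-to-AWS transfer script could work is a small SFTP upload, assuming we keep SSH key access to the instance (the hostname, username, and paths below are placeholders):

```python
import os
import paramiko

def upload_image(local_path, remote_path,
                 host="ec2-xx-xx.compute.amazonaws.com",  # placeholder host
                 user="ubuntu",
                 key_file="~/.ssh/aws_key.pem"):          # placeholder key
    """Copy a captured image from the Pi to the AWS instance over SFTP."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user,
                   key_filename=os.path.expanduser(key_file))
    sftp = client.open_sftp()
    sftp.put(local_path, remote_path)
    sftp.close()
    client.close()
```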

This weekend I also helped Karen with the bot's movement. We ran multiple tests and identified a fundamental inconsistency in the Roomba's angle turns. To make the bot's basic movements accurate, we will need a different approach (time-based turning) so the Roomba turns by the amount we intend.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The current risk is that we are slightly behind schedule. We do have a slack week that we are currently using up, but it would be preferable to catch back up to schedule in case any other issues arise. Our contingency plan is to meet more outside of class to work on the project.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Instead of running model inference on the local laptop, we are going to send the images to AWS, as we were unable to get the CPU version of the model working properly. This requires us to write a script to send and receive images from the cloud, but overall it is not a huge change.


Status Update: Week 5

Joseph:

This week I worked on the user interface for the SOS_bot. I created a grid-like system where users can click where they want the bot to go. The grid collects the coordinates of each click and, once the Wi-Fi is set up, will send that information to the bot. The UI also has start and stop buttons and text that tells users which points the bot has already visited. Visited points turn green, and users can click on them to see the images the bot has taken.
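The click-to-coordinate idea is roughly the following (a minimal Tkinter sketch; the real UI's toolkit, grid dimensions, and cell size may differ):

```python
import tkinter as tk

CELL = 40             # pixel size of one grid cell (assumed)
ROWS, COLS = 10, 10   # assumed grid dimensions

root = tk.Tk()
targets = []          # coordinates to forward to the bot
canvas = tk.Canvas(root, width=COLS * CELL, height=ROWS * CELL, bg="white")
canvas.pack()
for i in range(ROWS + 1):   # horizontal grid lines
    canvas.create_line(0, i * CELL, COLS * CELL, i * CELL)
for j in range(COLS + 1):   # vertical grid lines
    canvas.create_line(j * CELL, 0, j * CELL, ROWS * CELL)

def on_click(event):
    """Record the clicked cell as a pending target for the bot."""
    col, row = event.x // CELL, event.y // CELL
    targets.append((col, row))
    # Mark the pending point; it would be recolored green once visited.
    canvas.create_rectangle(col * CELL, row * CELL,
                            (col + 1) * CELL, (row + 1) * CELL, fill="yellow")

canvas.bind("<Button-1>", on_click)
root.mainloop()
```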

Next week I hope to have the UI communicating with the Raspberry Pi and to fix any bugs that arise. I also hope to have the ultrasonic sensors working. I have already gathered a number of resources on how to wire and program the sensors and will be working on actually building them.
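Assuming we go with the common HC-SR04 modules (the exact part isn't pinned down here), the read loop would be along these lines; the GPIO pin numbers are placeholders for our wiring:

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24      # assumed BCM pin numbers; depends on our wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    """Trigger one HC-SR04 ping and convert the echo time to centimeters."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)                 # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:      # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:      # wait for the echo pulse to end
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound in cm/s, round trip
```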

Karen:

This week was spent setting up Wi-Fi to connect the Raspberry Pi to CMU's local network. Because this required registering the device, we had to wait on IT to approve it, which caused a bit of a delay. Once that was done, I was able to SSH into the Pi with PuTTY. The rest of the week was spent setting up the serial connection between the Pi and the Roomba and fine-tuning the Roomba's movement. I also started on the more complex script that adjusts for interrupts from the sensors. Basic movement did not really depend on localization, but with the sensors in place we need to check our position in the room once an obstacle has been avoided and then reroute. This is much more complex because we have not fixed a specific width/size for the obstacles, so the bot has to keep retrying, which causes a lot of variability in its final position. If this becomes too complex, we may need to hard-set the size of the obstacles that we use.
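The control flow I am aiming for is roughly the skeleton below; every callback in it (get_pose, distance_cm, forward, sidestep, turn_toward) is a stand-in for a piece of our movement and sensor scripts, not a real function yet:

```python
import math

SAFE_CM = 30        # assumed clearance threshold before detouring
ARRIVE_MM = 50      # assumed "close enough" radius around the target

def go_to(target, get_pose, distance_cm, forward, sidestep, turn_toward):
    """Skeleton of the avoid-and-reroute loop described above."""
    while True:
        x, y, heading = get_pose()
        if math.hypot(target[0] - x, target[1] - y) < ARRIVE_MM:
            return                   # arrived
        if distance_cm() < SAFE_CM:
            sidestep()               # detour a fixed amount around the obstacle
            turn_toward(target)      # re-localize and re-aim; the obstacle's
                                     # size is unknown, so this may repeat
        else:
            forward()                # take one small step toward the target
```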

We also started to lay out the grid that the Roomba will move around on the floor of our cubicle. We will most likely have to request more space, as our current area seems pretty confined.

By next week I hope to have the movement of the bot fine-tuned so that we can measure its offset from the target point and adjust from there. I would also like to continue working on the script that adjusts for sensor input and bring it close to completion. This might require more research on typical localization methods.


Manini:

This week I got the Faster RCNN model environment set up. I read through the model functions and identified where the biasing should occur. There are two networks in Faster RCNN: the RPN (Region Proposal Network) and the classification/bounding-box network. I determined that the biasing should apply only to the classification. After fully understanding the model, I added the bias toward false positives by adding a constant to the cross-entropy loss function to penalize false negatives. The new model is ready to be trained and then validated/tested. Hopefully I can begin training the biased model this weekend to see what further modifications are needed to achieve our overall goal.
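The "constant added to the loss" amounts to up-weighting missed detections; one standard way to express that bias in PyTorch is a per-class weight on the cross-entropy term (the weight value here is just an assumed starting point, and the exact form in our code may differ):

```python
import torch
import torch.nn as nn

# Up-weighting the "person" class makes a missed person (false negative)
# cost more than a false alarm, biasing the classifier toward false
# positives. The 3.0 is an assumed starting value to tune during training.
class_weights = torch.tensor([1.0, 3.0])      # [background, person]
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2)                    # 4 proposals, 2 classes
labels = torch.tensor([0, 1, 1, 0])           # ground-truth classes
loss = criterion(logits, labels)
```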


TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The current risks that may jeopardize the success of the project are getting the model to work and any unforeseen setup issues. The model still needs to be trained and tested; we do not know whether it meets the accuracy goals we have set, and tuning it might take even more time. We did not anticipate that setting up the Wi-Fi on the Pi would take this long, and while we were able to find other things to do in the meantime, more issues like this could delay the project significantly.

Our contingency plan for the model is to train it and see how it performs. If it does not meet our standards, we will bias it further and tune hyperparameters to reach our goal. As for the setup concerns, we should not have anything major to set up beyond the Pi, and we still have a week of slack left in case things go wrong.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There have not been any design changes to the system.


Status Update: Week 4

Joseph:

On Monday I worked on finalizing the Design Review Document. Afterwards I helped Manini evaluate two different GitHub implementations of Faster RCNN. Then, since path planning is still being worked on and it would be inefficient to pass the Raspberry Pi back and forth between Karen and me, I shifted the schedule and began working on the user interface for the SOS bot. I have the design laid out and will be coding it in the weeks to come.

After spring break, I hope to have the user interface completed or nearly complete. It will be able to upload maps and points of interest to the SOS bot, as well as display the bounding boxes the model draws on the images it evaluates. After the user interface is done, I will go back to working on the object detection portion of the project.


Karen:

This week was spent juggling the Design Review Document and the path planning algorithm. The majority of Monday went to finalizing the document so that we could submit by the midnight deadline. In terms of the script, I had originally started to implement basic movement by creating my own baseline functions, which would require a lot of math. After researching, however, I found libraries such as PyCreate that help with this, so I am currently integrating their functions into my script so that I do not have to fine-tune the bot's movement as much to avoid accumulated error.
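With PyCreate, the baseline movement boils down to a few calls like the sketch below (the serial port is an assumption, and pycreate2's exact method names would need to be double-checked against its docs):

```python
import time
from pycreate2 import Create2

bot = Create2("/dev/ttyUSB0")   # assumed serial port for the USB cable
bot.start()                     # open the Open Interface
bot.safe()                      # SAFE mode is required before driving

bot.drive_direct(100, 100)      # both wheels forward at 100 mm/s
time.sleep(2)                   # roughly 200 mm of travel
bot.drive_stop()
```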

When we come back from Spring Break, I would like to completely finish the path planning script and start unit testing with the bot. This will include moving to a single point of interest and then to a series of points, as will happen in the demo. I will also start helping Joseph with obstacle avoidance, as I have begun researching how to implement it with the sensors we have now received.


Manini:

This week I found a Faster RCNN implementation that supports CPU testing and have been working on getting it running on my laptop. I ran into multiple issues with integration, so the majority of my time was spent debugging and figuring out the setup script included in the repo. A large portion of my Monday was spent finishing the design report.

In two weeks (the week after Spring Break), the Faster RCNN model should have baseline results, and the first experiment for biasing the model should be ready. That way, I can use the week to train the model on the COCO sub-dataset.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The biggest risk right now is still staying on schedule. Since Spring Break is next week, we will have a one-week gap before we can get baseline results for our model and begin retraining with the biased cost function.


  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The user interface is being worked on earlier and object detection is being pushed back, because path planning is still in progress and it would be difficult to test both path planning and obstacle detection at the same time.


Status Update: Week 3

Joseph:

This week I worked on the presentation for the design review as well as the design document that followed it. I also began looking at different Faster-RCNN repositories on GitHub, as we decided that reimplementing the entire model ourselves would be too tedious a task and we would not gain much from the experience; our main task is to tune and bias an existing model. I came across a model that seemed promising, as it provides the option to run on both GPU and CPU, which is exactly what we need.

For next week, I would like to have this model running, and then I will begin helping Karen with obstacle avoidance when the sensors come in.


Karen:

This week I focused on the design review and putting together the official write-up for it. After the presentation, we reviewed the immediate feedback and have tried to incorporate it into our future plans. I am currently trying to come up with more benchmarks for the path planning and obstacle avoidance parts of the project. This week we also finally received the SD card and reader, so we can now upload to the Pi. The time it took for these to arrive caused a significant delay, which we hope to address by putting in more time outside of class. I have downloaded the Raspbian image onto my laptop and have started setting up the Pi for use.

Now that we have ownership of all the different parts, we can finally start getting code onto the Pi and running on the Roomba. I do think the way commands are sent serially may need some adjustment, so I will add testing to check that they are being received properly. The networking protocol I have implemented is currently based on example scripts, but I expect we will need adjustments to timing and frequency once the code is uploaded (see the sketch below). Overall, I think we overestimated how quickly we would be able to get the Roomba to move, so ideally I would like to have successful movement by the end of next week, assuming all class time is used for working.
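As a reference for the timing question, the raw serial exchange looks roughly like this (the port and the pause lengths are assumptions to be tuned; the opcodes come from the Create Open Interface spec):

```python
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # assumed port

ser.write(bytes([128]))   # START: put the Open Interface into passive mode
time.sleep(0.2)           # pause lengths like this are what we need to tune
ser.write(bytes([131]))   # SAFE mode, required before drive commands
time.sleep(0.2)

# DRIVE (opcode 137): 200 mm/s, radius 0x8000 means drive straight
velocity, radius = 200, 0x8000
ser.write(bytes([137])
          + velocity.to_bytes(2, "big", signed=True)
          + radius.to_bytes(2, "big"))
```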

Manini:

This week I focused on the design review and my presentation. Joseph, Karen, and I had to fine-tune some design specifications, including metrics and implementation decisions. During our design review, the professor suggested that it might be a good idea to try both YOLO and Faster RCNN and compare their performance. For the later part of the week, I searched for a good YOLO implementation and learned more about the internals of the model. In terms of Faster RCNN, I downloaded one version to my laptop and am currently trying to get it up and running with PyTorch. I am also sifting through the COCO dataset to limit the amount of data we will use; this was another suggestion from Professor Savvides, to improve training time for our model and keep us on track.
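Trimming COCO down is straightforward with pycocotools; a sketch of the kind of filtering I have in mind is below (the annotation path and subset cap are placeholders):

```python
from pycocotools.coco import COCO

# Annotation path is a placeholder for wherever COCO is unpacked.
coco = COCO("annotations/instances_train2017.json")

# Keep only images that actually contain people, then cap the subset size.
person_cat = coco.getCatIds(catNms=["person"])
person_imgs = coco.getImgIds(catIds=person_cat)
subset = person_imgs[:5000]     # assumed cap; tune against training time
print(f"{len(person_imgs)} person images total, keeping {len(subset)}")
```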

In this upcoming week I would like to begin training our Faster RCNN model and get some baseline performance metrics so we can plan how we want to bias/fine-tune it. The design review took a little more time than we had anticipated, so this week we plan to put in extra hours outside of class to catch up and finish our first sprint before Spring Break.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The biggest risk right now is still staying on schedule. We did not account for the time it takes to make presentations and reports, which has really limited the time we have for everything else.


  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

One change that we have made is that we will also be trying to use the YOLO model for human detection. This change is so that we can compare the performance of both Faster RCNN and YOLO to achieve the best combined accuracy and inference time.
