Status Update: Week 11

Joseph:

This week I worked with Manini to complete the connection between the UI and the AWS instance. The UI can now send images to the cloud, receive the processed results, and display them for users. I also worked with Karen to iron out a few movement bugs, such as deciding what the bot should do when an obstacle sits on a point of interest. Finally, I made the UI more robust, and I worked on the poster and presentation.
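
The transfer step looks roughly like the sketch below. This assumes SFTP over SSH via paramiko; the hostname, username, key file, and paths are placeholders rather than our real configuration.

```python
# Hypothetical sketch of the UI <-> AWS image exchange over SFTP;
# host, user, key, and paths are placeholders, not real config.
import paramiko

def exchange_image(local_in, remote_in, remote_out, local_out):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("ec2-xx-xx-xx-xx.compute.amazonaws.com",
                   username="ubuntu", key_filename="sos_bot.pem")
    sftp = client.open_sftp()
    sftp.put(local_in, remote_in)    # send the raw camera image up
    # (inference runs on the instance; here we assume it has finished)
    sftp.get(remote_out, local_out)  # pull back the annotated result
    sftp.close()
    client.close()
```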

Next week I plan to keep polishing everything up until the demo and to work on the final report.

Karen:

This week I fixed the last of the edge cases in the bot's movement. One was the situation where an obstacle sits on top of a point of interest: the protocol now stops short of the obstacle, takes its pictures there, and then adjusts the distance of the leg to the following point accordingly (see the sketch below). I also handled the case where avoiding an obstacle would force the bot off the grid. We also tested the integration of all the subsystems so that every part runs from one script on one laptop. This week I also worked on the final presentation slides.
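
To make the adjustment concrete, here is a toy 1-D version of that rule; the function name, the standoff distance, and the numbers are invented for illustration, not our actual protocol code.

```python
# Toy 1-D sketch of the "obstacle on the point of interest" rule:
# stop a fixed standoff short, take pictures there, then fold the
# remaining distance into the next leg. All numbers are invented.
STANDOFF = 12.0  # cm to stop short of a blocking obstacle

def plan_legs(points, obstacles):
    """points/obstacles are distances (cm) from the start along the path."""
    legs, pos = [], 0.0
    for target in points:
        stop = target
        for obs in obstacles:
            if obs <= target and pos < obs - STANDOFF:
                stop = min(stop, obs - STANDOFF)  # stop short, shoot from here
        legs.append(stop - pos)
        pos = stop               # the next leg absorbs the shortfall
    return legs

print(plan_legs([100.0, 200.0], obstacles=[100.0]))  # -> [88.0, 112.0]
```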

Next week I plan on doing last-minute testing before the demo and then working on the final report.

Manini:

This week I ran more evaluation tests on my model and worked with Joseph to fix the integration issues between my pipeline script and his UI script. We also transferred all the code and scripts for the robot and UI onto my laptop, since the demo will run from it. This week we also finished the project poster and gave our final presentation on Monday.

This weekend we will finish integration and testing. Karen and Joseph will also finish edge-case handling, and we will run multiple end-to-end tests to ensure that our project is demo-ready for Monday.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

None

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

None

Status Update: Week 7

Joseph:

This week I worked with Karen to fine-tune the robot's movements. We got a very accurate turning mechanism down, but we are now finding an issue with the distance-traveled mechanism: while the robot travels the correct distance, the turns it makes to take the pictures cause it to drift backward about an inch. We will work to fix this next week. This week I was also able to get file sharing working, so I can now remotely access files on the Pi. Next week I will help Karen further fine-tune movements and work on the ultrasonic sensors.

Karen:

This week I finalized the bot's movements. With the new space we were given, we were able to conduct more precise unit testing: using ordinary masking tape, we measured out a grid to test for accuracy. The final result is that our bot comes within half an inch of its destination. For now I think we will stop fine-tuning here and move on to integrating the sensors. We have also attached the camera and found example code that we plan to use as guidance.

For next week I plan to incorporate the camera so that when the bot goes to the three points of interest it takes three still pictures for full coverage, roughly as sketched below. I also want to build the circuit for the sensors so we can start testing code for obstacle avoidance.
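
A rough sketch of the capture step, assuming the standard picamera library; turn_degrees() is a placeholder for our Roomba turn routine, and the 120-degree spacing is just one way to get full coverage from three shots.

```python
# Sketch of taking 3 stills at a point of interest (picamera assumed);
# turn_degrees() stands in for the Roomba turn routine.
from time import sleep
from picamera import PiCamera

camera = PiCamera()

def capture_at_poi(poi_id, turn_degrees):
    for shot in range(3):                      # 3 stills for full coverage
        camera.capture(f"poi{poi_id}_{shot}.jpg")
        turn_degrees(120)                      # rotate between shots
        sleep(1)                               # let the camera settle
```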

Manini:

This week we prepared for our demo on Wednesday afternoon. For our demo we presented the Roomba's basic movement to three points entered through the UI, and the baseline model running on AWS GPU resources against test images. This week I was also able to speak with Nikhil, who helped me identify a way to create a COCO dataset containing only the human class, so that this new data can be used to train our own Faster RCNN model.
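
One plausible way to build that person-only subset with pycocotools is sketched below; the annotation file paths are placeholders.

```python
# Filter the COCO annotations down to the "person" class only,
# then write a new annotation file for training. Paths are examples.
import json
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")
person_id = coco.getCatIds(catNms=["person"])
img_ids = coco.getImgIds(catIds=person_id)
ann_ids = coco.getAnnIds(imgIds=img_ids, catIds=person_id, iscrowd=None)

subset = {
    "images": coco.loadImgs(img_ids),
    "annotations": coco.loadAnns(ann_ids),
    "categories": coco.loadCats(person_id),
}
with open("annotations/instances_person_only.json", "w") as f:
    json.dump(subset, f)
```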

This weekend and next week I hope to fine-tune the model and have our own version up and running by the end of the upcoming week. We also need to test the model's performance and see how it handles real-world images. Finally, we need to start planning and writing the script that sends the images taken by the camera module to AWS, since the model's inference will run there.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risks currently are ensuring that the trained model identifies only human objects and making good headway on obstacle avoidance. We are working extra hours outside of class, and on the model-training side we have spoken with Nikhil, who has helped us a great deal.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?


Status Update: Week 5

Joseph:

This week I worked on the user interface for the SOS_bot. I created a grid-like system where users can click the points they want the bot to visit. The grid collects the coordinates of each click and, once the WiFi is set up, will send that information to the bot. The UI also has start and stop buttons, plus text that tells users which points the bot has already visited. Visited points turn green, and users can click on them to see the images the bot has taken. A rough sketch of the grid widget is below.
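
This is a minimal sketch assuming Tkinter; the cell size and grid dimensions are placeholders, and the real UI also has the start/stop buttons and image viewing described above.

```python
# Minimal Tkinter sketch of the click-to-select grid; dimensions are
# example values. Clicked cells are recorded and marked on the canvas.
import tkinter as tk

CELL, ROWS, COLS = 40, 10, 10
root = tk.Tk()
canvas = tk.Canvas(root, width=COLS * CELL, height=ROWS * CELL, bg="white")
canvas.pack()
for i in range(ROWS + 1):
    canvas.create_line(0, i * CELL, COLS * CELL, i * CELL)
for j in range(COLS + 1):
    canvas.create_line(j * CELL, 0, j * CELL, ROWS * CELL)

targets = []  # grid coordinates to send to the bot once WiFi is up

def on_click(event):
    col, row = event.x // CELL, event.y // CELL
    targets.append((row, col))
    # mark the chosen cell; visited cells would be recolored green later
    canvas.create_rectangle(col * CELL, row * CELL,
                            (col + 1) * CELL, (row + 1) * CELL, fill="gray")

canvas.bind("<Button-1>", on_click)
root.mainloop()
```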

Next week I hope to have the UI communicating with the Raspberry Pi and to fix any bugs that arise. I also hope to have the ultrasonic sensors working; I have already gathered resources on how to wire and program them and will start building the circuit, along the lines of the sketch below.
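
For reference, the usual read pattern for an HC-SR04-style ultrasonic sensor looks roughly like this; the pin numbers are examples, since ours is not wired yet.

```python
# Sketch of a single HC-SR04 distance read on the Pi; pin numbers
# are placeholders for whatever we end up wiring.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    GPIO.output(TRIG, True)
    time.sleep(0.00001)                # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:       # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:       # ...and to end
        end = time.time()
    return (end - start) * 34300 / 2   # speed of sound (cm/s), round trip
```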

Karen:

This week was spent connecting the Raspberry Pi to CMU's local WiFi. Because this required registering the device, we had to wait on IT to approve it, which caused a bit of a delay. Once that was done, I set up SSH access to the Pi with PuTTY. The rest of the week went to setting up the serial connection between the Pi and the Roomba and fine-tuning the Roomba's movement.

I also started on the more complex script that adjusts for interrupts from the sensors. Basic movement did not really depend on localization, but with the sensors in place we need to re-establish our position in the room after an obstacle has been avoided and then reroute. This is much more complex because we have not fixed a specific width/size for the obstacles, so the bot has to keep retrying, which causes a lot of variability in its final position. If this becomes too complex, we may need to hard-set the size of the obstacles we use. A very rough sketch of the reroute idea follows.
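
The sketch below is only the estimation half of the idea: dead-reckon the sidesteps, then re-aim at the original waypoint. The path_clear() callback and the sidestep size are placeholders, and the physical drive commands are elided.

```python
# Hedged sketch of rerouting after obstacle avoidance by dead
# reckoning; actual drive commands are omitted, path_clear() is
# a stand-in for the ultrasonic check.
import math

def reroute(pos, heading_deg, waypoint, path_clear, sidestep_cm=10.0):
    """Sidestep (perpendicular to heading) until the sensor clears,
    tracking our offset; return new heading and distance to waypoint."""
    x, y = pos
    while not path_clear():
        # each physical sidestep would happen here; update the estimate
        rad = math.radians(heading_deg + 90)
        x += sidestep_cm * math.cos(rad)
        y += sidestep_cm * math.sin(rad)
    dx, dy = waypoint[0] - x, waypoint[1] - y
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)
```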

We also started to lay out the grid that the Roomba will move around on the floor of our cubicle. Most likely we will have to request more space, since the cubicle is pretty confined.

By next week I hope to have the bot's movement fine-tuned so that we can measure its offset from the actual point and adjust from there. I would also like to continue working on the script that adjusts for sensor input and get it close to completion. This might require more research on typical localization methods.

Manini:

This week I got the Faster RCNN model environment set up. I read through the model functions and identified where the biasing should occur. There are two networks in Faster RCNN: the RPN (Region Proposal Network) and the classification/bounding-box network. I determined that the biasing should apply only to the classification network. After working through the model, I added the bias toward false positives by adding a constant weight to the cross-entropy loss function to penalize false negatives, roughly as sketched below. The new model is ready to be trained and then validated/tested. Hopefully this weekend I can begin training the biased model to see what further modifications are needed to achieve our overall goal.
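
As one concrete reading of that change, weighting the cross-entropy so that misclassifying a person costs more than a false alarm looks like this in PyTorch; the 4.0 weight is an example value, not our tuned constant.

```python
# Class-weighted cross-entropy biasing the classifier toward false
# positives: missed persons (false negatives) are penalized more.
import torch
import torch.nn as nn

# class 0 = background, class 1 = person; 4.0 is an example constant
weights = torch.tensor([1.0, 4.0])
biased_ce = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)             # classifier head outputs
labels = torch.randint(0, 2, (8,))     # ground-truth classes
loss = biased_ce(logits, labels)       # person errors weighted 4x
```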


TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The current risks that may jeopardize the success of the project are getting the model to work and any unforeseen setup issues. The model still needs to be trained and tested; we do not know whether it meets the accuracy goals we have set, and tuning it might take even more time. We did not anticipate that setting up the WiFi on the Pi would take this long, and while we found other things to do in the meantime, more issues like this could delay the project significantly.

Our contingency plan for the model is to train it and see how it performs. If it does not meet our standards, we will bias it further and tune hyperparameters until we reach our goal. As for setup concerns, we should not have anything major to set up beyond the Pi, and we still have a week of slack in case things go wrong.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There have not been any design changes of the system.


Status Update: Week 4

Joseph:

On Monday I worked on finalizing the Design Review Document. Afterwards I helped Manini evaluate two different GitHub implementations of Faster RCNN. Then, since path planning is still being worked on and it would be inefficient to pass the Raspberry Pi back and forth between Karen and me, I shifted the schedule and began working on the user interface for the SOS bot. I have the design laid out and will be coding it in the weeks to come.

After spring break, I hope to have the user interface completed or nearly so. It will be able to upload maps and points of interest to the SOS bot, as well as display the bounding boxes the model draws on the images it evaluates. After the user interface is done, I will go back to working on the object-detection portion of the project.


Karen:

This week was spent juggling the Design Review Document and the path-planning algorithm. Most of Monday went to finalizing the document so that we could submit it by the midnight deadline. For the script, I had originally started implementing basic movement with my own baseline functions, which required a lot of math. After some research, though, I found libraries that help with this, such as PyCreate. I am now integrating their functions into my script so that I do not have to fine-tune the bot's movement as much, which helps avoid accumulating error. A sketch of what driving through such a library looks like is below.
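
This sketch assumes the pycreate2 package and a placeholder serial port; it is an illustration of the approach, not our integrated script.

```python
# Sketch of timed straight-line driving through a Create/Roomba
# library (pycreate2 assumed); the serial port is a placeholder.
import time
from pycreate2 import Create2

bot = Create2(port="/dev/ttyUSB0")
bot.start()
bot.safe()

def forward_cm(dist_cm, speed_mm_s=200):
    bot.drive_direct(speed_mm_s, speed_mm_s)   # wheel speeds in mm/s
    time.sleep((dist_cm * 10) / speed_mm_s)    # cm -> mm, over mm/s
    bot.drive_stop()
```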

When we come back from Spring Break I would like to completely finish the path-planning script and start unit testing with the bot. This will include moving to a single point of interest and then to a series of points, as will happen in the demo. I will also start helping Joseph with obstacle avoidance, as I have begun researching how to implement it with the sensors we have now received.

Manini:

This week I found a Faster RCNN implementation that supports CPU testing and have been working on getting it running on my laptop. I ran into multiple integration issues, so most of my time was spent debugging and figuring out the setup script included in the repo. A large portion of Monday went to finishing the design report.

In two weeks (the week after Spring Break) the Faster RCNN model should have baseline results, and the first experiment for biasing the model should be ready. That way, I can use the week to train the model on the COCO sub-dataset.

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The biggest risk right now is still staying on schedule. Since Spring Break is next week, we will have a one-week gap before we can get baseline results for our model and begin retraining with the biased cost function.


  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The user interface is being worked on earlier and object detection pushed back, because path planning is still in progress and it would be difficult to test both path planning and obstacle detection at the same time.
