Status Update: Week 2

Karen:

This week I was able to completely finish writing and debugging the script for basic movement. However, we realized that the Raspberry Pi did not come with an SD card, so I was not able to set up the Pi and upload the code onto it. Because of this, I put in an order for an SD card and reader; hopefully it will come in by Monday so that we can get that part moving. During this delay, I started writing wrapper functions to control the Create2. This way, instead of having to calculate a scaled angle of rotation or match speed to distance every time, I can simply call a function to rotate by a given angle or move a given number of meters. If time permits, I would like to extend these wrapper functions so that interacting with the Create2 is not highly involved every time a command is sent. My goals for next week are to get the Pi set up once the SD card arrives and then get the bot moving. I would also like to find sensors that connect easily to the Pi and will help with obstacle detection, and order them. We also have our design review presentation this week, so hopefully we will be able to add to our design after we receive feedback.
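
As a rough illustration, here is a minimal sketch of what those wrapper functions could look like, assuming the Create2 is driven over a pyserial connection using the Open Interface Drive opcode (137); the serial port name, wheelbase constant, and function names are placeholders rather than our final code.

```python
import math
import struct
import time

import serial  # pyserial

WHEELBASE_M = 0.235        # approximate Create2 wheel separation (assumption)
STRAIGHT = 0x7FFF          # Open Interface special radius: drive straight
SPIN_CCW, SPIN_CW = 1, -1  # special radius values: spin in place

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200)  # port name is a guess
ser.write(bytes([128, 131]))                          # Start, then Safe mode

def drive(velocity_mm_s, radius_mm):
    """Drive opcode (137): signed 16-bit velocity and radius, big-endian."""
    ser.write(struct.pack(">Bhh", 137, velocity_mm_s, radius_mm))

def move_meters(distance_m, speed_mm_s=200):
    """Drive straight long enough to cover distance_m, then stop."""
    drive(speed_mm_s, STRAIGHT)
    time.sleep(abs(distance_m) / (speed_mm_s / 1000.0))
    drive(0, STRAIGHT)

def turn_degrees(angle_deg, speed_mm_s=100):
    """Spin in place; each wheel travels an arc of radians * (wheelbase / 2)."""
    arc_m = math.radians(abs(angle_deg)) * (WHEELBASE_M / 2.0)
    drive(speed_mm_s, SPIN_CCW if angle_deg > 0 else SPIN_CW)
    time.sleep(arc_m / (speed_mm_s / 1000.0))
    drive(0, STRAIGHT)
```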

In terms of our initial schedule, we are slightly behind. I had hoped to already have the Roomba moving by now. However, I do not think this is as large of an issue because most of the delay is due to missing parts. All of the setup is ready; the code just needs to be uploaded. If, however, this ends up being an issue, I will simply spend more time outside of what has already been allocated for the project to resolve it.

Joseph:

This week Manini and I finalized the model that we are using. After running more tests and researching YOLO further, we decided that the drop in accuracy was not worth switching away from the R-CNN model. Manini and I have begun implementing the model in PyTorch and are looking into various methods of biasing it toward false positives, as we believe it is important for our robot to never miss any humans. My goal for next week is to help finish the initial implementation of the model so we can load in a pre-trained model and see how it generally performs. Afterwards, I will move on to working on the ability to wirelessly send images between the Pi and my computer.
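
For context, a minimal sketch of what loading a pre-trained detector and keeping only "person" detections could look like, assuming a torchvision-style Faster R-CNN API; the score threshold and helper name are illustrative, not our settled implementation.

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Pre-trained Faster R-CNN (COCO); "person" is label 1 in torchvision's mapping.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
PERSON = 1

def detect_people(image_path, score_threshold=0.5):
    """Return boxes and scores for detections labeled as people."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([image])[0]  # dict with "boxes", "labels", "scores"
    keep = (out["labels"] == PERSON) & (out["scores"] >= score_threshold)
    return out["boxes"][keep], out["scores"][keep]
```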

As for our initial schedule, we are slightly behind, as we would have liked to finish the model by this week and start retraining/experimenting with biasing next week. However, we do have an extra week of slack built into our schedule just for the deep learning implementation, so this should not affect our timeline too much.

Manini:

This week Joseph and I finalized the deep learning model we will be using for human detection. After some more research, we decided to stick with Faster R-CNN since it produces a better accuracy score and since we are not trying to achieve real-time detection or run our model on a Pi (inference will be done on a server). Joseph and I also began the implementation of the Faster R-CNN model using PyTorch. According to our initial schedule we are slightly behind, as we had hoped to have a basic implementation of the model completed by the end of this week. However, with the time spent on finalizing the model, we were not able to do so. In order to get back on track, we plan on working extra hours outside of class to get the model implemented by the end of the week.

By next week, I would like to have the initial model implemented so we can begin our first round of training and experimentation. Once we establish an initial baseline for the model's performance, we can start integrating biasing, hyper-parameter tuning, and specialized training. This iterative process should take around a week and a half and will also include getting our cloud services up and running.
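
One way that baseline could be measured, sketched here under assumed (x1, y1, x2, y2) box coordinates and a 0.5 IoU match threshold: a per-image recall count of how many ground-truth people the detections cover, since missed humans are the failure we care about most.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def recall(ground_truth_boxes, predicted_boxes, iou_threshold=0.5):
    """Fraction of ground-truth people matched by at least one predicted box."""
    if not ground_truth_boxes:
        return 1.0
    hits = sum(
        any(iou(gt, pred) >= iou_threshold for pred in predicted_boxes)
        for gt in ground_truth_boxes
    )
    return hits / float(len(ground_truth_boxes))
```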

 

TEAM STATUS:

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Right now, one of the biggest risks is simply staying on schedule. We have run into issues with parts not being available because they still needed to be ordered. Now that we have realized this could potentially affect our timeline, we are reviewing any and all parts that might need to be purchased and making sure we have all of them by the end of this coming week.

 

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have pivoted back to Faster R-CNN, as we believe that the accuracy tradeoff from YOLO was not worth its inference speed. This does not affect our project too much, although we did take a bit too long to decide between the models, and we are now slightly behind schedule.


Status Update: Week 1

 

Karen Johnson

This week was focused on setting up our environments and getting the tools necessary for our respective parts of the SOSbot. Since I am more focused on the hardware side of the project, most of my effort was spent setting up everything needed for the Roomba (Create2). I was able to order and receive the Create2 and the Raspberry Pi 3 that will control it. From there I started researching the interface of the Create2, taking notes on its ISA and on other projects that we could learn from. Most of the Create2 projects that have been posted are based on live instructions sent from a local computer to the bot. While this is different from what we are trying to achieve, it served as a good starting point. Generally, we need to create a connection to the bot and then serially send the opcodes of the commands we want it to execute. I have started scripting in Python to create a connection between the host and the client and to control movement. Currently, I am focusing on a single point of interest and direct movement to that point, with no obstacles. This involves setting the speed of the Create2, adjusting the turn radius of the wheels, and letting it run for a calculated amount of time so that it reaches the location accurately. Even with this progress, I am slightly behind schedule because ideally we would have had the bot moving by now. However, there was a delay in actually receiving the hardware components, since they only came in on Thursday.
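
To make the "single point of interest" step concrete, here is a small sketch of the geometry involved, assuming the bot's pose is tracked as (x, y, heading); the helper name and pose representation are illustrative, not the actual script.

```python
import math

def heading_and_distance(pose, target):
    """pose = (x, y, heading_rad); target = (x, y). Returns (turn_rad, dist_m)."""
    dx = target[0] - pose[0]
    dy = target[1] - pose[1]
    desired_heading = math.atan2(dy, dx)
    # Wrap the heading change into (-pi, pi] so the bot takes the shorter turn.
    turn = (desired_heading - pose[2] + math.pi) % (2 * math.pi) - math.pi
    return turn, math.hypot(dx, dy)

# Example: from the origin facing +x, reach the point (1.0, 1.0):
turn_rad, dist_m = heading_and_distance((0.0, 0.0, 0.0), (1.0, 1.0))
# turn_rad ~= 0.785 (45 degrees), dist_m ~= 1.414 meters; these values would
# then be handed to the turn/drive routines that send opcodes to the Create2.
```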

For next week, I would like to get the Create2 moving. This might be a stretch goal, because it requires completely setting up the Pi, uploading the Raspbian OS onto it, finishing the script and uploading it, and then attaching the Pi to the Create2. If I am able to successfully control the movements of the Create2 with my script, I would like to fine-tune the distance that it moves. Currently, I am using arbitrary values for the speed and the turn radius for right and left movements, but I would like to test how these speeds affect the Create2 and then adjust them so that it moves an evenly measured distance. This way we can make sure that the bot moves according to our grid points and can spin the full 360 degrees of the room without overlap.

Joseph Wang

This week I was on schedule with my task of finding the correct deep learning architecture to use. While we were originally planning on using Faster R-CNN, it came to our attention that YOLOv3 may be better suited for our project, as it is a lighter framework but still relatively accurate. Therefore, Manini and I split up the architecture search, with me working on identifying whether YOLOv3 is indeed better for our situation.

I was able to get a PyTorch version of YOLOv3 and modified it to only detect people. It is capable of running on my laptop at around 2.3 fps. What I found is that YOLOv3 is indeed very suitable for our project, as it is capable of recognizing people even if only a body part, such as a hand, is visible. The detection range seems to be around 20 feet, and it appears possible to extend that range if we increase the resolution of the image that is sent in.
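
A rough sketch of the two tweaks mentioned above: keeping only "person" detections and timing throughput. Here `run_yolo` and the detection dictionary format stand in for whatever interface the YOLOv3 implementation actually exposes; both are assumptions.

```python
import time

def people_only(detections):
    """Keep only detections whose class is 'person'."""
    return [d for d in detections if d["class_name"] == "person"]

def measure_fps(run_yolo, frames):
    """Average frames per second of the detector over a list of frames."""
    start = time.time()
    for frame in frames:
        people_only(run_yolo(frame))
    return len(frames) / (time.time() - start)
```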

Next week Manini and I will decide on which architecture we are using, and if we do go with YOLOv3, I will experiment with running it on a Raspberry Pi on still images as well as increasing the resolution fed into the model. I will also work on setting up the Raspberry Pi and establishing a connection between the Pi and the computer.

Manini Amin

This week I focused on research for the implementation of our deep learning object detection algorithm. I researched the options for which detection model we could use (the single-shot YOLO detection model vs. the region-based R-CNN). Our schedule is currently on track. By next week, Joseph and I will decide if we are going with the faster single-shot approach or the more accurate region-based approach. Below I have attached some notes and research I did comparing the different algorithms. I outlined the architecture of the deep learning model and the different phases of the pipeline. Although we will probably use an existing implementation of either model, it is important to understand the underlying workings of the model. This is necessary in order to tweak the model and train it to detect humans instead of general objects. I also completed some research on how to bias our model toward false positives. As discussed in our project proposal presentation, given the use case of our project (disaster-zone human detection), it is important that our model is trained to overcount rather than undercount the number of humans in a given image. There are two approaches that can be used to achieve this: one is data-driven, while the other is based on adjusting the cost function used in the model.
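
As an illustration only, the cost-function approach could look something like up-weighting the "human present" class in a binary classification loss so that missed humans cost more than false alarms; the choice of BCEWithLogitsLoss and the weight of 5 are placeholders, not our model's actual training objective.

```python
import torch

# Up-weight positives so false negatives (missed humans) are penalized 5x more
# than false positives; the factor of 5 is an arbitrary placeholder.
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([5.0]))

logits = torch.tensor([0.2, -1.3, 2.1])  # example raw classifier outputs
labels = torch.tensor([1.0, 1.0, 0.0])   # 1 = human present, 0 = no human
loss = criterion(logits, labels)
```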

Next week I hope to download the tools needed to build this model and begin implementing it. I also hope to better understand biasing deep learning models so I can work this into the implementation of the human detection model.

TEAM STATUS

 

Team C9: Manini Amin, Karen Johnson, Joseph Wang

February 16th Team Status Update

 

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The biggest risk we have is the Roomba not being able to localize itself so that it can navigate the room properly. We hope that the Roomba's internal system of keeping track of how far it has gone will be good enough for us to derive its location, but we also plan on using the ultrasonic sensors that detect obstacles to help with localization. Autonomous movement is a big part of our project, so we plan to spend a lot of time on this and have already started writing basic code for movement and localization.
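
For illustration, the dead-reckoning idea boils down to a pose update like the one below, where the distance and angle increments would come from the Roomba's odometry readings; the pose representation and function name are assumptions.

```python
import math

def update_pose(pose, distance_m, angle_rad):
    """Advance along the current heading, then apply the reported rotation.
    pose = (x, y, heading_rad); increments come from the Create2's odometry.
    """
    x, y, heading = pose
    x += distance_m * math.cos(heading)
    y += distance_m * math.sin(heading)
    heading = (heading + angle_rad) % (2 * math.pi)
    return (x, y, heading)
```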

Another major issue that may occur is our deep learning model not being able to recognize people in our obstacle course. To combat this, we plan on tuning our model to recognize people in disaster areas as well as in our obstacle course. We have also worked quite hard on trying to identify the best model for our project.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

 

One possible change that may occur is the switch from Faster R-CNN to YOLOv3 as the primary deep learning algorithm. Since we are early in the development phase, this pivot would not affect the overall process too much. YOLO is lighter than Faster R-CNN and seems to have similar accuracy, so if we do make the pivot, we might be able to do inference directly on the Pi instead of having to send images over to our laptop.

Another pivot we might make is to use the COCO dataset instead of ImageNet, because COCO is a detection dataset while ImageNet is a classification dataset. This also will not affect our process much, as we do not have a set model yet.