Team Status Update for 02/15 (Week 1)

We started off this week going over what materials we needed to purchase for our project. We compared pros and cons of different product options, looking at their price and their specs. We ordered:

  • Roomba 671 – the documentation for our Roomba API showed it works with this model
  • Cable to connect the Roomba over USB – recommended by the Roomba API documentation
  • Raspi – to receive data from the system and relay commands to the Roomba
  • Raspi power supply – to power the Raspi
  • Wide-angle camera – we found other people using wide-angle cameras with OpenPose for gesture recognition

We built a spreadsheet to track our costs and manage our budget.

We continued the design process by finalizing which gestures our system will handle. We decided on:

  • Tele-op control with 4 gestures – tele-op command mode is set with the left arm on the right elbow
    • Forward – Right arm with elbow tilted forward
    • Backward – Right arm with elbow tilted back
    • Rotate left – Right arm with elbow tilted left
    • Rotate right – Right arm with elbow tilted right
  • Save home – Right hand up
  • Go home – Left hand up
  • Drive to pointed location – Right hand point while left hand up
  • Drive to user – Both hands up
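The gesture set above maps cleanly onto a command table. As a rough sketch (all names here are placeholders, not our final API), the mapping and the tele-op gating rule might look like:

```python
# Hypothetical gesture-to-command table; gesture and command names are
# illustrative placeholders, not the final interface.
GESTURE_COMMANDS = {
    "right_elbow_forward": "drive_forward",
    "right_elbow_back": "drive_backward",
    "right_elbow_left": "rotate_left",
    "right_elbow_right": "rotate_right",
    "right_hand_up": "save_home",
    "left_hand_up": "go_home",
    "point_with_left_hand_up": "drive_to_pointed_location",
    "both_hands_up": "drive_to_user",
}

# The four tele-op gestures only count while the tele-op pose
# (left arm on right elbow) is being held.
TELEOP_GESTURES = {
    "right_elbow_forward", "right_elbow_back",
    "right_elbow_left", "right_elbow_right",
}

def command_for(gesture, teleop_active):
    """Look up the command for a gesture, gating tele-op gestures."""
    if gesture in TELEOP_GESTURES and not teleop_active:
        return None
    return GESTURE_COMMANDS.get(gesture)
```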

We also drew out a design for what we want the robot to look like (with cookies!). This helped us finalize how many more parts we need to buy to build the robot.

We also want to experiment with using multiple cameras for our project: one mounted at desk level to identify gestures, and one overhead camera for localizing the user and the robot. We want to experiment with OpenPose before finalizing this design. To track open questions like this, we created an issue-tracking doc for project risks. We recorded the solutions we have thought of so far, but we want to experiment more with our parts before making firm plans around them.

We started testing by installing OpenPose locally. It ran very slowly on our machine (without a GPU), at about 30s per frame. We also ran it on a Google Cloud instance with an NVIDIA Tesla K80 GPU (~5,000 CUDA cores), which took 0.42s per frame. We don’t need to process every frame our camera captures, but we believe 0.42s per frame is still slow. The example video we ran it on had up to 12 people in frame for OpenPose to process, which could also have increased the latency. We hope we can get the runtime down on our Xavier board (512 CUDA cores). There are Xavier-specific methods to install OpenPose, so we hope we can install an optimized build that runs faster.

Later in the week, we received our Xavier board, and we have been working to run OpenPose on it. The default Xavier setup method requires an Ubuntu 16.04 or 18.04 machine with 20GB of free space, but our computers did not have that much space or that specific Linux version. We tried many different ways to install the CUDA packages, and eventually found success installing the OS with L4T and using SSH to install JetPack on it. We are still working to get OpenPose running on the board.

Rama’s Status Update for 02/15 (Week 1)

Progress

Installed OpenPose on my laptop to test what kind of information we can expect to receive. It ran very slowly, at around 30 seconds per frame, which was about what we expected from CPU-only execution.

Started the installation process on the Xavier board and ran into much difficulty trying to operate within the strict confines of NVIDIA’s JetPack SDK installer. I ended up creating an Ubuntu VM on my laptop through VirtualBox, and we were able to flash the OS and install CUDA, OpenCV, and the other dependencies.

Installing OpenPose was very difficult and is not yet completed. All of the provided installation scripts are outdated and the process required extensive hacking. Unfortunately, there were immediate runtime errors so we will likely have to do some research. From a cursory investigation, I suspect our CUDA versions are to blame, so a first step will be a clean reinstall of CUDA.

Schedule

On schedule with the board, but gesture recognition will take longer than expected.

Jerry’s Status Update for 02/15 (Week 1)

Progress

I started this week by researching which components we need to buy, figuring out what hardware the Roomba SDK and OpenPose support. I submitted the purchase requests by Tuesday, and hopefully we can get our components ASAP so we can start testing and finalizing our design.

Since our hardware components have not arrived, I spent more time with the team finalizing what gestures we were going to have in the system and discussing what pathing and localization strategies we were going to use for the system.

OpenPose on Google Cloud

In addition, I worked with Rama to begin testing OpenPose. I got OpenPose running on a Google Cloud GPU instance (Tesla K80, ~5,000 CUDA cores), getting 0.4s per frame of processing. This is too slow for our system requirements, but I believe a lot of that time is spent in the file I/O of writing the sample results out to video. OpenPose’s website also has documentation on how to optimize its runtime. AWS will serve as a backup for our system if OpenPose does not run fast enough on the Xavier board.

Running OpenPose also gave us a better understanding of which keypoints we need in order to identify gestures. We want to start with a heuristic classification approach for gestures, and move to a learned classifier such as an SVM or a neural net if necessary.
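To make the heuristic idea concrete, here is a minimal sketch of classifying "hands up" gestures from OpenPose-style keypoints. It assumes the BODY_25 output format (index 0 = Nose, 4 = RWrist, 7 = LWrist, each keypoint an (x, y, confidence) triple with image y growing downward); the confidence threshold and function names are our own placeholders.

```python
# Heuristic gesture classification over OpenPose BODY_25 keypoints.
# Assumed indexing: 0 = Nose, 4 = RWrist, 7 = LWrist; each entry is
# (x, y, confidence). Image y grows downward, so "raised" means y < nose y.
NOSE, R_WRIST, L_WRIST = 0, 4, 7
MIN_CONF = 0.3  # placeholder threshold; we would tune this experimentally

def hands_up(keypoints):
    """Return which of {'left', 'right'} wrists are raised above the nose."""
    raised = set()
    nose_x, nose_y, nose_conf = keypoints[NOSE]
    if nose_conf < MIN_CONF:
        return raised  # can't judge "up" without a confident reference point
    for side, idx in (("right", R_WRIST), ("left", L_WRIST)):
        x, y, conf = keypoints[idx]
        if conf >= MIN_CONF and y < nose_y:
            raised.add(side)
    return raised

def classify(keypoints):
    """Map raised hands to the gestures in our design."""
    raised = hands_up(keypoints)
    if raised == {"left", "right"}:
        return "drive_to_user"
    if raised == {"right"}:
        return "save_home"
    if raised == {"left"}:
        return "go_home"
    return None
```

The tele-op and pointing gestures would need similar rules over the elbow and shoulder keypoints; if these thresholds prove too brittle, that is where an SVM over the keypoint vector would come in.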

OpenPose on Xavier board

When the Xavier board arrived, I worked with Rama to get OpenPose running on it. We hit a roadblock because we did not have a computer that met the SDK’s requirements, so we had to use a VM to flash the L4T OS onto the Xavier board. We also installed the JetPack SDK (a collection of NVIDIA CUDA software for ML) onto the board.

We faced a few issues getting OpenPose to run on the board. We had to modify the provided scripts, fix C++ code headers, and add library paths so shared libraries would load. In the end, we had multiple Caffe, CUDA, and OpenCV versions and an installed OpenPose. However, OpenPose hit CUDA errors at runtime, so we are attempting another clean install.

Schedule

We are on schedule for getting OpenPose to run on the board. However, we are further behind schedule on the gesture recognition front, since our local computer cannot run OpenPose at a reasonably fast rate.

Sean’s Status Update for 02/15 (Week 1)

Progress

This week my progress was mostly research. Since the Roomba hasn’t arrived yet, I had enough time to give some thought to the implementation of the robot.

Path finding algorithm:

Since the goal is fixed throughout path finding, and the environment is for the most part unchanged, anytime searches such as Anytime A*/D* won’t be necessary. Instead, we can discretize the C-space into a grid and perform some variation of a simple A* search. We will test different algorithms to see how much path-finding computation the RPi can handle. If the computation takes too long, we can perhaps offload it to the Xavier board.

Robot configuration:

We previously found an SDK for mapping and a Python wrapper library for the Roomba Open Interface. These seem promising, but after some close examination of the Python wrapper library, I have doubts about it working accurately as-is. We will begin testing it by implementing simple motor controls.
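If the wrapper turns out to be unreliable, a fallback is to speak the Roomba Open Interface directly over serial. A sketch of the 5-byte Drive command from the OI spec (opcode 137, signed 16-bit velocity in mm/s and turn radius in mm, big-endian); the port name and helper name are hypothetical:

```python
import struct

# Opcodes from the Roomba Open Interface spec.
START, SAFE, DRIVE = 128, 131, 137
STRAIGHT = -0x8000  # special radius value 0x8000 (as signed) = drive straight

def drive_packet(velocity_mm_s, radius_mm=STRAIGHT):
    """Build the 5-byte Drive command: opcode + signed 16-bit velocity/radius."""
    return struct.pack(">Bhh", DRIVE, velocity_mm_s, radius_mm)

# Usage over the USB serial cable (hypothetical port name), e.g. with pyserial:
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200) as port:
#     port.write(bytes([START, SAFE]))   # wake the OI and enter Safe mode
#     port.write(drive_packet(200))      # forward at 200 mm/s
#     port.write(drive_packet(0))        # stop
```

Even if we keep the wrapper library, comparing its serial output against packets built this way would be an easy accuracy check.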

Deliverables next week

Hopefully we will get the Roomba next week, and then we will have the opportunity to test our ideas on the actual robot. For the moment, my goals for next week are to:

1. Configure the Roomba with RPi
2. Import SDK/Libraries
3. Implement simple motor control (turning, moving back and forth)

Schedule

On Schedule.