Rama’s Status Update for 04/18 (Week 10)

Progress

I got the speed up to 17 fps by not redrawing the map every frame; tkinter's drawing was a huge bottleneck. The data is now formatted, and I started planning the integration to display point data on the map.
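
As a rough sketch of the point display I am planning (the record fields, units, and map scale below are placeholders, not a final format), the idea is just to convert formatted map coordinates into canvas pixels:

```python
# Hypothetical point record and coordinate conversion; field names, units, and
# the map scale are placeholders rather than our actual format.
MAP_SCALE_PX_PER_M = 100   # assumed canvas pixels per metre
MAP_HEIGHT_PX = 600        # assumed canvas height, used to flip the y axis

def map_to_canvas(x_m, y_m):
    """Convert map coordinates (metres) to tkinter canvas pixel coordinates."""
    return x_m * MAP_SCALE_PX_PER_M, MAP_HEIGHT_PX - y_m * MAP_SCALE_PX_PER_M

point = {"t": 12.4, "kind": "robot", "x": 1.8, "y": 0.9}  # example record
px, py = map_to_canvas(point["x"], point["y"])
print(px, py)
```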

Deliverables next week

I will finish integrating the point information into the map display before moving on to the webserver integration.

Schedule

On schedule.

Rama’s Status Update for 04/11 (Week 9)

Progress

I finished the map visualization. Rendering map updates in real time is still very slow, at around 1.5 fps, even after all of the optimizations I could apply. The bottleneck is tkinter drawing the dots for the user and robot each frame, and I have not found a way to speed that step up beyond what I have already tried. The overlap inaccuracies were also fixed; I mostly needed to tighten the acceptable color range for the robot.
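
For context, the per-frame update has roughly the shape of the simplified sketch below (illustrative only, not our actual visualization code): the two dots are created once and then moved with coords each frame instead of being deleted and recreated.

```python
import tkinter as tk

# Simplified sketch of the frame update for the user/robot dots; positions and
# sizes are placeholders, and the real code draws a map background as well.
root = tk.Tk()
canvas = tk.Canvas(root, width=800, height=600, bg="white")
canvas.pack()

R = 6  # dot radius in pixels
user_dot = canvas.create_oval(0, 0, 2 * R, 2 * R, fill="blue")
robot_dot = canvas.create_oval(0, 0, 2 * R, 2 * R, fill="red")

def update_positions(user_xy, robot_xy):
    """Move the existing canvas items rather than redrawing them."""
    for item, (x, y) in ((user_dot, user_xy), (robot_dot, robot_xy)):
        canvas.coords(item, x - R, y - R, x + R, y + R)

def tick():
    # Placeholder positions; in practice these come from the mapping pipeline.
    update_positions((120, 200), (400, 350))
    root.after(33, tick)  # schedule the next GUI update (~30 fps target)

tick()
root.mainloop()
```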

Deliverables next week

I will finalize the 2D to 3D mapping system and its outputs, and start work on the webserver integration in preparation for the final demo.

Schedule

On schedule.

Rama’s Status Update for 04/04 (Week 8)

Progress

I wrote a script to identify the positions of the user and the robot in the camera view. The user identification uses OpenPose, and it is easy to recognize when the user is not present in the image. The robot identification was inaccurate because of the color of the robot, but after Sean changed the color to red the identification became a lot more accurate. The overlap detection still needs work to recognize when the user and robot are on the same map coordinates, and the code for the 2D to 3D mapping between image location and map location also needs more work. It is difficult to line up positions in the video frames with the robot's movement coordinates, and it might be necessary to modify the data output from the robot encoders.
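
The rough shape of the robot detection and the image-to-map step is sketched below, assuming a red color threshold and a planar-floor homography; the HSV thresholds and reference points are placeholders, not calibrated values from our setup.

```python
import cv2
import numpy as np

# Sketch of red-robot detection plus an image-to-map projection; the HSV
# thresholds and the four reference point pairs are placeholders, not calibrated.
img_pts = np.float32([[210, 470], [1050, 480], [900, 250], [330, 245]])  # pixels
map_pts = np.float32([[0, 0], [4.0, 0], [4.0, 3.0], [0, 3.0]])           # map units
H = cv2.getPerspectiveTransform(img_pts, map_pts)

def find_robot(frame_bgr):
    """Return the robot's map position, or None if no red blob is visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    # OpenCV 4.x return signature (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Project the image point onto the floor plane via the homography.
    mapped = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)[0][0]
    return float(mapped[0]), float(mapped[1])
```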

Deliverables next week

I will fix the overlap inaccuracies and work on a visualization for the demo.

Schedule

On schedule.

Rama’s Status Update for 03/28 (Week 7)

Progress

The Xavier board was delivered this week, and I set it up to work on my home network and to give Jerry remote access to it. We don't have a static IP, but as long as our router stays on, the IP shouldn't change and nothing will break. I started work on using OpenCV to recognize the Roomba, since we already have OpenPose to recognize the user's position. There is still some setup to finish before I can record videos of the Roomba moving around the room and begin the mapping work.
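
If we do want a warning in case the home IP changes, something like the sketch below would cover it; the IP-lookup service and check interval here are assumptions for illustration, not part of our current setup.

```python
import time
import urllib.request

# Hypothetical watchdog: warn if the home network's public IP changes, since
# remote access to the Xavier board depends on it staying the same.
CHECK_URL = "https://api.ipify.org"   # assumed public "what is my IP" service
INTERVAL_S = 600                      # assumed: check every 10 minutes

def current_ip():
    with urllib.request.urlopen(CHECK_URL, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    last = current_ip()
    print("public IP:", last)
    while True:
        time.sleep(INTERVAL_S)
        ip = current_ip()
        if ip != last:
            print("WARNING: public IP changed from", last, "to", ip)
            last = ip
```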

Deliverables next week

I will finish the Roomba recognition and connect it with the user recognition in preparation for mapping.

Schedule

On schedule.

Rama’s Status Update for 03/21 (Week 5, 6)

Progress

We had to make some changes to the project to deal with remote work. Taking Marios' suggestions into consideration, we will be proceeding with the three major sections but will be decoupling the parts and communicating between them by sending prerecorded data around. I spent some time this week trying to get Webots to run on the Xavier board, but was unable to get it working on the unsupported ARM processor. After our decision to stick with our original hardware, I pivoted to working on recognizing the Roomba and the user. There is limited testing that can be done without videos from the new environment, but I am moving forward with large assumptions that can be tailored to fit our actual hardware once it is delivered.
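
To make the decoupling concrete, each component could replay prerecorded data into the next with something like the sketch below; the newline-delimited JSON format with a "t" timestamp field is an assumption for illustration, not a settled format.

```python
import json
import time

# Sketch of replaying prerecorded data so the decoupled components can be
# developed independently; the line format ({"t": seconds, ...}) is assumed.
def replay(path, send):
    """Read newline-delimited JSON records and pass them to `send` at roughly
    the pace they were recorded."""
    start = time.monotonic()
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            delay = record["t"] - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            send(record)

# Example: print each record instead of sending it over the wire.
# replay("robot_run1.jsonl", print)   # hypothetical recording file
```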

Deliverables next week

I want to finish recognizing the Roomba and user and start to create a mockup of how the map will be represented.

Schedule

Readjusted expectations for remote work; waiting on hardware delivery.

Rama’s Status Update for 03/07 (Week 4)

Progress

This week I finished the reliability work on our webserver and started structuring our data as JSON for passing information in the other requests that will be made. This involved straightening out the details of the communication between the three components of the project and the interfaces that each of them will implement and expect. The dashboard has been shelved since it is not as critical as laying the server communication groundwork.
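
To give a feel for the structure (not the final interface), the messages I have in mind look roughly like the examples below; the field names are illustrative only.

```python
import json

# Illustrative message envelopes for traffic between the gesture, server, and
# robot components; the field names here are examples, not the agreed interface.
gesture_msg = {
    "source": "gesture",
    "type": "command",
    "payload": {"action": "move", "direction": "forward"},
}

map_msg = {
    "source": "mapping",
    "type": "position_update",
    "payload": {"robot": {"x": 1.8, "y": 0.9}, "user": {"x": 0.4, "y": 2.1}},
}

# Both sides would serialize/deserialize with the standard json module.
wire = json.dumps(gesture_msg)
print(json.loads(wire)["payload"]["action"])  # -> "move"
```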

Deliverables next week

Continuing work on the APIs.

Schedule

On schedule, moving dashboard to a later date.

Rama’s Status Update for 02/29 (Week 3)

Progress

I worked on the WebSocket communication between the gestures and the robot. There were some issues with the gesture code crashing when the connection dropped, so I wrote a connection wrapper in Python for the gestures and robot to use that attempts to maintain a long-term connection. I'm also changing the format of the messages passed to JSON, to lay the groundwork for passing more structured data around (e.g., mapping information).
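
The wrapper is along the lines of the sketch below, which uses the third-party websockets package and an exponential backoff; the library choice, example URI, and retry numbers are illustrative, not necessarily what the real wrapper uses.

```python
import asyncio
import json
import websockets  # assumed third-party package; the actual wrapper may differ

# Sketch of a wrapper that tries to hold a long-term connection open,
# reconnecting with a short backoff instead of letting the caller crash.
async def listen_forever(uri, handle_message):
    backoff = 1
    while True:
        try:
            async with websockets.connect(uri) as ws:
                backoff = 1  # reset once we are connected again
                async for raw in ws:
                    handle_message(json.loads(raw))
        except (websockets.ConnectionClosed, OSError):
            await asyncio.sleep(backoff)
            backoff = min(backoff * 2, 30)  # cap the retry delay

# Example (hypothetical address):
# asyncio.run(listen_forever("ws://xavier.local:8765", print))
```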

Deliverables next week

1. Testing the WebSocket again to make sure there are no crashes

2. Planning WebSocket API spec

3. Starting work on the dashboard

Schedule

On schedule.

Rama’s Status Update for 02/22 (Week 2)

Progress

We got OpenPose running on the Xavier board with the USB camera, and got a static IP for the board so we can develop on it remotely as needed. I started work on the webserver that the Xavier board will run to bridge the gap between gesture recognition and Roomba control. We also nailed down a lot of the implementation specifics regarding gestures.
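
The bridge server is still early; a minimal sketch of the shape it could take is below, assuming Flask, with the routes and payloads as placeholders rather than a final design.

```python
from flask import Flask, jsonify, request

# Minimal sketch of the bridge server on the Xavier board, assuming Flask;
# the routes and payload shapes are placeholders, not our final design.
app = Flask(__name__)
latest_command = {"action": "stop"}

@app.route("/gesture", methods=["POST"])
def post_gesture():
    """Gesture recognition posts the most recent recognized command here."""
    global latest_command
    latest_command = request.get_json(force=True)
    return jsonify(status="ok")

@app.route("/command", methods=["GET"])
def get_command():
    """The Roomba controller polls this endpoint for the latest command."""
    return jsonify(latest_command)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```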

Rama’s Status Update for 02/15 (Week 1)

Progress

Installed OpenPose on my laptop to test out what kind of information we can expect to receive. It ran very slowly, at around 30 seconds per frame, which was about what we expected from CPU-only execution.

Started the installation process on the Xavier board. After much difficulty trying to operate within the strict confines of NVIDIA's JetPack SDK installer, I ended up creating an Ubuntu VM on my laptop through VirtualBox, and from there we were able to flash the OS and install CUDA, OpenCV, and other dependencies.

Installing OpenPose was very difficult and has not yet been completed. All of the provided installation scripts are outdated, and the process required extensive hacking. Unfortunately, there were immediate runtime errors, so we will likely have to do some research. From a cursory investigation, I suspect our CUDA versions are to blame, so a first step will be a clean reinstall of CUDA.
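
Before the reinstall, a quick sanity check like the sketch below could confirm which toolkit version nvcc reports and whether our OpenCV build actually sees the GPU; this is just a diagnostic sketch, not a fix.

```python
import subprocess

# Quick diagnostic for the suspected CUDA mismatch: print the toolkit version
# nvcc reports and whether this OpenCV build can see the GPU.
try:
    print(subprocess.run(["nvcc", "--version"], capture_output=True,
                         text=True).stdout.strip())
except FileNotFoundError:
    print("nvcc not on PATH")

try:
    import cv2
    print("OpenCV", cv2.__version__,
          "CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
except Exception as e:  # missing cv2 or a build without CUDA support
    print("OpenCV check failed:", e)
```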

Schedule

On schedule with the board, but gesture recognition will take longer than expected.