Jerry’s Status Update for 04/18 (Week 10)

Progress

This week, I kept working on getting the point recognition to work in the larger 3×5-bin room. With the new dataset, the multitask model reached 0.94 validation accuracy on x and 0.95 on y, but the results were not smooth when I ran it on a test video and panned the point around the room.

So, I tried using a regression output rather than a categorical output, so that the model can capture how close bins are to one another instead of treating them as independent classes. This gave a much smoother result, but there are still issues in the panning video.
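
A rough sketch of that change in Keras, assuming the input is a flattened vector of OpenPose keypoints (the input size and layer widths below are placeholders, not the actual architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical input: flattened pose keypoints for one frame.
inputs = layers.Input(shape=(50,))
h = layers.Dense(64, activation="relu")(inputs)
h = layers.Dense(64, activation="relu")(h)

# Regression heads: predict the pointed bin's x and y positions as
# continuous values, so nearby bins get nearby targets instead of
# being treated as unrelated classes.
x_out = layers.Dense(1, name="x")(h)
y_out = layers.Dense(1, name="y")(h)

model = Model(inputs, [x_out, y_out])
model.compile(optimizer="adam", loss={"x": "mse", "y": "mse"})
# At inference time, the continuous outputs are rounded to the nearest bin.
```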

To address the smoothness issues in the test videos, I added data of myself moving my arm around while pointing at a bin. This adds noise to the data and lets the model learn that even while my arm is moving, I am still pointing at the same bin. It worked well when I tried it for one side of the room, and I will continue with this method to cover the rest of the room.

I am currently evaluating the test videos manually; if I have time, I will add labels to them so I can evaluate the results quantitatively.

Deliverables next week

Next week, I will get the point recognition working across the entire room.

Schedule

On schedule.

Sean’s Status Update for 04/18 (Week 10)

Progress

Path-finding

The path-finding algorithm, along with the driving algorithm, is essentially complete. The robot drives to the center of the cell that contains the goal (x, y) position. More testing is needed to catch edge cases, but the robot can now drive to the user/point given its xy-coordinate.
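
Conceptually, the goal position is snapped to its cell and the robot targets that cell's center. A minimal sketch of that conversion, assuming the 5 cm cell size from the mapping phase (the function name is illustrative):

```python
CELL_SIZE_CM = 5.0  # grid resolution used by the 2D map

def goal_cell_center(x_cm, y_cm):
    """Snap a goal (x, y) position to its grid cell and return both the
    cell index and the cell center that the driving code aims for."""
    col = int(x_cm // CELL_SIZE_CM)
    row = int(y_cm // CELL_SIZE_CM)
    center = ((col + 0.5) * CELL_SIZE_CM, (row + 0.5) * CELL_SIZE_CM)
    return (row, col), center
```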

Deliverables next week

Next week, I plan to finish integrating the object/gesture detection work done by Jerry and Rama and test the combined functionality. Hopefully, we will also have time to work on the video presentation and the report.

Schedule

On schedule.

Team Status Update for 04/11 (Week 9)

Progress

Point recognition

The multitask model works best for the small 3×3-bin point environment. We will collect a dataset for the larger 5×5 point environment next week.

Path finding

The path-finding algorithm is in development. It will be a variation of the A* algorithm, using an 8-connectivity grid representation of the room. With a robust path-finding implementation, driving the robot to the user will be fairly easy to complete. The implementation will be complete by next week.

2D to 3D Mapping

The mapping has been optimized as far as possible, and the inaccuracies were cleaned up.

Deliverables next week

Next week, we will continue working on our individual systems in preparation for integration.

Schedule

On schedule.

Rama’s Status Update for 04/11 (Week 9)

Progress

I finished the map visualization. Rendering map updates in real time is still quite slow, around 1.5 fps, even after all of the optimizations I could apply. The bottleneck is tkinter redrawing the dots for the user and the robot each frame, and I have not found anything beyond what I have already tried that speeds that up. The inaccuracies were also fixed; I mostly needed to tighten the acceptable color range for the robot.
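
For reference, the per-frame update is essentially the following (an illustrative sketch, not the exact code): the two canvas items are moved with coords() and the canvas is forced to redraw, and that redraw is where the time goes.

```python
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=400)
canvas.pack()

user_dot = canvas.create_oval(0, 0, 10, 10, fill="blue")
robot_dot = canvas.create_oval(0, 0, 10, 10, fill="red")

def update_positions(user_xy, robot_xy, r=5):
    """Move the existing dots to the latest map positions each frame."""
    ux, uy = user_xy
    rx, ry = robot_xy
    canvas.coords(user_dot, ux - r, uy - r, ux + r, uy + r)
    canvas.coords(robot_dot, rx - r, ry - r, rx + r, ry + r)
    canvas.update_idletasks()  # force the redraw; this is the slow step
```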

Deliverables next week

I will finalize the 2D to 3D mapping system and its outputs, and start work on the webserver integration in preparation for the final demo.

Schedule

On schedule.

Jerry’s Status Update for 04/11 (Week 9)

Progress

This week, I trained a few new models on the new point dataset. Evaluating validation accuracy on separate videos, I found that the SVM method gave around 92% accuracy. I also tried a simple categorical neural net, which gave around 93% validation accuracy. Additionally, I tried a multitask model that predicts the x and y categories separately. This gave the best results, with 97% validation accuracy on the x axis and 94% on the y axis.
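
As a sketch, the multitask model is two classification heads on a shared trunk, one per axis (a 3×3 bin grid is assumed here, and the input dimension and layer sizes are placeholders):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_BINS_X, N_BINS_Y = 3, 3               # 3x3 room grid

inputs = layers.Input(shape=(50,))      # flattened pose keypoints
shared = layers.Dense(64, activation="relu")(inputs)
shared = layers.Dense(64, activation="relu")(shared)

# One softmax head per axis: x and y bins are predicted separately
# but share the same pose features.
x_head = layers.Dense(N_BINS_X, activation="softmax", name="x_bin")(shared)
y_head = layers.Dense(N_BINS_Y, activation="softmax", name="y_bin")(shared)

model = Model(inputs, [x_head, y_head])
model.compile(optimizer="adam",
              loss={"x_bin": "sparse_categorical_crossentropy",
                    "y_bin": "sparse_categorical_crossentropy"},
              metrics=["accuracy"])
```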

Installing tensorflow on the Xavier board took some time, but I have recorded the installation steps for future reference.

I am also collecting the larger point dataset, with a room larger than just 3×3 bins.

Deliverables next week

Next week, I will apply the SVM tuning suggestions Marios gave us and train the models on the larger point dataset. I will also work on a trigger for the dataset.

Schedule

On schedule.

Sean’s Status Update for 04/11 (Week 9)

Progress

Midpoint-Demo

This week, I presented the work I have done with the robot so far. The presentation included a video of the Roomba mapping the room and snapshots of the generated maps. I spent some time recording the video and the data so it is clear how the robot maps the room. Overall, I think the presentation went well.

Path-finding

I also spent some time developing the path-finding algorithm for the robot. It helps that the map is already a discretized grid representation of the space, so we can fairly easily use an 8-point connectivity representation of the Roomba's motion. I plan to implement the A* algorithm (probably weighted A*) to traverse the grid. However, this introduces another concern: the odometry is particularly susceptible to error when the Roomba turns, and with 8-point connectivity we need to measure the turning angles carefully. Hopefully the error does not accumulate to a significant value, but we will have to see.
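
A sketch of what that grid search could look like: weighted A* over the map's cells with 8-point connectivity and a Euclidean heuristic inflated by a weight w > 1 (this illustrates the plan, not the final implementation):

```python
import heapq
import math

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1),
             (0, -1),           (0, 1),
             (1, -1),  (1, 0),  (1, 1)]   # 8-point connectivity

def weighted_astar(grid, start, goal, w=1.5):
    """grid[r][c] is True for free cells; start/goal are (row, col).
    Returns a list of cells from start to goal, or None if unreachable."""
    def h(cell):
        return math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    open_heap = [(w * h(start), 0.0, start)]
    came_from = {start: None}
    g_cost = {start: 0.0}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in NEIGHBORS:
            r, c = cell[0] + dr, cell[1] + dc
            if not (0 <= r < len(grid) and 0 <= c < len(grid[0])) or not grid[r][c]:
                continue
            step = math.hypot(dr, dc)   # 1 for straight moves, sqrt(2) for diagonals
            new_g = g + step
            if new_g < g_cost.get((r, c), float("inf")):
                g_cost[(r, c)] = new_g
                came_from[(r, c)] = cell
                heapq.heappush(open_heap, (new_g + w * h((r, c)), new_g, (r, c)))
    return None
```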

Deliverables next week

Next week, I plan to complete the implementation of path-finding and the consequent driving.

Schedule

On schedule.

Team Status Update for 04/04 (Week 8)

Progress

We made good progress on pointing, mapping, and 2D to 3D mapping for our demo this week.

Gestures

Pointing is coming along. It worked on a toy dataset of 2700 poses, but now we want to build more robust models and collect a larger dataset to do so.

Mapping

The 2D mapping is finally complete. It generates a grid map (and a txt representation of it) that can be used in other functionalities. This will be demonstrated at the interim-demo in the coming week.

Deliverables next week

We will show demo videos during the midpoint demo.

Schedule

On schedule.

Rama’s Status Update for 04/04 (Week 8)

Progress

I wrote a script to identify the positions of the user and the robot in the camera view. The user identification uses OpenPose, and it is easy to tell when the user is not present in the image. The robot identification was inaccurate because of the robot's color, but after Sean changed the color to red, the identification became much more accurate. Detecting when the user and the robot overlap on the same map coordinates needs more work, and so does the code for the 2D to 3D mapping between image location and map location. It is difficult to line up positions in the video frames with the robot's movement coordinates, and it may be necessary to modify the data output from the robot's encoders.
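
For context, the robot identification boils down to a color threshold in the camera frame, roughly along these lines (the HSV bounds and the blob-size cutoff are placeholders; the real values were tuned by hand):

```python
import cv2
import numpy as np

def find_robot(frame_bgr):
    """Return the (x, y) pixel centroid of the red-colored robot,
    or None if no sufficiently large red blob is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    moments = cv2.moments(mask)
    if moments["m00"] < 1e3:        # too few red pixels -> not found
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])
```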

Deliverables next week

I will fix the overlap inaccuracies and work on a visualization for the demo.

Schedule

On schedule.

Jerry’s Status Update for 04/04 (Week 8)

Progress

This week, I experimented with more methods for the point. I wanted to get another webcam, but pandemic surge pricing has pushed webcams from $17 to $100, so I am exploring using my phone or laptop webcam instead.

With pointing to 8 squares in a room, I was able to achieve 98% test accuracy and 98% validation accuracy with an SVM OVR (one-vs-rest) model trained on 2700 poses. However, there were still errors when running it on a test video. I believe this is because I computed the validation metrics on a held-out percentage of frames from the same video used for training, so I collected separate videos for training and validation. I also want the new data to include pointing with both the left and right hands.
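
For reference, the SVM setup is essentially a one-vs-rest SVC over the per-frame pose features, along these lines (the hyperparameters are placeholders and the feature extraction is omitted):

```python
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import accuracy_score

def train_point_svm(X_train, y_train, X_val, y_val):
    """X_*: per-frame pose feature vectors, y_*: bin labels (0-7).
    Training and validation frames now come from separate videos."""
    clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    val_acc = accuracy_score(y_val, clf.predict(X_val))
    return clf, val_acc
```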

The SVMs currently do not capture the fact that bins closer together are more similar, so I hope to try other model architectures, such as a single neural net that predicts the x and y coordinates of the bin separately. I could not have done this with the old dataset because there was not enough data.

I also built a visualization for the point to show in the demo.

Deliverables next week

I will train some new models with the dataset that I collected, and hope to get better results for the point on the test videos.

Schedule

On schedule.

Sean’s Status Update for 04/04 (Week 8)

Progress

2D Mapping

This week, I finished developing the 2D mapping algorithm. It is a combination of edge-following and lawn-mowing, which together cover every position in the room the robot can reach. The robot records its XY-coordinates throughout the phase, and at the end of the exploration a map is generated from that record. The algorithm itself is not overly complicated, but many real-world issues (an unstable serial connection, inaccurate encoder values, different light and floor conditions, etc.) made making it robust quite difficult. As I mentioned before, doing edge-following in both directions is necessary to map every corner of the room; it turns out that doing the lawn-mowing pattern in both the horizontal and vertical directions is also necessary. The full mapping run takes about 5 minutes for a small room.

It was also necessary to carefully iterate through the resulting list of coordinates to generate a useful map. The map is basically a discretized grid representation of the room. Currently, the cell size is set to 5 cm, i.e., one cell represents a 5 cm × 5 cm square area of the room. If the robot traveled through a cell multiple times, it is safe to conclude the cell is clear, especially with a small enough cell size. In addition, if an unexplored cell is within range (the radius of the robot) of two or more already-explored safe cells, it is reasonable to conclude that it is safe as well.
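
In simplified form, the map-generation step looks roughly like this (a sketch of the logic described above, with the robot radius expressed in cells; the names and thresholds are illustrative, not the exact implementation):

```python
from collections import Counter

CELL_CM = 5              # each map cell covers a 5 cm x 5 cm area
ROBOT_RADIUS_CELLS = 3   # placeholder: robot radius expressed in cells

def build_map(xy_log_cm):
    """Turn the logged (x, y) positions from the exploration run into a set
    of safe cells. A cell visited multiple times is safe; an unexplored cell
    within the robot's radius of two or more safe cells is also marked safe."""
    visits = Counter((int(x // CELL_CM), int(y // CELL_CM)) for x, y in xy_log_cm)
    safe = {cell for cell, n in visits.items() if n >= 2}

    # Infer additional safe cells from their already-safe surroundings.
    inferred = set()
    candidates = {(cx + dx, cy + dy)
                  for cx, cy in safe
                  for dx in range(-ROBOT_RADIUS_CELLS, ROBOT_RADIUS_CELLS + 1)
                  for dy in range(-ROBOT_RADIUS_CELLS, ROBOT_RADIUS_CELLS + 1)}
    for cell in candidates - safe:
        near = sum(1 for sx, sy in safe
                   if abs(sx - cell[0]) <= ROBOT_RADIUS_CELLS
                   and abs(sy - cell[1]) <= ROBOT_RADIUS_CELLS)
        if near >= 2:
            inferred.add(cell)
    return safe | inferred
```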

Driving to Home

I also implemented a simple version of the driving-to-home function. It records a "home" location at the beginning of the mapping. This location is different from the actual charging station: to use the Roomba's "dock" functionality, the robot has to start a certain distance straight out from the charging station; otherwise, the IR sensor will fail to locate the station and the robot will begin traveling in the wrong direction. So, at the start, the robot moves straight back for a set period of time and records that position as "home," so it can try to dock from there afterwards. Currently, the robot simply turns toward the "home" location and drives straight until it thinks it is within a tolerable error range of the goal. It does not take the map of the room or any obstacles on the path into account, so I will improve this functionality in the coming week.
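
Conceptually, the current drive-home behavior is just the following (a pseudocode-level sketch; turn_to_heading, drive_straight, and current_pose stand in for the actual Roomba control and odometry calls):

```python
import math

HOME_TOLERANCE_CM = 10.0   # placeholder error tolerance

def drive_home(home_xy, current_pose, turn_to_heading, drive_straight):
    """Point the robot at the stored home location and drive straight until
    odometry says it is within tolerance. Ignores the map and any obstacles,
    as noted above."""
    x, y, _ = current_pose()
    hx, hy = home_xy
    turn_to_heading(math.atan2(hy - y, hx - x))
    while True:
        x, y, _ = current_pose()
        if math.hypot(hx - x, hy - y) <= HOME_TOLERANCE_CM:
            break
        drive_straight()   # issue a short straight-drive command
```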

Deliverables next week

Next week, I will improve the driving-to-home functionality and implement a path-finding algorithm that can be used in the driving-to-user functionality.

Schedule

On schedule.