Yufei’s Status Report for 04/27/2024

What I did the week before

Wrote RRTComms and worked on camera calibration.

What I did this week

I improved camera calibration and solved the inverse mapping from perception data to real-world 3D coordinates using pcl's transformation matrices.
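For reference, here is a minimal sketch of what that mapping boils down to, written with numpy rather than pcl's C++ API for brevity; the 4x4 camera-to-world matrix below is a made-up placeholder, not our actual calibration result.

```python
# Minimal sketch (numpy stand-in for pcl's Eigen-based transforms).
# T_world_cam is a placeholder, not our real calibration output.
import numpy as np

T_world_cam = np.array([
    [0.0, -1.0, 0.0, 0.50],
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.75],
    [0.0,  0.0, 0.0, 1.00],
])

def camera_to_world(points_cam):
    """Map Nx3 camera-frame points into world coordinates."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_world_cam @ homo.T).T[:, :3]

def world_to_camera(points_world):
    """The inverse mapping, via the inverted homogeneous transform."""
    homo = np.hstack([points_world, np.ones((len(points_world), 1))])
    return (np.linalg.inv(T_world_cam) @ homo.T).T[:, :3]
```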

I also worked with Matt to come up with a communication protocol between the laptop and the Ultra96v2 board. We tried raw UART, but it was unreliable and sometimes delivered garbled data, so we designed a straightforward synchronization state machine between the laptop and the board for transferring data asynchronously. The state machine is implemented as a shell script for now, but we will need to port it to ROS.
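To give an idea of the kind of handshake we settled on, here is a rough Python/pyserial sketch of a stop-and-wait scheme; the control bytes, chunk size, retry count, and port name are illustrative assumptions, not the exact protocol in our shell script.

```python
# Hedged sketch of a stop-and-wait handshake over UART using pyserial.
# Framing bytes, chunk size, and retry limit are assumptions for illustration.
import serial  # pyserial

READY, ACK = b"\x05", b"\x06"   # hypothetical control bytes
CHUNK = 64                       # hypothetical payload size per transfer

def send_reliably(port: str, payload: bytes, baud: int = 115200) -> None:
    """Send payload in fixed-size chunks, retransmitting until each is ACKed."""
    with serial.Serial(port, baud, timeout=1.0) as link:
        for off in range(0, len(payload), CHUNK):
            chunk = payload[off:off + CHUNK]
            for _ in range(5):                          # bounded retries
                link.write(READY + len(chunk).to_bytes(1, "big") + chunk)
                if link.read(1) == ACK:                 # board confirms receipt
                    break
            else:
                raise RuntimeError("board never acknowledged the chunk")

# Example: send_reliably("/dev/ttyUSB0", grid_bytes)
```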

The rest of the time was spent on ROS improvements (adding services, messages, etc. instead of hardcoding parameters).
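As a rough illustration of the parameter clean-up, here is a minimal rclpy sketch; the node and parameter names are hypothetical rather than our actual interfaces.

```python
# Minimal rclpy sketch of declaring parameters instead of hardcoding them.
# Node and parameter names are hypothetical.
import rclpy
from rclpy.node import Node

class PerceptionConfigNode(Node):
    def __init__(self):
        super().__init__("perception_config")
        # Defaults can still be overridden from launch files or the CLI.
        self.declare_parameter("voxel_size_m", 0.01)
        self.declare_parameter("serial_port", "/dev/ttyUSB0")
        self.voxel_size = self.get_parameter("voxel_size_m").value
        self.port = self.get_parameter("serial_port").value

def main():
    rclpy.init()
    rclpy.spin(PerceptionConfigNode())

if __name__ == "__main__":
    main()
```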

What I plan to be doing next week

Finish porting the shell script to ROS, work with Chris to port the kinematics module to ROS, help with overall integration, and fine-tune the perception module for our demo scene.

Yufei’s Status Report for 04/20/2024

What I did the week before

ROS integration and perception mapping calibration.

What I did this week

Since we ditched the Kria board, the setup now includes a laptop that needs to talk to the Ultra96 board. Getting the laptop talking to the Ultra96, on top of all the ROS communications, became our top priority this week.

For this purpose, I wrote an RRTComms ROS node that sends 3D-grid perception data to the peripheral via UART. It subscribes to the perception node's output topics, converts the perception data to integer arrays, and sends them out via UART.
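The sketch below gives a rough idea of what such a bridge node looks like in rclpy; the topic name, message type, and serial port are assumptions for illustration, not the exact interfaces of RRTComms.

```python
# Hedged sketch of an RRTComms-style bridge: subscribe to a perception topic,
# flatten the grid into integers, and forward the bytes over UART.
# Topic name, message type, and port are assumptions.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32MultiArray
import serial  # pyserial

class RRTCommsSketch(Node):
    def __init__(self):
        super().__init__("rrt_comms_sketch")
        self.link = serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0)
        self.create_subscription(
            Int32MultiArray, "/perception/voxel_grid", self.on_grid, 10)

    def on_grid(self, msg: Int32MultiArray) -> None:
        # Pack each occupancy value as a single byte before sending.
        payload = bytes(v & 0xFF for v in msg.data)
        self.link.write(payload)

def main():
    rclpy.init()
    rclpy.spin(RRTCommsSketch())

if __name__ == "__main__":
    main()
```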

As for perception, I 3D printed a 1 cm x 1 cm x 1 cm calibration block for calibrating the camera mapping. I have also finished coding the whole perception front end. The only thing left is to tune the perception module's parameters once we have a fully set-up test scene.

What I plan to be doing next week

Since perception and RRT comms are both ROS nodes and are already working and talking to each other, I don't expect much work to integrate them with the whole system. However, one uncertainty is getting the RRT output back from the Ultra96 and sending it to the kinematics module. Since the kinematics module is not yet packaged as a ROS node, I have not coded its interface yet, so that will be the top priority for next week.

One other thing I want to do is improve the RRT algorithm's nearest-neighbor search.
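The likely approach is to replace the linear scan over tree nodes with a k-d tree query. Below is a rough host-side sketch using scipy purely for illustration (the real search lives in our RRT code, so the names here are assumptions).

```python
# Hedged sketch of the nearest-neighbor speedup: swap the O(n) scan for a
# k-d tree query. Reference-level only; scipy usage is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def nearest_linear(nodes: np.ndarray, q: np.ndarray) -> int:
    """O(n) scan: distance from q to every existing tree node."""
    return int(np.argmin(np.linalg.norm(nodes - q, axis=1)))

def nearest_kdtree(nodes: np.ndarray, q: np.ndarray) -> int:
    """Average O(log n) query after an O(n log n) build."""
    _, idx = cKDTree(nodes).query(q)
    return int(idx)
```

In a real RRT loop the tree would only be rebuilt every few insertions (or maintained incrementally), since rebuilding it for every query would erase the benefit.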

Extra reflections

As you’ve designed, implemented, and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

Well, I’ve learned quite a few things.

ROS, for one, since I had never used it before and it is necessary for our robotics project. It was much harder than I initially anticipated to learn to use others' ancient ROS code, port it to ROS 2, and write my own ROS code that works with it. This involved many hours of reading ROS/ROS 2 documentation and searching GitHub, Stack Overflow, and the ROS Dev Google Groups.

There are also some other things I learned and appreciated. For example, I learned about octomap, a robust and commonly used mapping library. I also learned more about FPGAs and HLS by reading the 18-643 slides, setting up a Xilinx Vitis development environment on my local machine, and playing with it.

Along the way, I found that the most helpful learning strategy was actually not to go deep into every rabbit hole I encountered. Rather, getting a high-level idea of what a thing is and building a prototype as soon as possible to see whether it works proved to be a good approach.

Team’s Status Report for 04/06/2024

Status Report

This week was spent on getting all the sub-components of the project to work and setting up the physical components of the project.

We've assembled a test scene with the robot arm mounted in the middle of it. We've gotten software RRT* to work on a 1024 x 1024 x 1024 scene (albeit very slowly), and we've gotten the perception system to map the test scene into a 3D voxel grid.

We are still improving the sub-components this week and next week. Then we will move on to overall integration (assuming we are using the Kria board, everything should be packaged as ROS 2 nodes, so there isn't much physical integration required) and to optimizing RRT* on the FPGA.

Testing and Validation

We have already set up a scene that will be used for all full-system testing. The dimensions of the scene are measured and fixed, and we plan to have two sub-grids: one for item pick-up and the other for item drop-off, with an obstacle in between. Accuracy, correctness, speedup, and power efficiency will be measured on this scene using the same methods detailed in our design-review report.

Yufei’s Status Report for 04/06/2024

What I did the week before

My last week was spent on getting started on developing the mapping software.

What I did this week

I finished implementing the mapping from a 3D point cloud to 3D voxels. Here's an example:

The mapping software is able to map a 2 m x 2 m x 2 m scene to a voxel grid precisely (within +/- 1 cm).
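For illustration, the quantization step at the heart of this mapping looks roughly like the numpy sketch below; the real node builds on octomap, and the scene bounds and resolution here are assumptions matching the numbers above.

```python
# Hedged numpy sketch of the point-cloud-to-voxel quantization step.
# The real module uses octomap; bounds and resolution are assumptions.
import numpy as np

def voxelize(points_m: np.ndarray, res: float = 0.01,
             extent: float = 2.0) -> np.ndarray:
    """Map an Nx3 point cloud (meters, scene frame) to a boolean occupancy grid."""
    dim = int(extent / res)                      # 2 m / 1 cm = 200 cells per axis
    grid = np.zeros((dim, dim, dim), dtype=bool)
    idx = np.floor(points_m / res).astype(int)
    # Keep only points that fall inside the 2 m x 2 m x 2 m scene.
    ok = np.all((idx >= 0) & (idx < dim), axis=1)
    grid[tuple(idx[ok].T)] = True
    return grid
```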

The mapping module is named kinect2_map and will be made publicly available as an open-source ROS 2 node.

I then looked into calibrating the camera. So far everything is manual, especially the part that transforms the mapped grid into the RRT* world coordinates. We need a solution that can do this semi-automatically (since we aim to support various scenes, not just a static one, a human operator will still guide the mapping module).

What I plan to be doing next week

I've already started implementing calibration and expect to get a prototype done within the next week. Then the perception module will be complete, and I can help with porting software RRT to the FPGA and with optimizations.

Testing and Verification

I have been testing the perception module while developing it and plan to keep doing so as I add more features. Verification is done by measuring the dimensions of real-world objects and comparing them to the voxelized mapping. Currently, each voxel is 1 cm x 1 cm x 1 cm, so it is relatively easy to verify manually with a tape measure by comparing the measured dimensions of the real-world object and the mapped object. The same applies to verifying the mapping calibration.
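A rough sketch of that check, assuming a boolean occupancy grid with 1 cm cells (the function names here are illustrative, not our actual test code):

```python
# Hedged sketch of the manual check: compare an object's tape-measured length
# to the extent of its occupied voxels (1 cm cells assumed).
import numpy as np

def mapped_length_m(grid: np.ndarray, axis: int, res: float = 0.01) -> float:
    """Length of the occupied region along one axis of the voxel grid."""
    occupied = np.any(grid, axis=tuple(i for i in range(3) if i != axis))
    idx = np.flatnonzero(occupied)
    return (idx[-1] - idx[0] + 1) * res if idx.size else 0.0

# e.g. abs(mapped_length_m(grid, axis=0) - measured_m) <= 0.01  -> pass
```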

Yufei’s Status Report for 03/30/2024

What I did the week before

My last week was spent on getting kinect2_bridge to work with the camera.

What I did this week

This week was spent on developing the mapping software for mapping from 3d point cloud to 3d voxel grid.

I started by installing octomap and developing the mapping software on the Kria board. However, after spending some time on this, I realized it was not a good idea due to the Kria's limited processing capability: its scalar core is rather slow, which becomes a limiting factor in development, and due to the Kria's graphics driver issues (noted in last week's report) I could not use octovis to easily visualize the mapping and debug it visually.

So I moved to my own Linux workstation for software development. Since I had already spent some time learning octomap, everything went smoothly initially. Then, as I attempted to integrate calibration for scene mapping, I realized it is much harder than we expected. We talked about this in our Wednesday meeting, and I have taken our faculty mentor's advice and am now in the process of using grids to calibrate the mapping.

The mapping software isn't done yet, but I plan to get it running stand-alone (on my workstation at least, or on the Kria if I get the chance) before our interim demo.

There are also some issues with packaging the software as a ROS 2 node and getting it to receive point cloud data over ROS topics. I'll look into those as well.
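For context, the plumbing in question looks roughly like the rclpy sketch below; the topic name is an assumption, and I am assuming the sensor_msgs_py helper available in recent ROS 2 distributions.

```python
# Hedged sketch of a node subscribing to a PointCloud2 topic and unpacking
# xyz points. Topic name and node name are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2

class CloudReceiver(Node):
    def __init__(self):
        super().__init__("cloud_receiver")
        self.create_subscription(
            PointCloud2, "/kinect2/points", self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2) -> None:
        pts = point_cloud2.read_points(
            msg, field_names=("x", "y", "z"), skip_nans=True)
        self.get_logger().info(f"received {len(list(pts))} points")

def main():
    rclpy.init()
    rclpy.spin(CloudReceiver())

if __name__ == "__main__":
    main()
```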

What I plan to be doing next week

As mentioned above, try to finish the initial development of the mapping software and get it working on my workstation. Then, figure out how to package it as a ROS 2 node and start integrating it with RRT* (this will probably take another week or so).

Yufei’s Status Report for 03/23/2024

What I did the week before

My last week was spent on setting up the Kinect One camera.

What I did this week

This week was spent debugging the software bridge issue for the Kinect One camera. I was able to get the software bridge to work. However, I had some trouble getting visualization of the camera feed to work while running rviz on the Kria board.

I spent a couple of days looking into getting visualization to work since we needed it for camera calibration. However, after days of digging around, it appeared that the Kria board has graphics driver issues and hence cannot initialize an X display. Specifically, the armsoc driver is not shipped with the board, and OpenGL is not happy with the board defaulting to swrast. To get around this, I brought the camera home and calibrated it by setting up the same ROS environment on my own Linux workstation. This calibration only needs to be done once, so we shouldn't need OpenGL when we deploy our code to the board in the future.

What I plan to be doing next week

Now that we have the software bridge, the next step is to use octomap to convert the raw camera feed into voxel mappings. I have already started working on the implementation and anticipate getting it done in the next week. After that, I can try to integrate it with our existing RRT implementation.

Yufei’s Status Report for 03/16/2024

What I did the week before

My last week was spent testing the camera.

What I did this week

The week was spent fixing issues with establishing the software bridge to the Kinect One camera, as promised in the last report.

On Monday we got our Kria260 board; thanks, AMD/Xilinx! That also means we'll need to adjust the frameworks we use to versions compatible with the Kria. Hence we decided to use ROS 2, and I went ahead and set up a ROS 2 environment and began building the Kinect libraries in it. However, the iai_kinect2 library is no longer actively maintained and does not work with ROS 2, so I had to find a partially (and incorrectly) ported ROS 2 version of it (`kinect2_ros2`) and spent a couple of nights fixing build issues and bugs in it.

On Thursday, I was able to establish the software bridge between the Kria and the Kinect One camera. Then I started to calibrate the camera, which involves using a visualization framework. However, due to AMD's OpenGL driver issues, I had not been able to get that working as of writing this post.

What I plan to be doing next week

The next step will be getting the visualization framework up and running so that we can use it to calibrate the camera.

I will also be building the octomap-receiving part of the perception module.

Yufei’s Status Report for 03/09/2024

What I did the week before

My last week was spent learning and installing the vision and mapping tools.

What I did this week

About half of this week was spent solidifying our design ideas and writing the design report.

Apart from report writing, I was able to test the Kinect One camera once the adapter arrived. The camera worked fine. To get the camera working, I had to find my old Raspberry Pi and install the libraries on it again, since that's how the perception module should be implemented according to our design. Then I had a bit of a problem getting the software link between the camera and the Raspberry Pi working, although libfreenect seemed to work fine. I spent the rest of my time troubleshooting this issue but have not completely solved it yet.

What I plan to be doing next week

Fix the software bridge issue and finish converting raw sensing data into point cloud representation.

If there’s time after that, I also plan to get octomap working on the point cloud data.

Yufei’s Status Report for 02/24/2024

What I did the week before

My last week was focused on designing the frontend perception system.

What I did this week

The first half of this week was devoted to design review presentations. I presented our design to all of Section C.

In the middle of the week, I contributed to assembling the robot arm.

The Kinect One camera did arrive mid-week, but after it arrived I found out that it requires an adapter to work with USB ports. Although I put in a purchase order as soon as I could, I wasn't able to actually test the camera by the end of this week, so I'll have to do that next week.

Despite the setback mentioned above, I was able to dig deeper into the perception libraries and spent a fair amount of time learning how the conversion from raw depth-sensing data to a point cloud representation in 3D space works.
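At its core, that conversion is a pinhole back-projection of each depth pixel, roughly like the sketch below; the intrinsics (fx, fy, cx, cy) are placeholders rather than the Kinect's calibrated values.

```python
# Hedged sketch of depth-to-point-cloud conversion via pinhole back-projection.
# Intrinsics are placeholders, not calibrated Kinect One values.
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float = 365.0, fy: float = 365.0,
                    cx: float = 256.0, cy: float = 212.0) -> np.ndarray:
    """Convert an HxW depth image (meters) to an Nx3 camera-frame point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx          # back-project pixel column through the lens
    y = (v - cy) * z / fy          # back-project pixel row
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop pixels with no depth reading
```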

What I plan to be doing next week

Testing the Kinect One camera.

Building the initial prototype of the perception system based on the Kinect libraries.

Team’s Status Report for 02/17/2024

We are making solid progress. This past week we focused on the overall design and algorithm implementation.

We have also solidified the design of the perception unit we plan to use.

We’ve placed order requests for the robot arm, an Ultra96v1 FPGA board, and a Kinect One camera.

We had a slight change in the division of labor: Chris will be working on dynamics and Yufei will be handling perception. Matt is currently setting up the HLS environment for the FPGA.

 

ABET responses:

A: There are many ways in which robotics can improve public health, safety, and welfare. Some of the most common applications for robotics include disaster response, search and rescue, medical robotics, and manufacturing. In all of these applications, there are scenarios where responsiveness and low latency are critical. By providing fast and efficient motion planning acceleration, our product solution aims to improve the quality of robotics in such applications. It is important to note that while accelerating motion planning is our goal, we also want to make sure that our motion planning is correct: a robot moving fast is not safe if it causes collisions, especially when there are other people nearby. As such, it is also our responsibility to make sure that our solution is accurate with respect to a correct reference implementation of a motion planner.

B: The product solution we are designing will meet a critical demand in the automation and robotics industry: fast and efficient motion planning. This can be applied to various organizations in the manufacturing, healthcare, and logistics industries. In these industries, where repetitive tasks are common, robots can take on the more physically demanding or dangerous jobs, reducing the risk of injuries and improving overall worker welfare. Hence, having accelerated, accurate motion planning would further benefit the social groups working in, and related to, these industries.

C: Our project is centered around allowing robots to do fast and efficient motion planning. We are specifically focusing on robotic arms with high degrees of freedom, which have become ubiquitous in the manufacturing of goods. Allowing these robots to operate more quickly and with lower energy consumption has massive economic incentives. In theory, this should decrease the cost of production for many goods as well as the price of the goods themselves. Using our work as a backbone, robots could be trained to do even more complex tasks and work in dynamic scenarios. This would result in the use of robots in an even wider array of applications. An increase in robotics and automation will lead to an increase in quality of life and a decrease in the cost of living.

A was written by Matt, B was written by Yufei, and C was written by Chris.