Yufei’s Status Report for 04/06/2024

What I did the week before

My last week was spent on getting started on developing the mapping software.

What I did this week

I finished implementing the mapping from a 3D point cloud to 3D voxels. Here’s an example:

The mapping software is able to map a 2m x 2m x 2m scene to a voxel grid with +/- 1cm precision.

The mapping module is named kinect2_map and will be made publicly available as an open-source ROS 2 node.
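
To give a sense of the core of the voxelization, here is a simplified sketch (not the exact kinect2_map code): the point cloud is inserted into an octomap octree at 1cm resolution, and the occupied leaves form the voxel grid. The points and sensor origin below are made-up values.

#include <octomap/octomap.h>
#include <cstdio>

int main() {
  octomap::OcTree tree(0.01);              // 1cm voxel resolution
  octomap::Pointcloud cloud;
  cloud.push_back(0.20f, 0.30f, 0.40f);    // points would come from the camera feed
  cloud.push_back(0.21f, 0.30f, 0.40f);

  // Ray-cast the cloud from the sensor origin and update occupancy.
  tree.insertPointCloud(cloud, octomap::point3d(0.0f, 0.0f, 0.0f));

  // Occupied leaves are the voxels handed to the planner.
  for (auto it = tree.begin_leafs(); it != tree.end_leafs(); ++it) {
    if (tree.isNodeOccupied(*it)) {
      octomap::point3d c = it.getCoordinate();
      std::printf("occupied voxel at (%.2f, %.2f, %.2f)\n", c.x(), c.y(), c.z());
    }
  }
  return 0;
}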

I then looked into calibrating the camera. So far everything is manual, especially the step that transforms the mapped grid into RRT* world coordinates. We need to come up with a solution that does this at least semi-automatically: since we aim to support a variety of scenes rather than a single static one, a human operator will still need to guide the mapping module.
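
For reference, the transform we currently apply by hand is essentially a rigid rotation-plus-translation from the camera/map frame into the RRT* world frame. A sketch using Eigen, with placeholder calibration values (the real rotation and offset would come from the calibration step):

#include <Eigen/Geometry>
#include <cmath>
#include <cstdio>

// Map a voxel center from the camera/map frame to the RRT* world frame.
Eigen::Vector3d mapToWorld(const Eigen::Vector3d& p_map,
                           const Eigen::Isometry3d& T_world_map) {
  return T_world_map * p_map;
}

int main() {
  // Placeholder calibration values, for illustration only.
  Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
  T.rotate(Eigen::AngleAxisd(M_PI / 2.0, Eigen::Vector3d::UnitZ()));
  T.pretranslate(Eigen::Vector3d(0.5, 0.0, 0.1));

  Eigen::Vector3d p_world = mapToWorld(Eigen::Vector3d(0.2, 0.3, 0.4), T);
  std::printf("world: %.2f %.2f %.2f\n", p_world.x(), p_world.y(), p_world.z());
  return 0;
}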

What I plan to be doing next week

I’ve already started implementing calibration. I expect to have a prototype done within the next week. At that point the perception module will be complete, and I can help with porting the software RRT to the FPGA and with optimizations.

Testing and Verification

I have been testing the perception module while developing it and plan to keep doing so as I add more features. Verification is done by measuring the dimensions of real-world objects and comparing them to the voxelized mapping. Currently, each voxel has a dimension of 1cm x 1cm x 1cm, so it is relatively easy to verify manually with a tape measure by comparing the measured dimensions of a real-world object to those of its mapped counterpart. The same applies to verification of the mapping calibration.

Matt’s Status Report for 04/06/2024

This week I wrapped up implementing RRT for the FPGA. Most of this was simply porting the C code we wrote and adapting it slightly for HLS, but bigger changes were needed wherever dynamic memory allocation was used (memory cannot be dynamically allocated in hardware). The main change had to do with A*, since A* uses a queue to keep track of a frontier of nodes to search next. For the FPGA we swapped out A* for SMA* (Simplified Memory-Bounded A*), which requires a fixed-size queue. To do this, I implemented a circular, bounded generic queue.
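
As a rough illustration, the queue is just a fixed-capacity ring buffer so that all storage is known at compile time. This sketch captures the idea; names and details differ from our actual HLS code.

#include <cstddef>

// Fixed-capacity circular queue: no dynamic allocation, so it maps to
// hardware. CAPACITY must be a compile-time constant.
template <typename T, std::size_t CAPACITY>
class BoundedQueue {
public:
  bool push(const T& v) {
    if (count_ == CAPACITY) return false;   // full; caller must handle
    buf_[tail_] = v;
    tail_ = (tail_ + 1) % CAPACITY;
    ++count_;
    return true;
  }
  bool pop(T& out) {
    if (count_ == 0) return false;          // empty
    out = buf_[head_];
    head_ = (head_ + 1) % CAPACITY;
    --count_;
    return true;
  }
  bool empty() const { return count_ == 0; }
  bool full() const { return count_ == CAPACITY; }
private:
  T buf_[CAPACITY];
  std::size_t head_ = 0, tail_ = 0, count_ = 0;
};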

However, I was not able to get this implementation done before the interim demo, as the documentation for the algorithm is poor. In fact, I am not sure it is a good idea to pursue this algorithm given how small its presence is on the internet (the algorithm comes from one paper; the Wikipedia page has that paper as its only reference).

Regarding the Kria board, I also met with Andrew this past week to debug why I am not able to build with Vitis for the Kria. He gave me some tutorials to try, and I connected him with ECE IT to debug the issue further.

Verification & Validation

During our interim demo, running our software RRT* on a real-sized test case (roughly the size of a cube with sides 4 feet long) took 20 minutes. Our current RRT* is heavily unoptimized, and we expect it to run much faster after some changes.

We don’t want to over-optimize because a) the focus of our project is the FPGA version and b) a slower SW version means more speedup from the FPGA version. We will have to verify the correctness of our paths—i.e. that the path generated on the FPGA is close enough to the one generated by our SW version. This will be done by calculating deltas between both paths and making sure they are marginal. For validation, we will be measuring time elapsed for the full robotic arm pipeline, as well as the time elapsed for running just RRT on each system. We can then do comparisons to calculate speedup.
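
One simple way to compute those deltas (a sketch of the idea only; the exact metric we use may change) is to take, for each waypoint on the FPGA path, the distance to the closest waypoint on the software path and report the worst case:

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y, z; };

static double dist(const Point& a, const Point& b) {
  return std::sqrt((a.x - b.x) * (a.x - b.x) +
                   (a.y - b.y) * (a.y - b.y) +
                   (a.z - b.z) * (a.z - b.z));
}

// Worst-case deviation of path A from path B (one-sided, discrete).
double maxDeviation(const std::vector<Point>& a, const std::vector<Point>& b) {
  double worst = 0.0;
  for (const Point& pa : a) {
    double best = std::numeric_limits<double>::max();
    for (const Point& pb : b) best = std::min(best, dist(pa, pb));
    worst = std::max(worst, best);
  }
  return worst;
}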

Yufei’s Status Report for 03/30/2024

What I did the week before

My last week was spent on getting kinect2_bridge to work with the camera.

What I did this week

This week was spent on developing the mapping software for mapping from a 3D point cloud to a 3D voxel grid.

I started by installing octomap and developing the mapping software on the Kria board. However, after spending some time on this, I realized that it was not a good idea because of the Kria’s limited processing capability (its scalar core is rather slow, which becomes a limiting factor in development) and because of the Kria’s graphics driver issues (noted in last week’s report), which meant I could not use octovis to easily visualize the mapping and debug it visually.

So I moved to my own Linux workstation for software development. Since I had already spent some time learning octomap, everything went smoothly at first. Then, as I attempted to integrate calibration for scene mapping, I realized that it is much harder than we expected. We talked about this in our Wednesday meeting, and following our faculty mentor’s advice I am now in the process of using grids to calibrate the mapping.

The mapping software isn’t done yet, but I plan to get the software running stand-alone (i.e. on my workstation at least, or on Kria if I get the chance) before our interim demo.

There are also some issues with packaging the software as a ROS 2 node and getting it to receive point cloud data over ROS topics. I’ll look into that as well.
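
The node side should look roughly like the following sketch (the topic name "/kinect2/points" is a placeholder, and the callback would hand the cloud to the octomap code):

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

class Kinect2MapNode : public rclcpp::Node {
public:
  Kinect2MapNode() : Node("kinect2_map") {
    sub_ = create_subscription<sensor_msgs::msg::PointCloud2>(
        "/kinect2/points", rclcpp::SensorDataQoS(),
        [this](const sensor_msgs::msg::PointCloud2::SharedPtr msg) {
          // Hand the cloud to the voxelization (octomap) code here.
          RCLCPP_INFO(get_logger(), "received cloud of %u x %u points",
                      msg->width, msg->height);
        });
  }
private:
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr sub_;
};

int main(int argc, char** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<Kinect2MapNode>());
  rclcpp::shutdown();
  return 0;
}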

What I plan to be doing next week

As mentioned above, I will try to finish the initial development of the mapping software and get it to work on my workstation. Then I will figure out how to package it as a ROS 2 node and start integrating it with RRT* (this will probably take another week or so).

Team’s Status Report for 03/30/2024

This week was spent working on individual modules of our system.  Time was spent debugging the Kria setup and porting our RRT implementation to the Ultra96.  Progress has been made with regards to the perception system and we should be able to begin working with real perception data in the next week.  The inverse kinematics module is being calibrated and integrated with the arm controller and the perception simulator.

This coming week we have the interim demo.  This means we will need to set up our test area.  We plan on doing so this weekend and on Monday.  We should be able to demonstrate significant portions of our modules, although we might not be able to finish integration on time.

Chris’ Status Report for 03/30/2024

What I did last week

Watched and read lectures on inverse kinematics

Did initial calculations for inverse kinematics implementation

Wrote a script to communicate with Arduino and send it commands over UART to control the arm

What I did this week

This week I worked on getting inverse kinematics working for the arm.  In the process of doing this I ran into some trouble because the arm we are working with has limited capabilities which makes it incompatible with many existing tools.  In order to handle this I have decided to implement the inverse kinematics module myself.  This will allow us significantly more control over our set up and should increase our ease of use in the long term.

Our robotic arm has 6 servo motors.  One of these is used to open and close the gripper and another is used to rotate the gripper.  These degrees of freedom are not relevant to how the arm is actually moved from position A to position B in the state space.  This leaves 4 servos that are responsible for the movement.  One of these servos is located at the base and causes the arm to rotate around the Z axis.  This rotation around the base means it is trivial to align the arm into the 2D plane of the point we want to reach.  This reduces the dimensionality of the problem from 3D to 2D.  From here we can use the Law of Cosines to find the angles of the final three servo motors.
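
To illustrate the planar step, here is a sketch simplified to two links with made-up link lengths (the real arm has one more joint and physical offsets that this ignores):

#include <cmath>
#include <cstdio>

struct PlanarAngles { double shoulder; double elbow; };

// Solve a two-link planar arm for the target (r, z), where r is the radial
// distance in the arm's plane after the base has been rotated toward the target.
PlanarAngles solvePlanarIK(double r, double z, double L1, double L2) {
  double d2 = r * r + z * z;
  // Law of Cosines gives the elbow angle (assumes the target is reachable).
  double cosElbow = (d2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2);
  double elbow = std::acos(cosElbow);
  // Shoulder angle: direction to the target minus the interior triangle angle.
  double shoulder = std::atan2(z, r) -
                    std::atan2(L2 * std::sin(elbow), L1 + L2 * std::cos(elbow));
  return {shoulder, elbow};
}

int main() {
  double x = 0.4, y = 0.3, z = 0.2;          // example target in 3D
  double base = std::atan2(y, x);            // base rotation aligns the plane
  PlanarAngles a = solvePlanarIK(std::hypot(x, y), z, 0.35, 0.25);
  std::printf("base=%.3f shoulder=%.3f elbow=%.3f (rad)\n", base, a.shoulder, a.elbow);
  return 0;
}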

Implementing this has been my main focus over the past week.  Doing this ourselves will give us significantly more control over the state space and will allow us to easily interface with the RRT accelerator.

What I plan to do next week

My work on this inverse kinematics module and its calibration will be continued in this next week.

SMA* is still in progress

Matt’s Status Report for 03/30/2024

This past week I continued porting RRT to HLS. There are some constructs in our dense RRT that are not portable—namely the use of dynamic memory allocation in our implementation of A* search. Since all buffers need to have known size at compile time for hardware, we cannot have unbounded queues/stacks that one would normally use in conventional search algorithms. Thus, we have been trying to implement SMA*, a memory-bounded version of A*. We are doing this in software, and once that is done I will port it to HLS.

Again this week I have not been able to meet with the AMD rep, and so our plan for the interim demo is to either integrate everything with the Ultra96, or simply demonstrate the stages of our system separately if we are not able to integrate.

If we cannot get integration done, then at least for the accelerator we can get it to take in a file containing perception data, process it, and then have it write a file with a motion plan; this would be a very rough demo, showing off only the individual components. We would then have to manually transfer the motion plan and do inverse kinematics on it separately (on our laptops for example) and then manually send the commands to the robot. For interim demo at least, this should suffice.

Yufei’s Status Report for 03/23/2024

What I did the week before

My last week was spent on setting up the Kinect One camera.

What I did this week

This week was spent on debugging the software bridge issue for the Kinect One camera. I was able to successfully get the software bridge to work. However, I had some trouble getting visualization of the camera feed to work while running rviz on the Kria board.

I spent a couple of days looking into getting visualization to work, since we needed visualization for camera calibration. However, after days of digging around, it appeared to me that the Kria board has some graphics driver issues and hence cannot initialize an X display. Specifically, the armsoc driver is not shipped with the board, and OpenGL is not happy with the board defaulting to swrast. To get around this, I brought the camera home and calibrated it by setting up the same ROS environment on my own Linux workstation. This calibration should only need to be done once, so we shouldn’t need OpenGL when we deploy our code to the board in the future.

What I plan to be doing next week

Now that we have the software bridge, the next step is to use octomap to convert the raw camera feed into voxel mappings. I have already started working on the implementation and anticipate getting it done in the next week. After that, I can try to integrate it with our existing RRT implementation.
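
Integrating with RRT will roughly amount to flattening the occupied octomap leaves into the grid representation our RRT code expects. A sketch under stated assumptions (a 2m cube at 1cm resolution, a dense boolean grid, the map origin at the workspace corner, and leaves kept at the finest depth) could look like:

#include <octomap/octomap.h>
#include <vector>

constexpr int DIM = 200;  // 2m workspace / 1cm voxels

// Flatten the occupied leaves of an OcTree into a dense boolean grid.
// Note: pruned (coarser) leaves would need to be expanded; this sketch
// assumes every occupied leaf is at the finest resolution.
std::vector<bool> octreeToGrid(const octomap::OcTree& tree) {
  std::vector<bool> grid(DIM * DIM * DIM, false);
  for (auto it = tree.begin_leafs(); it != tree.end_leafs(); ++it) {
    if (!tree.isNodeOccupied(*it)) continue;
    int x = static_cast<int>(it.getX() / tree.getResolution());
    int y = static_cast<int>(it.getY() / tree.getResolution());
    int z = static_cast<int>(it.getZ() / tree.getResolution());
    if (x >= 0 && x < DIM && y >= 0 && y < DIM && z >= 0 && z < DIM)
      grid[(x * DIM + y) * DIM + z] = true;
  }
  return grid;
}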

Team’s Status Report for 03/23/2024

This week we had the Ethics Lecture during class on Monday.  On Wednesday we met as a team during class time.  This past week has mainly been spent troubleshooting tasks in parallel. Yufei has been working on the perception system and was able to calibrate the camera sensor.  This means we are able to get the point cloud data we need from the camera and he is now working on passing that data to octomap.  The end goal is to do some preprocessing of the data in octomap and then pass it to our RRT implementation.  Matt has been working on setting up the AMD Kria and has been in communication with an engineer from AMD.  Chris has made progress on the inverse kinematics module and will be testing it starting next week.

Our goal is to complete our tasks within the next week and to begin integration as soon as possible.  This integration task will likely take a significant amount of troubleshooting and calibration.  That being said, we are close to having a baseline implementation of our full system completed and are still on track for the interim demo.

Chris’ Status Report for 03/23/2024

What I did last week

Wrote dense matrix A*

What I did this week

This week I worked on inverse kinematics and arm control.  I was previously unfamiliar with the intricacies of inverse kinematics, so the majority of my time this week was spent getting up to speed.  I found a course at MIT on Robotic Manipulation and watched a significant number of the lectures.  The course notes are also available, which helped me do some initial calculations on how to convert the path we generate into servo motor angles.  These calculations involve initializing the arm into a known state and then creating rotation matrices that correspond to the steps in the path.

I also wrote a Python script which allows us to communicate with the Arduino that controls the robotic arm.  This script, in conjunction with some code I wrote for the Arduino, should allow us to send the servo motor angles we generate during inverse kinematics from my laptop to the arm.

I plan on testing this script and the calculations I have done in class on Monday.  I anticipate there being a good amount of calibration and troubleshooting.

As we begin to work on porting our code to the FPGA, I have started to look at writing a SMA* implementation.  This is a memory bounded version of A* that would be necessary on the FPGA due to its inability to dynamically allocate memory.

What I plan to do next week

Test and calibrate inverse kinematics and arm control.

Implement SMA*

Matt’s Status Report for 03/23/2024

This past week I had planned on meeting with the AMD representative to get help with setting up the AMD Kria board. We were not able to schedule a meeting this week, but we did get one scheduled for the upcoming Monday. This means that no progress was made on setting up the Kria board.

Thus we decided that it would be best to move forward with HLS development by starting to code using the Ultra96v2. The code we write for the Ultra96 should be mostly the same as the code for the Kria (just some parameters changed for the different board size and the increased power of the Kria). I have gotten a vector addition example to run on the Ultra96, and I plan on modifying it so that it can run our application (RRT) instead.
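
For context, the vector addition kernel has roughly this shape in Vitis HLS style (the pragmas and port bundles shown here are illustrative, not necessarily the exact ones in the example project):

// Simple vector addition kernel: AXI master ports for data, AXI-Lite for control.
extern "C" void vadd(const int* a, const int* b, int* out, int n) {
#pragma HLS INTERFACE m_axi port=a offset=slave bundle=gmem0
#pragma HLS INTERFACE m_axi port=b offset=slave bundle=gmem1
#pragma HLS INTERFACE m_axi port=out offset=slave bundle=gmem2
#pragma HLS INTERFACE s_axilite port=n
#pragma HLS INTERFACE s_axilite port=return
  for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
    out[i] = a[i] + b[i];
  }
}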