Chris’ Status Report for 04/27/2024

What I did last week

Worked on kinematics and system infrastructure.

What I did this week

This week I gave our final presentation.  We spent time preparing the slides and polishing the project, and I spent time practicing the presentation and writing speaker notes.

I finished the rewrite of forward kinematics and I am very close to being done with inverse kinematics.  I took some time to read through our HLS RRT kernel, as Matt asked for some clarification during debugging.  I noticed some discrepancies between the kernel and the software implementation, and we will be meeting tomorrow to debug and optimize the kernel.

We are on the final stretch and I am optimistic about completing the project in time for the demo.

What I plan to do next week

Finish system testing.  Prepare poster and demo video.

Team’s Status Report for 04/27/2024

This past week was again spent on integration, as well as on the final presentation slides. We hope to be done with integration in the next two days, leaving us some time to do testing and benchmarking before the poster deadline.

Testing & Validation

The highest-priority tests have to do with correctness (finding a valid and optimal path) and performance (achieving some speedup). Correctness is the simpler of the two to check: we compare paths against our known-good C implementation of RRT + A*. Performance will be benchmarked by timing the RRT kernel and comparing against the software version. We will also measure end-to-end performance to see how our improvements affect the system overall.
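For the kernel timing we have a simple harness in mind, along these lines (rrt_software() and rrt_hardware() are placeholders for the C reference and the host-side kernel call, not our actual API):

```cpp
// Minimal timing-harness sketch; the two rrt_* functions are placeholders.
#include <chrono>
#include <cstdio>

static void rrt_software() { /* run the C reference implementation */ }
static void rrt_hardware() { /* enqueue the HLS kernel and wait */ }

template <typename F>
static double time_ms(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    double sw = time_ms(rrt_software);
    double hw = time_ms(rrt_hardware);
    std::printf("sw %.2f ms, hw %.2f ms, speedup %.2fx\n", sw, hw, sw / hw);
    return 0;
}
```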

Matt’s Status Report for 04/27/2024

This past week I worked on 1) debugging the HLS version of RRT, 2) system integration, mainly the communication between the Ultra96 and the laptop, and 3) the final presentation slides.

I was able to compile using HLS last week, but the tree that RRT generated, while it looked right at first glance, had some errors. Namely, RRT converged towards a configuration of the state space that wasn’t even a valid RRT tree that could be used for a motion plan. After K iterations of RRT in an empty state space (no obstacles), I saw two subtrees being grown from the initial start and end points, but the two subtrees never connected. For some reason, further RRT iterations past K did not change the tree. I suspect one of two possible causes: either I made an error when refactoring the code from being modular with many functions into one main function, or I made an error when altering the code to work in hardware (e.g. replacing uses of the C rand() function with an LFSR for random numbers). I discussed this problem with Chris, and we plan on meeting tomorrow to debug further.
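For context, the rand() replacement is a Fibonacci LFSR along these lines (the taps shown are one standard maximal-length choice for 16 bits; the width and taps in my kernel may differ, which is exactly the kind of thing we will check tomorrow):

```cpp
// 16-bit Fibonacci LFSR sketch; a zero seed or wrong taps shortens the
// period and can starve RRT of fresh random samples.
#include <cstdint>
#include <cstdio>

static uint16_t lfsr = 0xACE1u;  // must be nonzero

static uint16_t lfsr_next() {
    // Taps 16, 14, 13, 11: x^16 + x^14 + x^13 + x^11 + 1 (maximal length).
    uint16_t bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
    lfsr = static_cast<uint16_t>((lfsr >> 1) | (bit << 15));
    return lfsr;
}

int main() {
    for (int i = 0; i < 4; i++)
        std::printf("%04x\n", lfsr_next());
    return 0;
}
```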

While the UART library I found last week worked for transferring small numbers of bytes, I was not able to transfer the RRT data without losing information. I could not find a solution, and thus decided to move away from UART and switch to a more reliable but slower method of transferring the data: sending the file over the network via scp.

Yufei’s Status Report for 04/27/2024

What I did the week before

Wrote RRTComms and worked on camera calibration.

What I did this week

I improved camera calibration and solved the inverse mapping from perception data to real-world 3D coordinates using PCL’s transformation matrices.
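Conceptually it boils down to applying the inverted calibration transform to the cloud; here is a minimal sketch (the transform values are made up for illustration; the real one comes from our calibration):

```cpp
// Sketch: map camera-frame points back to world coordinates with PCL.
#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cmath>

int main() {
    // world -> camera extrinsics from calibration (illustrative values).
    Eigen::Affine3f world_to_cam = Eigen::Affine3f::Identity();
    world_to_cam.translation() << 0.1f, 0.0f, 0.5f;
    world_to_cam.rotate(Eigen::AngleAxisf(float(M_PI) / 2, Eigen::Vector3f::UnitZ()));

    pcl::PointCloud<pcl::PointXYZ> cam_cloud, world_cloud;
    cam_cloud.push_back(pcl::PointXYZ(0.0f, 0.2f, 1.0f));

    // Invert to get camera -> world, then transform the whole cloud.
    pcl::transformPointCloud(cam_cloud, world_cloud, world_to_cam.inverse());
    return 0;
}
```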

I also worked with Matt to come up with a communication protocol between the laptop and the Ultra96v2 board. We had tried UART, but it did not always work and sometimes gave us garbled data. So we designed a straightforward synchronization state machine between the laptop and the board for transferring data asynchronously. The state machine is implemented as a shell script for now, but we will need to port it to ROS.
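As a sketch, the laptop side cycles through a handful of states like the following (the state names are illustrative; the current implementation is a shell script that moves files with scp and polls for flags):

```cpp
// Illustrative laptop-side handshake loop; each case stands in for a
// shell-script step (scp the grid over, wait for a done flag, scp back).
#include <cstdio>

enum class State { Idle, SendGrid, WaitDone, FetchTree, HandOff };

int main() {
    State s = State::Idle;
    bool running = true;
    while (running) {
        switch (s) {
        case State::Idle:      /* new perception frame ready */ s = State::SendGrid;  break;
        case State::SendGrid:  /* scp grid file to the board */ s = State::WaitDone;  break;
        case State::WaitDone:  /* poll for the board's flag  */ s = State::FetchTree; break;
        case State::FetchTree: /* scp the RRT tree back      */ s = State::HandOff;   break;
        case State::HandOff:   /* pass tree to A-star, IK    */ std::printf("cycle done\n");
                               running = false; break;
        }
    }
    return 0;
}
```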

The rest of the time was spent on ROS improvements (adding services, messages, etc. instead of hardcoding parameters).

What I plan to be doing next week

Finish porting the shell script to ROS. Work with Chris to port the kinematics module to ROS. Help with overall integration. And fine-tune the perception module for our demo scene.

Chris’ Status Report for 04/20/2024

What I did last week

Worked on kinematics and set up testing environment.

What I did this week

My work this week was centered around finishing the kinematics module and installing Linux on an old laptop I have.

Due to some roadblocks, we have had to switch which FPGA we are using in our system.  Originally we were using the AMD Kria, which can run ROS and has reasonably powerful embedded scalar cores.  We have transitioned to the AMD Ultra96v2, which has weaker scalar cores and does not have the same ROS functionality.  To make the transition work we therefore need additional scalar cores, and we have chosen to use a laptop for this purpose.

The laptop we are using is an x86 Mac, which we needed to install Linux on in order to run ROS.  We had previously attempted to use KVM, but the hypervisor calls add too much overhead when we are handling perception data.  Installing Linux on Macs can be non-trivial, as Apple includes a T2 security chip that prevents naive attempts at installing another OS.  Luckily, there exist open-source Linux distros that have been tweaked to circumvent this.  After some debugging related to the Wi-Fi and Bluetooth drivers, I was able to install Linux and the packages we need for our system onto the laptop.  This laptop will be the center of our system integration in the following week.

This week I also continued work on kinematics.  This involved a partial rewrite of the forward kinematics module to use a more robust representation of the rotation and translation matrices.  The inverse kinematics problem is solved analytically by computing the angles for a 2-link system and then computing the difference in angles to find the angle of the third link.  This module will be calibrated beginning tomorrow.
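For reference, the 2-link sub-problem is the standard law-of-cosines solution.  A minimal sketch (made-up link lengths, and only one of the two elbow branches):

```cpp
// Minimal 2-link planar inverse kinematics via the law of cosines.
#include <cmath>
#include <cstdio>

// Returns false if (x, y) is out of reach for link lengths L1, L2.
static bool ik_2link(double L1, double L2, double x, double y,
                     double &theta1, double &theta2) {
    double r2 = x * x + y * y;
    double c2 = (r2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2);  // cos(theta2)
    if (c2 < -1.0 || c2 > 1.0) return false;                 // unreachable
    theta2 = std::acos(c2);                                  // one elbow branch
    theta1 = std::atan2(y, x) - std::atan2(L2 * std::sin(theta2),
                                           L1 + L2 * std::cos(theta2));
    return true;
}

int main() {
    double t1, t2;
    if (ik_2link(1.0, 1.0, 1.2, 0.8, t1, t2))
        std::printf("theta1 = %f rad, theta2 = %f rad\n", t1, t2);
    return 0;
}
```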

What I plan to do next week

System integration, calibration, and debug.

Learning Reflection

Before this project I knew some basic principles of robotics but not to the level of depth that I do now.  In order to learn about this I have read many academic papers on robotics in general as well as in more specific areas like motion planning and kinematics.  In order to implement RRT and A* I was able to use my learning from the research papers as well as pseudocode that is publicly available on Wikipedia.

I have also found Wikipedia to be an extremely helpful tool for reviewing the mathematical and geometric equations that are used in kinematics.  Luckily I am currently taking a course on Discrete Differential Geometry so my general knowledge is up to date.  Textbooks and literature from this course have proved to be invaluable general knowledge and significantly reduced the learning curve.

Kinematics is the area in which I have learned the most.  This was necessary as I implemented a custom forward kinematics solver, an analytic inverse kinematics solver, and a graphical simulation environment.  To do so I watched a series of MIT lectures and reviewed the available course notes.  I also found the lecture notes from a robotics class at CMU and this robotics textbook helpful.

Finally I was able to review the lecture notes and other resources from 18-643 in order to get up to speed on FPGAs.

Team’s Status Report for 04/20/2024

This past week was spent continuing the integration effort, this time with the Ultra96 instead of the Kria. After discussing with Professor Kim, we decided to pivot to the Ultra96 after it became clear through discussions with AMD that setup for the Kria would require many more steps. Thus, we decided to stick with what we know better given the short timeline that we have left.

Pivoting to the Ultra96 means we will no longer be doing perception or kinematics on the same device as motion planning/RRT. Perception will be done on a laptop, and perception data will be sent to the Ultra96 over UART. Only RRT will be done on the Ultra96; the tree data will be sent back to the laptop, which will develop a motion plan using A* (our prior troubles with A* will not be an issue here since it will be done in software). The motion plan will then be passed to our kinematics module.

Matt’s Status Report for 04/20/2024

The week prior to last week, I had been working with an AMD representative, Andrew, and an ECE PhD student at CMU, Shashank, to get the Kria board working. Andrew had sent me three tutorials to work through, all of which I did, and I ran into errors when running two of them. After discussing with Shashank and sharing the logs with him, we determined a few things were wrong with how I was using Vitis. First, the scripts I was using to run Vitis 2022.2 had some minor bugs, which Shashank fixed. He also pointed out that I cannot work on Vitis projects on AFS, so I moved my work to the local scratch directory of the ECE machine I was working on. After this I was able to run Vitis and all three of the tutorials without failures.

At this stage, Andrew sent me a tutorial on how to specify and build a platform for the Kria. However, after discussing the pacing of our project with Professor Kim, we decided to fall back to the Ultra96, which had more knowns and a smoother development path than the Kria, which still had a few unknowns, the main one being exactly which modules provided by the Kria we wanted to use. The tutorial Andrew had sent was required to create a platform file that specifies to Vitis what resources are available to the accelerator being built. Doing this would require Vivado, and while I was able to follow the tutorial, I was not confident in my ability to adapt it and develop my own hardware platform suited to our project. I did not originally expect to have to do this step when planning to use the Kria—I had taken a lot of setup steps for granted, all of which were done by Shashank and the 18-643 staff for the course labs. Thus, that week we decided to move away from the Kria, which sadly tossed out a lot of work that my partners did to set up the Kria as an end-to-end robotics system.

This past week I got a working hardware version of RRT built. Due to the complications with A* search that we experienced right before the interim demo, we have separated RRT and A* so that RRT alone will be done on the FPGA. Adapting the C version of RRT into a synthesizable hardware version that the HLS compiler could understand was difficult—I was running into a cryptic and completely undocumented Vitis error, “naming conflict of rtl elements”. Even after thoroughly searching Google I could not find anything, so I resorted to reshaping my HLS code in different ways. Namely, I refactored our RRT code so that it essentially lives entirely in one function (this is better for HLS anyway), and I forced many automatic optimizations off (namely loop pipelining). Eventually I got a working version that compiles and gives the correct results. What’s left for me now is figuring out which optimizations I can turn back on so that our accelerator can be as performant as possible.
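The kernel now has roughly this shape (the names, bounds, and interface pragmas are illustrative of the approach, not our exact code):

```cpp
// Illustrative single-function HLS kernel skeleton. The pipeline-off
// pragma is how I forced automatic loop pipelining off while debugging.
#define MAX_ITERS 100000

extern "C" void rrt_kernel(const int *grid, int *tree) {
#pragma HLS INTERFACE m_axi port = grid bundle = gmem0
#pragma HLS INTERFACE m_axi port = tree bundle = gmem1

rrt_loop:
    for (int i = 0; i < MAX_ITERS; i++) {
#pragma HLS pipeline off
        // sample a random voxel (LFSR), find the nearest node, try to
        // extend the tree toward the sample, check collisions...
    }
}
```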

Lastly, on top of working on the RRT kernel itself, I also worked on defining how the FPGA board/SoC would communicate with the laptop (which we are using in place of the Kria board for perception + kinematics). After trying some libraries out, I settled on this very simple UART library (literally), which seemed to suit our needs. With it we are able to send bytes over UART and read/write them into/from C buffers. More importantly, it is very easy to use, consisting of only a .c and .h file pair. This is important because it means I can simply drop it into my Vitis project and compile the host code with the UART library.
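Under the hood this is the standard POSIX serial-port pattern; here is a generic sketch of that pattern (not the library’s actual API; the device path and baud rate are examples):

```cpp
// Generic POSIX UART sketch: open a tty, set raw mode and baud,
// then read()/write() raw bytes to/from C buffers.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdint>

static int uart_open(const char *dev) {
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);            // raw bytes, no line discipline
    cfsetispeed(&tio, B115200); // example baud rate
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

int main() {
    int fd = uart_open("/dev/ttyUSB0");  // example device path
    if (fd < 0) return 1;
    uint8_t buf[64] = {0};
    write(fd, buf, sizeof(buf));         // send a buffer
    read(fd, buf, sizeof(buf));          // read a reply
    close(fd);
    return 0;
}
```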

Learning Reflection

During this project, I experienced for the first time what it was like to learn how to use a new tool and device (Vitis and the Kria board) by walking through online tutorials as well as through guidance from an expert (Andrew). I had prior experience with Vitis and the Ultra96 through a well-written step-by-step document given by the 643 course staff, but the online tutorials are not written with the same clarity and thoroughness. Thus, I found it useful to ask Andrew many questions, which he was more than happy to answer.

Yufei’s Status Report for 04/20/2024

What I did the week before

ROS integration and perception mapping calibration.

What I did this week

Since we ditched the Kria board, the setup now includes a laptop that needs to talk to the Ultra96 board. So getting the laptop talking to the Ultra96, in addition to all the ROS communications, became our top priority last week.

For this purpose I wrote an RRTComms ROS node that sends 3D grid perception data to the board via UART. It subscribes to the perception node’s output channels, converts the perception data to integer arrays, and sends them out via UART.
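The skeleton of the node looks roughly like this (the topic name, message type, and uart_send() helper are placeholders rather than our exact interfaces):

```cpp
// Sketch of an RRTComms-style node: subscribe to the perception grid
// and forward it to the board. Names here are illustrative.
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/int32_multi_array.hpp>
#include <vector>

static void uart_send(const std::vector<int32_t> &) { /* write to the tty */ }

class RRTComms : public rclcpp::Node {
public:
    RRTComms() : Node("rrt_comms") {
        sub_ = create_subscription<std_msgs::msg::Int32MultiArray>(
            "perception/grid", 10,
            [](std_msgs::msg::Int32MultiArray::SharedPtr msg) {
                // Forward the flattened 3D grid to the board.
                uart_send(msg->data);
            });
    }

private:
    rclcpp::Subscription<std_msgs::msg::Int32MultiArray>::SharedPtr sub_;
};

int main(int argc, char **argv) {
    rclcpp::init(argc, argv);
    rclcpp::spin(std::make_shared<RRTComms>());
    rclcpp::shutdown();
    return 0;
}
```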

As for perception — I 3D printed a 1 cm × 1 cm × 1 cm calibration block for calibrating the camera mapping. I have also finished coding the whole perception front end. The only thing left to do is to tune the parameters of the perception module once we have a fully set-up test scene.

What I plan to be doing next week

Since perception and RRT comms are both ROS nodes that are already working and talking to each other, I don’t expect much work to integrate them with the whole system. However, one uncertainty is getting the RRT output back from the Ultra96 and sending it to the kinematics module. Since the kinematics module is not yet packaged as a ROS node, I still need to write its interface. This will be the top priority for next week.

One other thing I want to do is improve the RRT algorithm’s nearest-neighbor search.
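Assuming the current implementation is the usual brute-force scan (O(n) per RRT iteration), this is the loop I would want to replace with a kd-tree or a spatial hash:

```cpp
// Brute-force nearest neighbor over the tree's nodes.
#include <cfloat>
#include <cstddef>

struct Node3 { float x, y, z; };

static size_t nearest(const Node3 *nodes, size_t n, Node3 q) {
    size_t best = 0;
    float best_d2 = FLT_MAX;
    for (size_t i = 0; i < n; i++) {
        float dx = nodes[i].x - q.x, dy = nodes[i].y - q.y, dz = nodes[i].z - q.z;
        float d2 = dx * dx + dy * dy + dz * dz;  // compare squared distances
        if (d2 < best_d2) { best_d2 = d2; best = i; }
    }
    return best;
}
```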

Extra reflections

As you’ve designed, implemented, and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

Well, I’ve learned quite a few things.

ROS, for one, since I had never used it before and it is necessary for our robotics project. It was much harder than I initially anticipated to learn to use others’ ancient ROS code, port it to ROS 2, and write my own ROS code that works with it. This involved many hours of reading ROS/ROS 2 documentation and searching GitHub/Stack Overflow/ROS Dev Google Groups.

There are also some other things that I learned and I appreciate. For example, I learned about octomap, a robust and commonly used mapping library. I also learned more about FPGAs and HLS by reading 18-643 slides and setting up a Xilinx Vitis development environment on my local machine and playing with it.

During the journey, I found that the most helpful learning strategy is actually not to go deep down the rabbit holes I encounter. Rather, getting a high-level idea of what a thing is and building a prototype as soon as possible to see if it works proved to be a good approach.

Chris’ Status Report for 04/06/2024

What I did last week

Wrote script to control and communicate with robotic arm

Worked on kinematics

What I did this week

This week I implemented forward kinematics for the arm, as well as a simulator that allows us to visualize the arm in 3D space.  This involved calculating rotation matrices and a lot of hand drawings and geometry to ensure correctness.
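The core idea is composing a rotation at each joint with a translation along each link.  Here is a minimal planar sketch using Eigen (made-up link lengths and angles; our actual module works in 3D):

```cpp
// Forward kinematics as a chain of rotate-then-translate transforms.
#include <Eigen/Dense>
#include <iostream>

int main() {
    double L1 = 1.0, L2 = 1.0;   // example link lengths
    double t1 = 0.5, t2 = -0.3;  // example joint angles in radians

    // Compose per-joint transforms in order along the chain.
    Eigen::Affine2d T = Eigen::Affine2d::Identity();
    T.rotate(Eigen::Rotation2Dd(t1)).translate(Eigen::Vector2d(L1, 0.0));
    T.rotate(Eigen::Rotation2Dd(t2)).translate(Eigen::Vector2d(L2, 0.0));

    Eigen::Vector2d end = T * Eigen::Vector2d::Zero();  // end-effector position
    std::cout << "end effector: " << end.transpose() << std::endl;
    return 0;
}
```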

I went to Home Depot and bought two 1/4″ thick MDF boards.  These boards are about 2′ x 4′, and we used some wood clamps I had to hold them together, creating a 4′ x 4′ state space.  We mounted the robotic arm in the middle and are done with our baseline setup.

Using this setup, I was able to correlate the arm in the simulator with the actual arm.  This allowed me to verify that the axes, angles, and measurements in the simulator align with what we have in reality.

Next steps include transitioning from forward kinematics to inverse kinematics.  I believe this transition should be relatively smooth because I have already built the simulator and ensured its correctness, and I have already done much of the inverse kinematics math on paper.

What I plan to do next week

Complete inverse kinematics.

Matt has taken over SMA* but has run into some bottlenecks.  When I am done with kinematics I will transition to code optimization.

Testing and Validation

The majority of the work I have done so far has been with regards to the software implementation of our algorithms, the robotic arm control, and the kinematics.

The RRT and A* algorithms are implemented in C, but I have created Python bindings for them that make them significantly easier to run and test.  Because of this we have been able to run these algorithms on different state spaces and check their ability to converge and find a correct path.  Now that we have created our actual setup, we have decided on a state space that is 128 * 128 * 128 voxels (~2 million total).  Further testing has shown that we must optimize the software implementation more if we want to converge on paths in a reasonable time.  The number of iterations needed for RRT to converge increases as we increase the size of the state space.  With a 64 * 64 * 64 voxel state space we found paths in 10,000 iterations, but the 128 * 128 * 128 state space requires 100,000 (note that each iteration takes two steps and that not every iteration succeeds in adding a node to the RRT graph).  The first case completes in roughly 4 minutes, while the second takes about 20 minutes.  We must significantly increase the speed of a single iteration.

Testing of the arm controller and the kinematics has been done via a graphical simulator I developed.  This simulator allows us to compare the supposed angles of the arm in the state space with their actual positions.  This testing was completed before the interim demo and was done to ensure the correctness of our implementation.  Things appear to be working for now, and there are minimal concerns about the speed of these operations.

Team’s Status Report for 04/06/2024

Status Report

This week was spent on getting all the sub-components of the project to work and setting up the physical components of the project.

We’ve assembled a test scene with the robot arm mounted in the middle of it. We’ve gotten software RRT* to work on a 1024 x 1024 x 1024 scene (albeit very slowly). We’ve gotten the perception system to map the test scene into a 3D voxel grid.

We are still improving the sub-components this week and next week. Then we will move on to overall integration (assuming we are using the Kria board, everything should be packaged as ROS 2 nodes, so there isn’t much physical integration required) and to optimizing RRT* on the FPGA.

Testing and Validation

We have already set up a scene that will be used for all full-system testing. We have the dimensions of the scene measured and fixed, and we plan to have two sub-grids: one for item pick-up and the other for item drop-off. There will be an obstacle between the sub-grids. Accuracy, correctness, speedup, and power efficiency will be measured on this scene using the same methods detailed in our design-review report.