Yufei’s Status Report for 03/16/2024

What I did the week before

My last week was spent testing the camera.

What I did this week

This week was spent fixing issues with establishing the software bridge to the Kinect One camera, as promised in the last report.

On Monday we got our Kria KR260 board, thanks AMD/Xilinx! That also means we’ll need to adjust the frameworks we use to versions compatible with the Kria. Hence we decided to use ROS 2, and I went ahead to set up a ROS 2 environment and began building the Kinect libraries in it. However, the iai_kinect2 library is no longer actively maintained and does not work with ROS 2. So I had to find a partially (and in places incorrectly) ported ROS 2 version of it (`kinect2_ros2`) and spent a couple of nights fixing build issues and bugs in it.

On Thursday, I was able to establish the software bridge between the Kria and the Kinect One camera. Then I started to calibrate the camera, which involves using a visualization framework. However, due to AMD’s OpenGL driver issues, I had not been able to get that framework working as of writing this post.
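
As a quick sanity check that the bridge is actually publishing data (independent of the broken visualization), a small ROS 2 subscriber can just print the incoming depth frames. This is only a sketch: the topic name `/kinect2/sd/image_depth_rect` is what I expect `kinect2_ros2` to publish on our setup, but it may differ, so check `ros2 topic list` first.

```python
# Minimal ROS 2 subscriber to confirm the Kinect bridge is publishing depth frames.
# Assumption: the bridge publishes on /kinect2/sd/image_depth_rect; adjust the
# topic to whatever `ros2 topic list` shows on your setup.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class DepthEcho(Node):
    def __init__(self):
        super().__init__("depth_echo")
        self.create_subscription(Image, "/kinect2/sd/image_depth_rect", self.on_depth, 10)

    def on_depth(self, msg: Image):
        # Just log the frame size so we know data is flowing.
        self.get_logger().info(f"depth frame: {msg.width}x{msg.height}, encoding={msg.encoding}")


def main():
    rclpy.init()
    rclpy.spin(DepthEcho())


if __name__ == "__main__":
    main()
```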

What I plan to be doing next week

The next step will be getting the visualization framework up and running so that we can use it to calibrate the camera.

I will also be building the octomap-receiving part of the perception module.

Chris’ Status Report for 03/16/2024

What I did last week

Wrote dense matrix RRT implementation.

What I did this week

This week I wrote the dense matrix A* implementation and its Python bindings.  This completes our baseline implementation of motion planning.  We are currently capable of generating simulated perception data, running RRT to search the state space for viable paths, and then running A* to find and return the shortest path.
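
To show the shape of the computation, here is a minimal Python sketch of A* over a dense 3D occupancy array (6-connected moves, Manhattan heuristic). This is not our actual C++ backend, just an illustration of the idea.

```python
# Minimal A* over a dense 3D occupancy grid (True = occupied voxel).
# A Python sketch of the idea, not our actual C++ backend.
import heapq
import numpy as np

def astar(grid: np.ndarray, start, goal):
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    h = lambda p: sum(abs(a - b) for a, b in zip(p, goal))  # Manhattan heuristic
    frontier = [(h(start), start)]
    g, came_from = {start: 0}, {}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:  # reconstruct the shortest path found
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for d in moves:
            nxt = tuple(c + dc for c, dc in zip(cur, d))
            if not all(0 <= c < s for c, s in zip(nxt, grid.shape)) or grid[nxt]:
                continue  # out of bounds or blocked
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt], came_from[nxt] = ng, cur
                heapq.heappush(frontier, (ng + h(nxt), nxt))
    return None  # goal unreachable

grid = np.zeros((10, 10, 10), dtype=bool)      # empty 10x10x10 world
print(len(astar(grid, (0, 0, 0), (9, 9, 9))))  # 28 voxels including both endpoints
```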

The next step I will be working on is the inverse kinematics module that takes the path returned by A* and generates control signals for the robotic arm.  I have already found an open-source implementation and have begun to run some tests.  I do not yet have a complete understanding of inverse kinematics, so I have found some resources to help strengthen my knowledge.  The inverse kinematics module is currently under development and I am targeting its completion by the end of next week.
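
Our arm has six joints and we will lean on an existing open-source solver for the real module, but as a way of building intuition, here is the closed-form solution for a toy 2-link planar arm. It only illustrates the basic idea of mapping a target position into joint angles.

```python
# Toy inverse kinematics for a 2-link planar arm (one elbow solution).
# Our real arm has 6 DOF and will use an existing open-source solver; this is
# only an illustration of turning a target (x, y) into joint angles.
import math

def ik_2link(x, y, l1, l2):
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Reaching straight out along x with both links extended: both angles should be ~0.
print(ik_2link(2.0, 0.0, 1.0, 1.0))
```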

What I plan to do next week

Work on inverse kinematics module

Set up serial communication to send commands to the arm

Team’s Status Report for 03/16/2024

Over spring break we were able to finish most of our baseline motion planning module.  Our current system is capable of accepting perception data and generating motion plans for the arm to follow.  Our focus now shifts to optimizing this implementation and porting it onto the FPGA.

During our lecture times, we focused on integration: getting our environments set up and able to communicate with each other. With perception and motion planning substantially underway, all that’s left is inverse kinematics and system integration.  These will be our main focus as we approach the interim demo.  Our goal is to have the full system functioning in some capacity by that point.

Matt’s Status Report for 03/16/2024

This past week I spent most of my time trying to get a Vitis HLS environment set up for our new AMD Kria KR260 board. While I have a working environment set up for the Ultra96v2 (our backup FPGA board), I was not able to get it working on the Kria. We want to use the Kria because it is more powerful, and it was gifted to us by AMD for robotics-related experiments. The Ultra96v2 was already set up by the 18-643 staff for use in the labs, but since the Kria is a new board, I have to configure the environment myself, a process I am not familiar with. To get help, I got in touch with someone from AMD who will guide me through the Kria setup. We plan on meeting sometime early next week.

Our RRT implementation is done for the most part, so once the HLS environment is set up, we should be able to start writing HLS code to build the accelerated version of RRT. Next week I plan on doing HLS development in preparation for the interim demo. This will probably be my largest task for our project this semester. I am aiming for at least a >1x speedup with no/few optimizations (i.e. hopefully not a slowdown).

Chris’ Status Report for 03/09/2024

What I did last week

Built and tested the robotic arm

Worked on RRT octree implementation

What I did this week

This week I finished the RRT implementation on the octree.  After finishing this implementation and further discussion with my teammates, we decided that a dense matrix implementation would be ideal for the FPGA.  For this reason I developed functions to compress the octree into a dense matrix, and then implemented RRT on the dense matrix.  These backend implementations are abstracted away in the perception simulator.  Swapping between the octree and dense matrix backends is as simple as toggling the “COMPRESSED” macro in the header file.
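
For a rough idea of what the octree-to-dense conversion does, here is a sketch that flattens a list of octree leaves into a 3D occupancy array. The leaf representation here (corner index, cube size, occupancy flag) is a simplification; our actual C++ octree and the “COMPRESSED” backend differ in the details.

```python
# Rough sketch of flattening octree leaves into a dense 3D occupancy array.
# Assumes a minimal leaf representation (corner index, cube size, occupancy);
# our actual C++ data structures differ in the details.
import numpy as np

def octree_to_dense(leaves, shape):
    grid = np.zeros(shape, dtype=bool)
    for (x, y, z), size, occupied in leaves:
        if occupied:
            # A leaf covers a size^3 block of voxels starting at its corner.
            grid[x:x + size, y:y + size, z:z + size] = True
    return grid

leaves = [((0, 0, 0), 4, True), ((4, 4, 4), 2, False)]
dense = octree_to_dense(leaves, (8, 8, 8))
print(dense.sum())  # 64 occupied voxels from the single occupied leaf
```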

While the current RRT implementations are operational, they are not optimized.  My current focus has been on getting a baseline implementation working.  I have added some notes to our code base and have developed some ideas for future optimizations.  I believe that the largest speedup will be found in optimizing the search for the nearest neighbor to a specific voxel.  This computation is highly parallelizable, and there are heuristics on the order in which the state space should be traversed.
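
To make the nearest-neighbor step concrete, the baseline amounts to a brute-force scan like the sketch below: every distance can be computed independently, which is exactly why this step is a good candidate for parallel hardware.

```python
# Brute-force nearest-neighbor query used during RRT tree expansion.
# Every distance is independent of the others, which is what makes this step
# a good candidate for parallelization; the baseline just scans linearly.
import numpy as np

def nearest_neighbor(tree_nodes: np.ndarray, sample: np.ndarray) -> int:
    # tree_nodes: (N, 3) array of voxel coordinates already in the RRT tree.
    d2 = np.sum((tree_nodes - sample) ** 2, axis=1)  # all squared distances at once
    return int(np.argmin(d2))

nodes = np.array([[0, 0, 0], [5, 5, 5], [9, 0, 3]])
print(nearest_neighbor(nodes, np.array([6, 6, 6])))  # -> 1
```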

I have begun implementing A* for the dense matrix backend and am targeting completion by the end of the weekend.  When I finish A*, the baseline implementation of our motion planning module will be complete.  From there I will transition to working on the inverse kinematics and arm control modules.

What I plan to do next week

Finish implementing A* on dense matrix

Continue work on inverse kinematics and arm control

Yufei’s Status Report for 03/09/2024

What I did the week before

My last week was spent learning and installing the vision and mapping tools.

What I did this week

About half of this week was spent on solidifying our design ideas and writing the design report.

Apart from report writing, I was able to test the Kinect One camera since the adapter arrived. The camera worked fine. To get the camera working I had to find my old Raspberry Pi and install the libraries on it again, since that’s how the perception module should be implemented according to our design. Then I had a bit of a problem getting the software link between the camera and the Raspberry Pi working, although libfreenect itself seemed to work fine. I spent the rest of my time troubleshooting this issue but have not completely solved it yet.

What I plan to be doing next week

Fix the software bridge issue and finish converting the raw sensing data into a point cloud representation.

If there’s time after that, I also plan to get octomap working on the point cloud data.
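
For reference, the end state we want from octomap is essentially an occupancy map built from the point cloud. A heavily simplified stand-in for that idea (a fixed-resolution voxel grid rather than octomap’s actual octree, and not using the octomap API at all) looks like this:

```python
# Heavily simplified stand-in for what octomap will give us: mark which voxels
# of a fixed-resolution grid contain at least one point from the point cloud.
# The real octomap library additionally handles ray casting, free space, and
# probabilistic updates, none of which are modeled here.
import numpy as np

def voxelize(points: np.ndarray, resolution: float, origin: np.ndarray, shape):
    grid = np.zeros(shape, dtype=bool)
    idx = np.floor((points - origin) / resolution).astype(int)
    # Keep only points that land inside the grid bounds.
    in_bounds = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    ix, iy, iz = idx[in_bounds].T
    grid[ix, iy, iz] = True
    return grid

pts = np.array([[0.12, 0.40, 0.33], [0.90, 0.10, 0.05]])
print(voxelize(pts, 0.05, np.zeros(3), (20, 20, 20)).sum())  # -> 2
```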

Matt’s Status Report for 03/02/2024

On Sunday, Monday, and Tuesday, I spent some time trying to get our FPGA working with the HLS development environment. At first I was working with the Ultra96v1, and I soon realized that much of the configuration for developing on our FPGAs (Ultra96v2) in 18-643 was already set up by the TAs. Thus, for all other FPGAs, I would have to figure out how to set up the board for development myself. To temporarily remedy this, I asked Professor Hoe (who taught 18-643) for permission to borrow an Ultra96v2 kit from his class. He agreed, and so we will be using this board as our backup. We are still waiting for the Kria KR260 from AMD. I hope we will be given guidance on how to set up the board (and other boards in general), because I’ve been struggling to follow the online guides.

Aside from environment setup, I also worked on our RRT implementation. While we are almost done with a sparse version built on an octree, I thought of a problem with using this sparse version, and thus decided that we should have a dense version, represented simply using a 3D array/matrix. We will be implementing this in the coming week.

Team’s Status Report for 03/02/2024

On Monday and Wednesday, we were wrapping up our simulation environment and finishing the baseline implementation of RRT. Finishing this up led us to realize that we may also need a dense-matrix implementation of our RRT, which is currently in a sparser form using octrees. On Thursday and Friday, we got together to write the design report.

We won’t be meeting over spring break until the weekend after. We will work on the dense version of RRT and start implementing the accelerator architecture using HLS for the FPGA.


ABET responses:

A: The product solution we are designing, which uses FPGAs to improve motion planning in robotics, addresses a global need for advanced robotics capabilities. In a rapidly evolving technological landscape, robotics is becoming increasingly integrated with various global challenges and opportunities. By utilizing FPGAs to accelerate motion planning algorithms, our solution not only contributes to the advancement of robotics technology but also addresses broader global concerns such as automation in industries, disaster response, healthcare, and environmental monitoring. Improved motion planning efficiency can lead to safer and more efficient automation processes in manufacturing, potentially reducing labor-intensive tasks and enhancing productivity on a global scale. Moreover, in scenarios such as disaster response and healthcare, where time-sensitive decision-making is important, faster and more accurate motion planning enabled by our solution can aid in faster and more effective response efforts, potentially saving lives. Additionally, in environmental monitoring applications such as autonomous exploration and data collection in remote regions, the improved capabilities of robotic systems can contribute to better understanding and management of global environmental challenges. Thus, by advancing robotics capabilities, our product solution can benefit the world and society at large as robotics becomes a more ubiquitous technology.

B: Does not apply. We are designing an FPGA-accelerated motion planning product. Our design focuses on the technical and functional aspects of the product—namely, its speed, efficiency, power consumption, and cost-effectiveness. These features are universally appreciated across various cultural backgrounds due to their direct impact on performance and operational cost savings. The cultural considerations, such as beliefs, moral values, traditions, language, and laws, while critically important in many contexts, do not directly apply to the core functionality and design principles of this specific technology solution.

C:  The main goals of FPGA-AMP are to accelerate motion planning and reduce its energy and power costs.  Motion planning is one of the most compute-intensive steps of the robotics pipeline and therefore consumes a significant portion of the total energy and power of the robot.  As the use of robots in industrial and domestic settings becomes more commonplace, the reduction in their energy and power consumption is of paramount importance due to environmental factors.  A majority of the world’s electricity is produced from non-renewable resources.  This means there is a finite amount of these natural resources and it would be wise to limit our consumption of them.  The burning of fossil fuels for electricity production emits CO2 into the atmosphere.  Significant bodies of research have shown that this excess CO2 results in climate change, which can in turn have a detrimental effect on living organisms.  In conclusion, the targeted energy and power reduction of robots that will result from FPGA-AMP will have a positive impact on the environment.

A was written by Matt, B was written by Yufei, and C was written by Chris.

Chris’ Status Report for 02/24/2024

What I did last week

Last week I focused on designing the backend inverse-kinematics system.

What I did this week

The robotic arm came this week and we met to build it.

The arm has 6 servo motors and is controlled by an Arduino through a motor driver shield.  I did some reading on the API that controls the arm and was able to run a test script that moved it around.  The plan is to have paths generated by the RRT module converted into control signals by the inverse kinematics module.  These control signals will then be communicated to the Arduino over UART.
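
A rough sketch of what the host-side UART send might look like, using pyserial. The port name, baud rate, and the plain-text comma-separated angle format are all placeholders, since we have not settled on the actual command protocol for the arm yet.

```python
# Rough sketch of sending one set of joint angles to the Arduino over UART.
# The port, baud rate, and the plain-text "a1,a2,...\n" command format are
# placeholders; the real protocol for the arm is still to be decided.
import serial  # pyserial

def send_joint_angles(port: str, angles_deg):
    with serial.Serial(port, baudrate=115200, timeout=1) as link:
        cmd = ",".join(f"{a:.1f}" for a in angles_deg) + "\n"
        link.write(cmd.encode("ascii"))
        return link.readline()  # optional acknowledgement from the Arduino

# Example (hypothetical port name):
# send_joint_angles("/dev/ttyUSB0", [90.0, 45.0, 30.0, 0.0, 60.0, 90.0])
```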

Now that we have access to the arm, I can begin to implement and test the inverse-kinematics system and arm control.  We had some discussions on how to set up our test scene and are planning on building a shelf.  This can be used to test our system’s ability to reach into confined spaces.  We believe this will be a good initial test of our system in real-world scenarios.

I continued work on the perception simulator and the baseline RRT implementation.

What I plan to do next week

Finish the initial baseline RRT implementation.

Finish the initial perception simulator.

Continue work on inverse kinematics and arm control.


Yufei’s Status Report for 02/24/2024

What I did the week before

My last week was focused on designing the frontend perception system.

What I did this week

The first half of this week was design review presentations. I presented our design to all of section C.

In the middle of the week, I contributed to assembling the robot arm.

The Kinect One camera did arrive mid-week, but after it arrived I found out that it requires an adapter to work with standard USB ports. Although I put in a purchase order as soon as I could, I wasn’t able to actually test the camera by the end of this week. So I’ll have to do that next week.

Despite the sad fact mentioned above, I was able to dig deeper into the perception libraries and spent a fair amount of time learning how the conversion from the raw depth sensing data to a point cloud representation in 3D space works; the sketch below summarizes my current understanding of it.
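
In short, each depth pixel is back-projected through the pinhole camera model using the depth camera intrinsics. The fx/fy/cx/cy values below are placeholders rather than the Kinect’s calibrated ones.

```python
# Back-project a depth image into a 3D point cloud with the pinhole model.
# The intrinsics below are placeholders; the real values come from the
# Kinect's depth camera calibration.
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray, fx, fy, cx, cy):
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

depth = np.full((424, 512), 1.5)  # fake 1.5 m everywhere, at the Kinect One's 512x424 depth resolution
cloud = depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
print(cloud.shape)  # (217088, 3)
```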

What I plan to be doing next week

Testing the Kinect One camera.

Building the initial prototype of the perception system based on the Kinect libraries.