Matt’s Status Report for 02/24/2024

This past week, I contributed to the design slides. I focused on the design requirements and the implementation details, specifically the hardware optimizations that we can implement on the FPGA for motion planning. Creating these slides also led me to think about the format our collision data should take when we transfer it to the FPGA.

This week I also worked on getting the Vitis HLS environment set up by running an example project on the Ultra96 FPGA. The example project is a kernel that sums two vectors of 1024 elements each. I built the hardware target and loaded the disk image onto the micro SD card. The FPGA did not boot, and I suspect the board itself is the problem: I had set up the HLS environment for the Ultra96-v2, which we used in 18-643, but this board is an Ultra96-v1. Because of this, I'll have to do some digging to figure out what needs to change in the development environment so that I can build for the Ultra96-v1.
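
For context, the kernel in that example is tiny. Below is a minimal sketch of what a vector-add kernel in the Vitis HLS style looks like; the function name and pragma choices are my own illustration, not the exact example source.

    #define N 1024

    // Minimal vector-add kernel sketch: AXI master ports let the kernel
    // read and write DDR, and the loop is pipelined one element per cycle.
    void vadd(const int *a, const int *b, int *out) {
    #pragma HLS INTERFACE m_axi port=a   offset=slave bundle=gmem
    #pragma HLS INTERFACE m_axi port=b   offset=slave bundle=gmem
    #pragma HLS INTERFACE m_axi port=out offset=slave bundle=gmem
    #pragma HLS INTERFACE s_axilite port=return

        for (int i = 0; i < N; i++) {
    #pragma HLS PIPELINE II=1
            out[i] = a[i] + b[i];
        }
    }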

This upcoming week, I will try to wrap up setting up the HLS environment. I will also try to finish the octree implementation, so that it has a complete interface that will be useful later when we are actually using the FPGA for acceleration. I will be working on the design report as well with the rest of the team.

Team’s Status Report for 02/24/2024

This past week we worked on our design presentation together. We received many of the parts we will be using, such as the Xbox Kinect, the Ultra96-v1 (temporary FPGA), and the robotic arm. On Thursday, we met to assemble and test the robot arm. We assembled it together, and Chris was able to power on the arm and issue basic commands to it. During the meeting, Matt was also setting up the Vitis HLS environment.

Development on the simulation environment has paused, but we plan on meeting tomorrow to assess where we are and where we need to make progress. Because of this pause, we are slightly behind schedule.

This upcoming week, we plan on making lots of progress on the simulation environment, hopefully having our baseline implementation of RRT running. We will also work on the design report together, which has already been shared with each of us as an Overleaf document.

Team’s Status Report for 02/17/2024

We are making solid progress. This past week we focused on the overall design and algorithm implementation.

We have also solidified the design of the perception unit we plan to use.

We’ve placed order requests for the robot arm, an Ultra96v1 FPGA board, and a Kinect One camera.

We made a small change in the division of labor: Chris will be working on dynamics and Yufei will be handling perception. Matt is currently setting up the HLS environment for the FPGA.


ABET responses:

A: There are many ways in which robotics can improve public health, safety, and welfare. Some of the most common applications for robotics include disaster response, search and rescue, medical robotics, and manufacturing. In all of these applications, there are scenarios where responsiveness and low latency are critical. By providing fast and efficient motion planning acceleration, our product solution aims to improve the quality of robotics in such applications. It is important to note that while accelerating motion planning is our goal, we also want to make sure that our motion planning is correct: a robot moving fast is not safe if it frequently causes collisions, especially when other people are nearby. As such, it is also our responsibility to make sure that our solution is accurate with respect to a correct reference implementation of a motion planner.

B: The product solution we are designing will meet a critical demand in the automation and robotics industry: fast and efficient motion planning. This can be applied to various organizations in the manufacturing, healthcare, and logistics industries. In these industries, where repetitive tasks are common, robots can take on the more physically demanding or dangerous jobs, reducing the risk of injuries and improving overall worker welfare. Hence, accelerated and accurate motion planning would further benefit the social groups working in, and related to, these industries.

C: Our project is centered around allowing robots to do fast and efficient motion planning. We are specifically focusing on robotic arms with high degrees of freedom, which have become ubiquitous in the manufacturing of goods. Allowing these robots to operate more quickly and with lower energy consumption has massive economic incentives. In theory, this should decrease the cost of production for many goods as well as the price of the goods themselves. Using our work as a backbone, robots could be trained to do even more complex tasks and work in dynamic scenarios. This would result in the use of robots in an even wider array of applications. An increase in robotics and automation will lead to an increase in quality of life and a decrease in the cost of living.

A was written by Matt, B was written by Yufei, and C was written by Chris.

Yufei’s Status Report for 02/17/2024

What I did the week before

Researching motion planning frameworks and algorithms.

What I did this week

I started the past week by defining the API of an octree implementation and then actually implementing it.
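
For a rough idea of the shape of that interface, here is a minimal sketch of an occupancy octree API in C; the names and exact signatures are illustrative, and the version in our repo may differ.

    /* Sketch of an occupancy octree API (illustrative names only). */
    #include <stdbool.h>

    typedef struct octree octree_t;   /* opaque handle */

    /* Create a tree covering a cube of side `size` with corner `origin`,
     * subdivided at most `max_depth` times; destroy frees it. */
    octree_t *octree_create(const double origin[3], double size, int max_depth);
    void      octree_destroy(octree_t *tree);

    /* Mark the leaf containing point (x, y, z) as occupied or free. */
    void octree_set(octree_t *tree, double x, double y, double z, bool occupied);

    /* Query whether the leaf containing point (x, y, z) is occupied. */
    bool octree_is_occupied(const octree_t *tree, double x, double y, double z);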

Then I moved on to setting up the simulation environment based on my plan from last week. I've managed to set up a ROS 1 environment with the Catkin build tools as the foundation. On top of that, I've built and installed the packages and libraries that we plan to use (libfreenect2 and iai_kinect2 for perception, and octomap for mapping). This took a significant amount of time, since integrating multiple components with different build chains and versions was non-trivial.

I then looked into the perception libraries that we plan to use and read their documentation and example code. I haven't fully finished this yet and plan to continue over the next week.

I've also spent a day or two finding existing solutions to the problem we are trying to tackle and reading a few related papers that might be helpful when we start testing, performance debugging, and verifying our project.

I also worked on the design review slides, specifically the system specification and diagrams. I'll be presenting them next week.

What I plan to be doing next week

Preparing for design review presentation.

Continuing to learn about the perception libraries.

Testing the Kinect One camera once it arrives.

Matt’s Status Report for 02/17/2024

On Sunday we met to begin our implementation of the simulator. We began work on the C implementation of the octree data structure for representing 3D spaces. I implemented some of the methods, and also reworked our build system to use a Makefile.

On Monday and Tuesday, I made some further contributions to the octree data structure. I also reorganized our main RRT repo and modified the Makefile to match the project structure.

On Wednesday, I shifted my focus to thinking about how the state space (representation of the environment) will be compressed and sent to the FPGA. Our solution has the FPGA doing only the acceleration of motion planning, with the host CPU doing all other work. Thus, the CPU is responsible for perception, and also sending the perception data to the FPGA. The perception data will be stored in an octree, but we are not sure if an octree is the best data structure for the FPGA.
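
One option we could evaluate is flattening the octree into fixed-size node records that can be streamed to the FPGA in a single DMA burst. The layout below is purely hypothetical, just to make the question concrete; it is not a decision.

    /* Hypothetical flattened octree node for transfer to the FPGA:
     * the whole tree (or a delta of it) becomes one flat array of
     * fixed-size records. Field names and widths are placeholders. */
    #include <stdint.h>

    #define NO_CHILD UINT32_MAX

    typedef struct {
        uint32_t child[8];   /* indices of the eight children, NO_CHILD if absent */
        uint8_t  is_leaf;
        uint8_t  occupied;   /* 1 if this leaf contains an obstacle */
        uint16_t pad;        /* keep each record word-aligned for transfer */
    } flat_octree_node_t;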

On Thursday, I met with Chris and Yufei to contribute to the design slides. I also spent some time thinking about how motion planning can be parallelized for the FPGA. Some ideas came from a paper I read on parallelizing RRT. I also picked up an FPGA that we ordered, and so I am in the middle of getting a Vitis HLS environment set up so that we can build our accelerator.

Next week, Chris and Yufei will hopefully have their ends of the simulation environment set up, so I might be able to tie them together with our octree to see if we can get something running. I will also continue setting up the HLS environment and try to get an unoptimized software version of RRT running.

Yufei's Status Report for 02/10/2024

What I did the week before

Proposing project use cases, finalizing proposal, and initializing this site.

What I did this week

My time this week was spent figuring out how to implement our motion planning algorithm (Rapidly-exploring Random Trees, or RRT) and what infrastructure we need to support simulating an RRT use case.

I’ve spent a good amount of time revisiting the RRT paper in detail and the rest of the time probing for a potential framework that we could use to construct 3D scenes. In the end I found that octomap maps well to our use case, and I’m planning to confirm this with my teammates in our meeting tomorrow (Sunday, Feb 11th).
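
For reference, the core loop from the paper is quite compact. Below is a minimal, self-contained C sketch with a placeholder single-sphere collision check; it only checks the new endpoint rather than the whole edge, and it is not our implementation, just the structure of the algorithm.

    /* Minimal RRT sketch in a 3D unit cube. Step size, node budget, and
     * the single-sphere obstacle are placeholder assumptions. */
    #include <math.h>
    #include <stdlib.h>

    #define MAX_NODES 5000
    #define STEP      0.05
    #define GOAL_TOL  0.05

    typedef struct { double q[3]; int parent; } node_t;

    static double dist(const double a[3], const double b[3]) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return sqrt(dx * dx + dy * dy + dz * dz);
    }

    /* Placeholder collision check: one sphere obstacle in the middle. */
    static int in_collision(const double q[3]) {
        double c[3] = { 0.5, 0.5, 0.5 };
        return dist(q, c) < 0.2;
    }

    /* Grows a tree from start toward goal; returns the index of the node
     * that reached the goal (walk parents back for the path), or -1. */
    int rrt_plan(node_t *tree, const double start[3], const double goal[3]) {
        for (int i = 0; i < 3; i++) tree[0].q[i] = start[i];
        tree[0].parent = -1;
        int count = 1;

        while (count < MAX_NODES) {
            /* 1. Sample a random configuration in the unit cube. */
            double q_rand[3];
            for (int i = 0; i < 3; i++) q_rand[i] = rand() / (double)RAND_MAX;

            /* 2. Find the nearest node already in the tree. */
            int near = 0;
            for (int i = 1; i < count; i++)
                if (dist(tree[i].q, q_rand) < dist(tree[near].q, q_rand)) near = i;

            /* 3. Steer a fixed step from that node toward the sample. */
            double q_new[3], d = dist(tree[near].q, q_rand);
            for (int i = 0; i < 3; i++)
                q_new[i] = tree[near].q[i] + STEP * (q_rand[i] - tree[near].q[i]) / (d + 1e-9);

            /* 4. Keep the new node only if it is collision free. */
            if (in_collision(q_new)) continue;
            for (int i = 0; i < 3; i++) tree[count].q[i] = q_new[i];
            tree[count].parent = near;

            /* 5. Stop once we are close enough to the goal. */
            if (dist(q_new, goal) < GOAL_TOL) return count;
            count++;
        }
        return -1; /* no path found within the node budget */
    }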

What I plan to be doing next week

Building the basics of the simulator.

Chris’ Status Report for 02/10/2024

This week we created our Project Proposal slideshow.  Matt gave the presentation on Monday.  We met on Thursday and discussed the general design of our motion planning accelerator and how it will interface with the simulator and with a future perception system and robotic arm.  I reviewed the paper on RRT, found possible robotic arm options, and read about open source perception solutions.

We have generally defined the interface between the accelerator and the simulator and will meet on Sunday (2/11/24) to begin implementation. The general idea is to partition the state space into a grid of 3D coordinates. Using this representation, we can easily distinguish free vertices, boundary vertices, and vertices where objects exist and collisions may occur. The RRT implementation will then take in the coordinates of the start and goal positions as well as updated collision data. Communicating only the delta in collision data will minimize data transfer.
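
A rough sketch of what that planner-facing interface could look like is below; the field names and widths are placeholders rather than a finalized format.

    /* Sketch of the planner-facing interface described above
     * (placeholder names and widths, not a finalized format). */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint16_t x, y, z; } grid_coord_t;   /* cell index in the 3D grid */

    typedef struct {
        grid_coord_t cell;
        uint8_t      occupied;   /* 1 = cell became blocked, 0 = cell became free */
    } collision_delta_t;

    /* The planner takes the start and goal cells plus only the cells whose
     * occupancy changed since the previous planning call. */
    int rrt_plan_grid(grid_coord_t start, grid_coord_t goal,
                      const collision_delta_t *deltas, size_t num_deltas);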

I worked on initializing our git repository. I have created C files for the baseline RRT implementation and Python files for the simulator. Using the ctypes library, I have enabled the C RRT implementation to be called by the Python simulator. This lets us standardize the interfaces between the modules of our design while keeping the simulator in Python, which greatly simplifies visualization and testing. Along with this, I wrote a simple shell script and started writing a README to document the repository.
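
To illustrate the shape of that bridge, the C side only needs to expose functions with plain C types so the ctypes declarations on the Python side stay simple. The name and signature below are a sketch, not necessarily what is in the repo.

    /* Sketch of a C entry point intended to be loaded via Python's ctypes.
     * Build it as a shared library, e.g. `gcc -shared -fPIC -o librrt.so rrt.c`,
     * then load it from the simulator with ctypes.CDLL("librrt.so").
     * Plain C types (doubles, pointers, sizes) keep the Python declarations simple. */
    #include <stddef.h>

    int rrt_plan_c(const double *start, const double *goal,
                   const double *obstacles, size_t num_obstacles,
                   double *path_out, size_t max_path_len);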

I am beginning to consider the actual representation of the state space. The matrices we will be working with will be quite large and sparse, so how we store them is an important design decision. I am considering a Compressed Sparse Row (CSR) format, but this would complicate both the accelerator and the software we write. I am also considering a more coarsely grained partition of the state space, which would be more computationally efficient but would decrease accuracy, complicate path smoothing, and likely lead to less optimal paths.
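
As a concrete reference, CSR for a binary occupancy grid only needs row offsets and column indices (the usual values array can be dropped, since occupancy is 0/1). The sketch below is the textbook scheme, not code from our repository.

    /* Textbook CSR storage for a binary occupancy grid, with one row per
     * flattened (y, z) line of cells. No values array is needed because
     * occupancy is 0/1. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        size_t    nrows;      /* number of rows in the flattened grid */
        size_t    nnz;        /* total number of occupied cells */
        size_t   *row_ptr;    /* nrows + 1 offsets into col_idx */
        uint32_t *col_idx;    /* x index of each occupied cell, row by row */
    } csr_grid_t;

    /* Check whether cell (row, x) is occupied by scanning that row's slice. */
    static int csr_occupied(const csr_grid_t *g, size_t row, uint32_t x) {
        for (size_t i = g->row_ptr[row]; i < g->row_ptr[row + 1]; i++)
            if (g->col_idx[i] == x) return 1;
        return 0;
    }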

Matt’s Status Report for 02/10/2024

Earlier this week, I spent a lot of time practicing for my presentation. I ended up giving the presentation on Monday. My team then met on Thursday to lay the groundwork for our project. Together, we did some research on the hardware we will order. Afterwards, I did some more reading on RRT and a microarchitecture for accelerating collision detection.

From reading the collision detection microarchitecture paper, I learned that collision detection is the main bottleneck in motion planning (reportedly taking up “99%” of compute time). The proposed microarchitecture seems fairly easy to implement in RTL, let alone HLS. The harder part will definitely be data movement and I/O.

Our team is meeting tomorrow (Sunday) to work on putting together a simulation environment. My next task will be to put together a Vitis HLS project.