Gordon’s Status Report for 11/2

After attending the ethics session on Monday, we split off and worked on our own areas for Wednesday. I worked on the second of the three guides necessary to set up HLS on the KRIA. This second guide covered installing PetaLinux, the OS needed to run the files we send over from a laptop. Going through the guide took a bit of time, but I learned how to send files from my laptop to the KRIA over an Ethernet connection. There were a lot of settings to configure, and the system took a while to build. Here is a link to a picture of me working on the setup. (was too big to upload)

While the system was building, Jimmy was next to me working on setting up a Kalman filter on his camera code. This sparked a conversation about the role and usage of the KRIA, because previously we had assigned the Kalman filter to the KRIA. I wrote about the detailed reasoning and discussion in the team status report, but to summarize: we always knew there would be uncertainty around the camera-KRIA connection, and we didn’t expect the camera itself to be this powerful. As a result, we are going to look into substituting a Raspberry Pi for the KRIA. I met up with Varun, the FPGA guru TA, and explained our concerns with the KRIA. He agreed that the Raspberry Pi could be an easier solution, and we also talked through the limitations of the KRIA a little more.


Specifically, the KRIA is a platform that houses an SoC and an FPGA together, capable of running programs while also offloading work to the FPGA for hardware acceleration via High-Level Synthesis (HLS). HLS works by taking a C program and translating it into RTL that the hardware can implement. However, the FPGA on board does not have many floating-point resources, which makes floating-point math challenging. Since our trajectory calculation relies on precise floating-point numbers for position estimation, the hardware might actually have a tough time handling all those floats. Another challenge is that HLS needs a C source file to translate, which means rewriting the Kalman filter algorithm in HLS-compatible C, a non-negligible workload.
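To make that workload concrete, here is a minimal sketch of the predict/update math in Python with NumPy (assuming a simple constant-velocity model; the matrices and noise values are placeholders, not our tuned design). Every line of it is dense floating-point matrix arithmetic, including a matrix inverse, which is exactly what would have to be hand-translated into HLS-compatible C:

```python
import numpy as np

dt = 1.0 / 30.0                      # assumed camera frame period
F = np.eye(6)                        # state transition for [x y z vx vy vz]
F[0:3, 3:6] = dt * np.eye(3)         # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we only measure position
Q = 1e-4 * np.eye(6)                 # process noise (hypothetical tuning)
R = 1e-2 * np.eye(3)                 # measurement noise (hypothetical tuning)

def step(x, P, z):
    """One Kalman iteration: x is the 6x1 state, P the 6x6 covariance,
    z a 3x1 position measurement from the camera."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P
```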


Considering that the Raspberry Pi gets the same job done, just slower, we decided it’s worthwhile to evaluate whether the Pi is fast enough for our needs. As I touched on in the team status report, the timeline is now stretched even further, since we need to reconfigure things to work with a Pi. However, this should still be far less work than using the KRIA, as there is much less setup and specialized knowledge required. I will look into acquiring a Pi next week and getting it integrated ASAP. The team status report has the more detailed plan of attack that I wrote.

Jimmy’s Status Report for 10/26

Accomplishments

Following this week’s meeting, a big issue was raised about which camera angle would work best for getting accurate data for making predictions. As such, this week was mainly spent getting the custom YOLO model to run inference on the camera, since the pre-built recognition models are not robust enough for our application. I successfully compiled the detection model for the camera, converting the .pt weight file to ONNX and then to the BLOB format. I was also able to get some depth data output by following the examples in the Luxonis documentation.
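For reference, the conversion path looked roughly like this sketch (file names are placeholders, and the exact blobconverter arguments may differ from what I used):

```python
from ultralytics import YOLO
import blobconverter

# Export the trained PyTorch weights (.pt) to ONNX; "best.pt" is a placeholder.
YOLO("best.pt").export(format="onnx", imgsz=416)

# Convert the ONNX model to a MyriadX .blob using Luxonis's blobconverter.
blob_path = blobconverter.from_onnx(
    model="best.onnx",
    data_type="FP16",   # the MyriadX runs FP16
    shaves=6,           # SHAVE cores to compile for
)
print(blob_path)
```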

Schedule and deliverables

I have caught up to schedule by making a lot of progress on the camera API. More work needs to be done on the algorithmic side for the Kalman filter, though. Since integration of the Kalman filter and camera system is still in the works, I can work on the two components independently: I can make up simulated points in space and time to feed the Kalman filter while working on point generation on the camera side at the same time.
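As a sketch of that decoupled testing idea, something like the following would generate noisy ballistic points and run them through OpenCV’s Kalman filter (the launch velocity and noise levels are made up for illustration):

```python
import cv2
import numpy as np

dt = 1.0 / 30.0
kf = cv2.KalmanFilter(6, 3)  # state [x y z vx vy vz], measurement [x y z]
kf.transitionMatrix = np.eye(6, dtype=np.float32)
kf.transitionMatrix[:3, 3:] = dt * np.eye(3, dtype=np.float32)
kf.measurementMatrix = np.hstack(
    [np.eye(3), np.zeros((3, 3))]).astype(np.float32)
kf.processNoiseCov = 1e-4 * np.eye(6, dtype=np.float32)
kf.measurementNoiseCov = 1e-2 * np.eye(3, dtype=np.float32)

g = -9.81
v0 = np.array([1.5, 0.5, 3.0])       # hypothetical launch velocity (m/s)
for i in range(30):
    t = i * dt
    true_pos = v0 * t + np.array([0.0, 0.0, 0.5 * g * t * t])
    z = (true_pos + np.random.normal(0.0, 0.01, 3)).astype(np.float32)
    kf.predict()
    est = kf.correct(z.reshape(3, 1))
    print(f"t={t:.2f}s  estimated position: {est[:3].ravel()}")
```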

Gordon’s Status Report for 10/26


Besides working through the ethics assignment, I completed one of the setup guides on Varun’s Git page. It covered the platform setup, and it took quite a few hours of reading, understanding what each step does, and setting up the software. Through the guide I was able to set up Vivado on my own machine and learned more about how to use and interact with that software. I also started on the next guide, which sets up PetaLinux, another OS that I will need for programming the board. Here are a few pictures of what I did through the guide and what I set up.

Admittedly, I didn’t get as far as I had hoped, since this week I also had a few job interviews to prepare for and get through. The good news is that I have reason to believe no more time will be lost to job hunting, so I can now fully focus on making progress on this project. For next week, I am hoping to finish the PetaLinux guide and also the last guide on setting up the HLS kernel. Hopefully, once these are set up, I will fully understand how to use the KRIA for our project specifically and can proceed with writing code of our own.

Team Status Report for 10/26

Following the conclusion of the design review report, we kicked back into gear on putting our project together. Ordered parts arrived and we began digging in. A review meeting raised several points of interest to keep our eyes on, including the role of the KRIA board in our trajectory projection pipeline and our testing plans/metrics.


For the XY robot, all of the Amazon-ordered parts arrived and were fully accounted for, leaving only the 3D-printed parts before assembly can begin. Josiah quickly ordered PETG plastic filament (which is recyclable!) and picked it up from the receiving desk on Thursday, and all parts were printed by Friday. Additionally, a custom cup-holding mount was designed and printed, so everything is ready to be put together come Monday next week.


One large risk raised in this week’s meeting was the problem of loading the YOLO model onto the camera. Fortunately, that has been fixed, and the camera can now accurately track the ping pong ball. A big priority now is to determine whether placing the camera where the ball is thrown from provides depth data accurate enough to make predictions.


For the KRIA portion, we had a discussion examining the interaction between the camera and the KRIA. Upon discovering that the camera is able to run CV models on its own, our original plan of using the KRIA for video processing was called into question. We had originally gone with the KRIA and only later discovered how powerful the camera is, but by then we had already started work on both systems. Although there is some overlap in functionality, it is comforting that the KRIA can still do video processing as a backup in case the camera is not good enough. Gordon continued setting up the KRIA environment, making steady progress.

Josiah’s Status Report for 10/26

Accomplishments

With the arrival of the ordered parts, I switched gears to 3D printing the STL files we had on hand. Since the original XY robot was designed for drawing images converted to G-code, it was necessary to create a mount that would hold a standard plastic cup rather than a pen or pencil. I quickly ordered some PETG plastic filament at the recommendation of my roommate, who owns his own 3D printer, and got to printing the parts. As a bonus, PETG is recyclable!


By inspecting the STL file for the pen mount, I could determine the screw hole diameters and the spacing between them, so that I could replicate the screw holes in the new mount. I created a rough cone component that captured the general shape of a standard plastic cup, and performed a join cut to create the opening in the part. After some additional cleanup and adding some structural support, I was left with the final part. The cup should be held around 10 mm above the table, but if it sits too high or too low, it should be easy to move the screw holes and adjust.


Progress

With all of the parts printed, assembly will properly begin next week. I intend to follow the original guide and ensure that the robot works before replacing parts with our custom pieces. Once this is done, I will look into generating the G-code that tells the XY robot to move the holder to a specific location.
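As a rough sketch of that idea (the commands and coordinates below are hypothetical; the real homing sequence and feed rates will depend on the robot’s firmware), generating a move can be as simple as formatting G-code strings:

```python
def move_to(x_mm: float, y_mm: float, feed_mm_per_min: int = 3000) -> str:
    """Emit a G-code linear move to an (x, y) position on the bed."""
    return f"G1 X{x_mm:.2f} Y{y_mm:.2f} F{feed_mm_per_min}"

# Hypothetical usage: home the axes, switch to absolute positioning,
# then send the cup holder to a predicted landing point.
commands = ["G28", "G90", move_to(120.0, 85.5)]
print("\n".join(commands))
```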

Team Status Report for 10/19

This week’s main focus was the design review report. It was honestly more work than we anticipated, and we had all hoped to have more time for the separate components of our project, but we did a good job finishing the report while continuing to get the ball rolling on our individual components. Writing the report was also a great way to reflect on what we’ve done and flesh out all the details. It’s important to know exactly what we’ve done, what our goals, requirements, and responsibilities are, and what steps we need to take to get there. Working through the design document was a great way to iron out those important details, and we are all ready to resume work after fall break.

On the KRIA front, Gordon was able to successfully flash Ubuntu onto a MicroSD card, connect the KRIA to a monitor, and initialize the Ubuntu home page; pictures are included in his status report. There have not been many changes to the design, and after fall break he will be ready to run an example project from Varun’s GitHub guide to get a good idea of how things run. This is slightly behind schedule, but given everything going on both in this class and outside of it, that is understandable and, more importantly, recoverable. More details are listed in his personal post.

For the camera component, a design trade-off between Hough circles and YOLOv8 was evaluated, as outlined in the design report, and we settled on the YOLOv8 model. Some preliminary implementations of the Kalman filter are also in the works. One risk concerns the Kalman filter: the provided OpenCV modules may not be accurate or robust enough, in which case an in-house implementation will be required. The contingency plan is to get the OpenCV prototype working as soon as possible to understand the limitations of that approach.

On Josiah’s end, besides the design review report, construction of the robot will begin following the end of fall break. The ordered materials will have arrived, and 3D printing of the housing will commence. The custom cup holder will be designed in CAD software and printed as well.

Our production solution will meet these specific needs… 

Part A (Gordon): Our product focuses on the very specific use case of training for the game of cup pong. Yet even though it is a highly specific use case, it applies globally because of the international popularity of the game. Cup pong is a party game not limited to college students; people of all ages all over the world play it at social gatherings. Our product helps users without discrimination: no matter who uses it, Splash will do the same thing and catch the thrown ping pong ball. Therefore, our product will satisfy the cup pong training needs of players globally.

Part B (Jimmy): Our product has a very relevant cultural aspect in aiding players who want to improve their skill at cup pong. Since this is a popular game among college students and other young adults, typically in social settings, it has large cultural relevance among this demographic. By helping players get better at the game, it lets them feel more confident about their skill level when competing against others, potentially encouraging them to engage in more social activities. The game is also widely known and appreciated by older demographics, and our engineering approach to streamlining practice will be appreciated by everyone who knows it.

Part C (Josiah): Due to the nature of the product we are developing and the purpose we designed it for, it’s difficult to claim that Splash meets many needs in the space of environmental considerations. As an aid for training a very particular skill used in games, our scope is hyperconcentrated and has little overlap with environmental concerns. At the least, our product is non-disposable and reusable, and does not contribute to electronic waste, unlike other frequently trashed items such as charging bricks, cables, and old appliances.

Jimmy’s Status Report for 10/19

Accomplishments

Over the previous week and fall break, to aid in writing up the trade studies in the design document, I experimented with a Hough circle implementation to test my belief that it would not generate accurate bounding boxes. After implementing Hough circle detection (using the OpenCV library) and tuning the parameters, I got it working to a somewhat accurate level; however, it was still lacking, as it would pick up artifacts that it saw as circles. This was not acceptable, as it would introduce a lot of noise into our data.
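For context, the detection call looked something like this (a sketch; the parameter values are illustrative rather than the exact ones I tuned):

```python
import cv2

frame = cv2.imread("frame.png")              # placeholder test frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)               # smoothing suppresses some noise

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT,
    dp=1.2,          # inverse accumulator resolution
    minDist=40,      # minimum spacing between detected centers
    param1=100,      # Canny high threshold
    param2=30,       # accumulator threshold: lower finds more (noisier) circles
    minRadius=5, maxRadius=40,
)
if circles is not None:
    for x, y, r in circles[0]:
        cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
```

The core problem was the `param2` trade-off: set it high enough to reject artifacts and it starts missing the ball; set it low enough to always find the ball and it picks up circles elsewhere in the frame.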

I also looked into an initial implementation of the Kalman filter trajectory estimation. I am going to begin with the OpenCV library to get it working on video examples, and then transfer that to our OAK-D implementation.

Schedule

I am slightly behind schedule, as I have yet to compile the detection model into the MyriadX Blob format, which I will aim to do by next week. However, I am on track with the rest of last week’s deliverables.

Deliverables

Alongside compiling the detection model, I will aim to get a more robust Kalman filter tracking model working, as well as fine-tune the detection model some more.

Josiah’s Status Report for 10/19

Accomplishments

In the week before fall break, our team completed the design review report. I put substantial work into the following sections: Abstract, II. Use-Case Requirements, IV. Design Requirements, V-E. Design Trade Studies (Robotics), VI-D. System Implementation (Subsystem – XY Robot), and VII. Test, Verification and Validation. I also oversaw editing and revision for a number of sections I didn’t directly write.

Progress

With the report completed, one of the final major housekeeping tasks is behind us and we’re off to the races. The ordered parts will have arrived once we return from fall break, and I plan to begin construction of the robotics subsystem immediately. Besides assembling the robot, I intend to 3D print the remaining components that have already been designed, and design the custom cup-holding mount, compatible with the original design, in AutoCAD. Progress is on schedule.

Gordon’s Status Report for 10/19

We worked through writing our report, and it took quite some time to fully flesh out every component. While working through it all and incorporating feedback from last week’s design presentations, we were able to iron out everything we’ve done so far. It was gratifying to see all the progress we’ve made, and very helpful to map out exactly what we need to accomplish next. For me personally, the direction is pretty clear, and I will elaborate on it next.

While we were working on the report, all the components needed to boot up the KRIA finally arrived, so I could start the setup process, which first entailed flashing the Ubuntu OS onto a MicroSD card. I thought my computer had a MicroSD port, and it does, but for the card to actually be read I had to keep pressing it into the slot with my thumb. That was hard to sustain for the entire time it took to flash the 10 GB Ubuntu image, so I used Jimmy’s laptop (which has a wider MicroSD adapter slot that doesn’t need to be held down) to successfully flash the system. Once that was done, I could put the MicroSD into the KRIA’s slot, connect a monitor over DisplayPort, connect a USB keyboard and mouse, and plug in the power to start the boot-up. The first picture below shows the boot sequence when I first plugged it in.

(The picture is too large to include; here is the picture in Google Drive.)

The Ubuntu OS set itself up automatically, and it went smoothly. Once it loaded, I entered the default username and password (both were “ubuntu”) and was able to log in. A picture of the home screen after logging in is included below.

(Again, the picture is too large; second picture Google Drive link here.)

Now that I could actually log in, the next step was to continue with the setup. Since we were mainly working on the design report during the week before fall break, I focused most of my energy there after getting this working.

For next steps, Varun’s setup guide should let me continue working; I will be consulting it and following along after fall break. The guide includes an example project that I want to try to run. Getting something running on the board will give me the direction I need, as right now I am still a little unsure of exactly how everything runs. I know that I will be writing code on my own laptop and transferring files over an Ethernet cable, and I know there also needs to be a second MicroSD card with PetaLinux flashed onto it. Exactly how those pieces work together is to be figured out as a next step. Although I am a little behind where I wanted to be, I think it’s understandable and, more importantly, recoverable, given everything else that’s been going on for this class with the proposals and design reports, as well as outside of class.

Jimmy’s Status Report for 10/05

Accomplishments

This week the OAK-D Pro camera arrived, so I was able to get it set up and running with the DepthAI Python API and launched a few examples to test out the camera’s capabilities. Since the example object detection models were not specific to custom objects, I trained my own YOLOv8 detection model on ping pong ball examples based on the ImageNet dataset. I chose YOLOv8 because it supports training on Apple M3 silicon using MPS (Metal Performance Shaders) for GPU acceleration. This was already good enough to detect the white ping pong ball we had, though the bounding boxes had artifacts and could not accurately track fast movements.
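The training setup was along these lines (a sketch; the dataset config name and hyperparameters are placeholders):

```python
from ultralytics import YOLO

# Fine-tune pretrained YOLOv8 nano weights on ping pong ball images.
model = YOLO("yolov8n.pt")
model.train(
    data="ping_pong.yaml",  # placeholder dataset config
    epochs=50,              # placeholder hyperparameters
    imgsz=416,
    device="mps",           # Apple Silicon GPU via Metal Performance Shaders
)
```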

See YOLOv8 Tracking Model

Schedule

Based on last week’s deliverables, I am very pleased with my progress and on track with my schedule, as I have trained the initial detection model. There is still much work to be done in the detection and estimation areas, though.

Deliverables 

Next week, more work needs to be done on converting and compiling my trained model for the OAK-D camera, since it takes a specific MyriadX Blob format suited to the onboard model-acceleration processor. The performance of the model will also be an issue, and more testing will need to be done. I will also aim to take the bounding box information and extract depth information from the depth sensor. Another part of the project that I will start to spearhead is a Kalman filter estimation model for the ball trajectory.
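One simple way to do that bounding-box-to-depth step might be a robust statistic over the depth pixels inside the box (a sketch assuming a depth frame already aligned to the RGB frame, in millimeters; not the final DepthAI pipeline):

```python
import numpy as np

def bbox_depth_mm(depth_frame: np.ndarray, bbox: tuple) -> float:
    """Median depth inside a detection box.

    depth_frame: HxW uint16 depth map aligned to the RGB frame (mm).
    bbox: (x1, y1, x2, y2) pixel coordinates from the detector.
    """
    x1, y1, x2, y2 = bbox
    roi = depth_frame[y1:y2, x1:x2]
    valid = roi[roi > 0]           # zero means "no depth data" on the OAK-D
    return float(np.median(valid)) if valid.size else float("nan")
```

A median is attractive here because the box will inevitably include background pixels, and a plain mean would be pulled toward them.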