Josiah’s Status Report for 10/19

Accomplishments

In the week before fall break, our team completed the design review report. I put substantial work into the following sections: Abstract, II. Use-Case Requirements, IV. Design Requirements, V-E. Design Trade Studies (Robotics), VI-D. System Implementation (Subsystem – XY Robot), and VII. Test, Verification and Validation. I also oversaw editing and revision for a number of sections I didn't directly write.

Progress

With the report complete, one of the last major housekeeping tasks is behind us and we're off to the races. Our ordered parts should have arrived by the time we return from fall break, and I plan to begin constructing the robotics subsystem immediately. Besides assembling the robot, I intend to 3D print the remaining components that have already been designed, and to design the custom cup-holding mount in AutoCAD so that it is compatible with the original design. Progress is on schedule.

Gordon's Status Report for 10/19

We worked through writing our report, which took quite some time, as we had to fully flesh out every component. Between working through it all and receiving more feedback from last week's design presentations, we were able to iron out everything we've done so far. It was gratifying to see all the progress we've made, and very helpful to map out exactly what we need to accomplish next. For me personally, my direction is pretty clear, and I will elaborate on that next.

While we were working on the report, the last of the components needed to boot up the KRIA finally arrived, so I could start the setup process, which first entailed flashing the Ubuntu OS onto a MicroSD card. My computer does have a MicroSD slot, but for the card to actually be read I had to keep it pressed into the slot with my thumb. Holding it there for the entire time it took to flash the 10 GB Ubuntu image proved impractical, so I used Jimmy's laptop (which has a wider MicroSD adapter slot that doesn't require holding the card down) to successfully flash the system. Once that was done, I could put the MicroSD into the KRIA's slot, connect the DisplayPort to a monitor, connect a USB keyboard and mouse, and plug in the power to start the bootup. The first picture below showcases the bootup sequence when I first plugged it in.

(The picture is too large to include; here is the picture in Google Drive.)

The Ubuntu OS set itself up automatically, and it went smoothly. Once it loaded, I entered the default username and password (both are "ubuntu") and was able to log in. A picture of the home screen after logging in is included below.

(Again, the picture is too large; second picture Google Drive link here.)

Now that I can actually log in, the next step is to continue the setup. Since we were mainly working on the design report during the week before fall break, I focused most of my energy there after getting this far.

For next steps on setup, Varun's guide should let me continue working; I will be consulting it and following along after fall break. The guide includes an example project, which I want to try to run. Getting something running on the board will give me the direction I need, as right now I am still a little unsure about exactly how everything runs. I know that I will be writing code on my own laptop and transferring files over an ethernet cable, and I know that there also needs to be a second MicroSD card with PetaLinux flashed onto it. Exactly how those pieces work together is to be figured out as a next step. Although I am a little behind where I wanted to be, I think that's understandable and, more importantly, recoverable, given everything else that's been going on in this class, with the proposal and design report, as well as out of class.

Jimmy’s Status Report 10/05

Accomplishments

This week the OAK-D Pro camera arrived, so I was able to get it set up and running with the DepthAI Python API, and I launched a few examples to test out the capabilities of the camera. Since the example object detection models were not specific to custom objects, I trained my own YOLOv8 detection model on ping pong ball examples based on the ImageNet dataset. I chose YOLOv8 as it supports training on Mac M3 silicon using MPS (Metal Performance Shaders) for GPU acceleration. This was already good enough to detect the white ping pong ball we had; however, the bounding boxes had artifacts and could not accurately track fast movements.

See YOLOv8 Tracking Model
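
For reference, the training setup with the ultralytics package is only a few lines. This is a minimal sketch, assuming a dataset config file named ping_pong.yaml (a hypothetical name) pointing at the labeled ball images:

```python
from ultralytics import YOLO

# Start from pretrained YOLOv8 nano weights and fine-tune on the ball dataset.
# "ping_pong.yaml" is a placeholder name for the dataset config file.
model = YOLO("yolov8n.pt")
model.train(data="ping_pong.yaml", epochs=50, imgsz=640, device="mps")  # MPS = Apple GPU

# Quick sanity check against the laptop webcam (source=0).
model.predict(source=0, show=True)
```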

Schedule

Based on last week's deliverables, I am very pleased with my progress and am on track with my schedule, as I have trained the initial detection model, although there is still much work to be done in the detection and estimation areas.

Deliverables 

Next week, more work needs to be done on converting and compiling my trained model for the OAK-D camera, as it takes a specific MyriadX blob format suitable for the camera's onboard model-acceleration processor. The performance of the model will also be an issue, and more testing will need to be done. I will also aim to take the bounding box information and extract depth information from the depth sensor. Another part of the project that I will start to spearhead is a Kalman filter estimation model for the ball trajectory.
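
As a starting point for the conversion step, Luxonis's blobconverter package can compile an exported ONNX model into a MyriadX blob. A rough sketch, assuming the trained weights are exported to ONNX first (file paths are hypothetical):

```python
import blobconverter
from ultralytics import YOLO

# Export the trained PyTorch weights to ONNX (the "best.pt" path is a placeholder).
YOLO("runs/detect/train/weights/best.pt").export(format="onnx")

# Compile the ONNX model into a MyriadX blob for the OAK-D's on-board accelerator.
blob_path = blobconverter.from_onnx(
    model="runs/detect/train/weights/best.onnx",
    data_type="FP16",  # the MyriadX runs FP16
    shaves=6,          # number of SHAVE cores to compile for
)
print(blob_path)
```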

Team Status Report for 10/5

This week we had our design review presentations, both presenting and giving feedback. It was nice to see the progression of everyone's projects, and we took note of good things that other teams did, such as having very specific risk mitigation plans per use-case requirement and being very detailed in scheduling and project management.

Besides the presentations and feedback, we started to split off into our own sections and continued work there. For Gordon's KRIA portion, a few parts ordered last week arrived, and work was done to verify that they connected and worked well. Research was done to confirm exactly how each part would be used, with details confirmed with Varun as well. We searched extensively around the HH 1300 wing for an existing DisplayPort-to-DisplayPort cable, but couldn't find one, so a new one was ordered. Since the desired testing couldn't be done without the missing part, we opted to do more research into how setup will work and what can be done as soon as the part arrives.

The camera also arrived, so Jimmy was able to get the DepthAI Python API set up and running with a simple object detection model. Jimmy was also able to get the custom ball detection model running on a webcam. One risk that arose while experimenting with the camera was that the simple object detection model may not track the ball fast enough. However, we are training a better model specifically to detect ping pong balls, and we can also use higher-contrast colours between the ping pong ball (bright orange) and the background (solid white or black). There may also be more promising results once the model is loaded onto the camera itself rather than running against a laptop webcam.
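
For context, DepthAI is pipeline-based: nodes are declared on the host and then run on the camera itself. Below is a minimal sketch of the kind of on-device detection pipeline involved; the blob path is a placeholder, and the YOLO decoding parameters shown are assumptions that depend on the exported model:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera node feeding 416x416 preview frames to the network.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)
cam.setInterleaved(False)

# On-device detection network ("model.blob" is a placeholder path).
nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("model.blob")
nn.setConfidenceThreshold(0.5)
nn.setNumClasses(1)        # one class: ping pong ball
nn.setCoordinateSize(4)
nn.setIouThreshold(0.5)
cam.preview.link(nn.input)

# Stream only the detections back to the host, not whole frames.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for d in q.get().detections:
            print(d.label, d.confidence, d.xmin, d.ymin, d.xmax, d.ymax)
```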

Regarding the physical portion of this project, Josiah created and completed a team Bill of Materials document and placed an order for the majority of the components necessary to begin construction of the XY robot. A few parts will need to be 3D printed, but the STL files are readily available for download, which will expedite the process. The ordered components should arrive quickly via Amazon Prime, so construction should begin swiftly. Porting the controls from the Arduino to the KRIA may prove tricky, as the design calls for a CNC shield on top of the Arduino for stepper motor control. We will need to look into whether the KRIA supports CNC-style control, and if not, a proper communication protocol between the devices, such as UART. Realistically, only a single packet of data needs to be sent at a time: the location the robot must move to (i.e., the projected landing location of the ball).
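
To make the single-packet idea concrete, here is a rough sketch of what a KRIA-side sender could look like with pyserial; the device path, baud rate, and packet format are all assumptions at this point:

```python
import serial

# Hypothetical UART device and baud rate; the real values depend on how the
# KRIA's serial port is exposed and what the Arduino sketch expects.
ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def send_landing_target(x_mm: int, y_mm: int) -> None:
    """Send one packet: the projected landing coordinates, newline-terminated."""
    ser.write(f"{x_mm},{y_mm}\n".encode("ascii"))

send_landing_target(120, 85)  # e.g., move the gantry to (120 mm, 85 mm)
```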

Josiah’s Status Report for 10/5

Accomplishments

This week saw the conclusion of the design review presentations and reviews, and I'm happy with how our project is shaping up. Alongside the completion of the design review presentation, I created our team's Bill of Materials in a Google Sheet and populated my page with the materials required for the XY robot. I put in an order form for the majority of the materials, and will look into 3D printing the parts that house the robot, provided in the Autodesk Instructables guide.

The more familiar I become with the design, the more I expect that having the KRIA take the place of the Arduino may prove difficult. In that case, UART would be a good communication protocol between the two devices: we'd only need to send the landing coordinates to the Arduino, which would handle the motor controls. In other words, just one piece of data.

Progress

This next week, I hope to have the materials arrive and to begin construction of the actual robot. I'll need to do a bit of 3D printing, but the models are already available for download, so this shouldn't take long. Additionally, I'll be helping to complete the design review report, due October 11th.

Gordon’s Status Report for 10/5

Besides helping write the design review presentation and listening to everyone's presentations while giving feedback, this week the parts I ordered last week arrived, and I worked through understanding how they all fit together and tried to set up the KRIA on a machine so I could get started. Last week I found a DisplayPort-to-DisplayPort cable that would connect to the spare monitor in HH 1307, but unfortunately I couldn't find it again this week. I looked around the 1300 wing and went to A104 and ECE Receiving as well, but to no avail. Thus I had to send in another order for that cord and unfortunately couldn't progress with actually setting up the KRIA. What I could do was test the cords that did arrive, and I verified that I had everything besides the display.


The DisplayPort cable will put the output of the KRIA onto a monitor so you can see what's happening, so without that output cord I couldn't really continue. What I could do was make sure I fully understood everything else that interfaces with the KRIA, of which there is quite a bit. I consulted Varun for any clarifications I needed and compiled the table below, which shows all the wiring coming in and out of the KRIA.


Port connection I need | Do I have it? | Port usage
Power supply to outlet | Yes           | Powers the board
Ethernet (M2M)         | Yes           | Transfers files to run on the KRIA; all coding will be done on my laptop and sent over
MicroUSB to USB        | Yes           | Unknown usage; Varun confirmed it's probably unnecessary
MicroSD cards          | Arrived       | Flash PetaLinux and Ubuntu onto two separate microSD cards, so on plug-in the KRIA SoC can read and use the OS
USB mouse              | @ home        | Interface I/O
USB keyboard           | Yes           | Interface I/O
DisplayPort (M2M)      | Ordered       | To display KRIA output on a monitor


I've identified everything that's needed, read up more on Varun's guide as well as the AMD KR260 setup guide, and determined how I will proceed next week once all the components are together. Unfortunately, not being able to do much right now puts me a little behind schedule, but that works out somewhat, as this week has been quite busy with other work. Once the parts arrive next week, my workload should have cooled down a little and I will be able to catch back up. The first thing I aim to do to catch up is connect the KRIA and run an example project. In the meantime, I can also study up on Varun's guide for setting up the Vitis and Vivado software.

Jimmy’s Status Report 2 (09/28)

This week I established a pipeline for interpreting the camera data and calculating the coordinates that the gantry system needs to move to. After some discussion and research, I also changed our decision, opting for the OAK-D Pro depth camera rather than the D435. I initially started by helping Josiah run experiments to test whether a simple kinematic model would suffice to accurately estimate the landing trajectory. However, we found that even small deviations in the measurements would result in very large estimation errors, so I had to research more complex models. My work on the camera system pipeline this week mainly revolved around a literature review of the Kalman filter for predicting the trajectory of the ball from its past movements. I also looked at Hough circles versus YOLO models, i.e., feature-extraction-based versus CNN-based approaches to implementing ball tracking.
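
To make the literature review concrete, below is a minimal numpy sketch of the kind of filter I have been reading about: a linear Kalman filter over the state [x, y, vx, vy], with gravity folded in as a known input. The frame rate, noise covariances, and model structure are all assumptions to be tuned later:

```python
import numpy as np

dt = 1 / 60.0  # assumed frame interval (60 fps)
g = 9.81

# State [x, y, vx, vy]: constant velocity, with gravity as a known input on y.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
u = np.array([0, -0.5 * g * dt**2, 0, -g * dt])  # gravity contribution per step
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])  # the camera only measures position
Q = np.eye(4) * 1e-4  # process noise (guess, to be tuned)
R = np.eye(2) * 1e-3  # measurement noise (guess, to be tuned)

def kf_step(x, P, z):
    """One predict/update cycle given a measured position z = [mx, my]."""
    x = F @ x + u                   # predict state
    P = F @ P @ F.T + Q             # predict covariance
    innov = z - H @ x               # measurement residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ innov
    P = (np.eye(4) - K @ H) @ P
    return x, P
```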

I mainly accomplished what I set out to do last week, although I would like to get some preliminary implementation done on the CV system, even if it's on a rudimentary camera with no depth information. I will also need to consult more experienced sources in this field, perhaps faculty in robotics or computer vision, to finalize my approach and understand the trade-offs between the implementations. Next week I will mainly aim to work on a first implementation of the ball-tracking mechanism while I wait for the camera to arrive, as I do not need any depth information for that to be functional.
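
To give a feel for the feature-extraction route, OpenCV's Hough circle transform can pick out a ball in a single frame with no training at all. A quick sketch (the file name is a placeholder and the parameters are rough guesses that would need tuning to our lighting and ball size):

```python
import cv2

frame = cv2.imread("frame.png")  # placeholder test frame
gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)

# Detect circular blobs; min/max radius bound the search to ball-sized circles.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=30, minRadius=5, maxRadius=40)

if circles is not None:
    for x, y, r in circles[0]:
        cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
cv2.imwrite("detected.png", frame)
```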

Team Status Report 2 (09/28)

This week we started our trajectory testing: a proof of concept that we can estimate the ball trajectory from just the first few frames of our camera capture. To do this, we measured and drew a grid system on a whiteboard, threw a ping pong ball in an arc, and used a phone camera to record and manually track the position of the ball through the first few frames. Then we used classic kinematics equations to predict the landing spot. This was a simplified experiment: not only did we estimate position by hand, but the problem was also reduced to a 2D plane. In the actual project, the camera will pinpoint the location and there is also a third dimension, but both factors could be removed here, since we were simply testing whether the kinematic equations would be enough.

Unfortunately, we discovered that the simple kinematics equations consistently overestimated the ball's landing spot. We found a Python model online that takes air resistance into account in its trajectory calculations, and that model gave a better prediction, though still not a great one: under ideal conditions, our estimate was around 6-7 cm off the mark. Following the experiments with our physics model, we looked into implementing more complex prediction systems on the camera or KRIA that would give us better results. Additionally, with further (automated) testing, we can determine whether our predictions are accurate or merely precise. If they prove precise but biased, in that we overshoot consistently, we could explore a basic heuristic (such as a linear shift) that re-centers our predictions on the target.
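
For the record, the air-resistance correction amounts to numerically integrating the motion with a quadratic drag term instead of using the closed-form parabola. A simplified 2D sketch of the idea (the drag coefficient and ball constants are rough textbook values, not our calibrated numbers):

```python
import numpy as np

# Rough ping pong ball constants (not calibrated to our setup).
m, r = 0.0027, 0.02            # mass (kg), radius (m)
rho, Cd = 1.225, 0.4           # air density (kg/m^3), drag coefficient (estimate)
k = 0.5 * rho * Cd * np.pi * r**2
g = 9.81

def predict_landing_x(p0, v0, floor_y=0.0, dt=1e-3):
    """Euler-integrate position/velocity with quadratic drag until floor hit."""
    p, v = np.array(p0, float), np.array(v0, float)
    while p[1] > floor_y:
        a = np.array([0.0, -g]) - (k / m) * np.linalg.norm(v) * v
        v += a * dt
        p += v * dt
    return p[0]

# Example: released at 1 m height with velocity (2.0, 1.5) m/s.
print(predict_landing_x([0.0, 1.0], [2.0, 1.5]))
```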

In other news, we also worked through creating our design presentation slides. Since we now know more about each component, we also locked down more specifics about the hardware we will be using. The Luxonis OAK-D Pro camera was picked, the AMD KRIA KR260 was ordered and retrieved from ECE inventory, and we decided to go with the gantry system. Another change is that we moved the CV from the KRIA onto the camera, as we discovered the OAK-D Pro has that capability. This makes it easier to transfer data between the camera and the KRIA: the camera no longer has to send whole frames through, and can just send coordinates for the KRIA to use in its calculations.

Specifically for the KRIA board, Gordon met with Varun, the FPGA TA, who was super helpful in laying out what is needed to work through the KRIA setup, and order requests were sent out for the missing wire connections needed to operate the KRIA.


Week 2 Specific section: Part A written by Gordon, Part B written by Josiah Miggiani, Part C written by Jimmy

Part A: Public health, safety, or welfare.

Our project solution doesn't really have a public health, safety, or welfare aspect. The only potential aspect is a stretch: by catching the thrown ball, Splash prevents it from rolling around the room, which minimizes the chances that anyone slips or trips on a loose ball and minimizes the need to lift furniture to retrieve strays. Again, this is a stretch.


Our project solution doesn't really apply to this part, because our use case is simply not targeted at public health, safety, or welfare; it focuses more on the social aspect. We are trying to solve a pretty specific problem, and having considered everything, public health, safety, and welfare are not applicable.


Part B: Social factors

Splash has limited social factors as motivation. It can be postulated that because cup pong is a social game, Splash is a social aid, since it helps users improve at cup pong. I would expect Splash to be most relevant to college students and young adults, who are likely the largest demographic of cup pong players.


Part C: Economic Factors

With regards to production and distribution, we hope our product will be designed so that it can viably be reproduced, either for potential sale or by similar enthusiasts. Our system is relatively modular and easy to produce once we finish the prototype, and our ball-catching system should be consistent enough that its functionality is not limited to the demo: it should ideally work for anyone following our instructions.

Josiah’s Status Report for 9/28

Accomplishments

This week I ran some preliminary real-world tests to determine whether basic projectile motion equations could accurately predict where a ping pong ball would land, given two very close time frames and positions. These tests were run using our smartphones in slow-mo recording (240 fps) and a whiteboard gridded out with a marker, for translating the ball's position in the video to its position in real life. We restricted the axes to x and y only, tossing the ball parallel to the whiteboard. While my calculations turned out to be off by tens of centimeters, by adapting a Python algorithm that takes air resistance into account, that error came down to less than 7 cm. An important takeaway is that because the time between ball frames is so short, getting accurate ball positions is CRITICAL. A difference of a centimeter can make a massive difference in the computed initial velocities.
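
A quick back-of-the-envelope check shows why: at 240 fps, consecutive frames are only about 4 ms apart, so velocity estimates divide any position error by a very small dt.

```python
# v = dx / dt between two consecutive frames, so position error is amplified.
fps = 240
dt = 1 / fps                # ~0.0042 s between frames
position_error = 0.01       # 1 cm of measurement error (m)
print(position_error / dt)  # -> 2.4 m/s of velocity error per cm
```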

Progress

Besides the testing, I whipped up a spreadsheet for our project's bill of materials and added the materials required for the xy-plotting gantry. The total cost came to around $150, and could come down further if there are materials already at CMU we can take advantage of.

Gordon’s Status Report for 9/28

This week I met with Varun, the FPGA TA, and he helped me a lot in figuring out exactly what I need to do to set up the board and get started. Specifically, he pointed me to his GitHub repository, which contains a setup guide. I briefly read through the guide, and when the KRIA board arrived from ECE inventory, I shifted my attention to setting up the actual board. I had learned from Varun that there are quite a few wires and ports to set up, so I looked at what came with the KRIA from inventory and determined that I needed to get a DisplayPort cable and microSD cards from Amazon. The KRIA came with the power supply, ethernet cable, and micro-USB cable, and I could bring a USB mouse and keyboard from home. I sent those orders off, grabbed a lock from Receiving, and chose a box in 1307 to store the KRIA overnight.


The rest of the week I helped the team execute the trajectory testing and write the design review presentation slides. I talked with Jimmy about the camera and KRIA connection, and specifically about where the CV would be done. We had decided to use the OAK-D Pro camera, and it turns out the camera is capable of running the CV itself, so the KRIA's role in our project plan was adjusted to exclude the CV. This is the better design choice, as now all the camera needs to do is send the identified coordinates over, and the KRIA can handle the calculations. Since we still don't know the full capabilities of the OAK-D, and whether it can actually handle the CV, we are keeping CV on the KRIA as a backup.


It took the combined effort of all three of us to set up and execute the trajectory testing, as we had to use a ruler to mark up a whiteboard grid, capture video, and parse through it to get coordinate estimates. The results are explained in the team status post.


For the design review, I took on the role of redoing our block diagram, adding colors and taking inspiration from the guidance post on Canvas. I was pretty proud of how it turned out, and we ironed out a lot of the vague terms now that we have identified the hardware we are using. I also added blocks in a similar style to other sections of the presentation and contributed greatly to the content of those slides.


Next week, besides listening to presentations and giving feedback, I am hoping to receive the parts I ordered and test the setup and connection to the Vivado and Vitis software that I need for the KRIA. I need to figure out how the microSD cards work, and get the KRIA up and running.