Saral’s final status report (12/10)

For the final push on our project, I worked with the team to knock the last items off our checklist and make our system as robust as possible. Specifically, Omkar and I tweaked the electromagnet circuit to de-magnetize the magnet more effectively during drop-offs (a series resistor with the flyback diode), added bulk capacitance on the regulator output to absorb the inrush current spike on the rising edge of enabling the electromagnet, and color-coded each robot's path in the visualizer to make it more usable.
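The de-magnetization tweak is worth a quick back-of-the-envelope check: with a plain flyback diode the coil current decays with time constant L/R_coil, and adding a series resistor in the diode path shortens that decay (faster de-magnetization) at the cost of a larger turn-off voltage spike across the driver. The numbers below are illustrative assumptions, not measurements of our actual coil:

```python
# Illustrative values only (assumed for this sketch, not measured):
L_coil = 0.05    # electromagnet inductance, henries
R_coil = 10.0    # coil winding resistance, ohms
I_hold = 0.5     # holding current, amps
R_fly  = 40.0    # series resistor added in the flyback diode path, ohms

# Current decay time constant when the driver switches off:
tau_diode_only = L_coil / R_coil            # plain flyback diode
tau_with_res   = L_coil / (R_coil + R_fly)  # diode + series resistor

# The trade-off: the switch now sees a bigger turn-off spike,
# roughly I * R_fly plus one diode drop above the supply rail.
v_spike = I_hold * R_fly + 0.7
```

With these example numbers the decay time constant drops 5x, which is why the pallet releases cleanly instead of clinging to the residual field.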

I also worked with the rest of the team on minor fixes, like edge-case handling for the robots, and on filming videos of our final demos in case we run into issues at the presentations.

Team’s status report 12/3/2022

The team worked on finishing up the capstone project. In particular, we shifted our focus from putting the final touches and cleanups on the project to documentation and presentation prep. Specifically, we worked on our final presentation, our final report, and recording various videos. We also collected our metrics for the project.

Saral’s status report 12/3/2022

This week I focused on tying up all the final loose ends: charging all the robots, fixing robot issues like weak wheels, dying voltage regulators, etc. I also improved the performance of the sense-plan-act loop by optimizing the computer-vision visualizer.

Saral’s Status Report 11/19/2022

This week I spent my time making our robot system work robustly with numerous robots. Specifically, I focused on making better, larger laminated goal drop-off markers, scaling the visualizer to work with 4 fiducials, working on reversing the robots after a drop-off, etc.


We’re almost at the validation phase and ready to collect metrics on the robots’ pickups and drop-offs against our design goals and quantitative metrics.
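The per-trial bookkeeping for those metrics can be as simple as tallying pickup and drop-off outcomes; a minimal sketch (field names are made up for illustration):

```python
def summarize_trials(trials):
    """trials: list of dicts like {"pickup": True, "dropoff": False}.
    Returns the pickup and drop-off success rates over all trials."""
    n = len(trials)
    return {
        "pickup_rate":  sum(t["pickup"]  for t in trials) / n,
        "dropoff_rate": sum(t["dropoff"] for t in trials) / n,
    }
```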

Saral’s Status Report for 11/12/2022

(Disclaimer: This was a quieter week for capstone progress, since my hands were tied with several other classes that had major deadlines around this time.)


This week I focused on improving the reliability of pallet pickup, especially the weight transfer and poor traction of the robot after it picks up the pallet. Specifically, I learnt how to use the waterjet in TechSpark and cut a batch of prototype pallet pieces for us to try picking up. We intend to do integration on this tomorrow (Sunday) morning.

I also focused on making the codebase friendly to parallel robots, since a couple of the functions were written to handle only one robot and we’re now scaling to multiple bots.
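That kind of single-robot-to-N-robots refactor mostly means replacing module-level state with per-robot state keyed by id. A hypothetical sketch (names are illustrative, not our actual codebase):

```python
class RobotState:
    """Per-robot state, so the control functions scale past one robot."""
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.pose = (0.0, 0.0, 0.0)  # x, y, theta
        self.goal = None

def make_fleet(robot_ids):
    # One state object per robot instead of module-level globals.
    return {rid: RobotState(rid) for rid in robot_ids}

def update_pose(fleet, robot_id, pose):
    # Every function takes the fleet + a robot id rather than
    # assuming there is exactly one robot.
    fleet[robot_id].pose = pose
```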

Team status report 11/5

Last week our team was able to make one robot follow an arbitrary path on the field. This week our goal was to robust-ify that so we can put pallets on the field and have the robot pick them up!


We fell a little short of that goal, but can pick up the pallets with roughly 50% reliability. The issues we ran into were the planner occasionally failing to find a path, and the robot taking hard corners around the planned path.


The first problem, with the planner, is being actively addressed, and the second, the hard corners, has now been fixed with the updated cubic spline implementation!
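To illustrate how a cubic segment rounds off hard corners while still passing through the waypoints, here is a Catmull-Rom interpolant, one common cubic-spline variant (not necessarily the exact formulation in our implementation):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic Catmull-Rom point between waypoints p1 and p2, t in [0, 1].
    The neighboring waypoints p0 and p3 set the tangents at p1 and p2,
    which is what replaces a hard corner with a smooth curve."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2*b + (c - a)*t + (2*a - 5*b + 4*c - d)*t2 + (3*b - a - 3*c + d)*t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Sampling t over [0, 1] for each consecutive waypoint pair yields a path the robot can follow without the sharp heading changes that were causing trouble.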


We also reduced the network/communication latency with the robots using multi-threading, which gives us much better performance with our PID error controller.
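The multi-threading change can be pictured as one sender thread per robot, each draining its own command queue, so a slow link to one robot doesn’t stall the control loop for the others. A simplified sketch with assumed names (the real version writes to sockets instead of a log list):

```python
import threading
import queue

def robot_sender(cmd_q, sent_log):
    # Drain one robot's command queue on a dedicated thread.
    while True:
        cmd = cmd_q.get()
        if cmd is None:        # sentinel: shut this sender down
            break
        sent_log.append(cmd)   # stand-in for the actual socket send

def start_senders(robot_ids):
    queues, logs, threads = {}, {}, {}
    for rid in robot_ids:
        queues[rid] = queue.Queue()
        logs[rid] = []
        t = threading.Thread(target=robot_sender,
                             args=(queues[rid], logs[rid]), daemon=True)
        t.start()
        threads[rid] = t
    return queues, logs, threads
```

The control loop just enqueues commands and returns immediately, which is where the latency win for the PID controller comes from.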


Our goal for next week is to continue to improve the pickup reliability via the approach angle and also add multiple robots into the loop!

Saral’s Status Report for 11/5

Since the computer vision is pretty much done and stable, I focused my efforts this week on various smaller odds and ends to improve the group’s standing and progress.

Some of the things I worked on were adding safety features around the velocity control and scaling the velocities appropriately when the thresholds were overshot.
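The threshold scaling amounts to uniformly rescaling the commanded velocity vector whenever its magnitude exceeds the limit, so the heading is preserved; a minimal sketch (names and the limit value are illustrative):

```python
import math

def limit_velocity(vx, vy, v_max):
    # If the commanded speed exceeds v_max, scale both components
    # down uniformly so the direction of travel is unchanged.
    speed = math.hypot(vx, vy)
    if speed > v_max:
        scale = v_max / speed
        return vx * scale, vy * scale
    return vx, vy
```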

I also helped manufacture 2 pallets for us to use in our testing.

I also helped out with some of the debugging around the cubic-spline generation to smooth out the path and make it easier for the robot to follow.

Saral’s Status Report 10/29

This week I spent a lot of time on robustifying the computer vision algorithm to perform well in various lighting conditions and with blocked LEDs used for localization. In parallel I brought up a fiducial based robot localization system that seems to be far more reliable than the LED based approach. Since we have 2 working approaches, we might be able to perform some sort of sensor fusion on them too.
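One simple form that fusion could take (purely a sketch, we haven’t committed to an approach) is a complementary weighted blend of the two pose estimates, weighting the fiducial estimate more heavily since it has proven more reliable, and blending the angle through sin/cos so wrap-around at ±π is handled correctly:

```python
import math

def fuse_poses(pose_led, pose_fid, w_fid=0.7):
    """Blend two (x, y, theta) pose estimates. w_fid is the weight on
    the fiducial-based estimate; the LED-based estimate gets the rest."""
    w_led = 1.0 - w_fid
    x = w_led * pose_led[0] + w_fid * pose_fid[0]
    y = w_led * pose_led[1] + w_fid * pose_fid[1]
    # Blend the heading via its sin/cos components to avoid the
    # discontinuity at +/-pi that a naive average would hit.
    s = w_led * math.sin(pose_led[2]) + w_fid * math.sin(pose_fid[2])
    c = w_led * math.cos(pose_led[2]) + w_fid * math.cos(pose_fid[2])
    return (x, y, math.atan2(s, c))
```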


Additionally, we met as a team to perform integration and managed to sort out a lot of bugs in the interface between the computer vision and the rest of the robot system.

Saral’s Status Update 10/22

This week, I worked on robust-ifying the Computer vision LED-Robot detection.


As a quick recap, the computer vision localization works by applying an image mask to the captured frame to isolate the LEDs of the appropriate robot. After this, the algorithm applies the inverse affine transform, scaling, etc.
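For concreteness, here is what inverting a 2-D affine transform looks like as a pure-Python sketch (the real pipeline also handles masking and scaling; the coefficient names here are generic, not from our code):

```python
def invert_affine(a, b, tx, c, d, ty):
    """Forward map (field -> image): u = a*x + b*y + tx, v = c*x + d*y + ty.
    Returns a function applying the inverse map (image -> field)."""
    det = a * d - b * c  # must be nonzero for the transform to be invertible
    def to_field(u, v):
        # Multiply (u - tx, v - ty) by the inverse of [[a, b], [c, d]].
        x = ( d * (u - tx) - b * (v - ty)) / det
        y = (-c * (u - tx) + a * (v - ty)) / det
        return x, y
    return to_field
```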


However, due to manufacturing variation in our LEDs, changing ambient lighting conditions, camera inaccuracies, off-axis image shift, etc., a hardcoded LUT mapping colors to robots is simply not robust and has been giving us a lot of issues.


As such, I refactored the computer vision to run an unsupervised clustering of the LED colors at runtime and then classify each detection by its nearest cluster centroid (a k-nearest-neighbors-style distance check). This adds practically no extra latency/compute overhead at runtime, since classification is just a distance calculation to each cluster’s centroid.
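The runtime classification step reduces to a nearest-centroid lookup; a minimal sketch (the centroid values here are invented for illustration, not measured from our LEDs):

```python
def nearest_centroid(pixel, centroids):
    """pixel: an (r, g, b) color; centroids: dict mapping a cluster
    label to its (r, g, b) centroid. Returns the label of the closest
    centroid, using squared distance since sqrt doesn't change the argmin."""
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return min(centroids, key=lambda label: dist2(pixel, centroids[label]))
```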


Included below is a visualization of 4 robot colors (orange, two shades of blue, and green), along with the cluster centroids and the detected pixels. As you can see, two LEDs of nominally the same color often have enough noise/discrepancy in detection that the clustering approach is clearly the right one for this problem.