Saral’s Status Report for 10/8

This week I focused on refactoring the Computer Vision Localizer from its ‘demo’ state into a clean implementation that matches our interface document. This involved restructuring the entire codebase, removing many hard-coded variables, and exposing the right functions to the rest of the application.

I also worked on creating 8 distinct images for the robots to display and calibrating them for the algorithm via a LUT, to account for camera error and the NeoPixels not being perfectly color accurate.
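As a rough illustration of the LUT idea (the measured values and the nearest-neighbor matching here are assumptions for the sketch, not our actual calibration data):

```python
import numpy as np

# Hypothetical calibration table: for each of the 8 display colors, the mean
# RGB the camera actually measures when the NeoPixels show that color. The
# numbers are placeholders; the real table is filled in during calibration.
MEASURED = np.array([
    [231,  24,  18],   # red
    [ 40, 222,  35],   # green
    [ 21,  30, 240],   # blue
    [240, 230,  42],   # yellow
    [238,  28, 244],   # magenta
    [ 35, 229, 238],   # cyan
    [242, 140,  25],   # orange
    [248, 250, 246],   # white
], dtype=float)

def classify_color(observed_rgb):
    """Map an observed NeoPixel color to the nearest calibrated entry."""
    dists = np.linalg.norm(MEASURED - np.asarray(observed_rgb, dtype=float), axis=1)
    return int(np.argmin(dists))  # index into the 8 display colors
```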

I also worked on making the computer vision code work with the fiducials. The fiducials are the QR-code markers that a human places at the edges of the sandbox (table). Based on the fiducial locations, I wrote an algorithm to detect which fiducials are visible and in what order, then apply a homography to the sandbox and scale it so that each pixel is 1 mm wide. This allows us to place the camera anywhere, at any angle, and get a consistent transformation of the field for localization, path planning, and the error controller! It also allows us to detect the pallets.
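For reference, here is a minimal sketch of the warp step, assuming the four corner fiducials have already been detected and ordered (the field dimensions are illustrative, not our actual sandbox size):

```python
import cv2
import numpy as np

# Illustrative sandbox dimensions in mm; at 1 px = 1 mm the warped image
# has exactly these pixel dimensions.
FIELD_W_MM, FIELD_H_MM = 1200, 800

def warp_to_field(frame, corners_px):
    """Warp a camera frame so the sandbox fills the image at 1 px = 1 mm.

    corners_px: pixel coordinates of the four corner fiducials, ordered
    [top-left, top-right, bottom-right, bottom-left].
    """
    src = np.asarray(corners_px, dtype=np.float32)
    dst = np.float32([[0, 0],
                      [FIELD_W_MM, 0],
                      [FIELD_W_MM, FIELD_H_MM],
                      [0, FIELD_H_MM]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (FIELD_W_MM, FIELD_H_MM))
```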

Attached below is an example. Pic 1 is a sample view from the camera (the fiducials will be cut out in reality), and Pic 2 is the transformed and scaled image using fiducials [1,7,0,6] as the corner points.

Team Status Report for 10/1

This week, the team primarily focused on getting the design presentation ready and ironing out the software architecture. The computer vision and firmware sides of the robot are well on their way, and the motion-planning algorithm has been designed on paper and now needs to be written up.

By the next status report, we hope to have at least one robot moving accurately on the field and able to pick up pallets using hard-coded goal-pose vectors. That target involves further work on camera calibration, field construction, the robot motion controller, and the robot-computer interface.

Saral’s Status Report for 10/1

This week I worked on making the computer vision algorithm more robust to one or more NeoPixels being blocked. This matters because someone moving their hand over the playing field, or an odd camera angle, could otherwise throw off the localization (a sketch of the occlusion-tolerant pose fit is at the end of this update). Additionally, I worked on the Design presentation with the rest of the team and on solidifying some of our software architecture. I have also started building a visualizer for our demo, which will show exactly where the robots are on the field and what paths they are following; this will be an invaluable tool for debugging system issues later in the project. Lastly, tomorrow (Sunday), Omkar and I will work on the closed-loop controller to make the robot motions more accurate.
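To give a flavor of the occlusion handling, here is a sketch of a pose fit that works with any subset of at least two detected NeoPixels. It assumes each detected centroid can be matched to its pixel by color, and the LED layout below is made up for the example:

```python
import numpy as np

# Illustrative NeoPixel layout in the robot frame (mm); the real geometry
# comes from the robot's mechanical design.
ROBOT_PIXELS = {
    0: (-20.0, -20.0),
    1: ( 20.0, -20.0),
    2: ( 20.0,  20.0),
    3: (-20.0,  20.0),
}

def fit_pose(detections):
    """Least-squares rigid fit (2D Kabsch) from robot frame to field frame.

    detections: {pixel_id: (x, y) in field mm} for whichever pixels were
    visible; any 2+ non-coincident points determine (x, y, theta).
    """
    ids = sorted(detections)
    A = np.array([ROBOT_PIXELS[i] for i in ids])  # robot-frame points
    B = np.array([detections[i] for i in ids])    # field-frame points
    a0, b0 = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - a0).T @ (B - b0))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = b0 - R @ a0
    return t[0], t[1], np.arctan2(R[1, 0], R[0, 0])
```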

Saral’s Status Report for 09/24

This week, I helped assemble the robots and got most of them working! Assembly took significantly longer than expected because a couple of parts were finicky and we were missing a few of the longer header pins we needed. All 3 robots are now complete from a hardware perspective, but we have a few electromagnet issues to fix, and robots 2 and 3 have some NeoPixel hardware issues. These are fairly minor things that we will get done in the next week.

Additionally, I got the Computer Vision stack for localization started. I can successfully mask the NeoPixels and find their centroids, and I am currently working on the inverse-pose transform code to enable our localization. The computer vision work is ahead of schedule, which leaves room to put more effort into the robot hardware issues!
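As a sketch of the masking and centroid step (the HSV bounds are example values for one color, not our tuned thresholds; assumes OpenCV 4):

```python
import cv2
import numpy as np

# Example HSV bounds for one NeoPixel color (roughly green); the real
# thresholds are tuned per display color.
LOWER = np.array([50, 120, 120])
UPPER = np.array([70, 255, 255])

def find_centroids(frame_bgr):
    """Mask pixels in the color range and return each blob's (x, y) centroid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate contours
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```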