Saral’s Status Report for 10/8

This week I focused on refactoring the Computer Vision Localizer to bring it from a ‘demo’ state to a clean implementation as outlined in our interface document. This involved redoing the entire code structure, removing a lot of hard-coded variables, and exposing the right functions to the rest of the application.
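
To make the new structure concrete, here is a rough sketch of the kind of interface the localizer now exposes (the class, method, and field names below are illustrative placeholders, not the actual interface document):

from dataclasses import dataclass

@dataclass
class Pose:
    x_mm: float       # field position, millimeters
    y_mm: float
    theta_rad: float  # heading

class CVLocalizer:
    """Hypothetical public surface of the refactored localizer."""
    def __init__(self, camera_index=0):
        self.camera_index = camera_index  # configurable, no hard-coded globals

    def get_pose(self, robot_id):
        """Return the latest estimated Pose for one robot."""
        raise NotImplementedError  # sketch only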

I also worked on creating 8 distinct images for the robots to display and on calibrating them for the algorithm via a LUT, to account for the camera and the NeoPixels not being perfectly color accurate.
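
The LUT idea, in a minimal sketch: classify a detected pixel by the nearest observed color rather than the commanded color. The observed-color rows below are made-up placeholders, not our measured calibration:

import numpy as np

# Placeholder LUT: for each of the 8 displayed patterns, the RGB the camera
# actually observes. These rows are illustrative; the real values come from
# calibration measurements.
OBSERVED_LUT = np.array([
    [231,  52,  48], [ 60, 210,  70], [ 55,  66, 229], [222, 214,  64],
    [215,  70, 205], [ 62, 208, 212], [235, 235, 235], [231, 142,  52],
], dtype=float)

def classify_color(observed_rgb):
    """Index of the calibrated color nearest to an observed RGB sample."""
    diffs = OBSERVED_LUT - np.asarray(observed_rgb, dtype=float)
    return int(np.argmin((diffs ** 2).sum(axis=1)))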

I also worked on making the computer vision code work with the fiducials. The fiducials are QR-code-like markers that a human places at the edges of the sandbox (table). Based on the fiducial locations, I wrote an algorithm to detect which fiducials are visible and in what order, then apply a homography to the sandbox and scale it such that each pixel is 1 mm wide. This allows us to place the camera anywhere, at any angle, and get a consistent transformation of the field for localization, path planning, and the error controller! It also allows us to detect the pallets.
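
A minimal sketch of the rectification step, assuming OpenCV and an illustrative 600 mm x 400 mm sandbox (the actual dimensions and the fiducial-detection code are omitted):

import cv2
import numpy as np

SANDBOX_W_MM, SANDBOX_H_MM = 600, 400  # illustrative field size

def rectify_sandbox(frame, corner_px):
    """corner_px: centers of the four corner fiducials in image pixels,
    ordered [top-left, top-right, bottom-right, bottom-left]."""
    src = np.array(corner_px, dtype=np.float32)
    # Destination corners chosen so the output is 1 pixel per 1 mm.
    dst = np.array([[0, 0], [SANDBOX_W_MM, 0],
                    [SANDBOX_W_MM, SANDBOX_H_MM], [0, SANDBOX_H_MM]],
                   dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (SANDBOX_W_MM, SANDBOX_H_MM))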

Attached below is an example. Pic 1 is a sample view from the camera (the fiducials will be cut out in reality), and Pic 2 is the transformation and scaling of the image using fiducials [1,7,0,6] as the corner points.

Omkar’s Status Report for 10/8

This week, I presented our design proposal and explained how the robots interact with the different components in our software stack. We got the robot controls working with the computer vision code to have a robot move along a straight-line trajectory with both a feedforward term and a feedback term (only a proportional controller for now). This controller took in the desired next pose from the spoofed path planning and the current pose from the computer vision, and output the speeds of the two servos. This control scheme was able to reject disturbances in the environment in the form of a person pushing the robot (Video is here). I also worked on taking an arbitrary path from the path planning module and computing the feedforward term within our newly defined software interface design. My section seems to be ahead of schedule. By next week, we should be unit testing and integrating the controls, computer vision, and path planning modules to have a single robot follow a given path. I should also work on implementing the controls for picking up and dropping off pallets.
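
A simplified sketch of that controller (the gains, wheel base, and servo-speed convention are illustrative assumptions, not our tuned values):

import math

KP_DIST = 2.0         # proportional gain on distance error (illustrative)
KP_HEADING = 4.0      # proportional gain on heading error (illustrative)
WHEEL_BASE_MM = 80.0  # distance between the wheels (illustrative)

def servo_speeds(desired, current, v_ff_mm_s):
    """desired, current: (x_mm, y_mm, theta_rad) poses; v_ff_mm_s is the
    feedforward speed from the planned trajectory. Returns (left, right)."""
    dx, dy = desired[0] - current[0], desired[1] - current[1]
    heading_err = math.atan2(dy, dx) - current[2]
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))  # wrap
    dist_err = math.hypot(dx, dy)
    v = v_ff_mm_s + KP_DIST * dist_err  # forward speed: feedforward + P term
    w = KP_HEADING * heading_err        # turn rate from heading error
    # Differential-drive mixing into left/right wheel speeds.
    return v - w * WHEEL_BASE_MM / 2, v + w * WHEEL_BASE_MM / 2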

Prithu’s Status Report for 10/1

This week, most of my time was dedicated to researching and designing the algorithm that we will use for motion planning. After looking at various multi-agent motion planning papers (including ones using dynamic planning), I ended up choosing a derivative of an algorithm that assigns each robot a priority and plans in the space-time configuration space. In this way, we are able to pre-compute the robot paths such that they don’t result in a collision. Once a robot has completed its task, it is assigned the lowest priority and plans around the other robots in a similar way. I have started writing out this algorithm, and I plan to have a proof of concept done by tomorrow (Sunday) EOD. In addition to this, I worked with Omkar and Saral on the design presentation that Omkar will be giving next week.
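
A toy sketch of the idea on a small grid (the moves, the collision model, and the re-prioritization of finished robots are all simplified relative to the real planner):

from collections import deque

MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait + 4-connected steps

def plan(start, goal, grid_w, grid_h, reserved, max_t=50):
    """BFS in (x, y, t). `reserved` holds (x, y, t) cells already claimed
    by higher-priority robots."""
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (x, y), t, path = frontier.popleft()
        if (x, y) == goal:
            return path  # one (x, y) waypoint per timestep
        for dx, dy in MOVES:
            nx, ny, nt = x + dx, y + dy, t + 1
            if (0 <= nx < grid_w and 0 <= ny < grid_h and nt <= max_t
                    and (nx, ny, nt) not in reserved
                    and ((nx, ny), nt) not in seen):
                seen.add(((nx, ny), nt))
                frontier.append(((nx, ny), nt, path + [(nx, ny)]))
    return None  # no collision-free path within max_t steps

def plan_all(tasks, grid_w, grid_h):
    """tasks: (start, goal) pairs in priority order; each planned path
    reserves its space-time cells for the robots planned after it."""
    reserved, paths = set(), []
    for start, goal in tasks:
        path = plan(start, goal, grid_w, grid_h, reserved)
        paths.append(path)
        for t, cell in enumerate(path or []):
            reserved.add((*cell, t))
    return paths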

Team Status Report for 10/1

This week, the team primarily focused on getting the design presentation ready and ironing out the software architecture. The computer vision and firmware sides of the robot are well on their way, and the motion-planning algorithm has been designed on paper and now needs to be written up.

By the next status report, we hope to have at least one robot moving accurately on the field and able to pick up pallets using hard-coded goal-pose vectors. That target involves further work on the camera calibration, field construction, robot motion controller, and the robot-computer interface.

Saral’s Status Report for 10/1

This week I worked on making the computer vision algorithm more robust to one or more pixels being blocked. This is quite useful since someone moving their hand over the playing field, or an odd camera angle, could otherwise throw off the computer vision algorithm’s localization. Additionally, I worked on the design presentation with the rest of the team and on solidifying some of our software architecture. I have also started work on building a visualizer for our demo. This visualizer will show exactly where the robots are on the field and what paths they are following; it will be an invaluable tool for debugging system issues later in the project. Lastly, tomorrow (Sunday), Omkar and I will be working on the closed-loop controller to make the robot motions more accurate.
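
A minimal sketch of what the visualizer could look like, assuming matplotlib and illustrative field dimensions and data shapes:

import math
import matplotlib.pyplot as plt

def draw_field(robot_poses, robot_paths, field_w_mm=600, field_h_mm=400):
    """robot_poses: list of (x_mm, y_mm, theta_rad); robot_paths: list of
    [(x_mm, y_mm), ...] planned waypoints, one path per robot."""
    fig, ax = plt.subplots()
    ax.set_xlim(0, field_w_mm)
    ax.set_ylim(0, field_h_mm)
    ax.set_aspect("equal")
    for (x, y, theta), path in zip(robot_poses, robot_paths):
        xs, ys = zip(*path)
        ax.plot(xs, ys, "--")  # planned path
        ax.plot(x, y, "o")     # current position
        ax.annotate("",        # heading arrow
                    xy=(x + 30 * math.cos(theta), y + 30 * math.sin(theta)),
                    xytext=(x, y), arrowprops=dict(arrowstyle="->"))
    plt.show()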

Omkar’s Status Report for 10/1

This week, I wrote the firmware for all of the robots’ peripherals (screen, LEDs, servos, and electromagnet). I also started the communication firmware that allows a computer to send POST requests to the robot. I brought up one robot in its entirety. The other robots are working, but some of their NeoPixel LEDs are not soldered properly, so only a few of the LEDs turn on. I also worked on our design presentation slides, which I will present in the coming week. Our project is on schedule. We are planning on meeting tomorrow to determine the camera mounting and to interface the robot firmware with the computer vision, giving us a better way of controlling the robots; we found out that the servos have a hard time driving straight, either because the servos are not exactly the same or because the PWM on the ESP8266 is driven by software interrupts. By next week, I aim to have a more robust communication framework in place and to start on the controls software to get the robots to follow a straight line, and hopefully a more complicated path.
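
For illustration, here is what the computer side of that link might look like (the endpoint path, JSON fields, and IP address are hypothetical placeholders, not the actual firmware API):

import requests

ROBOT_IP = "192.168.4.1"  # example ESP8266 address, not our actual config

def send_servo_speeds(left, right):
    """POST a speed command to one robot (endpoint and fields are made up)."""
    resp = requests.post(f"http://{ROBOT_IP}/servos",
                         json={"left": left, "right": right}, timeout=0.5)
    resp.raise_for_status()

send_servo_speeds(0.4, 0.4)  # e.g., drive both wheels forward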

Team Status Report for 9/24

Looking ahead, our most significant risks are the electromagnet not working initially and the camera lens being distorted at the edges. We plan to mitigate these risks by spending more time debugging the firmware and hardware on the MCU, the robot PCB, and the electromagnet, and by trying to understand why our electromagnet could not attract a paperclip. We found resources from the manufacturers, namely Seeed and Keyestudio, about how to use their products with an Arduino, so we are confident that it should work soon. For the camera, we are going to calibrate it by moving a robot (with lit NeoPixel LEDs) relative to a fixed camera and measuring how the distances between the centers of the LEDs change based on where the robot is. From there, we will have an idea of how the camera lens is curved at the edges compared to the center of view.
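
One way we might reduce those LED-spacing measurements to a distortion estimate, sketched below (the data format and the simple quadratic radial model are assumptions on our part):

import numpy as np

def radial_distortion_profile(samples):
    """samples: (radius_px_from_image_center, apparent_led_spacing_px) pairs
    collected as the robot moves around the field. Returns k1 in the simple
    radial model spacing_scale(r) ~= 1 + k1 * r**2."""
    r = np.array([s[0] for s in samples], dtype=float)
    spacing = np.array([s[1] for s in samples], dtype=float)
    scale = spacing / spacing[np.argmin(r)]  # normalize to the most central sample
    k1 = np.polyfit(r ** 2, scale - 1.0, 1)[0]  # slope of the linear fit
    return k1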

We are moving the robot field construction to the coming week, but everything else on the schedule should stay the same. As of now, we do not have any changes to the design or requirements, but we are reviewing and discussing the feedback that we got from the proposal presentation.

We got PCBs back this week and did an initial fit test: 

Then, we reflowed and assembled three robots:

We also got the screen working:

Omkar’s Status Report for 09/24

I helped assemble the robots since the PCBs arrived in the middle of this week. Three robots were reflowed, and I started writing initial code to test that each of the individual components on the robots was working. I wrote code to test the servos, the neopixel LEDs, and the screen. I am still in the middle of debugging why we can’t turn on the electromagnet. Some of the robots are in different stages of bring-up. I think we should create a spreadsheet to track what hardware/firmware is working on which robot. We were unable to finish creating the field, but we discussed the plan for setting that up. Other than that, we seem to be on track for our schedule. We are going to build the field in the coming week. In the next week, I want to finish writing the firmware for all of the robots and ensure that all components are working on all robots.

Prithu’s Status Report for 09/24

The first part of this week was dedicated to our proposal presentation, which I gave on Wednesday. Towards the middle of the week, our fabricated PCBs arrived, and we began constructing the robots Wednesday night. We were able to reflow and solder 3 robot boards and attach all of the components (i.e., servos, wheels, electromagnet). We also designed and 3D-printed a support for the front of the robot, which is needed since the robot only has two wheels. We were able to get one robot fully working, but we are still waiting to pick up the NeoPixels (which I think arrived yesterday) to finish the other two. Our plan for this weekend is to start development on the firmware, CV, and motion planning code.

Saral’s Status Report for 09/24

This week, I helped assemble the robots and got most of them working! Assembling the robots took significantly longer than expected because a couple of parts were finicky and we were missing a few of the longer header pins we needed. Currently, all 3 robots are finished from a hardware perspective, but we have a few electromagnet issues to fix, and robots 2 and 3 have some NeoPixel hardware issues. However, these are fairly minor things that we will get done in the next week.

Additionally, I got the Computer Vision stack for localization started. I am able to successfully mask the NeoPixels and find their centroids. I am currently working on the inverse-pose-transform code to enable our localization. The computer vision progress is ahead of schedule, which leaves room to put more work into the robot hardware issues!
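
A minimal sketch of the masking and centroid step, assuming OpenCV 4 (the HSV thresholds here are illustrative, not our tuned values):

import cv2

def led_centroids(frame_bgr, min_area_px=10):
    """Mask bright, saturated pixels (the lit NeoPixels) and return the
    centroid of each blob in image coordinates."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 200), (179, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area_px:  # skip specks of noise
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids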