Omkar’s Status Report for 10/8

This week, I presented our design proposal and explained how the robots interact with the different components in our software stack. We also got the robot controls working with the computer vision code: a robot can now follow a straight-line trajectory using both a feedforward term and a feedback term (currently just a proportional controller). The controller takes in the desired next pose from the spoofed path planning and the current pose from the computer vision, and outputs the speeds of the two servos. This control scheme was able to reject disturbances in the environment, such as a person pushing the robot (Video is here). I also worked on taking an arbitrary path from the path planning module and computing the feedforward term within our newly defined software interface design. My section is ahead of schedule. By next week, we should be unit testing and integrating the controls, computer vision, and path planning modules so that a single robot can follow a given path. I should also start implementing the controls for picking up and dropping off pallets.
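
For context, a minimal sketch of this kind of feedforward-plus-proportional pose controller for a differential-drive robot is below. It is an illustration, not our actual interface: the function names, gains, and wheel-base value are all hypothetical placeholders.

```python
import numpy as np

def compute_wheel_speeds(current_pose, desired_pose, v_ff,
                         kp_lin=1.0, kp_ang=2.0, wheel_base=0.1):
    """Feedforward + proportional feedback for a differential-drive robot.

    current_pose, desired_pose: (x, y, theta) tuples, e.g. from the
    computer vision and path planning modules. v_ff is the nominal
    forward speed supplied by the planner (the feedforward term).
    Returns (left, right) wheel speeds to be mapped to servo commands.
    """
    x, y, theta = current_pose
    xd, yd, theta_d = desired_pose

    # Project the position error onto the robot's heading to get an
    # along-track error, and wrap the heading error into [-pi, pi].
    along_track = np.cos(theta) * (xd - x) + np.sin(theta) * (yd - y)
    heading_err = (theta_d - theta + np.pi) % (2 * np.pi) - np.pi

    # Feedforward plus proportional feedback on each channel.
    v = v_ff + kp_lin * along_track
    w = kp_ang * heading_err

    # Convert unicycle commands (v, w) into left/right wheel speeds.
    left = v - w * wheel_base / 2
    right = v + w * wheel_base / 2
    return left, right
```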

Prithu’s Status Report for 10/1

This week, most of my time was dedicated to researching and designing the algorithm that we will use for motion planning. After looking at various multi-agent motion planning papers (including ones that use dynamic planning), I settled on a derivative of an algorithm that assigns each robot a priority and plans in the space-time configuration space. This lets us pre-compute the robot paths so that they do not result in a collision. Once a robot has completed its task, it is assigned the lowest priority and replans around the other robots in the same way. I have started writing out this algorithm, and I plan to have a proof-of-concept done by tomorrow (Sunday) EOD. In addition, I worked with Omkar and Saral on the design presentation that Omkar will be presenting next week.
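
As a rough illustration of the idea (a sketch under an assumed grid discretization, not our actual implementation), the code below runs A* in (x, y, t) for each robot in priority order, reserving the space-time cells of every previously planned path:

```python
from heapq import heappush, heappop

def plan_in_space_time(grid, start, goal, reserved, max_t=200):
    """A* in (x, y, t) configuration space for a single robot.

    grid: 2D list of 0 (free) / 1 (static obstacle).
    reserved: set of (x, y, t) cells claimed by higher-priority robots.
    Returns a list of (x, y) positions indexed by timestep, or None.
    """
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]
    visited = set()
    while open_set:
        _, t, (x, y), path = heappop(open_set)
        if (x, y) == goal:
            return path
        if (x, y, t) in visited or t >= max_t:
            continue
        visited.add((x, y, t))
        # Waiting in place is a legal move in space-time planning.
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < len(grid[0]) and 0 <= ny < len(grid)
                    and grid[ny][nx] == 0
                    and (nx, ny, t + 1) not in reserved):
                heappush(open_set, (t + 1 + h((nx, ny)), t + 1,
                                    (nx, ny), path + [(nx, ny)]))
    return None

def prioritized_plan(grid, starts, goals):
    """Plan robots in priority order, reserving each path's space-time cells."""
    reserved, paths = set(), []
    for start, goal in zip(starts, goals):
        path = plan_in_space_time(grid, start, goal, reserved)
        paths.append(path)
        if path:
            reserved.update((x, y, t) for t, (x, y) in enumerate(path))
    return paths
```

Note that this simplified version only reserves cells, so it would still need an extra check to rule out edge (swap) conflicts where two robots pass through each other.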

Team Status Report for 10/1

This week, the team primarily focused on getting the design presentation ready and ironing out the software architecture. The computer vision and firmware sides of the robot are well on their way, and the motion-planning algorithm has been designed on paper and now needs to be written up.

By the next status report, we hope to have at least one robot moving accurately on the field and picking up pallets using hard-coded goal-pose vectors. That target involves further work on camera calibration, field construction, the robot motion controller, and the robot-computer interface.

Saral’s Status Report for 10/1

This week, I worked on making the computer vision algorithm more robust to one or more pixels being blocked. This is quite useful since someone moving their hand over the playing field, or an odd camera angle, could throw off the algorithm’s localization. Additionally, I worked on the design presentation with the rest of the team and helped solidify some of our software architecture. I have also started building a visualizer for our demo. It will show exactly where the robots are on the field and what paths they are following, which will be an invaluable tool for debugging system issues later in the project. Lastly, tomorrow (Sunday), Omkar and I will be working on the closed-loop controller to make the robot motions more accurate.
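
A minimal sketch of what such a visualizer might look like, assuming the vision module reports poses as (x, y, theta) and the planner provides paths as waypoint lists; the field size and every name here are placeholders:

```python
import math
import matplotlib.pyplot as plt

def draw_field(robot_poses, paths, field_size=(1.2, 1.2)):
    """Draw each robot's current pose and its planned path on the field.

    robot_poses: list of (x, y, theta) tuples from the vision module.
    paths: list of waypoint lists [(x, y), ...] from the path planner.
    """
    fig, ax = plt.subplots()
    ax.set_xlim(0, field_size[0])
    ax.set_ylim(0, field_size[1])
    ax.set_aspect("equal")

    for (x, y, theta), path in zip(robot_poses, paths):
        # Planned path as a dashed line, pose as a dot plus heading arrow.
        if path:
            xs, ys = zip(*path)
            ax.plot(xs, ys, "--", linewidth=1)
        ax.plot(x, y, "o")
        ax.arrow(x, y, 0.05 * math.cos(theta), 0.05 * math.sin(theta),
                 head_width=0.02)
    plt.show()

# Example: one robot near a corner, heading along +x.
draw_field([(0.2, 0.2, 0.0)], [[(0.2, 0.2), (0.6, 0.2), (0.6, 0.8)]])
```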

Omkar’s Status Report for 10/1

This week, I wrote the firmware for all of the robots’ peripherals (screen, LEDs, servos, and electromagnet). I also started on the communication firmware that allows a computer to send POST requests to the robot, and I brought up one robot in its entirety. The other robots are working, but some of their NeoPixel LEDs are not soldered properly, so only a few of the LEDs turn on. I also worked on our design presentation slides, which I will present in the coming week. Our project is on schedule. We are planning to meet tomorrow to determine the camera mounting and to interface the robot firmware with the computer vision for a better way of controlling the robots, since we found that the servos have a hard time driving straight, either because the servos are not exactly identical or because the PWM on the ESP8266 is driven by software interrupts. By next week, I aim to have a more robust communication framework in place and to start on the controls software to get the robots to follow a straight line and, hopefully, a more complicated path.
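
For illustration, here is a hedged sketch of what the computer-side half of that POST interface could look like, assuming a JSON payload over HTTP; the route, field names, and IP address are placeholders, not the firmware’s actual API:

```python
import requests

# Hypothetical route and payload; the real names depend on the handlers
# registered in the ESP8266 firmware.
ROBOT_URL = "http://192.168.1.42/command"

def send_command(left, right, magnet_on=False):
    """POST one control update (servo speeds, electromagnet state) to a robot."""
    payload = {"left": left, "right": right, "magnet": magnet_on}
    resp = requests.post(ROBOT_URL, json=payload, timeout=0.5)
    resp.raise_for_status()  # surface connection/firmware errors early
    return resp

# Example: a slight right-wheel bias to compensate for mismatched servos.
send_command(0.40, 0.42)
```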