This week, I finished up the global planner implementation and tested it.
The global planner batches incoming orders to plan the robot’s waypoints, and it can accept new orders online.
I tested it by writing a “simulated” robot ROS node that receives waypoints and sends back location updates, to make sure the global planner node works. For now it essentially teleports to each waypoint instantaneously.
I plan to make the simulation a bit more rigorous by having the robot “drive” to each waypoint, e.g. by sleeping with ros::Duration::sleep() to model travel time.
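A minimal sketch of what such a test node could look like (the topic names and message types here are assumptions for illustration, not necessarily what the planner actually uses):

```cpp
// Minimal "simulated robot" node: echo each waypoint back as a location update.
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>

class SimRobot {
public:
    explicit SimRobot(ros::NodeHandle& nh) {
        // The planner publishes waypoints here (assumed topic name).
        waypointSub_ = nh.subscribe("robot_waypoints", 10,
                                    &SimRobot::onWaypoint, this);
        // The planner listens for location updates here (assumed topic name).
        locationPub_ = nh.advertise<geometry_msgs::PointStamped>("robot_location", 10);
    }

private:
    void onWaypoint(const geometry_msgs::PointStamped& waypoint) {
        // To simulate driving instead of teleporting, sleep for a travel time:
        // ros::Duration(2.0).sleep();
        // "Teleport": immediately report arrival at the commanded waypoint.
        locationPub_.publish(waypoint);
    }

    ros::Subscriber waypointSub_;
    ros::Publisher locationPub_;
};

int main(int argc, char** argv) {
    ros::init(argc, argv, "simulated_robot");
    ros::NodeHandle nh;
    SimRobot robot(nh);
    ros::spin();
    return 0;
}
```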
The global planner might be functionally done, but there are a lot of things I can do to make it more robust.
I know how to make it produce a better route by returning the minimum-cost route across every searched path, but that may drastically increase the runtime, since the search would have to explore all paths instead of returning the first satisfying one.
I believe that route would be optimal, and I thought about proving it by induction on the list of orders somehow, but the proof seems pretty hard.
I also discovered some dynamic programming potential here: memoizing results keyed on a hash of the robot state would take advantage of any repeated searches (however, I have to write a hash function for the robot state first).
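As a sketch of what that memoization could look like (the state fields below are assumptions for illustration, not the actual structs in the repo), a hash functor plus an std::unordered_map would be enough:

```cpp
// Sketch: memoize search results keyed on a hashed robot state.
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

struct RobotState {
    int currentNode;                   // where the robot is in the graph
    std::vector<int> remainingOrders;  // IDs of orders not yet delivered

    bool operator==(const RobotState& other) const {
        return currentNode == other.currentNode &&
               remainingOrders == other.remainingOrders;
    }
};

struct RobotStateHash {
    std::size_t operator()(const RobotState& s) const {
        // Simple hash-combining scheme; good enough for a memo table.
        std::size_t h = std::hash<int>()(s.currentNode);
        for (int id : s.remainingOrders) {
            h ^= std::hash<int>()(id) + 0x9e3779b9 + (h << 6) + (h >> 2);
        }
        return h;
    }
};

// Cache from robot state to the best cost found from that state so far.
std::unordered_map<RobotState, double, RobotStateHash> memo;
```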
Worked on design presentation and helped with parts ordering
Progress:
I am currently on-schedule with my tasks.
Next week’s deliverables:
Next week, I want to write some more rigorous tests, make the planner an optimization problem by minimizing the overall distance, and make it DP with the memoization described above.
In our initial testing, running SLAM on the Xavier using the ZED mini had a low frame rate (about 10 fps). Advaith found that many people have this problem with the default ROS wrapper on the Xavier and have had to make modifications.
We are at risk of going over-budget due to the high cost of high power motors and controllers, and our custom frame. Our mitigation strategy is to look into prebuilt bases and potentially relaxing our requirements to allow for less powerful hardware.
We did calculations to determine our motor and battery requirements, but the calculations rely on some assumptions. We used a safety factor of 1.2; in case that is not enough overhead, we would mitigate by relaxing our speed, weight, and battery-duration requirements.
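As a rough illustration of how the safety factor enters (all of the numbers here are placeholder assumptions, not our final figures): the torque each of N driven motors must supply to hold a robot of mass m on an incline of angle θ with wheel radius r is roughly τ ≈ SF · m · g · sin(θ) · r / N. For example, with m = 40 kg, θ = 10°, r = 0.1 m, N = 2, and SF = 1.2, that gives τ ≈ 1.2 × 40 × 9.81 × 0.17 × 0.1 / 2 ≈ 4 N·m per motor, ignoring rolling resistance and acceleration.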
Changes to System Design
Decided on depth cameras over lidar for vision due to budget constraints; we have an Intel RealSense already.
This week, most of my time was spent implementing the multi-order algorithm. It actually works!
It’s a backtracking algorithm that recurses over all possible states of the robot, and it precomputes all pairwise shortest paths between nodes of the input graph to speed up the search.
It returns the first sequence of moves it finds that satisfies our timing requirements, so for now it is a constraint satisfaction problem rather than an optimization problem.
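A simplified sketch of that structure (the struct and function names below are made up for illustration and are not the repo’s actual API): precompute all-pairs shortest paths with Floyd-Warshall, then backtrack until a move sequence meets every order’s deadline.

```cpp
// Sketch: all-pairs shortest paths + backtracking over pickup/dropoff choices.
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct Order { int pickupNode; int dropoffNode; double deadline; };
struct Move  { int node; };  // next node the robot should visit

class MultiOrderSolver {
public:
    // weights[i][j]: direct travel time i -> j (large value if no edge, 0 on the diagonal).
    explicit MultiOrderSolver(std::vector<std::vector<double>> weights)
        : dist_(std::move(weights)) {
        const int n = static_cast<int>(dist_.size());
        // Floyd-Warshall: dist_[i][j] becomes the shortest travel time i -> j.
        for (int k = 0; k < n; ++k)
            for (int i = 0; i < n; ++i)
                for (int j = 0; j < n; ++j)
                    dist_[i][j] = std::min(dist_[i][j], dist_[i][k] + dist_[k][j]);
    }

    // Fills `plan` and returns true if some ordering of pickups and dropoffs
    // satisfies every order's deadline.
    bool solve(int start, const std::vector<Order>& pending, std::vector<Move>& plan) {
        plan.clear();
        return backtrack(start, 0.0, pending, {}, plan);
    }

private:
    bool backtrack(int node, double t, const std::vector<Order>& pending,
                   const std::vector<Order>& carrying, std::vector<Move>& plan) {
        if (pending.empty() && carrying.empty()) return true;  // everything delivered

        // Branch: travel to the pickup node of any pending order.
        for (std::size_t i = 0; i < pending.size(); ++i) {
            std::vector<Order> nextPending = pending;
            std::vector<Order> nextCarrying = carrying;
            nextCarrying.push_back(nextPending[i]);
            int next = nextPending[i].pickupNode;
            nextPending.erase(nextPending.begin() + i);
            plan.push_back({next});
            if (backtrack(next, t + dist_[node][next], nextPending, nextCarrying, plan))
                return true;
            plan.pop_back();
        }
        // Branch: travel to the dropoff node of any carried order (must meet its deadline).
        for (std::size_t i = 0; i < carrying.size(); ++i) {
            int next = carrying[i].dropoffNode;
            double arrival = t + dist_[node][next];
            if (arrival > carrying[i].deadline) continue;  // misses the deadline
            std::vector<Order> nextCarrying = carrying;
            nextCarrying.erase(nextCarrying.begin() + i);
            plan.push_back({next});
            if (backtrack(next, arrival, pending, nextCarrying, plan))
                return true;
            plan.pop_back();
        }
        return false;  // no feasible continuation from this state
    }

    std::vector<std::vector<double>> dist_;
};
```

Precomputing the shortest paths means each backtracking step is a table lookup rather than a fresh graph search.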
I also started on a ROS node that can act as the global planner, which takes in orders and sends commands and waypoints to the robot.
The code has two layers: the algorithm solver API and the global planner ROS node that wraps it.
The repo is here: https://github.com/pythonicmux/multiorderAlgorithm
To test the algorithm, I created a weighted undirected graph of the engineering area of campus (it lives in the test file in the repo, multiorder_alg_node.cpp). The weights are based on distances between selected road intersections on campus.
I assisted Advaith in trying out SLAM outdoors and setting up the Xavier to try localization on it.
I gave the proposal presentation and helped out with parts prediction, doing some calculations to ensure that our motors can supply enough torque for the inclines and the battery can power everything.
Progress:
I am currently on-schedule with my tasks.
Next week’s deliverables:
Next week, my deliverables are to finalize the global planner code and thoroughly test it with unit tests and a simulation of a user sending it orders and the robot sending it status updates.
Possibly also: make the multiorder algorithm an optimization algorithm by minimizing total distance travelled, and work on an NP-hardness proof.
There is a risk that the WiFi on campus will not be consistent enough for our latency requirements. One contingency plan we have is to place a cellphone with a hotspot onboard the robot if the campus WiFi is not satisfactory, but this likely has implications for our ROS network, since ROS traditionally runs on a local area network.
Another risk is that localization algorithms may not work on campus. A mitigation plan is to test building a map on campus using an Intel Realsense before ordering parts, so we can finalize the correct sensor modalities. Contingency plan is to use fiducial markers.
Changes to System Design
Changed our design from 2 motors per driver to 1 motor per driver based on feedback from Prof. Kim. Using multiple motors on a driver would make accounting for any minor differences between motors very difficult.
Decided on WiFi over LTE as our target for low-level communication protocol based on the suggestion from Prof. Kim that it is simpler to integrate into our robot.
This week, I learned many of the skills needed to build this robot, namely using CAD and programming C++ on ROS.
I used Solidworks to create a lid for the robot:
I did the entirety of the ETH Zurich ROS Course (https://rsl.ethz.ch/education-students/lectures/ros.html) to learn ROS.
This involved writing ROS nodes and services for a simulated Husky robot with many sensors: I wrote a P controller that drove the Husky toward a pillar based on its laser sensor readings, as well as a ROS service/node to stop the robot before a collision.
Above is a picture of the Husky after it drove to the pillar and stopped.
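The core of that P-controller exercise can be sketched roughly like this (the topic names, stopping distance, and gain below are assumptions for illustration, not the exact course solution):

```cpp
// Sketch: steer toward the closest laser return (the pillar) and stop short of it.
#include <cmath>
#include <cstddef>
#include <limits>
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <geometry_msgs/Twist.h>

ros::Publisher cmdPub;

void onScan(const sensor_msgs::LaserScan& scan) {
    // Find the closest valid return and its bearing; assume it is the pillar.
    float minRange = std::numeric_limits<float>::infinity();
    float bearing = 0.0f;
    for (std::size_t i = 0; i < scan.ranges.size(); ++i) {
        if (!std::isfinite(scan.ranges[i])) continue;
        if (scan.ranges[i] < minRange) {
            minRange = scan.ranges[i];
            bearing = scan.angle_min + i * scan.angle_increment;
        }
    }

    geometry_msgs::Twist cmd;           // zero-initialized: stop by default
    if (minRange > 0.5) {               // stop roughly 0.5 m before the pillar
        cmd.linear.x = 0.5;             // constant forward speed
        cmd.angular.z = 2.0 * bearing;  // P control on the heading error
    }
    cmdPub.publish(cmd);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "pillar_p_controller");
    ros::NodeHandle nh;
    cmdPub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 10);
    ros::Subscriber sub = nh.subscribe("scan", 10, onScan);
    ros::spin();
    return 0;
}
```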
I also began thinking about the problem formulation for the multi-order algorithm. I’m pretty sure the problem is NP-hard, but I’ll have to prove it to be sure.
Progress:
I am currently on-schedule with my tasks.
Next week’s deliverables:
Next week, I’m giving the proposal presentation.
I have to state the multi-order problem clearly and mathematically, and come up with a brute-force approach that can find a (possibly non-optimal) solution to a multi-order problem instance; a rough sketch of one possible formulation is below.
I also want to come up with a simulated weighted, undirected graph of the locations the robot can travel to, in order to facilitate a basic brute-force implementation that can solve the multi-order problem.
Ideally, I can write a working brute-force algorithm that solves any simulated problem instance and start working on a polynomial-time algorithm (or a smarter exhaustive search, if I can prove the problem is NP-hard).
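One way the problem might be stated (this is a rough sketch, not the final formulation): given a weighted undirected graph G = (V, E, w), a start node s, and a set of orders {(p_i, d_i, T_i)} where p_i is a pickup node, d_i is a dropoff node, and T_i is a delivery deadline, find a walk starting at s that visits each p_i before its corresponding d_i and reaches each d_i within cumulative travel time T_i. The optimization variant asks for such a walk of minimum total weight.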
This week, I worked on the CAD model for our robot. I am using Solidworks and bringing in models of motors, wheels, and 8020 aluminum rods to get a better spatial idea of our robot design. This will help us plan for materials and make it easier to change our design based on our requirements.
I tested RTABMAP with an Intel Realsense camera in ROS to determine whether it is a viable algorithm for our project. These tests should be done before buying parts, since the perception sensor is a significant investment.
I tested a basic Pure Pursuit controller in MATLAB with a simulated differential-drive robot (a sketch of the core math is included after this list).
Worked with team on Proposal Presentation
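Since the MATLAB test leaned on built-in tooling, here is a rough sketch of just the core pure pursuit step, written in C++ to match the rest of our codebase (the function and parameter names are made up for illustration):

```cpp
// Sketch of one pure pursuit step for a differential-drive robot.
#include <cmath>

struct Pose   { double x, y, theta; };  // robot pose in the world frame
struct Twist2 { double v, omega; };     // commanded linear/angular velocity

// Given the robot pose and a lookahead point on the path (already chosen to be
// lookaheadDist away), compute the velocity command that arcs through it.
Twist2 purePursuitStep(const Pose& robot, double goalX, double goalY,
                       double lookaheadDist, double cruiseSpeed) {
    // Lateral offset of the lookahead point in the robot frame.
    double dx = goalX - robot.x;
    double dy = goalY - robot.y;
    double yLocal = -std::sin(robot.theta) * dx + std::cos(robot.theta) * dy;

    // Pure pursuit curvature: kappa = 2 * y_local / L^2.
    double curvature = 2.0 * yLocal / (lookaheadDist * lookaheadDist);

    Twist2 cmd;
    cmd.v = cruiseSpeed;
    cmd.omega = cruiseSpeed * curvature;  // omega = v * kappa
    return cmd;
}
```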
Schedule Progress
We are on schedule for our project. We have completed the majority of the Proposal Presentation.
Deliverables for Next Week
I wish to complete the CAD model with all boards, actuators, and sensors so that we can finalize our BOM.
Full testing of perception algorithms including ORBSLAM, RTABMAP, and HectorSLAM
Bring the IMU into ROS
I have a Bosch IMU that I want to test using ROS
Involves getting the driver running through a Teensy, then writing a ROS wrapper node (a rough sketch of what that wrapper could look like follows this list).
Write a basic waypoint-following node in ROS using a simulated robot
Will shed some light on the controls aspect of the project
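A rough sketch of what that IMU wrapper node might eventually look like (readImuFromTeensy() is a hypothetical placeholder for the serial driver, and the topic/frame names and sample rate are assumptions):

```cpp
// Sketch: read IMU samples coming from the Teensy and republish as sensor_msgs/Imu.
#include <ros/ros.h>
#include <sensor_msgs/Imu.h>

struct RawImu { double qx, qy, qz, qw, gx, gy, gz, ax, ay, az; };

RawImu readImuFromTeensy() {
    // Stub: in the real node this would parse one sample off the serial port.
    return RawImu{0, 0, 0, 1, 0, 0, 0, 0, 0, 0};
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "bosch_imu_node");
    ros::NodeHandle nh;
    ros::Publisher imuPub = nh.advertise<sensor_msgs::Imu>("imu/data", 50);

    ros::Rate rate(100);  // assumed 100 Hz sample rate
    while (ros::ok()) {
        RawImu raw = readImuFromTeensy();

        sensor_msgs::Imu msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "imu_link";
        msg.orientation.x = raw.qx; msg.orientation.y = raw.qy;
        msg.orientation.z = raw.qz; msg.orientation.w = raw.qw;
        msg.angular_velocity.x = raw.gx; msg.angular_velocity.y = raw.gy;
        msg.angular_velocity.z = raw.gz;
        msg.linear_acceleration.x = raw.ax; msg.linear_acceleration.y = raw.ay;
        msg.linear_acceleration.z = raw.az;

        imuPub.publish(msg);
        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
```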