Wrote some graph visualizations for the GUI to give better telemetry to the emergency operator. Now, the GUI shows graphs of the battery life and linear/angular velocities.
Progress:
We’re focusing on putting final touches on Grubtub and finishing up the final documents!
Wrote a data association program with Advaith that can track multiple pedestrians at once.
Wrote a simple P controller that drives to waypoints using the robot's heading. It worked relatively well for a day before the robot started veering off into the grass, away from the waypoint, more and more often. We are debugging this now.
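For reference, a minimal sketch of that kind of heading-based P controller is below; the topic names, waypoint, gain, and forward speed are placeholder assumptions rather than what actually runs on the robot.

```cpp
// Minimal sketch of a heading-based P controller (assumed topic names and
// gains; not the exact node running on the robot).
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <nav_msgs/Odometry.h>
#include <tf/transform_datatypes.h>
#include <cmath>

// Hypothetical waypoint in the odometry frame.
static const double kWaypointX = 10.0, kWaypointY = 5.0;
static const double kKpHeading = 1.0;    // proportional gain on heading error
static const double kLinearSpeed = 0.5;  // constant forward speed (m/s)

ros::Publisher cmd_pub;

void odomCallback(const nav_msgs::Odometry::ConstPtr& odom) {
    const double x = odom->pose.pose.position.x;
    const double y = odom->pose.pose.position.y;
    const double yaw = tf::getYaw(odom->pose.pose.orientation);

    // Bearing from the robot to the waypoint, and the wrapped heading error.
    const double bearing = std::atan2(kWaypointY - y, kWaypointX - x);
    double error = bearing - yaw;
    error = std::atan2(std::sin(error), std::cos(error));  // wrap to [-pi, pi]

    geometry_msgs::Twist cmd;
    cmd.linear.x = kLinearSpeed;
    cmd.angular.z = kKpHeading * error;   // turn toward the waypoint
    cmd_pub.publish(cmd);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "heading_p_controller");
    ros::NodeHandle nh;
    cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    ros::Subscriber odom_sub = nh.subscribe("odom", 1, odomCallback);
    ros::spin();
    return 0;
}
```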
Tested the robot with the team.
Progress:
We’re focusing on mitigation.
Next week’s deliverables:
Final presentation
The robot should be able to drive either using GPS localization or with pure distance-based odometry.
Over the past week, I worked with the team to fix the robot localization integration with GPS, IMU, and wheel odometry. Much of this involved field testing, recording rosbags of sensor data, and scouring the ROS forums for clues as to why the global and local odometry didn't line up.
I generalized the global planner to take in text files of UTM offsets for nodes and edges between them, build a graph from those files, and then output UTM offsets for the waypoints. This lets the local planner get waypoints directly in the UTM frame.
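Roughly, the loading step looks like the sketch below; the "id x y" node lines and "id1 id2" edge lines are an assumed file format for illustration, not necessarily the exact format the planner parses.

```cpp
// Sketch of building the graph from UTM-offset text files (file format and
// function names are assumptions for illustration).
#include <cmath>
#include <fstream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Node { double x, y; };  // UTM offsets in meters

std::map<int, Node> loadNodes(const std::string& path) {
    std::map<int, Node> nodes;
    std::ifstream in(path);
    int id; double x, y;
    while (in >> id >> x >> y) nodes[id] = {x, y};
    return nodes;
}

std::map<int, std::vector<std::pair<int, double>>> loadEdges(
        const std::string& path, const std::map<int, Node>& nodes) {
    std::map<int, std::vector<std::pair<int, double>>> adj;
    std::ifstream in(path);
    int a, b;
    while (in >> a >> b) {
        // Edge weight = straight-line distance between the two node offsets.
        const double w = std::hypot(nodes.at(a).x - nodes.at(b).x,
                                    nodes.at(a).y - nodes.at(b).y);
        adj[a].push_back({b, w});   // undirected graph:
        adj[b].push_back({a, w});   // add both directions
    }
    return adj;
}
```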
I worked with Advaith on pedestrian state estimation.
Progress:
Currently I am on-track.
Next week’s deliverables:
Fix the localization once and for all
Integrate the global planner with the working system
Over the past week, I mainly worked with the team to integrate all the submodules on the robot so that it can autonomously navigate through a specified series of waypoints for the interim demo.
Much of my work went into writing the launch files and making sure the nodes interfaced correctly.
I also added a feature to the GUI that lets the emergency operator log the robot's current location as an (x, y) offset in meters from the starting point and display it as a marker in rviz.
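The rviz side of that feature boils down to publishing a marker at the logged offset; a minimal sketch is below, with the frame name and marker parameters as placeholder assumptions.

```cpp
// Sketch of publishing a logged (x, y) offset as an RViz marker
// (frame name, namespace, and sizing are assumptions, not the GUI's exact values).
#include <ros/ros.h>
#include <visualization_msgs/Marker.h>

void publishLocationMarker(ros::Publisher& pub, double x, double y, int id) {
    visualization_msgs::Marker m;
    m.header.frame_id = "odom";          // frame the (x, y) offset is expressed in
    m.header.stamp = ros::Time::now();
    m.ns = "logged_locations";
    m.id = id;                           // unique id so markers accumulate
    m.type = visualization_msgs::Marker::SPHERE;
    m.action = visualization_msgs::Marker::ADD;
    m.pose.position.x = x;
    m.pose.position.y = y;
    m.pose.orientation.w = 1.0;
    m.scale.x = m.scale.y = m.scale.z = 0.3;   // 0.3 m sphere
    m.color.r = 1.0;
    m.color.a = 1.0;
    pub.publish(m);
}
```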
Progress:
Currently I am on-track.
Next week’s deliverables:
Bound the multiorder algorithm's batch size, since its runtime grows exponentially with the number of orders in a batch
Work with team to integrate GPS and ORB-SLAM as a replacement for RTABMAP
Over the past week, I primarily worked to recover from the hack, working with the team to reflash the Xavier and reinstall its dependencies.
I consolidated the custom messages between the ground station and the robot FSM into a single ROS package of custom messages for all components to use, which makes integration much easier.
I worked on the GUI based on my teammates’ usage feedback, combining the operator and user GUI into one.
My teammates ran the operator GUI and used it to remote control the robot successfully, showing that the GUI, joystick code and camera feed work in production.
Progress:
Currently I am on-track.
Next week’s deliverables:
Double-check the ground station FSM based on my teammates’ robot FSM code/tests
Help test the robot and help with mapping/vision, now that it can drive around.
Over the last two weeks, I worked individually on multiple software components and worked with the team to test out some of the hardware.
I created a more rigorous test to make sure the algorithm returns the optimal path, and at first it didn't. I debugged and fixed the multiorder algorithm: an internal hashing function for the robot's partial state was colliding, causing some states to never be visited in the calculation.
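The shape of the fix, sketched below with illustrative (not actual) state fields: key the memo table on an exact encoding of the partial state, so two distinct states can never collide.

```cpp
// Illustrative sketch of the fix pattern (field names are assumptions, not the
// exact state struct in the repo): memoize on an exact, unambiguous string
// encoding of the partial state instead of a lossy hash.
#include <map>
#include <set>
#include <sstream>
#include <string>

struct RobotState {
    int location;                 // current graph node
    std::set<int> carrying;       // order ids currently on the robot
    std::set<int> remaining;      // order ids not yet picked up
};

std::string stateKey(const RobotState& s) {
    std::ostringstream key;
    key << s.location << '|';
    for (int id : s.carrying)  key << id << ',';
    key << '|';
    for (int id : s.remaining) key << id << ',';
    return key.str();             // exact encoding, collision-free by construction
}

// Memo table: exact state key -> best cost found for that state.
std::map<std::string, double> memo;
```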
I started on the GUI for the emergency operator and the user, and combined it with the ground station (multiorder node) so that the GUI can place orders for a ground station to send to a simulated robot.
The emergency operator GUI has a camera feed and takes in joystick controls to send to the robot, and the user GUI can take in and send orders to the ground station.
Working with the team, I helped out with the robot FSM, and based on that I redesigned the multiorder node/ground station to match the interfaces that the robot now requires.
I worked on figuring out how to calibrate the Adafruit IMU on the Jetson, and helped Advaith with RTABMAP/network testing outdoors with the Jetson.
Progress:
Currently I am on-track despite waiting for the Roboclaw to arrive; in the meantime, we worked on software to balance out our progress.
Next week’s deliverables:
When the Roboclaw arrives, we have to make sure that the ground station can manually drive the robot around the campus.
I’ll continue to add more features to the GUI, and make sure that the ground station works in production.
This week, I made the multiorder algorithm an optimization algorithm: now it not only finds a satisfactory list of moves such that the robot can deliver all orders on time, it also finds the path with the minimum total distance travelled.
The algorithm uses dynamic programming to find the optimal path. DP works on this problem because the robot's batch of undelivered orders only ever shrinks, so the optimal cost for any given robot state is the minimum, over all allowed moves, of the cost of that move plus the optimal cost of the resulting state; the problem therefore has clear optimal substructure. We can base the new algorithm on that recurrence relation and memoize so calculations are never repeated, cutting the complexity from roughly n! down to c^n (for some constant c).
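Written out (with notation I'm introducing here, not taken from the code), where C(s) is the minimum remaining travel distance from robot state s and A(s) is the set of moves allowed from s:

```latex
C(s) = \min_{a \in A(s)} \big[ \operatorname{cost}(a) + C(\operatorname{move}(s, a)) \big],
\qquad C(s) = 0 \text{ once every order in the batch has been delivered.}
```

Memoizing C on a key of the robot state is what turns the brute-force search into DP.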
I haven’t done any proof of the complexity yet, but hopefully I can do it at some point this semester.
This involved creating a hash of the robot state to allow memoization of repeated states, and modifying the internal function to pass back the optimal path at each subproblem.
Today I also worked with my team to begin assembling the robot. We found a Roboclaw ROS node that provides a wonderful interface for low-level PID control and encoder odometry on the Roboclaw and its motors, so I got that up and running; we can now send it a velocity message and the wheel turns at the commanded velocity!
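The sanity check was essentially the sketch below; I'm assuming a standard geometry_msgs::Twist command on a cmd_vel topic here, which may not match the driver node's actual topic names.

```cpp
// Sketch of commanding a wheel velocity through the Roboclaw driver node
// (topic name and message type are assumptions for illustration).
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "roboclaw_velocity_test");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

    ros::Rate rate(10);                   // publish at 10 Hz
    while (ros::ok()) {
        geometry_msgs::Twist cmd;
        cmd.linear.x = 0.2;               // 0.2 m/s forward; the wheels should spin
        pub.publish(cmd);
        rate.sleep();
    }
    return 0;
}
```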
I also inspected and tested the battery with the voltage tester we bought to make sure it was functional and intact.
Progress:
I am currently on-schedule with my tasks.
Next week’s deliverables:
Next week, I want to fix a bug we have with our Xavier where it reboots itself seemingly at random, and make sure (rigorously) that the optimization version of the multiorder algorithm is working correctly.
This week, I finished up the global planner implementation and tested it.
The global planner batches orders to plan the robot's waypoints, and it can take in new orders online.
I tested it by writing a “simulated” robot ROS node that takes waypoints and sends back location updates, to make sure the global planner node works. Essentially, it just teleports instantaneously to each waypoint.
I plan to make the robot a bit more rigorous by having it simulate “driving” to the waypoint with ros::Duration::sleep.
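The simulated robot is essentially the sketch below (topic names and message types are illustrative assumptions, not the node's real interface); the ros::Duration sleep is where the pretend "driving" time would go.

```cpp
// Sketch of the "teleporting" simulated robot used to exercise the global planner.
#include <ros/ros.h>
#include <geometry_msgs/Point.h>

ros::Publisher location_pub;

void waypointCallback(const geometry_msgs::Point::ConstPtr& wp) {
    // Pretend to drive for a bit before "arriving" (blocks this callback only).
    ros::Duration(1.0).sleep();
    // Report the waypoint itself as the new location: an instant teleport.
    location_pub.publish(*wp);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "simulated_robot");
    ros::NodeHandle nh;
    location_pub = nh.advertise<geometry_msgs::Point>("robot_location", 1);
    ros::Subscriber wp_sub = nh.subscribe("waypoint", 1, waypointCallback);
    ros::spin();
    return 0;
}
```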
The global planner might be done, but there are a lot of things I can do to make it more robust.
I know how to make it give a better route by returning the minimum route across every searched path, but that may drastically increase the runtime as it must search ALL paths and not just return the first one.
I feel like that route would be optimal, and I thought about proving it by induction on the list of orders somehow, but it seems pretty hard to prove.
I discovered some dynamic programming potential here, by memoizing a hash of the robot state to take advantage of any repeated searches (however, I have to write a hash function for the robot state first haha).
Worked on the design presentation and helped with parts ordering.
Progress:
I am currently on-schedule with my tasks.
Next week’s deliverables:
Next week, I want to write some more rigorous tests and make it an optimization problem by minimizing the overall distance, plus make it use DP.
This week, most of my time was spent implementing the multi-order algorithm. It actually works!
It’s a backtracking algorithm that recurses over all possible states of the robot, and it precomputes all shortest paths between nodes of the input graph to speed up the computation.
It returns the first series of moves it finds that satisfies our timing requirements, so for now it's just a constraint satisfaction problem.
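The shortest-path precomputation is the standard all-pairs step; a minimal Floyd-Warshall version is sketched below, though the repo's actual graph representation may differ.

```cpp
// Minimal sketch of the all-pairs shortest-path precomputation (Floyd-Warshall).
#include <vector>

// dist[i][j] starts as the edge weight between i and j (infinity if no edge,
// 0 on the diagonal) and ends as the shortest-path distance between i and j.
void allPairsShortestPaths(std::vector<std::vector<double>>& dist) {
    const size_t n = dist.size();
    for (size_t k = 0; k < n; ++k)
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < n; ++j)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}
```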
I also started on a ROS node that can act as the global planner, which takes in orders and sends commands and waypoints to the robot.
There are two pieces: the algorithm solver API and the global planner ROS node.
The repo is here: https://github.com/pythonicmux/multiorderAlgorithm
To test the algorithm, I created a weighted undirected graph of the engineering area of campus (that’s located in the test file on the repo, multiorder_alg_node.cpp). The weights are based on distances between selected road intersections on campus.
I assisted Advaith in trying out SLAM outdoors and setting up the Xavier to try localization on it.
I gave the proposal presentation and helped out with parts prediction, doing some calculations to ensure that our motors can supply enough torque for the inclines and the battery can power everything.
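The back-of-the-envelope form of the incline check is roughly the following (symbolic only, no project numbers here): with m the loaded robot mass, g gravity, θ the steepest incline angle, r the wheel radius, and N the number of driven wheels, each wheel needs roughly

```latex
\tau_{\text{per wheel}} \;\gtrsim\; \frac{m \, g \sin\theta \cdot r}{N}
```

plus some margin for rolling resistance and acceleration.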
Progress:
I am currently on-schedule with my tasks.
Next week’s deliverables:
Next week, my deliverables are to finalize the global planner code and thoroughly test it with unit tests and a simulation of a user sending it orders and the robot sending it status updates.
Possibly make the multiorder algorithm an optimization algorithm by minimizing total distance travelled, plus an NP-hardness proof??
This week, I learned many of the skills needed to build this robot, namely using CAD and programming C++ on ROS.
I used Solidworks to create a lid for the robot:
I did the entirety of the ETH Zurich ROS Course (https://rsl.ethz.ch/education-students/lectures/ros.html) to learn ROS.
This involved writing ROS nodes and services for a simulated Husky robot with many sensors: I wrote a P controller that drove the Husky toward a pillar based on its laser scan readings, and a ROS service/node to stop the robot before a collision.
Above is a picture of the Husky after it drove to the pillar and stopped.
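A minimal sketch of the laser-based part of that exercise (topic names, gains, and thresholds are assumptions, not the course solution): find the bearing of the closest scan return and steer toward it with a P term.

```cpp
// Sketch of a laser-based "drive to the pillar" P controller.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <sensor_msgs/LaserScan.h>
#include <algorithm>
#include <iterator>

ros::Publisher cmd_pub;

void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan) {
    // Closest return is assumed to be the pillar (NaN/inf handling omitted).
    auto it = std::min_element(scan->ranges.begin(), scan->ranges.end());
    const size_t idx = std::distance(scan->ranges.begin(), it);
    const double bearing = scan->angle_min + idx * scan->angle_increment;

    geometry_msgs::Twist cmd;
    cmd.linear.x = (*it > 0.5) ? 0.3 : 0.0;   // stop about 0.5 m from the pillar
    cmd.angular.z = 1.0 * bearing;            // P term on the bearing error
    cmd_pub.publish(cmd);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "pillar_p_controller");
    ros::NodeHandle nh;
    cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    ros::Subscriber sub = nh.subscribe("scan", 1, scanCallback);
    ros::spin();
    return 0;
}
```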
I also began thinking of the problem state for the multi-order algorithm. I’m pretty sure it’s NP-hard, but I’ll have to prove it to be sure.
Progress:
I am currently on-schedule with my tasks.
Next week’s deliverables:
Next week, I’m giving the proposal presentation.
I have to clearly state the multi-order problem mathematically, and come up with a brute-force solution that can find a non-optimal solution to a multi-order problem instance.
I also want to come up with a simulated weighted, undirected graph of where the robot can travel, to facilitate a basic brute-force implementation that can solve the multi-order problem.
Ideally, I can write a working brute-force algorithm that solves any simulated problems and start working on a polynomial-time algorithm (or better brute-force, if I can prove it to be NP-hard).