Integrated the Kalman filter to work in 3D and tested it on live data (a sketch of the filter idea is below this list): Kalman 3D
Debugged the GPS localization on campus and took datasets for analysis
Below left are the GPS readings integrated with IMU/wheel odometry; on the right is the raw GPS data. This is only one trial, so we need to retry it when weather permits.
Modified the local planner with a new get_next_subgoal function
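The new get_next_subgoal is essentially a lookahead selection along the global path. Below is a minimal sketch of that idea, not our exact implementation; the path format, robot_xy argument, and lookahead_dist parameter are assumptions for illustration.

```python
import math

def get_next_subgoal(path, robot_xy, lookahead_dist=2.0):
    """Return the first waypoint on the global path that is at least
    lookahead_dist away from the robot, falling back to the final goal.

    path     : list of (x, y) waypoints from the global planner (assumed format)
    robot_xy : current (x, y) robot position in the map frame
    """
    if not path:
        return None
    for wx, wy in path:
        if math.hypot(wx - robot_xy[0], wy - robot_xy[1]) >= lookahead_dist:
            return (wx, wy)
    # Robot is already within lookahead_dist of every waypoint: head to the goal.
    return path[-1]
```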
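For the 3D Kalman filter mentioned above, here is a toy sketch of the core idea: a constant-velocity state in x/y/z corrected by 3D position fixes. It uses filterpy purely for brevity, the noise values are placeholders, and our actual filter also fuses IMU and wheel odometry.

```python
import numpy as np
from filterpy.kalman import KalmanFilter

dt = 0.1  # filter update period in seconds (placeholder)

# State: [x, y, z, vx, vy, vz]; measurement: [x, y, z] position (e.g. GPS in meters)
kf = KalmanFilter(dim_x=6, dim_z=3)
kf.F = np.block([[np.eye(3), dt * np.eye(3)],
                 [np.zeros((3, 3)), np.eye(3)]])   # constant-velocity motion model
kf.H = np.hstack([np.eye(3), np.zeros((3, 3))])    # we only observe position
kf.P *= 10.0                                       # initial state uncertainty
kf.R = np.diag([2.0, 2.0, 4.0]) ** 2               # measurement noise, worse in z (placeholder)
kf.Q = 0.01 * np.eye(6)                            # process noise (placeholder)

def filter_step(position_xyz):
    """Run one predict/update cycle and return the filtered 3D position."""
    kf.predict()
    kf.update(np.asarray(position_xyz, dtype=float))
    return kf.x[:3]
```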
Progress:
We have a lot of integration to do during this last week
We need to get the localization up and running soon and run a few basic waypoint missions on campus
We must pray to the robot gods for a smooth finish for this project
Over the past week, I primarily worked to recover from the hack, working with the team to reflash the Xavier and reinstall its dependencies.
I consolidated the custom messages between the ground station and the robot FSM into a single ROS message package that all components use, which makes integration much easier (see the sketch after this list).
I worked on the GUI based on my teammates’ usage feedback, combining the operator and user GUIs into one.
My teammates ran the operator GUI and used it to remote control the robot successfully, showing that the GUI, joystick code and camera feed work in production.
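To illustrate the message consolidation, every node now imports its definitions from the one shared package rather than keeping local copies; the package name delivery_msgs and the RobotStatus message below are hypothetical stand-ins, not our actual definitions.

```python
#!/usr/bin/env python
# Hypothetical example: the ground station and robot FSM both import the same
# message definitions from one shared package (called "delivery_msgs" here).
import rospy
from delivery_msgs.msg import RobotStatus  # hypothetical consolidated message

def status_callback(msg):
    rospy.loginfo("Robot state: %s, battery: %.1f%%", msg.state, msg.battery)

if __name__ == "__main__":
    rospy.init_node("ground_station_listener")
    rospy.Subscriber("/robot/status", RobotStatus, status_callback)
    rospy.spin()
```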
Progress:
Currently I am on track.
Next week’s deliverables:
Double-check the ground station FSM based on my teammates’ robot FSM code/tests
Help test the robot and help with mapping/vision, now that it can drive around.
Over the last two weeks, I worked individually on multiple software components and worked with the team to test out some of the hardware.
I created a more rigorous test to check that the multiorder algorithm returns the optimal path, and the test showed that it initially did not. I debugged and fixed the algorithm: an internal hash function for the robot’s partial state was colliding, so some states were never visited in the calculation (a sketch of the fix is after this list).
I started on the GUI for the emergency operator and the user, and combined it with the ground station (multiorder node) so that the GUI can place orders for the ground station to send to a simulated robot.
The emergency operator GUI has a camera feed and takes in joystick controls to send to the robot, and the user GUI can take in and send orders to the ground station.
Working with the team, I helped out with the robot FSM, and based on that I redesigned the multiorder node/ground station to match the interfaces that the robot now requires.
I worked on figuring out how to calibrate the Adafruit IMU on the Jetson (see the calibration sketch below), and helped Advaith with RTABMAP/network testing outdoors with the Jetson.
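As a reference for the IMU calibration, here is a minimal sketch of checking the calibration status. It assumes the Adafruit IMU is a BNO055 and that the Jetson’s I2C bus is exposed through Adafruit Blinka; our actual setup may differ.

```python
# Hypothetical calibration check; assumes a BNO055 reachable over I2C via Blinka.
import time
import board
import busio
import adafruit_bno055

i2c = busio.I2C(board.SCL, board.SDA)
imu = adafruit_bno055.BNO055_I2C(i2c)

# The BNO055 self-calibrates as it moves: each subsystem reports 0 (uncalibrated)
# through 3 (fully calibrated). Rotating the board in figure-eights helps the mag.
while True:
    sys_cal, gyro_cal, accel_cal, mag_cal = imu.calibration_status
    print(f"sys={sys_cal} gyro={gyro_cal} accel={accel_cal} mag={mag_cal}")
    if min(sys_cal, gyro_cal, accel_cal, mag_cal) == 3:
        print("Fully calibrated; euler angles:", imu.euler)
        break
    time.sleep(0.5)
```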
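The hashing bug in the multiorder search is worth a note: if the visited set is keyed on a hand-computed hash value, two distinct partial states that hash to the same key look identical, and one of them never gets expanded. Below is a minimal sketch of the safer pattern, with illustrative (not our actual) state fields: store the full, hashable state itself, so equality breaks ties when hashes collide.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class PartialState:
    """Partial state in the multiorder search (illustrative fields)."""
    location: int              # current node on the map graph
    delivered: FrozenSet[int]  # order IDs already delivered

# A frozen dataclass derives __hash__ and __eq__ from all fields, so even when two
# states land in the same hash bucket, __eq__ keeps them distinct in the set.
visited = set()

def expand(state: PartialState):
    if state in visited:       # relies on the derived __hash__/__eq__, not a raw hash key
        return
    visited.add(state)
    # ... enumerate successor states here ...
```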
Progress:
Currently I am on track despite waiting for the Roboclaw to arrive; in the meantime, we worked on software to keep our overall progress on schedule.
Next week’s deliverables:
When the Roboclaw arrives, we have to make sure that the ground station can manually drive the robot around the campus.
I’ll continue to add more features to the GUI, and make sure that the ground station works in production.