Keshav Sangam’s Status Report for 4/30

Throughout the week, we finished programming our DWA path planning algorithm, though some challenges remain. Although path planning works correctly in simulation, moving from simulation to the real robot has exposed integration problems. In particular, the SLAM algorithm matches the lidar point clouds very poorly during sudden large accelerations and at high speeds, and autonomous motion unfortunately involves large accelerations, both translational and rotational. This was not much of a problem in simulation, where SLAM does not run in real time and plenty of CPU is available; running SLAM in real time severely limits the computational resources left for the rest of our algorithms (including path planning and our ArUco detection). Translational acceleration is the lesser issue, since the SLAM algorithm can keep up with it across a decent range of speeds. Tomorrow we are focusing on fine-tuning the maximum rotational velocity and acceleration limits so that the map we generate of the environment is as accurate as possible; a sketch of the kind of limiter we have in mind is below. In essence, our work leading up to the demo and final report revolves around integration and optimization, which we can do in parallel with our subsystem verification and system validation.
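As a rough illustration of the tuning knobs involved, here is a minimal sketch of a limiter that sits between the planner and the robot and caps both the rotational speed and how quickly it may change. The topic names and numeric limits are placeholders, not our final values.

```python
import rospy
from geometry_msgs.msg import Twist

# Placeholder limits; the real values come from the tuning described above.
MAX_W = 0.8        # rad/s, assumed cap on rotational speed
MAX_W_ACCEL = 0.5  # rad/s^2, assumed cap on rotational acceleration

class VelocityLimiter:
    """Clamp the planner's commanded angular velocity and its rate of change."""
    def __init__(self):
        self.last_w = 0.0
        self.last_time = rospy.get_time()
        self.pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        # '/planner/cmd_vel' is an assumed name for the planner's raw output topic.
        rospy.Subscriber('/planner/cmd_vel', Twist, self.callback)

    def callback(self, cmd):
        now = rospy.get_time()
        dt = max(now - self.last_time, 1e-3)
        # Limit the commanded angular speed itself.
        w = max(-MAX_W, min(MAX_W, cmd.angular.z))
        # Limit how quickly the angular speed may change (acceleration).
        max_step = MAX_W_ACCEL * dt
        w = max(self.last_w - max_step, min(self.last_w + max_step, w))
        cmd.angular.z = w
        self.last_w, self.last_time = w, now
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('velocity_limiter')
    VelocityLimiter()
    rospy.spin()
```

In practice the same limits would also be baked into the DWA sampling window itself, so the planner never proposes turns the SLAM algorithm cannot track.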

Keshav Sangam’s Status Report for 4/23

This week, we focused on integration. SLAM is being integrated with path planning, but we are facing some critical problems. While the map our SLAM algorithm builds is accurate, localization accuracy is extremely poor, which makes it impossible to actually path plan. The poor localization stems from the fact that the simulated lidar/odometry data we use to test the SLAM algorithm contains too little noise to expose the failure modes we see on the real robot, so it is difficult to tune the algorithm without running it directly on the robot; manually injecting noise into the simulated data did not help either. To solve this, we are going to record a ROS bag (a log of every message published across the ROS topics) so that instead of generating simulated data we take sensor readings directly from the robot, then replay the bag and tune the SLAM algorithm against it. I believe we are still on track since we have plenty of time to optimize these parameters. We also worked on the final presentation slides and started writing the final report.
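For reference, rosbag record captures live topics into a .bag file on the robot, and the bag can later be replayed with rosbag play or read directly from Python. Here is a minimal sketch of reading a bag offline; the filename and topic names are placeholders for whatever we actually record.

```python
import rosbag

# Sketch of inspecting a recorded bag offline. 'robot_run.bag' and the topic
# names are placeholders for our real recording.
with rosbag.Bag('robot_run.bag') as bag:
    for topic, msg, t in bag.read_messages(topics=['/scan', '/odom']):
        # Each iteration yields one recorded LIDAR scan or odometry message,
        # which we can feed into the SLAM node at our own pace while tuning.
        print(t.to_sec(), topic)
```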

Keshav Sangam’s Status Report for 4/16

This week, I focused on getting the robot to run headlessly, with all the sensors, the battery, and the Jetson housed entirely on top of the Roomba. Jai started by CADing and laser-cutting acrylic panels with holes for the standoffs, as well as for the webcam and the LIDAR. After getting the battery, we realized that the Jetson’s DC power jack had a different inner diameter than the cable that came with the battery, which we fixed with an adapter. Once we confirmed the Jetson could be powered by our battery, I set up a VNC server on the Jetson for headless control. Finally, I modified the Jetson’s udev rules to give each USB device a stable device ID: since ROS nodes depend on knowing which physical USB device is assigned to which ttyUSB port, I created symlinks to ensure that the LIDAR is always a ttyLIDAR device, the webcam a ttyWC device, and the iRobot itself a ttyROOMBA device. Below is a quick check of what those symlinks buy us, followed by a video of the Roomba moving around:
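As a quick sanity check (a hypothetical snippet, using the symlink names above), each stable name can be resolved to whatever ttyUSB port the kernel happened to assign on this boot:

```python
import os

# Resolve each udev symlink to the ttyUSB device it currently points at.
# The names are the ones created by our udev rules; the point is that ROS
# nodes can always open the stable name on the left.
for name in ('/dev/ttyLIDAR', '/dev/ttyWC', '/dev/ttyROOMBA'):
    print(name, '->', os.path.realpath(name))
```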

https://drive.google.com/file/d/1U68VRGrZiqcw5ZbmErxF-N3jlftXx110/view?usp=sharing

As you can see, the Roomba has a problem with “jerking” when it initially starts to accelerate in a direction. This can be fixed by adding weight to the front of the Roomba to even out the weight distribution (for example, by moving the Jetson battery mount), since the center of mass currently sits at the back end of the Roomba.

We are making good progress. Ideally, we will be able to construct a map of the environment from the Roomba within the next few days. We are also working in parallel on the OpenCV + path planning.

Team Status Report for 4/10

This week, our team presented our demo, worked on the component housing for the iRobot, and began more work on the OpenCV side. In the demo, we showed the progress we had made throughout the semester. The housing is being transitioned from a few acrylic plates to a custom CAD model. Finally, development of the OpenCV beacon detection ROS package has begun and is shaping up nicely. As a team, we are definitely on track with our work.

Keshav Sangam’s Status Report for 4/10

This week, I focused on preparing for the demo and starting to explore OpenCV with ROS. The basic architecture for using webcams to detect beacons is an OpenCV-based ROS package: it takes in webcam input, feeds it through a beacon detection algorithm, and estimates the pose and distance of any detected beacon. From there, we use the ROS Python libraries to publish the beacon’s estimated offset from the robot’s current position to a ROS topic. Finally, we can visualize the detected beacon within ROS and use it to inform our path planning algorithm. We are working on camera calibration and the beacon detection algorithm, and we plan to have the estimated pose/distance of the beacon by the end of the week.
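As a sketch of what such a node might look like, assuming the beacons are ArUco markers (it needs the OpenCV contrib aruco module, and the topic names, marker size, and calibration values below are placeholders until calibration is finished):

```python
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import Image

# Placeholder intrinsics; real values come from camera calibration.
MARKER_LENGTH_M = 0.10
CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)

bridge = CvBridge()
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
pose_pub = rospy.Publisher('/beacon/pose', PoseStamped, queue_size=1)

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict)
    if ids is None:
        return
    # Estimate each marker's translation (x, y, z) relative to the camera.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH_M, CAMERA_MATRIX, DIST_COEFFS)
    # Publish the first detected beacon's offset in the camera frame.
    pose = PoseStamped()
    pose.header = msg.header
    pose.pose.position.x, pose.pose.position.y, pose.pose.position.z = tvecs[0][0]
    pose_pub.publish(pose)

rospy.init_node('beacon_detector')
rospy.Subscriber('/camera/image_raw', Image, image_callback)
rospy.spin()
```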

Keshav Sangam’s Status Report for 4/2

This week, I worked on getting the Jetson to send instructions (in the form of opcodes) to the iRobot to make it move. To do this, we publish linear and angular speed messages to the command velocity ROS topic, and a driver converts those speed commands into the opcodes that tell the iRobot how to move; a sketch of that conversion is below. I forgot to take a video, but this will be demoed on Monday. The next step is the path planning algorithm, so that these command velocity messages are published by the planner itself (making the robot autonomous) rather than by the current keyboard teleoperation.
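Here is a rough sketch of that conversion, assuming the Create 2 Open Interface’s Drive Direct opcode; the serial device path, baud rate, and wheel base are assumptions to double-check against our hardware and the OI spec.

```python
import struct
import serial  # pyserial
import rospy
from geometry_msgs.msg import Twist

WHEEL_BASE_MM = 235.0  # assumed wheel separation; verify against the spec

# Assumed device path and baud rate for the Roomba's serial link.
port = serial.Serial('/dev/ttyROOMBA', baudrate=115200)
port.write(bytes([128, 131]))  # OI "Start", then "Safe" mode

def clamp(v, lo=-500, hi=500):
    # Drive Direct accepts wheel speeds in the range -500..500 mm/s.
    return int(max(lo, min(hi, v)))

def cmd_vel_callback(msg):
    v_mm = msg.linear.x * 1000.0                  # m/s -> mm/s
    w = msg.angular.z                             # rad/s
    right = clamp(v_mm + w * WHEEL_BASE_MM / 2.0)
    left = clamp(v_mm - w * WHEEL_BASE_MM / 2.0)
    # Opcode 145 (Drive Direct): right and left wheel speeds as signed
    # 16-bit big-endian values in mm/s.
    port.write(struct.pack('>Bhh', 145, right, left))

rospy.init_node('roomba_driver')
rospy.Subscriber('/cmd_vel', Twist, cmd_vel_callback)
rospy.spin()
```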

Keshav Sangam’s Status Report for 3/27/2022

This week I worked on SLAM for the robot. I installed the Hector SLAM package on ROS Melodic. There are a few problems with testing its efficacy; notably, the map building assumes the LIDAR is held at a constant height while moving through the world. The next step is to build a mount on the Roomba for all the components so that we can actually test Hector SLAM. On top of this, I have looked into IMUs as a potential option for sensor fusion in localization: by feeding the iRobot’s wheel encoders and a 6-DoF IMU into a fusion algorithm such as an Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF), we could potentially improve the robot’s localization accuracy substantially; a rough sketch of what such a filter looks like is below. However, bumps or valleys in the driving surface may cause localization errors to propagate through the SLAM algorithm because of the constant-height assumption mentioned above. We will have to measure the current localization accuracy once the mount is working in order to decide whether the (E/U)KF + IMU is worth it.
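To make the fusion idea concrete, here is a minimal EKF sketch (not something we have implemented yet) for a planar pose state [x, y, theta], with wheel-encoder odometry driving the prediction and an IMU yaw reading used as the measurement. All covariances would have to be tuned against real data, and angle wrapping is omitted for brevity.

```python
import numpy as np

def predict(x, P, v, w, dt, Q):
    """Propagate the state with a unicycle motion model (v = linear, w = angular)."""
    theta = x[2]
    x_pred = x + np.array([v * np.cos(theta) * dt,
                           v * np.sin(theta) * dt,
                           w * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1, 0, -v * np.sin(theta) * dt],
                  [0, 1,  v * np.cos(theta) * dt],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update_yaw(x, P, yaw_meas, R):
    """Correct the heading with an IMU yaw measurement (H picks out theta)."""
    H = np.array([[0, 0, 1]])
    y = np.array([yaw_meas - x[2]])        # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + (K @ y).flatten()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Example step with made-up noise values:
# x, P = predict(x, P, v=0.2, w=0.1, dt=0.05, Q=np.diag([1e-4, 1e-4, 1e-4]))
# x, P = update_yaw(x, P, yaw_meas=0.12, R=np.array([[1e-3]]))
```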

Keshav Sangam’s status update for 2/26/2022

This week was spent preparing for the design presentation, figuring out the LIDAR, and starting ROS tutorials. As the presenter, I wrote a short script for myself to make sure I covered the essential information on Wednesday (see here). Raymond, Jai, and I installed ROS on the Jetson and got the LIDAR working in the ROS visualizer. Once again, videos cannot be uploaded to this website, but the design presentation has a GIF showing the LIDAR in action. We also started working through ROS tutorials to better understand the infrastructure ROS provides. Finally, we began writing our design report throughout the week. Hopefully we get feedback from the design presentation soon, so we can incorporate the necessary changes and update the report in time for submission.

Keshav Sangam’s status report for 2/19/2022

This week was primarily about setting up development environments. Since ROS is not supported on versions of macOS beyond 10.15 (Catalina), I had to dual-boot my computer to install Windows. See here for an explanation why: https://discourse.ros.org/t/macos-support-in-ros-2-galactic-and-beyond/17891

The Windows setup is still in progress, so there are no results yet. Thankfully, we know the Xavier runs a Linux distro, and ROS has full Linux support.

The LIDAR also arrived yesterday, but Slamtec’s RoboStudio software comes with its own host of problems. The Slamtec server is based in China, and for some reason that prevents the RoboStudio application from reaching the plugin manager needed to install LIDAR support, so I can’t actually verify that the LIDAR is working; at the very least, it is spinning. I would upload a video, but I’m getting an upload error with both mp4 and mov file types.

I believe we are a bit behind schedule. It would be nice to have ROS installed on the Xavier by next week and have the RPLIDAR demo application running.

Keshav Sangam’s Status Report for 2/12/2022

This week was primarily research oriented. My work revolves around processing sensor data, and since the LIDAR and mmWave sensors haven’t been delivered yet, I focused on learning more about the Kalman filter and its variants. The extended Kalman filter handles non-linear systems better and thus makes sense for our purposes. However, while researching sensor fusion techniques, we came across this article that uses a neural network to interpolate mmWave data for robust mapping. The network also has extra features, such as a radar-based semantic recognizer for object recognition, but we are unlikely to need them. The trained network is available on GitHub, so we will test how well it works for our robot to see whether we can avoid creating, testing, and optimizing an extended Kalman filter ourselves. I ordered the mmWave sensor board mentioned in the paper to maximize the chances that the network will work for us. My progress is on schedule, but it would be extremely helpful if the sensors arrived ASAP so I could start on sensor-Jetson interfacing and test the sensors myself. Deliverables for next week depend on whether the sensors arrive: if they do, I hope to have the Jetson reading data from both sensors; if not, I will focus on helping Jai with ROS and controlling the robot from the Jetson.