Raymond Xiao’s Status Report for 04/16/2022

I am currently working on setting up the CV portion of our system. The main sensor I will be interfacing with is the camera. This week, we estimated the camera’s intrinsic parameters using MATLAB (https://www.mathworks.com/help/vision/camera-calibration.html). These parameters are necessary to correct lens distortion and accurately estimate our location. The camera will be used to detect ArUco markers, which are fast to detect and robust to errors. I will be using the cv2 module named aruco for this. As of right now, I have decided on the ArUco dictionary DICT_7X7_50, meaning each marker encodes a 7×7 grid of bits and 50 unique markers can be generated. I went with the larger marker size (versus 5×5) since it makes the markers easier for the camera to detect. At a high level, the CV algorithm searches a given image for markers from the chosen dictionary.
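As a rough sketch, the detection step might look like the following (assuming the cv2.aruco API in OpenCV 4.6 and earlier, and webcam index 0; both are placeholder assumptions):

# Minimal sketch of the planned detection step; the cv2.aruco API version
# and the webcam index are assumptions for illustration.
import cv2

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_7X7_50)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # corners: 4x2 pixel coordinates per marker; ids: the detected marker IDs
    corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
cap.release()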

We also ordered connectors to run the Jetson off our battery pack, but it turns out we ordered the wrong size; we have reordered the correct size. We have also added standoffs and laser cut new platforms to house the sensors more securely. In terms of pacing, I think we are a bit behind schedule, but by focusing this week we should be able to get back on track. My goal by the end of this week is to have a working CV implementation that publishes its detections to a ROS topic, which can then be consumed by other nodes in the system.
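The node structure I have in mind is roughly the following (a sketch only; the topic name /aruco_detections and the String message type are placeholders, not final choices):

# Skeleton of the planned publisher node (ROS1/rospy). The topic name and
# message type are placeholders for illustration.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node('aruco_detector')
    pub = rospy.Publisher('/aruco_detections', String, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        ids = []  # placeholder: would come from cv2.aruco.detectMarkers
        pub.publish(String(data=','.join(map(str, ids))))
        rate.sleep()

if __name__ == '__main__':
    main()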

Keshav Sangam’s Status Report for 4/16

This week, I focused on getting the robot to run headlessly, with all the sensors, the battery, and the Jetson entirely housed on top of the Roomba. Jai started by CADing and laser cutting acrylic panels with holes for the standoffs, in addition to holes for the webcam and the LIDAR. After getting the battery, we realized that the DC power jack for the Jetson had a different inner diameter than the cable that came with the battery, which we fixed via an adapter. Once we confirmed the Jetson could be powered by our battery, I worked on setting up the Jetson VNC server for headless control. Finally, I modified the Jetson’s udev rules to assign each USB device a static device ID. Since ROS nodes depend on knowing which physical USB device is assigned to which ttyUSB device, I created symlinks to ensure that the LIDAR always appears as a ttyLIDAR device, the webcam as a ttyWC device, and the iRobot itself as a ttyROOMBA device (a sketch of these rules appears below, after the video). Here’s a video of the Roomba moving around:

https://drive.google.com/file/d/1U68VRGrZiqcw5ZbmErxF-N3jlftXx110/view?usp=sharing

As you can see, the Roomba has a problem with “jerking” when it initially starts to accelerate in a direction. Since the center of mass is currently at the back end of the Roomba, this can be fixed by shifting weight toward the front to even out the distribution (for example, by moving the Jetson battery mount location).
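For reference, the udev rules mentioned above look roughly like this (the idVendor/idProduct values below are placeholders; the real values come from inspecting each device with udevadm info):

# Sketch of /etc/udev/rules.d/99-usb-serial.rules. The idVendor/idProduct
# values are placeholders; the actual IDs are read off each device with
# `udevadm info -a -n /dev/ttyUSB0`.
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="ttyLIDAR"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="ttyROOMBA"
SUBSYSTEM=="tty", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="0825", SYMLINK+="ttyWC"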

We are making good progress. Ideally, we will be able to construct a map of the environment from the Roomba within the next few days. We are also working on the OpenCV beacon detection and path planning in parallel.

Team Status Report for 4/16

This week, our team mainly worked on housing all of our components on the iRobot so it can run headlessly. Now that the robot is ready with all of its sensors fixed in place, we can start testing Hector SLAM on the robot itself, and even try providing odometry data as an input to Hector SLAM to reduce pose estimation errors (the loop closure problem). After that, we can integrate the path planning algorithm and, after some potential debugging, start testing with a cardboard box “room” to map. As a team, we are on track with our work.
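Wiring the odometry in would amount to a launch configuration along these lines (a sketch only; the frame and topic names are assumptions based on the standard hector_mapping parameters):

<!-- Sketch of a hector_mapping launch file that supplies an odometry frame.
     Frame/topic names here are placeholder assumptions, not final values. -->
<launch>
  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping">
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
    <param name="scan_topic" value="scan"/>
    <!-- seed each scan match with the pose estimate from odometry -->
    <param name="use_tf_pose_start_estimate" value="true"/>
  </node>
</launch>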

Jai Madisetty’s Status Report for 4/16

The bulk of the work this past week was dedicated to the housing of the devices, since getting the right dimensions and parts took some trial and error. I am currently working on the path planning portion of this project. The task is a lot more time consuming than I imagined, since there are a multitude of inputs to consider when coding the path planning algorithm. Since our robot deals solely with indoor, unknown environments, I am implementing the DWA (dynamic window approach) algorithm for path planning. If time permits, I would consider looking more into a D* algorithm, which is similar to A* but is “dynamic” in that it can repair its plan incrementally as edge costs change mid-traversal. A sketch of the DWA loop I am coding toward is shown below.
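At its core, DWA samples velocity commands from a window limited by the robot’s acceleration, rolls each command forward for a short horizon, and scores the resulting trajectories. A minimal sketch (all of the limits and cost weights below are made-up values for illustration and would need tuning):

# Minimal DWA sketch: sample (v, w) pairs from the dynamic window, simulate
# short arcs, and score them. All limits/weights are illustrative placeholders.
import math

V_MAX, W_MAX = 0.5, 1.0        # max linear (m/s) / angular (rad/s) velocity
A_V, A_W = 0.2, 0.5            # max velocity change per control step
DT, HORIZON = 0.1, 1.0         # simulation step and rollout length (s)

def rollout(x, y, th, v, w):
    """Forward-simulate a constant (v, w) command; return the end pose."""
    t = 0.0
    while t < HORIZON:
        x += v * math.cos(th) * DT
        y += v * math.sin(th) * DT
        th += w * DT
        t += DT
    return x, y, th

def dwa_step(pose, vel, goal, obstacles):
    (x, y, th), (v0, w0) = pose, vel
    best, best_cost = (0.0, 0.0), float('inf')
    # dynamic window: velocities reachable within one control step
    v = max(0.0, v0 - A_V)
    while v <= min(V_MAX, v0 + A_V):
        w = max(-W_MAX, w0 - A_W)
        while w <= min(W_MAX, w0 + A_W):
            gx, gy, _ = rollout(x, y, th, v, w)
            heading = math.hypot(goal[0] - gx, goal[1] - gy)
            clearance = min((math.hypot(ox - gx, oy - gy)
                             for ox, oy in obstacles), default=10.0)
            # weights 1.0 / 0.5 / 0.2 are placeholders to be tuned
            cost = 1.0 * heading - 0.5 * clearance - 0.2 * v
            if cost < best_cost:
                best, best_cost = (v, w), cost
            w += 0.1
        v += 0.05
    return best  # (v, w) command to send to the Roomba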

Given we only have a week left before final presentations, we are planning on getting path planning fully integrated by Wednesday, and then running tests up until the weekend.

Team Status Report for 4/10

This week, our team presented our demo, worked on the component housing for the iRobot, and began more work on the OpenCV beacon detection. In the demo, we showed the progress we had made throughout the semester. The housing is being transitioned from a few plates of acrylic to a custom CAD model. Finally, development of the OpenCV beacon detection ROS package has begun and is shaping up nicely. As a team, we are definitely on track with our work.

Keshav Sangam’s Status Report for 4/10

This week, I focused on preparing for the demo and starting to explore OpenCV with ROS. The basic architecture of how we plan on using webcams to detect beacons is to create an OpenCV-based ROS package. This takes in webcam input, feeds it through a beacon detection algorithm, and estimates the pose and distance of a detected beacon. From there, we use the ROS Python libraries to publish to a ROS topic detailing the estimated offset of the beacon from the robot’s current position. Finally, we can visualize the beacon within ROS and thus inform our path planning algorithm. We are working on camera calibration and the beacon detection algorithm, and we plan to be able to get the estimated pose/distance of the beacon by the end of the week.
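The pose/distance step could look something like this once calibration gives us the camera matrix (a sketch only, assuming the OpenCV 4.6-era aruco API; the intrinsics and the 10 cm marker size are placeholder values):

# Sketch of per-marker pose estimation. camera_matrix/dist_coeffs come from
# calibration; the numbers below are placeholders.
import cv2
import numpy as np

camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)      # placeholder distortion coefficients
MARKER_SIZE = 0.10             # marker side length in meters (assumed)

def beacon_offsets(gray):
    dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_7X7_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return []
    # rvecs/tvecs give each marker's rotation/translation in the camera frame
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE, camera_matrix, dist_coeffs)
    # distance to each beacon is the norm of its translation vector
    return [(int(i), float(np.linalg.norm(t))) for i, t in zip(ids.flatten(), tvecs)]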

Jai Madisetty’s Status Report for 4/10

This week, I focused on preparing proper housing for the different components. We are planning on sticking with the barebones acrylic-and-tape design we had previously. We are still waiting on the standoffs, so we have not yet started fixing components into place. I predict that CADing will be the most time-consuming task this week.

While waiting for the standoffs to arrive, we are working on getting our webcam integrated and calibrated with the NVIDIA Jetson. We are also planning on starting the path planning implementation this week. I began researching different algorithms that work and integrate well with ROS.

Raymond Xiao’s Status Report for 04/10/2022

This week, I am helping to set up the webcam. Specifically, I am working on calibrating the camera (estimating its intrinsic and distortion parameters). We are modeling the camera as a standard pinhole camera. We are currently planning on calibrating the camera on our local machine and then using the resulting parameters on the Jetson. Here is the reference page we plan on using to guide the calibration: https://www.mathworks.com/help/vision/camera-calibration.html.
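If we end up doing the calibration in OpenCV rather than MATLAB, it would look roughly like this (a sketch; the 9×6 checkerboard size and the image folder are placeholder assumptions):

# Sketch of checkerboard camera calibration in OpenCV, an alternative to the
# MATLAB workflow above. Board size and image glob are placeholders.
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corners per row/column (assumed)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob('calib/*.jpg'):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix and dist_coeffs are what the ArUco pose estimation needs
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)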

We are also setting up our initial testing environment. As of right now, we are going with cardboard boxes since they are inexpensive and easy to set up. Depending on how much time is left, we may move to sturdier materials.

Team Status Report for 04/02/2022

This week, our team worked on setting up the acrylic stand that the sensors and the Jetson will be mounted on. The supports are currently wooden, but we plan to switch to standoffs in the future. We have most of our initial system components set up for the demo, which is great. As a team, we are all on track with our work. Next week, we will continue working on the iRobot programming and on implementing a module for cv2 ArUco tag recognition.

Raymond Xiao’s Status Report for 04/02/2022

This week, I helped my team set up for the demo this upcoming week. Specifically, I worked with Jai on setting up the iRobot sensor mounting system. We used two levels of acrylic along with wooden supports for now (we have ordered standoffs for further support). As of right now, the first level will house the Xavier and the webcam, while the upper level will house the LIDAR sensor. I have also been reading the iRobot specification document in more detail, since some components of it are not supported in the pycreate2 module. For this week, I plan on helping set up the rest of the system for the demo, as well as writing a wrapper module for the ArUco code that we will be using for human detection.
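For the parts pycreate2 does support, driving the robot is straightforward; a minimal sketch (the serial port path is a placeholder for wherever the Roomba enumerates):

# Minimal pycreate2 sketch: open the serial link, enter Safe mode, and drive.
# The port path is a placeholder.
import time
from pycreate2 import Create2

bot = Create2('/dev/ttyUSB0')
bot.start()                  # wake the Open Interface
bot.safe()                   # Safe mode: accepts drive commands, keeps cliff/bump safety
bot.drive_direct(100, 100)   # right/left wheel velocities in mm/s
time.sleep(2.0)
bot.drive_stop()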