Status Report (2/17)

Group Update

We realized that the sensors were not adequate for our environment (the minimum distance they can sense is greater than we were expecting), but this is being mitigated by using a larger maze. We have also removed the camera, which was a backup for the LIDAR, because we concluded that writing the extra software needed for the camera to act as a backup would take too much time and pull our focus away from the core project.

Cutting the camera module also significantly lowers our cost, since we decided the LIDAR alone is enough to implement SLAM, and we can put the savings toward a better LIDAR. We were also unsure whether we would use ROS or a Python library, and we ultimately decided to go with Python; this choice does not incur any additional cost.

Our Gantt chart has been updated significantly. We pushed back most of the deadlines to reflect the design changes and the ongoing research into modules and components that we did this week.

Amukta’s Update

Last week, I researched Python data libraries, imaging modules, the ROS navigation stack, and the robot’s map representation. Next week, I will set up the Raspberry Pi, configure it to receive lidar points, and experiment with the sensor to settle on maze dimensions. I’m still on schedule: although maze building was pushed back, cutting the camera module freed up the time.
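
To make that plan concrete, here is a minimal sketch of what receiving lidar points might look like, assuming the third-party rplidar Python package and that the sensor enumerates as /dev/ttyUSB0 (both are assumptions, not our finalized setup):

```python
# Sketch: stream scans from the RPLidar on a Raspberry Pi.
# Assumes the third-party `rplidar` package (pip install rplidar)
# and that the sensor appears as /dev/ttyUSB0.
from rplidar import RPLidar

PORT = '/dev/ttyUSB0'  # assumption: adjust to the actual serial device

lidar = RPLidar(PORT)
try:
    # Each scan is a list of (quality, angle_deg, distance_mm) tuples.
    for i, scan in enumerate(lidar.iter_scans()):
        points = [(angle, dist) for _, angle, dist in scan]
        print(f'scan {i}: {len(points)} points, '
              f'closest = {min(d for _, d in points):.0f} mm')
        if i >= 9:  # grab ten scans and stop
            break
finally:
    lidar.stop()
    lidar.disconnect()
```

Printing the closest return per scan should also give us a quick empirical check of the minimum detection range before we fix the maze dimensions.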

Kanupriyaa’s Update

Probability in Localization:

This week I was supposed to start implementing the localization part of the SLAM algorithm. I spent the first half of the week learning about the different filters I would need for probability estimation and settled on an EKF (Extended Kalman Filter), which will suffice for the SLAM we are trying to implement. I spent the next two days learning how the EKF works and how its calculations play out in the context of SLAM. I spent the last day writing simple Python code with numpy to build a small probability filter, which I still need to expand to finish the localization part of SLAM.
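
As an illustration of the kind of filter involved (a sketch, not my actual code), here is one EKF predict/update cycle in numpy for a robot pose [x, y, theta], using a placeholder velocity motion model and a range-bearing measurement of a single known landmark:

```python
import numpy as np

# Illustrative EKF step: f (motion model), h (measurement model), and
# their Jacobians F, H are placeholders, not the project's final models.
def ekf_step(mu, Sigma, u, z, landmark, Q, R, dt=0.1):
    v, w = u                      # control: linear and angular velocity
    x, y, th = mu

    # --- Predict: propagate the state through the motion model ---
    mu_bar = np.array([x + v*dt*np.cos(th),
                       y + v*dt*np.sin(th),
                       th + w*dt])
    F = np.array([[1, 0, -v*dt*np.sin(th)],   # Jacobian of f wrt state
                  [0, 1,  v*dt*np.cos(th)],
                  [0, 0, 1]])
    Sigma_bar = F @ Sigma @ F.T + Q

    # --- Update: range-bearing measurement of one known landmark ---
    dx, dy = landmark[0] - mu_bar[0], landmark[1] - mu_bar[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
    H = np.array([[-dx/np.sqrt(q), -dy/np.sqrt(q), 0],
                  [ dy/q,          -dx/q,         -1]])
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + R)
    mu_new = mu_bar + K @ (z - z_hat)   # angle normalization omitted
    Sigma_new = (np.eye(3) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```

The predict step grows the uncertainty using the motion noise Q; the update step shrinks it by weighing the measurement residual against the measurement noise R through the Kalman gain K.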

Looking Ahead:

According to the Gantt chart, I am on schedule. Next week I need to focus on integrating the filter I have created into a mapping algorithm, which would complete the localization part of the SLAM pipeline (see the sketch below). This will be significantly harder to code than the small filter I have so far, but since I now understand the concepts much better, that should mitigate some of the difficulty I encountered this week. By the end of next week I hope to have the localization code for the SLAM algorithm completed.
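
As a preview of how the filter and the map tie together, here is a sketch of one standard EKF-SLAM step, state augmentation when a landmark is first observed; this is illustrative, not the planned implementation:

```python
import numpy as np

# Illustrative EKF-SLAM augmentation: when a landmark is seen for the
# first time, its estimated position is appended to the state vector
# and the covariance matrix grows accordingly.
def add_landmark(mu, Sigma, z, R):
    """mu = [x, y, theta, l1x, l1y, ...]; z = (range, bearing)."""
    x, y, th = mu[:3]
    r, b = z
    # Initialize the landmark position from the current pose estimate.
    lx, ly = x + r*np.cos(th + b), y + r*np.sin(th + b)
    mu_new = np.append(mu, [lx, ly])

    n = len(mu)
    # Jacobians of the initialization wrt the pose and the measurement.
    Gx = np.array([[1, 0, -r*np.sin(th + b)],
                   [0, 1,  r*np.cos(th + b)]])
    Gz = np.array([[np.cos(th + b), -r*np.sin(th + b)],
                   [np.sin(th + b),  r*np.cos(th + b)]])
    Sigma_new = np.zeros((n + 2, n + 2))
    Sigma_new[:n, :n] = Sigma
    Sigma_new[n:, :n] = Gx @ Sigma[:3, :n]   # cross-covariances
    Sigma_new[:n, n:] = Sigma_new[n:, :n].T
    Sigma_new[n:, n:] = Gx @ Sigma[:3, :3] @ Gx.T + Gz @ R @ Gz.T
    return mu_new, Sigma_new
```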

Tiffany’s Update

This week, I researched and finalized purchases for the robot’s main components.

Chassis:

I evaluated buying a chassis kit versus custom building one. Most chassis kits use yellow DC hobby motors that either don’t include encoders or have low-resolution, unstably attached encoders. We decided to build a custom robot so we can use higher-speed motors and higher-resolution encoders. In addition, a custom build lets us integrate mounts for our sensors and batteries more easily and stay flexible with the robot’s dimensions. We purchased a 12 V, 350 rpm wheel kit (which includes a DC motor, built-in encoder, mounting brackets, and rubber wheels).
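
For a sense of how the encoders will feed into localization, here is a small illustrative sketch of differential-drive odometry from encoder ticks; the ticks-per-revolution, wheel diameter, and wheel base values are placeholders, not the purchased kit’s actual specs:

```python
import math

# Placeholder constants -- substitute the kit's real specs.
TICKS_PER_REV = 360      # encoder ticks per wheel revolution (assumed)
WHEEL_DIAM_M = 0.065     # wheel diameter in meters (assumed)
WHEEL_BASE_M = 0.15      # distance between wheels in meters (assumed)
M_PER_TICK = math.pi * WHEEL_DIAM_M / TICKS_PER_REV

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Advance the pose estimate from per-wheel encoder tick deltas."""
    d_left = d_ticks_left * M_PER_TICK
    d_right = d_ticks_right * M_PER_TICK
    d_center = (d_left + d_right) / 2            # forward travel
    d_theta = (d_right - d_left) / WHEEL_BASE_M  # heading change
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta
```

Higher-resolution encoders shrink M_PER_TICK, which directly reduces the drift in this dead-reckoning estimate between lidar corrections.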

Lidar & Camera:

I researched different lidar sensors and checked with Professor Mukerjee about previous years’ leftover sensors. The two main lidars I compared were the XV-11, a component from the Neato autonomous vacuum, and the RPLidar. The specifications for the two seem nearly identical, except the XV-11 is sold with an Arduino microcontroller for $159, while the RPLidar is $99 with a USB adapter. Both collect 8,000 data points per second (360 points per revolution), have a scan rate of 5.5 – 10 Hz, use a 5 V power supply, and have a detection range of 0.15 – 12 m. The XV-11 is very popular for autonomous robots and has a sizable collection of articles and open-source support. The RPLidar does not have as much open-source support, but the company does provide an SDK. In addition, Professor Mukerjee mentioned that previous capstone teams used the RPLidar. Therefore we decided to purchase the cheaper RPLidar.

One issue I discovered is that the lidar’s minimum detection range is 15 cm, which may be too large for a maze with 8 in (about 20 cm) wide hallways: a robot centered in a hallway would sit only about 10 cm from each wall, inside the lidar’s blind zone. So I suggested revising the Gantt chart to delay building the maze until we can confirm the actual range.

ROS Navigation Stack:

I also researched existing SLAM tools we could build our pipeline from, particularly the ROS navigation stack. I completed a four-hour online course on configuring and testing the navigation stack for the Turtlebot in Rviz.
