Shanel’s Status Report 2/29
At the beginning of the week I thought we were behind, especially because we had spent the previous week prioritizing the design document and presentation. However, our team made up ground with work sessions outside of designated course time on Friday and over the weekend.
By Wednesday, we had received all of our parts and collected design feedback from the presentations. Based on that feedback, we made two major design changes to the project:
- Add ultrasonic sensors to detect obstacles that sit at heights between the spinning lidar, the camera, and the Roomba (a rough reading sketch follows this list).
- Leverage Google’s Cartographer SLAM platform for our navigation algorithms.
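Since we haven’t finalized the sensor model or wiring yet, here is a minimal sketch of what reading one ultrasonic sensor from the Raspberry Pi might look like, assuming an HC-SR04-style sensor; the model and pin numbers are placeholders rather than design decisions.

```python
# Hypothetical reading loop for an HC-SR04-style ultrasonic sensor on the
# Raspberry Pi GPIO pins. Sensor model and pin assignments are assumptions.
import time
import RPi.GPIO as GPIO

TRIG_PIN = 23  # placeholder pins (BCM numbering)
ECHO_PIN = 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def read_distance_cm():
    """Trigger one ping and convert the echo pulse width to centimeters."""
    # A 10-microsecond trigger pulse starts a measurement
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)

    # Measure how long the echo pin stays high
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        pulse_end = time.time()

    duration = pulse_end - pulse_start
    # Sound travels ~34,300 cm/s; divide by 2 for the round trip
    return duration * 34300 / 2

if __name__ == "__main__":
    try:
        while True:
            print(f"Obstacle at {read_distance_cm():.1f} cm")
            time.sleep(0.2)
    finally:
        GPIO.cleanup()
```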
I focused most of my efforts on testing out Tesseract, setting up the Raspberry Pi OS, and creating an initial integration between iRobot’s Open Interface and ROS. I found that Tesseract is over 95% accurate on text, but its accuracy varies widely on colored images with non-standard fonts. Two potential fixes I will try next are increasing the resolution of the photos and binarizing the images before processing for higher contrast.
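As a rough sketch of that preprocessing idea, the snippet below upscales a photo and binarizes it with Otsu thresholding before passing it to Tesseract. It assumes OpenCV and the pytesseract wrapper are installed, and "sign.jpg" is just a stand-in file name.

```python
import cv2
import pytesseract

def ocr_with_preprocessing(path, scale=2.0):
    image = cv2.imread(path)

    # Increase resolution: small text often recognizes better when enlarged
    image = cv2.resize(image, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_CUBIC)

    # Convert to grayscale, then binarize with Otsu's method so colored
    # backgrounds and non-standard fonts collapse to black-on-white text
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    return pytesseract.image_to_string(binary)

print(ocr_with_preprocessing("sign.jpg"))
```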
In the next week, I hope to start testing OCR on live images from the Pi Camera module and to begin constructing the 3D-printed platform that will hold the sensors in place.
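A first pass at that live-image test might look like the sketch below: grab a frame from the Pi Camera as a NumPy array and run it through the same binarize-then-OCR steps. This assumes the picamera library; the resolution and warm-up delay are guesses to be tuned once testing starts.

```python
import time
import cv2
import pytesseract
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (1280, 720)   # placeholder resolution
raw = PiRGBArray(camera, size=camera.resolution)

time.sleep(2)                     # let auto-exposure settle
camera.capture(raw, format="bgr")
frame = raw.array

# Same preprocessing as the still-image tests: grayscale + Otsu binarization
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(pytesseract.image_to_string(binary))
camera.close()
```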