Browsed by
Author: shanelh

Final Presentation

Here is a link to our final presentation, given to Section 2 earlier this week: https://docs.google.com/presentation/d/1yigIsbeaICcAaOPwGWH7EM3ge098cm2YGi6YGCCXt0w/edit#slide=id.g83f711c7c0_0_246

Team Status Report 4/19

This week was focused on 1) integration of the program in real time and 2) optimizations to the navigation and path planning. We connected all the components of the project into a multithreaded program joining the Roomba, the lidar sensor, SLAM, and the navigation system. This entailed writing control flow to make sure the program status was updated as expected and that data was passed smoothly between each thread. One by one, we were able to add each thread…
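As a rough illustration of that structure, here is a minimal sketch of a three-thread pipeline connected by queues and a shared stop flag. The worker names and stub functions are placeholders, not our actual code.

import threading
import queue
import time

scan_queue = queue.Queue()      # lidar scans  -> SLAM thread
map_queue = queue.Queue()       # updated maps -> navigation thread
stop_event = threading.Event()  # shared "program finished" status

def read_lidar_scan():            # placeholder: real code reads the lidar
    time.sleep(0.1)
    return [0] * 360

def update_slam(scan):            # placeholder: real code calls BreezySLAM
    return bytearray(len(scan))

def plan_and_execute(map_bytes):  # placeholder: path planning + Roomba drive
    pass

def lidar_worker():
    while not stop_event.is_set():
        scan_queue.put(read_lidar_scan())

def slam_worker():
    while not stop_event.is_set():
        map_queue.put(update_slam(scan_queue.get()))

def nav_worker():
    while not stop_event.is_set():
        plan_and_execute(map_queue.get())

threads = [threading.Thread(target=w, daemon=True)
           for w in (lidar_worker, slam_worker, nav_worker)]
for t in threads:
    t.start()

Keeping all cross-thread data inside the queues means each worker only touches its own inputs and outputs, which is what makes it possible to add the threads one at a time.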


Shanel’s Status Report 4/19

This week, we focused on integrating each of our separate components. The majority of the work we’re doing at this point is integration through pair programming. The part that I programmed this week was the control flow that runs the three threads (obstacle detection, SLAM/path planning, path execution) at the same time. For the end condition, we decided that the program would be complete when it is enclosed by walls on all sides and has mapped…
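One way to express that end condition is a check over the occupancy grid for any free cell still bordering unexplored space. The sketch below is illustrative only: the 0/1/-1 cell conventions and the frontier test are assumptions, not our exact logic.

import numpy as np

# Assumed cell conventions: 0 = free, 1 = wall, -1 = unexplored.
def map_is_complete(grid: np.ndarray) -> bool:
    """Done when no free cell still borders an unexplored cell, i.e. the
    explored region is enclosed by walls on all sides."""
    free = (grid == 0)
    unknown = (grid == -1)
    frontier = np.zeros_like(unknown)
    frontier[1:, :] |= unknown[:-1, :]    # unexplored cell to the north
    frontier[:-1, :] |= unknown[1:, :]    # ... to the south
    frontier[:, 1:] |= unknown[:, :-1]    # ... to the west
    frontier[:, :-1] |= unknown[:, 1:]    # ... to the east
    return not np.any(free & frontier)

# When the check passes, the supervising code can set the shared stop event
# that all three worker loops poll, e.g.:
#     if map_is_complete(latest_grid): stop_event.set()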


Shanel’s Status Report 4/11

I dedicated my efforts this past week to the software side of our project. First, I implemented A* search to generate a path for the robot to a previously selected destination. I used the Manhattan heuristic so we could have a simple representation of the robot’s bearing as one of North, South, East, and West. The path is represented as a list of points for the robot to navigate to next. As an MVP, we made the…
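For reference, here is a minimal sketch of A* with the Manhattan heuristic over a 2D occupancy grid, returning the path as a list of points. The grid representation and tie-breaking details are simplified placeholders rather than our exact implementation.

import heapq

def manhattan(a, b):
    # Manhattan heuristic: grid distance when moves are restricted to
    # North/South/East/West steps.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal):
    """A* over a 2D occupancy grid (True = blocked). Returns the path as a
    list of (row, col) points."""
    frontier = [(manhattan(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # N, S, W, E
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and not grid[nr][nc] and (nr, nc) not in visited:
                heapq.heappush(frontier,
                               (cost + 1 + manhattan((nr, nc), goal),
                                cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no path found

# Tiny example: a 3x3 grid with one wall cell.
grid = [[False, False, False],
        [False, True,  False],
        [False, False, False]]
print(a_star(grid, (0, 0), (2, 2)))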


Shanel’s Status Report 4/5

For the demo, we plan to exhibit a full loop of data transfer (not necessarily in real time). It will start with the robot moving based on instructions we give it. It will take in odometry and lidar scan data that feed into BreezySLAM. BreezySLAM will give us back the generated map as a bytearray. We then feed this map as a PGM file into our navigation algorithm, which generates a path and set of instructions back…
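Based on BreezySLAM’s published examples, the update loop looks roughly like the sketch below; the lidar parameters, map sizes, and file name are placeholders, and writing the bytearray out as a binary PGM is one simple way to hand it to the navigation step.

from breezyslam.algorithms import RMHC_SLAM
from breezyslam.sensors import Laser

MAP_SIZE_PIXELS = 500
MAP_SIZE_METERS = 10

# Laser parameters (scan size, scan rate Hz, detection angle deg,
# no-detection distance mm) are illustrative, not our lidar's real values.
laser = Laser(360, 10, 360, 12000)
slam = RMHC_SLAM(laser, MAP_SIZE_PIXELS, MAP_SIZE_METERS)

mapbytes = bytearray(MAP_SIZE_PIXELS * MAP_SIZE_PIXELS)

def slam_step(scan_mm, pose_change):
    """scan_mm: lidar distances in mm; pose_change: odometry as
    (dxy_mm, dtheta_degrees, dt_seconds)."""
    slam.update(scan_mm, pose_change)
    slam.getmap(mapbytes)        # fills the bytearray with the current map
    return slam.getpos()         # (x_mm, y_mm, theta_degrees)

def save_pgm(filename='map.pgm'):
    # Write the map bytearray as a binary (P5) PGM for the navigation code.
    with open(filename, 'wb') as f:
        f.write(b'P5 %d %d 255\n' % (MAP_SIZE_PIXELS, MAP_SIZE_PIXELS))
        f.write(mapbytes)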


Shanel’s Status Report 3/29

This week I migrated our web application from plain JavaScript to React. I initially built it using Cloudinary to store pictures and videos that would be transmitted from the Pi, but ultimately decided to use our Amazon credits for EC2 and S3 instead. Now, our web application runs on an EC2 instance. The Pi will transmit the map onto the EC2 instance, where the web app will look for the images and videos to display. I also wrote our updated…
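On the Pi side, one way that transfer could look is an SFTP push to the EC2 instance; everything in this sketch (host name, user, key path, remote directory) is a placeholder rather than our actual configuration.

import paramiko

def push_map_to_ec2(local_path='map.pgm',
                    host='ec2-instance.compute.amazonaws.com',   # placeholder
                    user='ubuntu',
                    key_file='/home/pi/.ssh/capstone.pem',        # placeholder
                    remote_path='/var/www/maps/map.pgm'):         # placeholder
    # Open an SSH connection from the Pi to the EC2 instance and copy the
    # map file to the directory the web app watches.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, key_filename=key_file)
    sftp = ssh.open_sftp()
    sftp.put(local_path, remote_path)
    sftp.close()
    ssh.close()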


Shanel’s Status Report 3/22

The beginning of the week was spent deciding how to rework our project into something feasible that we would be able to deliver at the end of the semester. The group and I spent an initial meeting discussing what changes we were going to make: what to cut out, what to add, and what concerns we had. I then took this bullet-point list and wrote out the full statement of work. Since we decided to add more to the…


Shanel’s Status Report 3/7

For the OCR algorithm, I got Tesseract set up and working on input image files. I also wrote a Python script to take input from the Raspberry Pi Camera and run optical character recognition on it, but I haven’t been able to test it yet. The process of installing Tesseract and OpenCV onto the Raspberry Pi took longer than expected, but we finally got it working. I also came up with a general navigation planning flow using a Global and Local SLAM subsystem modeled…
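For context, the capture-then-OCR flow can look roughly like the sketch below; the file path and preprocessing steps are placeholders, and, like the script itself, this is untested on the Pi hardware.

import cv2
import pytesseract
from picamera import PiCamera

def capture_and_read_text(image_path='/home/pi/capture.jpg'):
    # Grab a frame from the Pi Camera and save it to disk.
    camera = PiCamera()
    camera.capture(image_path)
    camera.close()

    # Simple preprocessing before handing the image to Tesseract.
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(thresh)

if __name__ == '__main__':
    print(capture_and_read_text())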
