Team Update
From the feedback given during our design presentation, our main risk is getting depth information from our thermal sensor for collision detection. Professor Nace, who has experience with the specific model of thermal sensor we are using, suggested that we wouldn't be able to get much depth information from it. As a result, we came up with two contingency plans to mitigate this risk. First, we will test ultrasonic sensors that are already available to us. Since these sensors are low cost and readily available, we could place several of them along the length of our robot to detect nearby objects. However, this choice involves a tradeoff: the ultrasonic sensors may not be able to accurately distinguish humans from inanimate objects. A second contingency plan is to use object identification to identify humans. We have started experimenting with different classification algorithms to test whether they function sufficiently well at close range.
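As a first look at the ultrasonic option, below is a minimal sketch of reading one low-cost HC-SR04-style sensor from a Raspberry Pi with the RPi.GPIO library. The sensor model, pin numbers, and 30 cm threshold are assumptions for illustration, not our chosen parts or specs.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical BCM pin assignments; adjust to the actual wiring.
TRIG_PIN = 23
ECHO_PIN = 24
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def read_distance_m():
    """Trigger one ping and return the distance to the nearest object in meters."""
    # A 10-microsecond pulse on TRIG starts a measurement.
    GPIO.output(TRIG_PIN, True)
    time.sleep(10e-6)
    GPIO.output(TRIG_PIN, False)

    # ECHO stays high for the duration of the ping's round trip.
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        pulse_end = time.time()

    return (pulse_end - pulse_start) * SPEED_OF_SOUND / 2.0

if read_distance_m() < 0.30:  # placeholder stop threshold
    print("Obstacle within 30 cm")
```

Because each sensor only needs two GPIO pins, replicating this along the robot's length (the several-sensor plan above) is mostly a wiring problem.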
Another risk we identified is objects that have narrow bases but wide upper components. These pose a challenge because our robot detects collisions with objects at the base, but will have an extending structure on which the camera will sit. Our contingency plan to mitigate this risk is to add sensors along the legs of the tripod to sense nearby objects that might hit the robot.
The design of our system remains for the most part the same. However, a few changes we have made include removing the light ring module and adding an LCD screen with photo prompts. In addition, as mentioned above, we may switch our collision detection sensor from thermal to ultrasonic, or use object identification. We also added a new requirement, stopping latency, which measures how quickly our Roomba can stop after sensing a collision. Lastly, we created a diagram demonstrating the Roomba's different behavior paths in different scenarios. This will be useful moving forward for identifying where we must make simplifying assumptions and for creating our algorithm for movement and photo capture; a sketch of what that algorithm's skeleton might look like follows.
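To make the behavior-path idea concrete, here is a hypothetical Python skeleton of such a state machine. The state names and transitions are illustrative placeholders, not our finalized diagram.

```python
from enum import Enum, auto

class State(Enum):
    ROAM = auto()      # wander while searching for photo subjects
    STOPPING = auto()  # collision sensed; halt (where stopping latency is measured)
    AVOID = auto()     # back up and turn away from the obstacle
    CAPTURE = auto()   # show a prompt on the LCD and take a photo

def next_state(state, collision_sensed, subject_found):
    """Choose the next behavior path from the current state and sensor flags."""
    if state is State.ROAM and collision_sensed:
        return State.STOPPING
    if state is State.ROAM and subject_found:
        return State.CAPTURE
    if state is State.STOPPING:
        return State.AVOID
    if state in (State.AVOID, State.CAPTURE):
        return State.ROAM
    return state
```

Writing the diagram down this way also shows exactly where the simplifying assumptions live: each boolean flag hides a real perception problem.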
We have updated our Gantt chart according to the design changes listed above. It is shown below:
For the most part, our schedule is the same.
Adriel
This week I worked on the design presentation slides and the design report. I also delivered our design presentation to the Section B teams. I have begun sketching more concrete designs for our tripod mount structure and have started testing ultrasonic sensors as a potential alternative for our human collision detection mechanism.
I am slightly behind schedule because the feedback from our design presentation requires us to rethink what will go inside our enclosure, so I cannot build it yet. After this weekend, those decisions will be solidified and I will have a clear idea of exactly what the enclosure will look like.
Next week, I hope to have at least a preliminary version of an enclosure and tripod mount. I also hope to have my ethics document completed.
Mimi
This past week I've spent most of my time working on our design presentation and design report. Our team spent a significant amount of time meeting to make design decisions and brainstorm contingency plans for major risks. I also wrote code to trigger image capture using the Raspberry Pi (RPi) and the RPi camera module; a sketch of that capture logic is below.
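As a rough illustration of the capture trigger, this minimal sketch uses the picamera library. The resolution, warm-up delay, and output directory are placeholder choices, not our final settings.

```python
from datetime import datetime
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1640, 1232)  # placeholder; any supported camera mode works
sleep(2)  # give auto-exposure and white balance time to settle

def capture_photo(directory="/home/pi/photos"):
    """Capture one still image and return the saved file path."""
    path = f"{directory}/{datetime.now():%Y%m%d_%H%M%S}.jpg"
    camera.capture(path)
    return path

print(capture_photo())
```

Timestamped filenames keep repeated captures from overwriting each other, which matters once the Roomba is taking photos autonomously.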
My progress is a day behind the schedule in our Gantt chart, since we had to buy a new SD card with enough memory to host OpenCV. However, I will be able to make up this time in class on Monday, since our new SD card has arrived.
In the next week, I hope to successfully download and compile OpenCV on our Raspberry Pi. In addition, I will move the image capture code onto our Raspberry Pi, connect the camera module, and test image capture. I will also need to spend time completing the ethics assignments and participating in the discussions.
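Once OpenCV is compiled, one candidate for the human-identification contingency mentioned above is OpenCV's prebuilt HOG + linear SVM person detector. The sketch below is one option we might evaluate, not a chosen algorithm, and the tuning parameters and image path are placeholders.

```python
import cv2

# OpenCV ships a HOG descriptor pre-trained for pedestrian detection.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def contains_person(image_path):
    """Return True if the detector finds at least one person in the image."""
    image = cv2.imread(image_path)
    # winStride and scale trade detection quality against RPi CPU time;
    # these values are starting points to tune during testing.
    rects, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)
    return len(rects) > 0

print(contains_person("test.jpg"))  # placeholder image path
```

A caveat for our use case: this detector is trained on full upright pedestrians, so close-range, partial views may be exactly where it struggles; that is what our close-proximity testing needs to establish.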
Cornelia
This past week, I committed a lot of time to working on our design presentation. We made a lot of important design choices and focused on providing quantitative specifications as well as diagrams. On my own, I have been writing code with PyCreate2 to move the Roomba and do collision detection (for objects) with the built-in sensors; a sketch of that drive-and-react loop is below. All the code is written, but it has not yet been loaded onto the RPi or tested with the Roomba.
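This is a minimal sketch of such a loop, assuming pycreate2's Create2 interface and its bump-sensor field names; the serial port, speeds, and timings are placeholders.

```python
import time
from pycreate2 import Create2

bot = Create2("/dev/ttyUSB0")  # assumed RPi-to-Roomba serial port
bot.start()
bot.safe()  # safe mode keeps the Roomba's own cliff/wheel-drop protections on

bot.drive_direct(100, 100)  # forward at 100 mm/s per wheel

while True:
    sensors = bot.get_sensors()
    if sensors.bumps_wheeldrops.bump_left or sensors.bumps_wheeldrops.bump_right:
        bot.drive_stop()              # halt on contact
        bot.drive_direct(-100, -100)  # back away from the obstacle
        time.sleep(1.0)
        bot.drive_direct(50, -50)     # rotate in place to pick a new heading
        time.sleep(1.0)
        bot.drive_direct(100, 100)    # resume roaming
    time.sleep(0.05)  # poll the sensor packet several times per second
```

Timestamping the bump detection and the stop command in this loop would also give us a first measurement of the stopping-latency requirement described above.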
According to our Gantt chart, I am on schedule with the code; I simply need to load it onto the RPi and run it on the Roomba. This has not happened yet because we only recently obtained the correct cable to connect the RPi to the Roomba. This will not set me behind, however, as I will continue writing code in the meantime.
This next week, I hope to use data from the camera and sensors to adjust the Roomba's position accordingly, as well as finish the ethics assignment. Before Spring Break, I hope to have our Roomba moving according to my code and responding to the camera feed.