Team Update
The most significant risk we have identified is the Roomba’s inability to adjust its position to capture all faces in the shot when a wall blocks its path. One idea we had for managing this risk was to add some type of indicator, such as a row of LED lights or an LCD display, that tells the subjects a photo will be taken soon and that they should reorient/reposition themselves so that our robot may take the optimal photo. Another risk we see is that our thermal sensor may provide data that is difficult to use. We plan on extensively testing the sensor soon so that we can understand how effective it will be at calculating distance from a human (with reasonable granularity). As a contingency plan, we have looked into ultrasonic sensors to see if they would be a viable alternative.
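To make the thermal-sensor testing concrete, here is a minimal sketch of the kind of processing we expect to try first: thresholding a small grid of temperature readings to flag a likely human. The 8x8 grid size is an assumption (AMG8833-style sensors report this shape), and the temperature band and pixel count below are placeholders we would tune once we characterize our actual sensor.

```python
# Sketch only: assumes an 8x8 grid of Celsius readings; thresholds
# are placeholders to be replaced after real sensor characterization.

HUMAN_TEMP_MIN = 26.0   # skin typically reads warmer than room ambient
HUMAN_TEMP_MAX = 38.0

def human_pixels(frame):
    """Return (row, col) indices of pixels in the human-temperature band."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, temp in enumerate(row)
            if HUMAN_TEMP_MIN <= temp <= HUMAN_TEMP_MAX]

def likely_human(frame, min_pixels=3):
    """Flag a frame as containing a person if enough warm pixels appear."""
    return len(human_pixels(frame)) >= min_pixels

# A mostly-ambient frame (22 C) with a small warm blob:
frame = [[22.0] * 8 for _ in range(8)]
for r, c in [(3, 3), (3, 4), (4, 3), (4, 4)]:
    frame[r][c] = 31.5
print(likely_human(frame))  # True
```

If the sensor turns out to be too coarse for distance estimation, the same interface could wrap an ultrasonic sensor instead, which is part of why we framed it as a standalone function.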
For our design, we’ve changed the requirement that the Roomba stop 1 foot away from the subject of the photo to stopping 3 feet away. We felt this was necessary because 3 feet is a safer buffer for humans who accidentally walk toward the Roomba (and might otherwise bump into it), while still being a comfortable distance for taking a clear photo. We’ve also added a new requirement that the width of each face in the shot be greater than 1/6 of the width of the image. This ensures that the human subject(s) are the focus of the picture rather than appearing far away. We don’t see a large cost to this change because we haven’t yet begun testing the capabilities of the integrated devices.
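The new face-width requirement reduces to a one-line check once face detection runs. The sketch below shows how we might validate it; the function name is our own, and the (x, y, w, h) bounding-box format assumes the tuples OpenCV-style face detectors return.

```python
# Sketch of the face-prominence requirement: each detected face should
# span more than 1/6 of the image width. Names are illustrative, not
# finalized project code.

MIN_FACE_FRACTION = 1 / 6

def face_is_prominent(face_box, image_width, min_fraction=MIN_FACE_FRACTION):
    """Return True if the face's width exceeds min_fraction of the image width."""
    x, y, w, h = face_box
    return w / image_width > min_fraction

# Example: a 120 px-wide face in a 640 px-wide frame (120/640 ≈ 0.19 > 1/6)
print(face_is_prominent((200, 150, 120, 130), 640))  # True
```

A check like this could gate the shutter: if any detected face fails it, the robot drives closer (down to the 3-foot stopping distance) before capturing.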
For the most part, our schedule is the same.
Adriel
This week, I completed setup for the Raspberry Pi. This included installing the operating system, enabling Wi-Fi and SSH, and registering the device with the CMU-DEVICE network. Now, as long as the Pi is powered on and both it and we are on campus, we can upload code to the device and test for compilation errors. Additionally, we’ve made continual progress on our Design Review slides. My updates include a diagram visualizing the “thought process” of our robot in the test environment, and more quantifiable data on why we chose our specific thermal sensor. I am also continuing to research and place orders for the interconnects between the Pi, the Roomba, the thermal sensor, and the camera.
Currently, I am on schedule.
For next week, I hope to build a platform that can be mounted onto the Roomba and hold the tripod in a stable position. I also hope to have all the parts needed to connect our devices so that we can begin integration.
Mimi
This week I moved the face-detection testing code onto the RPi; however, our SD card did not have enough space to install OpenCV, so we put in an order for one with more memory. I also started learning how to trigger image capture with the RPi and the RPi camera module, so that I can integrate that feature when our camera arrives. I also worked on our design presentation this week, which included making design decisions, reworking our block diagram, researching papers for quantitative data, meeting to get feedback, and working on the slides and presentation itself.
My progress is on schedule according to our Gantt chart.
In the next week, I hope to successfully download and compile OpenCV on our Raspberry Pi once the newly ordered SD card arrives. I also hope to write code that triggers image capture and saves images using our RPi and RPi camera module.
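The capture-and-save flow could look roughly like the sketch below. The `picamera` import only works on the Pi itself, so it is guarded inside the function; the timestamped filename scheme is our own convention (so saved photos never collide), and the resolution is an arbitrary placeholder.

```python
# Sketch of Pi-camera capture: the picamera library exists only on the
# Raspberry Pi, so its import is deferred. Filename scheme and resolution
# are our assumptions, not finalized values.

from datetime import datetime

def capture_filename(prefix="shot", ext="jpg", now=None):
    """Build a timestamped filename so saved photos never collide."""
    now = now or datetime.now()
    return f"{prefix}_{now:%Y%m%d_%H%M%S}.{ext}"

def capture_photo(path):
    """Capture a single still to `path` using the Pi camera module."""
    from picamera import PiCamera  # hardware-only import
    with PiCamera() as camera:
        camera.resolution = (1024, 768)
        camera.capture(path)

print(capture_filename(now=datetime(2020, 2, 1, 12, 30, 5)))
# shot_20200201_123005.jpg
```

Keeping the filename logic separate from the hardware call lets us unit-test the former off-device while the camera order is still in transit.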
Cornelia
This week I started working on code to move the Roomba in at least four directions (forward, backward, left, and right). There is a lot of documentation (1, 2, 3) online for PyCreate, the library we are using to drive a Roomba from an RPi, and I have been diligently perusing it. There are also many past projects on Roomba movement that I am continuing to consult. In addition, our team as a whole has been meeting frequently to work on the Final Project Report as well as the Design Review slides for our presentation next week. This has involved meeting with our TA, discussing additional challenges such as how the tripod will mount onto the Roomba, gathering more quantitative metrics for validation, and collating everything into a final slide deck. More specifically, I have been developing a diagram of our final overall system to better visualize what the final product will look like, and doing calculations to determine the number of sensors we need to cover a wide enough field of view.
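The four-direction movement code boils down to a mapping from direction names to per-wheel velocities, which the Create 2 Open Interface expresses in mm/s. Here is a minimal sketch of that mapping; the speed value is a placeholder, and the PyCreate drive calls themselves are omitted since we have not yet run anything on hardware.

```python
# Sketch of the direction-to-wheel-velocity mapping behind our movement
# code. Velocities are in mm/s as in the Create 2 Open Interface; the
# speed constant is an untuned placeholder.

SPEED = 200  # mm/s, a moderate cruising speed

# (left_wheel, right_wheel) velocities for each supported direction
DIRECTIONS = {
    "forward":  (SPEED,  SPEED),
    "backward": (-SPEED, -SPEED),
    "left":     (-SPEED,  SPEED),   # spin in place counter-clockwise
    "right":    (SPEED,  -SPEED),   # spin in place clockwise
    "stop":     (0, 0),
}

def wheel_velocities(direction):
    """Look up the (left, right) wheel speeds for a named direction."""
    return DIRECTIONS[direction]

print(wheel_velocities("left"))  # (-200, 200)
```

On hardware, these pairs would be handed to the library's direct-drive command once the Pi-to-Roomba connection is assembled, which is when we can verify signs and argument order against the real robot.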
We found out this week that connecting the Roomba to the Raspberry Pi is more complicated than we thought and may require two individual connector heads that we assemble ourselves. Adriel is researching this and ordering the parts. In the meantime, I won’t be able to test my movement code on the Roomba just yet. To keep to our planned schedule, I will continue to work on the code remotely and load it onto the RPi to test with the Roomba the moment we get it connected.
Next week, I will continue developing the Roomba movement code and add object-collision detection using the Roomba’s built-in sensors. When vacuuming (as the Roomba was originally intended to do), the Roomba bumps into walls or the legs of chairs and tables and turns to move away from whatever it hit. Since we are overriding the Roomba’s original movement, I will need to re-implement that behavior in our own code.
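The bump-and-turn behavior described above can be sketched as a small decision function: back up, turn away from the side that was hit, then resume. The sensor inputs here are simplified booleans standing in for the Create 2's left/right bump sensors, and the durations are untuned placeholders.

```python
# Illustrative sketch of the bump-response logic we plan to re-implement.
# bump_left / bump_right stand in for the Roomba's bump-sensor readings;
# durations (seconds) are placeholders to be tuned on hardware.

def bump_response(bump_left, bump_right):
    """Return an ordered list of (action, duration_s) steps after a bump."""
    if not (bump_left or bump_right):
        return [("forward", None)]        # no bump: keep driving
    steps = [("backward", 0.5)]           # back off from the obstacle
    if bump_left and bump_right:
        steps.append(("right", 1.0))      # head-on hit: turn well away
    elif bump_left:
        steps.append(("right", 0.5))      # hit on the left: turn right
    else:
        steps.append(("left", 0.5))       # hit on the right: turn left
    steps.append(("forward", None))
    return steps

print(bump_response(True, False))
# [('backward', 0.5), ('right', 0.5), ('forward', None)]
```

Keeping this as a pure function of sensor state means it can be tested off-robot now and wired to the real bump sensors once the Pi-to-Roomba connection is ready.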