Weekly Status Reports

5/4 Status Report

Team Update

Currently, the most significant risk that we foresee is the robot getting too close to humans before it can take a decent picture. After a lot of testing, we found that the minimum distance our sensors would report was around 24 inches, which is very close to the distance requirement we've set. This means that, in order to accurately detect an upcoming obstacle, the robot has to be fairly close to it, which might hinder its ability to take a decent picture. Another risk we foresee is that the support structure may not be sturdy enough after our pivot from the initial, heavier structure.

To mitigate the IR sensor risk, we plan to do more testing with the sensor processing software and to experiment with IR sensor trigger values to determine the farthest distance the sensors can detect without constantly being triggered. To mitigate the support structure risk, we have reduced the height of our structure so that the robot isn't as top-heavy. We plan to conduct more testing in the actual test environment up until our final demo, and will respond to any issues as they come up.
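The trigger-value tuning could be sketched roughly as follows (a hypothetical calibration helper, not our actual code; the sample format and function name are our own):

```python
# Hypothetical calibration sketch: given logged (distance_inches, reading)
# samples taken with an object present, and noise readings from an empty
# room, find the farthest distance at which a candidate trigger threshold
# still fires without constantly firing on noise.

def max_reliable_distance(samples, noise_readings, threshold):
    # Reject thresholds that would fire on an empty room.
    if any(r >= threshold for r in noise_readings):
        return None  # threshold too low: constantly triggered
    hits = [d for d, r in samples if r >= threshold]
    return max(hits) if hits else None
```

Sweeping `threshold` over the logged data would then tell us the best trade-off between detection range and false triggers.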

Our Gantt chart remains the same. It’s attached here for easy reference:

Adriel

This week I worked on one of the iterations of the support structure of our robot. It turns out the wood we got was too heavy, and we would risk damaging the Roomba's motors if we kept all that weight on top. Thus, we pivoted to lighter wood and a shorter support structure. I also moved the top platform connections from a breadboard to a protoboard. All of the wires and components were soldered onto the protoboard to ensure that wires would not fall out during our final demo.

I am currently on schedule.

For the last few moments before our final demo, I hope to have any outstanding issues resolved and have testing fully completed.

Mimi

This week I worked on helping to build the base of our structure, which carries all of our hardware on top and needs to be sturdy enough to move around, but not too heavy. Specifically, I helped measure and cut the wood in the wood shop, glue the pieces together, and test on the robot. Cornelia and I also made some small adjustments to our code based on testing.

I am currently on schedule.

For tomorrow, we plan to do final testing and adjustments for the final demo in our actual demo space. In the coming week we will also need to do our final demo presentations, and our final report.

Cornelia

This week I worked on rehearsing and preparing for our final presentation, which happened on Monday. I also worked on building the base of our structure, which involved getting lighter wood, cutting it, slotting the pieces, and gluing everything together. We then assembled the entire structure and adjusted the height. We tested our robot a lot. Mimi and I also made small updates to the code after Adriel adjusted the camera movement code.

According to our Gantt chart which has remained unchanged, I’m on schedule.

With one day left before the final demo, we plan to do more testing to make sure we’re completely ready for the public demo on Monday. Then we will work on the final report due Wednesday night.

4/27 Status Report

Team Update

Currently, the most significant risk that we foresee is our IR sensors due to their lack of reliability. We have tried to mitigate this risk by making their positions permanent after testing, and by writing software that discards outlying values in time and position. During our in-lab demo, the robot still got really close to subjects. We have added many redundant checks, but this still remains an issue. It never touches subjects, but gets fairly close. Another significant risk is the stability of our robot. We have mitigated this by ditching the tripod and switching to a wooden structure approach.

Our design changed slightly while we were preparing for the in-lab demo. There are now only 2 IR sensors at each of the 6 locations – this is because many of our IR sensors broke and the height of 3 sensors stacked on top of one another was less stable. To still have a checking system and discard outliers, we are comparing current values with values from one previous iteration – only if both meet our condition do we determine that the IR sensors are being triggered. In addition, we have reduced our thermal sensors down to just 1. Although both work, we decided we didn’t need the one that faces the back. If we move back to adjust the margins in the frame, we don’t need to know if it’s a human – the robot should stop moving back if anything triggers the distance sensors. We now have 3 Arduinos total (2 of the Arduinos for the 12 IR sensors, 1 of the Arduinos for the LCD screen, motor, and thermal camera sensor), connected to the RPi by GPIO pins.
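The two-iteration check described above might look something like this minimal sketch (the threshold value and all names are illustrative, not our actual code):

```python
# Minimal sketch of the two-iteration IR check: a sensor only counts as
# triggered when the current AND the previous reading both meet the
# condition, so a one-off spike from a flaky sensor is discarded.

TRIGGER_THRESHOLD = 300  # example raw value; the real one comes from testing

def make_trigger_checker(num_sensors=12, threshold=TRIGGER_THRESHOLD):
    prev = [False] * num_sensors  # last iteration's result, per sensor
    def triggered(sensor_id, reading):
        hit = reading >= threshold
        confirmed = hit and prev[sensor_id]
        prev[sensor_id] = hit
        return confirmed
    return triggered
```

Each of the 12 IR sensors keeps its own one-reading history, so a single noisy iteration never stops the Roomba on its own.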

Our Gantt chart remains the same. It’s attached here for easy reference:

Adriel

This week I worked on completing the final structure of the robot (excluding the supporting structure that was previously the tripod). I assisted in testing and verification of the robot in preparation for our final demo. I also made some progress on creating content for our final presentation slides.

I am currently on schedule.

This week I hope to have our poster, final slides, and additional testing completed.

Mimi

This week I worked on completing our structure for our final in-lab demo, debugging our code, and testing our metrics against the success values we set in our design doc. We also redesigned the base of our Roomba, which we will build for the public demo. I also worked on our slides and content for the final presentation this coming week.

I am currently a little behind schedule because I haven't finished testing yet, as we are still making changes to the structure of our robot. I will make this up by going into lab outside of class once we build our final structure.

This next week I hope to help finish our structure for the public demo, continue working on our final presentation, make our poster for the public demo, and finish testing.

Cornelia

At the beginning of the week, we all worked together on testing our robot before our final in-lab demo on Monday. After the demo went smoothly, we finally received our barreljack-to-USB connectors to power the Arduinos separately from the Raspberry Pi. We also purchased wood to create the vertical structure of our robot. For the rest of the week, I worked on the final presentation slides for next week.

According to our Gantt chart which has remained unchanged, I’m on schedule.

This upcoming week I hope to continue testing our robot with Mimi and Adriel. We still need to take our robot to Wiegand Gym and test it in those specific lighting and WiFi conditions, especially now that we know which section of the gym we will be in for the public demo. In addition, I will be preparing to give our final presentation on Wednesday and working on our poster for the public demo.

4/20 Status Report

Team Update

Currently, the most significant risk that we foresee is our IR sensors due to their fragility and lack of reliability. We have tried to mitigate this risk by making their positions permanent after testing, and by writing software that discards outlying values in time and position. Another significant risk is the stability of our robot. With our added second layer and our tall track for our camera, the height of our robot may contribute more to its instability, and cause issues such as photo blur or lack of durability. Our approach to this problem is to cover the majority of our wires and hardware components so that they will be protected when the roomba is moving. Additionally, we will secure the legs of the tripod more permanently and securely so that the entire structure is more stable.

The overall approach of our existing design remains, aside from a few detail changes. We were originally sending data over USB to USB-C between our Raspberry Pi and Arduino, but we recently switched to sending signals using GPIO pins, because it provides more efficient and reliable data transfer.

Our Gantt chart remains the same. It’s attached here for easy reference:

Adriel

This week I worked on refining the design, laser cutting, and assembling the final structure of our robot. I have also wired up the majority of all components (logic level shifters, motor drivers, Arduinos, RPi, etc.) within our robot. I also set up some example code for the communication between Raspberry Pi and Arduinos using the GPIO pins as a framework for Cornelia and Mimi to use.

I am currently on schedule.

This week I hope to have everything prepared for our demo on Monday, and plan to make adjustments as they become necessary.

Mimi

This week I worked on installing new IR sensors, cutting PVC pipes to hold up the second level of our Roomba, putting together the track for the camera, defining our logic for moving based on image optimization, debugging our code, and beginning to test our metrics against the success values we set in our design doc.

I am currently on schedule.

This next week I hope to finish testing, after which I will begin making changes to address any metrics we did not meet.

Cornelia

This week I worked with Mimi to finalize the algorithm for photo optimization. Since we received the motor to move the camera up and down, and Adriel was able to connect it to the entire system, we were able to code the Roomba to move backwards based on face and image margins, as well as to move the camera up and down on a track for fine adjustments. We spent a lot of the week testing, making more adjustments, debugging our code, and making sure our robot will perform at the final in-lab demo next week. We also had our reading discussion on Wednesday, so I spent some time reviewing my notes from the reading, which we completed a while ago, to be prepared for the discussion.

According to our Gantt chart which has remained unchanged, I’m on schedule.

This upcoming week I hope to continue testing our robot with Mimi and Adriel. We will be taking our robot to Wiegand Gym and testing it in those specific lighting and WiFi conditions.

4/13 Status Report

Team Update

The most significant risk that we foresee is not being able to secure the upper components of the robot on the small base of the tripod. In order to mitigate this risk, we plan on evenly balancing the weight distribution on the upper half of the robot. We plan to look for a washer that matches the screw coming out of the tripod to increase the stability of the robot. If this is unsuccessful, we plan to use excessive amounts of tape and hot glue for stability.

The overall approach of our existing design remains, aside from a few detail changes. These detail changes include the implementation of the motorized stick, and the structure of the upper half of the robot. The motorized stick will now be a motor that pulls on a string that the Raspberry Pi camera module is attached to so that it may move up and down. The sensors and components housed on the upper half of the robot remain the same, but the placement of everything has changed.

Our Gantt chart remains the same. It’s attached here for easy reference:

Adriel

This week I was able to create a redesign for the layout of the upper half of our robot. The parts for the revised motorized stick idea arrived slightly late, so I wasn't able to construct it. Because of this, I instead wired up the connections from our LCD display to the Raspberry Pi. I also created a very basic circuit that would emulate the way that the Raspberry Pi would send signals to go up or down.

I am currently behind schedule because of the late parts, but I plan to mitigate the negative effects of this by arriving to lab early on Monday and trying to complete it before the start of class time.

Next week, I will actually assemble the upper half of our robot so that it is organized and stable for our final demo.

Mimi

This week I worked on the logic for moving the roomba based on image optimization. We had to go back to our design doc to refine how we plan to implement this behavior. I also worked with our team to determine what additional parts we need, and what our plan is for the next couple of weeks before our final demo.

I am slightly behind schedule because I have not completed the code for moving the roomba based on image optimization. I will catch up my work by meeting with my group and working for a good chunk of time tomorrow night before the start of next week.

This next week I hope to finish code for moving the roomba based on image optimization. I will also work on beginning to test our metrics against the success values we set in our design doc.

Cornelia

This past Sunday I worked on finishing the second reading assignment. Then, during class and throughout the week, I worked with Mimi to move the Roomba based on image margins, as stated in our design document. We also did a lot of testing and refinement to reach the metrics we outlined in the design document for verification and testing.

I am slightly behind schedule due to Spring Carnival but I plan to catch up on Sunday night before the start of classes next week.

This coming week I will continue to work on code to adjust the Roomba’s position based on the image optimization standards we defined as well as continue testing our robot. I plan to help Adriel with assembly as well so that wires are fully enclosed and connected securely.

4/6 Status Report

Team Update

Our risk from last week remains: poor lighting causing OpenCV to fail to detect faces. We plan to test our robot in Wiegand Gym to simulate the conditions of the final demo, as suggested by Professor Nace, to make sure our robot will perform as desired.
Another risk is the communication received from the Arduinos – there is often garbage data (most likely because the signals are coming too fast over the serial port and things are sent before they can be read, causing overlaps and malfunctions). We are mitigating this using regex at the moment, which seems to work. Through more testing, we may be able to see if this is a long-term solution. If it is not, we may need to send the data at longer time intervals.
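The regex mitigation could be as simple as the following sketch (the line format, pattern, and names are assumptions, not our actual protocol):

```python
import re

# Hypothetical sketch of the regex filtering: accept only serial lines that
# exactly match an expected "KIND<index>:<value>" shape and drop anything
# mangled by overlapping writes on the serial port.
LINE_RE = re.compile(r"^(IR|THERM)(\d{1,2}):(\d{1,4})$")

def parse_line(raw):
    m = LINE_RE.match(raw.strip())
    if m is None:
        return None  # garbage: discard this line
    kind, idx, value = m.groups()
    return kind, int(idx), int(value)
```

Because the pattern is anchored at both ends, a line produced by two overlapping writes fails to match and is simply dropped rather than misread.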
Another risk is the signals we receive from the thermal camera sensor. When tested in isolation, the thermal camera sensor behaves exactly as expected. However, once attached to the whole system, it detects human body signatures even when they’re not present. We are still trying to figure this problem out.

Currently, we are using cardboard pieces and cardboard boxes to hold up our RPi and camera. This is obviously not a final product. We will be shifting to permanent structures using the balsa wood we have already purchased and laser-cut for other parts of the project. In addition, we will be making an enclosure for all of our wires, as well as moving wires out of breadboards and soldering them together for a permanent solution.

Our Gantt chart remains the same. It’s attached here for easy reference:

Adriel

I spent this week testing individual sensors and soldering the wires onto the pins of the IR sensors. I wired up these sensors and the thermal sensor to the Arduino and situated all of our individual components on the top half of our robot for our midsemester demo. After examining several alternatives, I’ve decided to take a different approach to the motorized stick by using a stepper motor attached to a string to raise and lower the platform that the Raspberry Pi and Raspberry Pi camera will be on. Additionally, I spent time reading the required textbook chapters for Reading Assignment 2 and writing a reflection.

My progress is currently behind schedule because I haven’t procured all of the materials required for the modified version of the “motorized stick”. I have tried to offset the impact of this setback by preparing everything necessary for LCD display integration so that it can be completed as quickly as possible.

Next week, I hope to have the stepper motor platform completed. Additionally, I hope to have the LCD display integrated into our robot.

Mimi

This week alone, I spent 12+ hours with the rest of our team preparing for our mid-semester demo, debugging in lab, testing our robot's behavior, and soldering sensors. Aside from that, I also worked on updating our detailed Gantt chart and learning how to increase image resolution. Lastly, I spent a significant amount of time reading the chapters and writing a reflection for our second reading assignment.

My progress is currently on schedule.

In the next week, I will work with Cornelia on adding code to move the Roomba according to photo specifications. This will include moving forwards and backwards, and up and down based on the margins of faces in the photo, which are detected with OpenCV.  Additionally, I will work on trying to speed up the process of uploading photos to the cloud.
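The planned margin logic might be sketched like this (thresholds, names, and the camera-direction convention are our own assumptions, not the final implementation):

```python
# Illustrative sketch of margin-based adjustment: given a face bounding box
# (x, y, w, h) from OpenCV and the frame size, decide whether the Roomba
# should back up or the camera should slide on its track. Raising the
# camera moves the face downward in the frame, so a face near the top of
# the frame means "camera up".

def plan_moves(frame_w, frame_h, x, y, w, h, margin=40):
    moves = []
    if x < margin or x + w > frame_w - margin:
        moves.append("backward")   # face too close to a side edge: back up
    if y < margin:
        moves.append("camera_up")
    elif y + h > frame_h - margin:
        moves.append("camera_down")
    return moves
```

An empty result would mean the face already sits inside the margins and the photo can be taken.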

Cornelia

This past week was jam-packed! This week was focused on soldering wires to the tiniest pins on the back of our IR sensors (this took so long, my goodness) and assembling our robot. We ran into problems powering our Raspberry Pi (which is running all of our code, uploading images to a Dropbox folder over WiFi, and powering the attached Arduinos) but found out which of our cords and battery packs would work. We constructed temporary stands and mounts for our RPi and camera at the top of our robot. We spent a lot of time testing, then got ready for our mid-semester demo on Wednesday! After our demo, we soldered the rest of our sensors (all 18) and discovered that some don't work. We will need to order replacements for those, plus a few more as back-up. On the programming side, I refined a lot of the code involving parsing of lines read from the Arduinos and integration of the image capture code. At the end of the week, I focused my efforts on the second reading assignment, which involved reading two chapters of The Pentium Chronicles as well as writing a reflection and submitting it to Canvas.

According to our Gantt chart, I am on schedule.

This next week, I will be working with Mimi to optimize photos and their margins. The deliverable is the robot being able to move forwards and backwards, and up and down, based on the margins around faces in the captured photos. Once all the IR sensors are attached, I will be able to use all of them to implement a voting system for more accurate human detection as well. In addition, I will be cleaning up the code that was weakly stitched together for the mid-semester demo.

3/30 Status Report

Team Update

One risk that we have identified is poor lighting and photos that are too dark and low quality. Poor lighting also affects the accuracy of the OpenCV face detection. For our demo location we have tried to find a brighter area to accommodate our robot; however, if we continue to run into problems, we may consider attaching a light ring to our robot to ensure that every photo is bright enough.

Another significant risk continues to be mounting and getting accurate data from our IR and thermal sensors. We have begun testing with the sensors we received recently, and it seems that if we use multiple sensors to average or vote we can get accurate thermal data from a reasonable distance of a few feet. However, we have yet to mount our sensors onto the tripod structure so there is still risk involved.
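The voting idea for sensor clusters could work like this minimal sketch (assumed logic using a median, not our actual code):

```python
# Minimal sketch of per-cluster voting: taking the median of the three
# readings from one sensor cluster means a single bad or outlying sensor
# cannot cause a false detection on its own.

def cluster_reading(readings):
    """readings: the 3 raw values from one sensor cluster."""
    assert len(readings) == 3
    return sorted(readings)[1]  # median of three discards a single outlier
```

The median of three is the simplest majority-style vote: any one wild value, high or low, is ignored.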

For the design of our system, there have not been any significant changes over the past week. We have been primarily implementing our current design and doing testing.

There were minimal changes made to our Gantt chart, which is shown here.

Adriel

This week I continued testing on our IR and thermal sensors with Cornelia. I made some modifications to the tripod mount so that it can be fixated on the Roomba correctly, while also providing access to the USB to serial DIN cable. I designed the files for the surface that the RPi, Arduinos, and sensors will be placed on. I laser cut the files, but occasionally the laser wouldn’t cut all the way through the balsa wood, even with multiple passes, so I finished the cut using a box cutter. I also worked with Mimi and Cornelia to solder wires onto the sensors.

My progress is currently behind schedule because using the box cutter took slightly longer than expected. Additionally, soldering the sensor wires took much longer than expected, because we thought we could use pre-made male-to-female wires with the sensors, but they would not all fit in the little box where the pins are located. At this point, I should have made some progress on the motorized stick. To catch up, I will start designing the motorized stick early this week and take concrete steps to complete its construction.

By next week, I hope to have the motorized stick constructed and have the LCD display integrated into our project. I also hope to have everything tested and prepared for our midsemester demo.

Mimi

This week I completed the code which automatically uploads photos to a Dropbox folder. I also did some minimal timing testing, finding that it takes approximately 1/10 of a second to write an image from its initial array to a JPG file, and then around 2-3 seconds to upload it to Dropbox. I also tested taking images with the Raspberry Pi camera and using OpenCV to outline faces. From this testing, it appears that lighting may be an issue for our project, because in poor lighting OpenCV is less accurate and the images have poor quality. Lastly, Cornelia and I worked on integrating our code so that images can be taken when we sense a human while the Roomba is moving around.

My progress is currently on schedule.

In the next week, we have to prepare for our mid-semester demo on Wednesday. For our project, I will continue working with Cornelia on working out any kinks in the code and integrating the sensor data. We will also work on code for adjusting the Roomba position based on camera data, to optimize photos. Lastly, I will continue testing the speed of image transfer and try to make it faster.

Cornelia

This past week, I worked with Adriel to test the IR and thermal camera sensors in isolation from the system using an Arduino. I then worked on connecting the Arduino to the RPi to send signals from the sensors to the RPi. Then I ultimately integrated the received signals into the Roomba code. The code is able to detect humans, but the logic for stopping the Roomba 3 ft away still needs to be tested and debugged. Additionally, after detection, we need to look for a face and trigger image capture if detected. I also made changes to the Roomba code such that when it detects things with its built-in sensors, it turns 60º away from the triggered sensor, not the center, front-most point. This required some measurements and calculations shown in the attached image here:
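Separately from those measurements, the turn-away behavior itself can be expressed roughly as follows (the sensor bearings below are placeholder values, not the ones we measured):

```python
# Rough sketch of turning 60 degrees away from the triggered built-in
# sensor instead of rotating about the front-most point. The bearings
# here are made-up example values relative to the robot's heading.

SENSOR_BEARINGS = {"left": -45, "center": 0, "right": 45}  # degrees

def turn_away(sensor, angle=60):
    bearing = SENSOR_BEARINGS[sensor]
    if bearing > 0:        # obstacle to the right: turn left (negative)
        return -angle
    return angle           # obstacle to the left or dead ahead: turn right
```

The returned signed angle would then be passed to the Roomba movement code as a rotation command.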

According to our Gantt chart, I am on schedule. We have connected and tested at least 1 of each type of our sensors (IR and thermal camera), so connecting the rest will just be a matter of duplication. We have received all of our IR sensors, thermal camera sensors, and Arduinos. We have also integrated the camera code with the Roomba movement code and are able to take photos when a face is detected and the Roomba is moving.

This next week, before mid-semester demo, I will be testing our robot with the team and making sure the IR and thermal camera sensors are able to detect humans and stop at least 3 ft away. After the demo, I will be working with Mimi to move the Roomba according to the camera frames received and faces’ positions within them. I will also be working on the second reading assignment due next Sunday night.

3/23 Status Report

Team Update

The main risk we foresee that could jeopardize our success is the functionality of the sensors we've just ordered. We've conducted testing and integration using a few of each type of sensor, but if any of the new sensors were to fail, we may need to change our design to use fewer sensors, or come up with a new algorithm for mitigating the impact of outlier data.

Our system as a whole has remained the same; we have just moved our schedule around regarding which sensory input will affect the Roomba's movement.

Our Gantt chart has changed slightly. For our mid-semester demo, we have prioritized human detection and taking a photo without the fine adjustments. We will be working on those after the mid-semester demo. For Cornelia, we’ve swapped Roomba movement according to the camera with Roomba movement according to thermal and IR sensors. Mimi has also been helping with collision detection with the Roomba.

Our new Gantt chart for the coming week:

We still expect to be fully prepared for the midsemester demo, and our final demo.

Adriel

This week, I constructed the tripod mount that will be fixed on the Roomba. I did this by designing the CAD files and laser cutting the wood. I also completed the ethics assignment and the second submission.

According to the Gantt chart, I am currently on schedule.

This week, I hope to have the camera and RPi enclosure created, and have a design created for the motorized stick.

Mimi

This week I reinstalled OpenCV on our Raspberry Pi after some of its files got corrupted and we had to wipe the disk and reinstall the OS. I also wrote code that allows us to use our PiCamera module and OpenCV to get a video feed from the camera and save frames from the feed when desired. Lastly, I worked with Cornelia to come up with a plan for our software MVP and how we will integrate our code for our mid-semester demo. In addition, I attended an ethics discussion and wrote up my final analysis in response.

According to our Gantt chart, I am on schedule besides testing our OpenCV accuracy which I will be able to catch up on this week as our project gets closer to its mid-semester demo state.

This coming week I hope to incorporate the OpenCV component into the camera capture code and write functions that can be used easily by Cornelia's code. I will also begin testing the OpenCV accuracy using real-time video and figure out how to send images over WiFi to a Google Drive folder.

Cornelia

This past week (and over Spring Break), I worked on the code to make the Roomba move and perform collision detection with its built-in IR sensors. Since I figured out the serial port communication between the RPi and the Roomba before break, I was able to move the code onto the RPi when I got back from Spring Break and make it move. After fixing lots of bugs, I ran into trouble with multithreading and interruptions. At the end of the week, I was finally able to figure out how to send interrupts to make the Roomba stop according to the built-in IR sensors and turn away from objects. Over break I also worked on the first ethics assignment and after our class discussion on Tuesday, I worked on the second ethics assignment that was due within 48 hours.

According to our newly updated Gantt chart, I am still a little behind. While the Roomba moves in all directions and can do collision detection with objects, it does not yet adjust according to the thermal camera or IR sensors. We only have 2 IR sensors and 1 thermal camera sensor right now, and I will be working with those this weekend and next week. We are still waiting for the rest of the IR sensors and the last thermal camera sensor. They should arrive this week.

This next week, I will be working with the IR sensors and thermal camera sensor on the Arduino to detect humans. I will also be working with Adriel to connect the Arduino to the RPi and figure out how the RPi will communicate with the Arduino and Roomba at the same time.

3/9 Status Report

Team Update

The most significant risks are the same as in previous posts. However, now that we have the Roomba actually moving, we are most concerned with getting the Roomba to move according to the algorithms that Mimi will be deriving using OpenCV outputs. This will require Mimi's photo optimization algorithm to talk to Cornelia's Roomba movement code – the two subsystems described in our design report will need to be integrated. We are going to start working together as soon as possible. After Spring Break, both of our algorithms should be complete individually, and we will need to work together to send each other the correct data and use the correct functions. This is also how we will work with the sensor data.

Another significant risk we foresee is physically assembling the robot's various components. Our design now includes numerous IR and thermal sensors, so we will need to place them according to the calculations we've made and ensure that our design works and that the sensors are stable enough to get accurate data. We will also need to face any challenges involved with mounting our hardware components onto the tripod durably enough that the Roomba can move around freely.

We made several changes to our existing design after getting feedback from our Design Review Presentation and as we worked on our Design Report. We are now including an LCD display to show a countdown before snapping a photo – this was a suggestion many people made throughout our process. We have acquired one for free from IDeATe. We have also redesigned our robot to use 18 IR sensors (3 at each of 6 points, to eliminate outliers and get reliable data) to detect objects, and then use thermal cameras to detect whether that object is actually a human. These sensors talk to an Arduino which Cornelia has left over from 18-220, and the IR sensors were purchased. Below is our new block diagram:

Our Gantt chart has been updated slightly, due to some hold-ups regarding Roomba and RPi communication (explained further below).

Adriel

This week, I contributed to our design report. I identified and tested a new sensor that will help us with collision detection, namely an IR proximity sensor. I was also able to solder the pins onto and test the thermal camera sensor. I also have the design for the wooden mount and the enclosure. 

I am currently behind schedule because I haven’t had time to gain access to the tools required to create the mount structure and the enclosure. I will spend this break ensuring that these tools will be available by the time break ends.

Next week, I hope to have the tools and materials prepared for the mount and the structure. Additionally, I hope to make some progress on the motorized stick.

Mimi

The earlier part of the week was spent entirely on our design report. Following the report, I worked on testing image capture. I was able to use the RPi and the Raspberry Pi camera module to capture photos and video. I also began looking into how to stream video in real time to be analyzed by OpenCV. Lastly, I began putting OpenCV onto our new SD card; however, it takes many hours to download all of the necessary software.

I am currently on schedule.

Over this week, although it is Spring Break, I will try to get a head start on future deliverables by working on my image capture, image wifi transfer, and face detection code. I will need to figure out how to integrate all of these components with each other, as well as with Cornelia’s Roomba movement code, and the data from the sensors.

Cornelia

Over the weekend and on Monday, I spent all of my time working with Adriel and Mimi on our Design Report. Constructing and compiling it took our full attention. For the rest of the week, I worked on getting the Raspberry Pi and the Roomba to communicate over serial and mini-DIN. I ran into lots of problems with this, as I could not figure out which serial port (USB.usbserial) to use to "create the bot" with PyCreate2. However, on Thursday, I was able to get the Roomba to move in accordance with Python code I put on the Raspberry Pi! This was a great breakthrough, especially before leaving for Spring Break – now I know the Roomba can run my code over the cable we have, so I can write code over break and simply copy it onto the Pi when I get back to Pittsburgh.

According to our Gantt chart, I am a little behind. While the Roomba moves in all directions and can do collision detection with objects, it does not yet adjust according to camera input. This will require Mimi’s code and my code to interact. I will be recovering from this by working on it over Spring Break, which was originally scheduled as slack time, and working with Mimi to integrate our code.

This next week, I hope to use data from the camera (Mimi’s code) to adjust the Roomba’s position accordingly as well as finish the Ethics assignment (since the deadline was delayed). I will be able to test the code I write over break the moment I get back to Pittsburgh.

3/2 Status Report

Team Update

From the feedback given during our design presentation, our main risk is getting depth information from our thermal sensor for collision detection. Professor Nace, who has experience with the specific model of thermal sensor we are using, suggested that we wouldn’t be able to get much depth information from it. As a result, we came up with two contingency plans to mitigate the risk. First, we will test out ultrasonic sensors that are already available for us to use. Since these sensors are low cost and readily available, we could place several of them along the length of our robot to detect nearby objects. However, this choice involves a tradeoff: the ultrasonic sensors may not be able to accurately distinguish humans from inanimate objects. A second contingency plan is to use object identification to identify humans. We have started to experiment with different classification algorithms to test whether they would function sufficiently well in close proximity.
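The ultrasonic approach reduces to a simple time-of-flight calculation. As a rough sketch (assuming a pulse-echo sensor such as an HC-SR04, which we have not yet committed to), the sensor reports the round-trip time of a pulse and distance follows from the speed of sound:

```python
SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s in air at room temperature

def echo_to_cm(round_trip_s: float) -> float:
    """Convert an ultrasonic echo's round-trip time to distance in cm.

    The pulse travels out to the obstacle and back, so divide by two.
    """
    return round_trip_s * SPEED_OF_SOUND_CM_S / 2

def too_close(round_trip_s: float, threshold_cm: float = 91.44) -> bool:
    """True if an obstacle is inside the stop threshold (default ~3 ft)."""
    return echo_to_cm(round_trip_s) < threshold_cm
```

The 3-foot (91.44 cm) default mirrors the stopping distance in our requirements; the actual threshold would be tuned during testing.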

Another risk we identified is objects that have narrow bases but wide upper components. These pose a challenge for our robot, which detects collisions with objects at its base but will carry the camera on an extended structure above. A contingency plan we have come up with to mitigate this risk is to add sensors along the legs of the tripod to sense close objects that might hit our robot.

The design of our system remains for the most part the same. However, a few changes we have made include removing the light ring module and adding an LCD screen with photo prompts. In addition, as mentioned above, we may switch our collision detection sensor from thermal to ultrasonic, or use object identification. We also added a new requirement of stopping latency, which is important for measuring how quickly our Roomba can stop after sensing an impending collision. Lastly, we created a diagram demonstrating the different behavior paths of the Roomba in different scenarios. This will be useful moving forward to identify where we must make simplifying assumptions, and to design our algorithm for movement and photo capture.

We have updated our Gantt chart according to the changes we made to our design listed above. It is shown here below:

For the most part, our schedule is the same.

Adriel

This week I worked on the design presentation slides and the design report. I also delivered our design presentation to the section B teams. I have begun designing some more concrete ideas of our tripod mount structure. I have also begun testing ultrasonic sensors as a potential alternative for our human collision detection mechanism.

I am slightly behind schedule because the feedback from our design presentation requires us to rethink what will go inside of our enclosure, and thus I cannot build it. After this weekend, these decisions will be solidified and I will have a clear idea of what exactly the enclosure will look like.

Next week, I hope to have at least a preliminary version of an enclosure and tripod mount. I also hope to have my ethics document completed.

Mimi

This past week I’ve spent most of my time working on our design presentation and design report. Our team spent a significant amount of time meeting to make design decisions and brainstorm contingency plans for major risks. I also wrote code to trigger image capture using the Raspberry Pi and the Pi camera module.

My progress is a day behind schedule according to the Gantt chart we created, since we had to buy a new SD card with enough memory to host OpenCV. However, I will be able to make this up in class on Monday since our new SD card has arrived.

In the next week, I hope to successfully download and compile OpenCV on our Raspberry Pi. In addition, I will move the code for image capture onto the Raspberry Pi, connect the camera module, and test out image capture. I will also need to spend time completing the ethics assignments and participating in the discussions.

Cornelia

This past week, I committed a lot of time to working on our design presentation. We made a lot of important design choices and focused on providing quantitative specifications as well as diagrams. On my own, I have been writing code with PyCreate2 to move the Roomba and do collision detection (for objects) with the built-in sensors. All the code is written, but has not been loaded onto the RPi or been tested with the Roomba.

According to our Gantt chart, I am on schedule with the code – I simply need to load it onto the RPi and run it on the Roomba. This has not happened yet because we only recently obtained the correct cord to connect the RPi to the Roomba. This will not set me behind, however, as I will continue to write code in the meantime.

This next week, I hope to use data from the camera and sensors to adjust the Roomba’s position accordingly as well as finish the ethics assignment. Before Spring Break, I hope to have our Roomba move according to my code and in response to the camera’s feed.

2/23 Status Report

Team Update

The most significant risk we have identified is the Roomba’s inability to adjust its position to capture all faces in the shot after a wall collision. One idea we had for managing this risk was to present some type of indication, such as a row of LED lights or an LCD display, that tells the subjects a photo will be taken soon and that they should reposition themselves so that our robot may take the optimal photo. Another risk we see is that our thermal sensor provides data that is difficult to use. We plan on extensively testing the sensor soon so that we may understand how effective it will be at calculating distance from a human (with reasonable granularity). As a contingency plan, we have looked into ultrasonic sensors to see if they would be a viable alternative.

For our design, we’ve changed the requirement of the Roomba stopping 1 foot away from the subject of the photo to stopping 3 feet away. We felt this was necessary because we believe 3 feet is a safe distance from humans who might accidentally walk towards the Roomba (and bump into it), while still being close enough to take a clear photo. We’ve also added a new requirement that the width of the face in the shot should be greater than 1/6 of the width of the image. This ensures that the human subject(s) are the focus of the picture and are not too far away. We don’t see a large cost with this change because we haven’t begun testing the capabilities of the integrated devices yet.
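The new framing requirement is easy to state in code. A minimal sketch of the check we might run against a detected face bounding box (function and parameter names are illustrative, not from our codebase):

```python
def face_fills_frame(face_width_px: int, image_width_px: int) -> bool:
    """Framing requirement: the detected face's bounding box must be
    wider than 1/6 of the full image width."""
    return face_width_px > image_width_px / 6
```

For a 1920-pixel-wide frame the threshold works out to 320 px, so a 400 px face passes and a 300 px face fails.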

For the most part, our schedule is the same.

Adriel

This week, I’ve completed setup for the Raspberry Pi. This included installing the operating system, enabling Wi-Fi and SSH capability, and registering the device with the CMU-DEVICE network. Now, as long as the Pi is supplied power and both it and we are on campus, we can upload code to the device and test for compilation errors. Additionally, we’ve made continual progress on our Design Review slides. My updates include creating a diagram to visualize the “thought process” of our robot in the test environment, and presenting more quantifiable data about why we chose our specific thermal sensor. I am also continuing to research and place orders for the interconnects between the Pi, the Roomba, the thermal sensor, and the camera.

Currently, I am on schedule.

For next week, I hope to create some kind of platform that may be mounted onto the Roomba that will also keep the tripod in a stable position. Additionally, I hope to have all the items needed to connect our devices so that we may begin integration.

Mimi

This week I moved the face detection testing code onto the Raspberry Pi; however, our SD card did not have enough space to install OpenCV, so we put in an order for another one with more memory. I also started learning how to trigger image capture with the Raspberry Pi and the Pi camera module, so that I can integrate that feature when we receive our camera. This week I also worked on putting together our design presentation, which included making design decisions, reworking our block diagram, researching papers for quantitative data, meeting to get feedback, and working on the slides and the presentation itself.

My progress is on schedule according to our Gantt Chart that we created.

In the next week, I hope to successfully download and compile OpenCV on our Raspberry Pi once the newly ordered SD card arrives. I also hope to write code that triggers image capture and saves images using the Raspberry Pi and its camera module.

Cornelia

This week I started working on code to move the Roomba in at least four directions (forward, backward, left, and right). There is a lot of documentation (1, 2, 3) online for PyCreate2, the library we are using to move the Roomba with an RPi, so I have been diligently perusing it. There are also many past projects on Roomba movement that I am continuing to consult. In addition, our team as a whole has been meeting frequently to work on the Final Project Report as well as the Design Review Presentation slides for our presentation next week. This has involved meeting with our TA, discussing additional challenges like how the tripod will mount onto the Roomba, gathering more quantitative metrics for validation, and collating everything into a final slide deck. More specifically, I have been developing a diagram of our overall system to better visualize what our final product will look like, and doing calculations to figure out the number of sensors we need to cover a wide enough field of view.
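The sensor-count calculation itself is simple. Assuming, purely for illustration, that each sensor covers a fixed beam angle and the beams tile the arc edge to edge, the count is the ceiling of the desired coverage over the per-sensor beam width:

```python
import math

def sensors_needed(coverage_deg: float, beam_deg: float) -> int:
    """Minimum number of sensors whose beams tile the desired arc,
    assuming adjacent beams abut with no overlap or gaps."""
    return math.ceil(coverage_deg / beam_deg)
```

For example, covering a 180-degree forward arc with sensors that each see roughly 15 degrees (an assumed figure, not a measured one) would take `sensors_needed(180, 15)` = 12 sensors; real beam patterns overlap, so the actual number would come from testing.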

We found out this week that connecting the Roomba to the Raspberry Pi is more complicated than we thought and may require two individual connector heads that we join ourselves. Adriel is figuring this out through research and ordering the parts to do so. In the meantime, I won’t be able to test my movement code on the Roomba just yet. To catch up to our planned schedule, I will continue to work on the code remotely and load it onto the RPi to test with the Roomba the moment we get it connected.

Next week, I will continue developing the Roomba movement code and add object collision detection using the Roomba’s built-in sensors. When vacuuming (as it was originally intended), the Roomba bumps into walls or the legs of a chair or table and turns to move away from whatever it hit. Since we are overriding the Roomba’s original movement, I will be re-implementing that behavior in our own code.
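The bump-and-turn behavior could be expressed as a small decision function. This is only a sketch of the logic, assuming the Create 2's left and right bump switches as inputs (the action names are placeholders, not PyCreate2 calls):

```python
def avoidance_turn(bump_left: bool, bump_right: bool) -> tuple:
    """Decide how to move away from a bump.

    Turn away from the side that was hit; on a head-on bump (both
    switches triggered), back up first and then pick a side.
    """
    if bump_left and bump_right:
        return ("reverse", "turn_right")
    if bump_left:
        return ("turn_right",)
    if bump_right:
        return ("turn_left",)
    return ()  # no bump: keep driving
```

The real implementation would poll the bump sensor packet in a loop and translate these placeholder actions into drive commands.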

Camerazzi - CMU ECE Senior Design Capstone Project 2019