Echo Gao’s Status Update for 12/5

This week, we gave our last in-class demo and finished testing. We measured the following results against our metrics:

  • Delay from the web app sending the signal to the Raspberry Pi receiving it < 1s: We tested this latency by setting an alarm for 1:00 at 12:59:59 and checking whether the Raspberry Pi received the message within the next second.
  • From message received to robot activated < 5s: Initializing the camera stream and the CV algorithm on the Raspberry Pi takes around 8.3s, which exceeds the 5s target.
  • Delay from facial recognition to chase starting < 0.25s: Achieved.
  • Fast image processing (ML pipeline FPS > 10): Achieved, around 13 FPS running our algorithm on the Raspberry Pi (a rough sketch of how such a figure can be measured appears after this list).
  • Accurate human detection (false positives): none occurred during testing.
  • Accurate human detection (false negatives): occurred under poor lighting conditions.
  • Effective chase duration > 30s, overall linear chase distance > 5m: Achieved.
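
As a rough illustration of how an FPS figure like the one above can be measured (the detect_human function and camera setup below are placeholders, not our actual pipeline):

    import time
    import cv2

    def detect_human(frame):
        """Placeholder for the actual ML detection step."""
        ...

    cap = cv2.VideoCapture(0)            # USB camera on the Pi
    frames, start = 0, time.time()
    while frames < 100:                  # average over 100 frames
        ok, frame = cap.read()
        if not ok:
            break
        detect_human(frame)              # run the pipeline on this frame
        frames += 1
    print(f"average FPS: {frames / (time.time() - start):.1f}")
    cap.release()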

We also set our schedule and division of work for the final video, and filmed many runs of interacting with the robot.

Next Week:

Finish the final video and start writing the final paper.

Team Status Update for 12/5

Echo & Page:

  • Finished testing our completed version of the project against our metrics.
  • Filmed footage for the final video
  • Decided on the division of labor for the final video

We have finished the project on time and have started working on the final video and report.

Yuhan:

  • implemented a logging feature
  • implemented user login: users can now view only their own private alarms, while for ringtones they can view both public ones and their own private ones (see the sketch after this list)
  • implemented alarm & ringtone deletion
  • deployed website using pm2
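
The visibility rules above reduce to two query filters. The real backend is JavaScript, but an illustrative Python/pymongo version (collection and field names are hypothetical) could look like this:

    # Hypothetical collection and field names; the real backend is JavaScript.
    def visible_alarms(db, user_id):
        # Alarms are strictly private: only the owner's alarms are returned.
        return list(db.alarms.find({"owner": user_id}))

    def visible_ringtones(db, user_id):
        # Ringtones: all public ones, plus the user's own private ones.
        return list(db.ringtones.find(
            {"$or": [{"public": True}, {"owner": user_id}]}
        ))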

I am currently on schedule.

The website is currently live at http://ec2-3-129-61-132.us-east-2.compute.amazonaws.com:4000/

Peizhi Yu’s Status Update for 12/5

This week, we gave our last in-class demo and finished testing. We measured the following results against our metrics:

  • Delay from the web app sending the signal to the Raspberry Pi receiving it < 1s: We tested this latency by setting an alarm for 1:00 at 12:59:59 and checking whether the Raspberry Pi received the message within the next second.
  • From message received to robot activated < 5s: Initializing the camera stream and the CV algorithm on the Raspberry Pi takes around 8.3s, which exceeds the 5s target.
  • Delay from facial recognition to chase starting < 0.25s: Achieved.
  • Fast image processing (ML pipeline FPS > 10): Achieved, around 13 FPS running our algorithm on the Raspberry Pi.
  • Accurate human detection (false positives): none occurred during testing.
  • Accurate human detection (false negatives): occurred under poor lighting conditions.
  • Effective chase duration > 30s, overall linear chase distance > 5m: Achieved.

We also set our schedule and division of work for the final video, and filmed many runs of interacting with the robot.

Next Week:

Finish the final video and start writing the final paper.

Peizhi Yu’s Status Update for 11/21

This week, I exhaustively tested the success rate and the limitations of our obstacle-dodging algorithm.

  • Due to limitations of the sensors on the robotic base, our algorithm cannot handle transparent obstacles such as a water bottle.
  • It works with an 80% success rate of going around the obstacle without touching it. The obstacle must have a diameter of less than 15cm; otherwise it is recognized as a wall.

I identified the problems as the following:

  1. The robot uses optical sensors based on reflections, so a transparent obstacle will not be detected.
  2. The algorithm we are using essentially makes the robot turn when it comes close to an object, and continue moving forward once none of the sensors have readings. The problem with this approach is that only around 120 degrees of the front is covered by the 6 sensors. Although I added an extra 0.25s of turning, the larger the obstacle, the likelier the robot is to scrape against it (a simplified sketch of this loop appears after this list).
  3. Simply adding more extra rotation time will not fix the problem, because it would make our robot look kind of silly.
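
For reference, a simplified sketch of the avoidance loop described in point 2; the sensor and motion calls are placeholders rather than the actual Create 2 interface we use:

    PROXIMITY_THRESHOLD = 100   # hypothetical reading that counts as "close"
    EXTRA_TURN_S = 0.25         # the extra rotation mentioned in point 2

    def read_front_sensors(robot):
        """Placeholder: returns the 6 front light-bumper values."""
        ...

    def avoid_obstacle(robot):
        # Rotate for as long as any of the 6 sensors still sees the obstacle.
        while any(v > PROXIMITY_THRESHOLD for v in read_front_sensors(robot)):
            robot.spin(duration=0.05)      # hypothetical small rotation step

        # The sensors cover only ~120 degrees of the front, so a wide
        # obstacle can still clip the robot's side; turn a bit longer
        # before driving forward again.
        robot.spin(duration=EXTRA_TURN_S)
        robot.drive_forward()              # hypothetical motion command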

Echo Gao’s Status Update for 11/21

This week I mainly worked on the hardware, namely how to fix the camera onto the robot so that it keeps the person in its view. I used SolidWorks to design the basic structure of the camera bracket and then exported the file for 3D printing. The 3D printer I used at home is a Longer 3D printer with transparent resin. It is not as accurate as the one at school, but due to COVID-19 I decided to work at home. The main constraint of this printer is that the shaft diameter and attachment points cannot be adjusted based on their location on the object. As a result, on my first test print the shaft diameter was too small, and the entire structure got distorted and shifted. On the second test print I raised this parameter, but then the supports became harder to peel off. Each test print takes around 1-3 hours depending on the support settings. After peeling off the supports, I used sandpaper and various other tools to sculpt the two handles on the bracket so that they fit the holes on our USB camera.

Team Status Update for 11/14

Echo & Page

Implemented a script on the Raspberry Pi and achieved communication between the back end of the web application and the Raspberry Pi. The Raspberry Pi now receives a message from the back end at the user-specified alarm time. It also receives and plays the user-specified ringtone after activation.

Implemented wall detection in our final script. The robot will halt and get ready for the next request from the server after detecting a wall with our own algorithm.

The portable power supply has been tested and works, so we are now able to put everything onto the robot and start final optimization and testing.

We are on schedule.

Yuhan

This week (week of 11/9 – 11/15):

  • prepared for interim demo on Wednesday
    • cleared and prepared sample alarm and schedule data
  • deployed current implementation to EC2 (Ubuntu)
  • tested code snippet on Pi
  • supported Echo and Peizhi in the integration of the code snippet into the main program running on Pi
  • set up access for Echo and Peizhi for easier debugging

Next week (week of 11/16 – 11/22):

  • continue testing communication between Pi and web app
  • modify the communication code snippet on the RPi so it writes to files instead of printing to stdout, to keep a record of messages and when they are received and thus validate the reliability of the alarm scheduling (a rough sketch appears after this list)
  • implement alarm and ringtone deletion
  • research on and implement database change
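
The file-logging change in the second item could be as small as the following sketch (the log path is hypothetical):

    from datetime import datetime

    LOG_PATH = "/home/pi/alarm_messages.log"   # hypothetical location

    def log_message(body):
        # Record each message and its arrival time so we can later verify
        # that every scheduled alarm actually reached the Pi on time.
        with open(LOG_PATH, "a") as f:
            f.write(f"{datetime.now().isoformat()}  {body}\n")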

I am on schedule.

Problems: refer to personal update.

Echo Gao’s Status Update for 11/14

This week, we finished: 1. integrating the Raspberry Pi with the web app, 2. adding the final stop motion (wall encountered) to our program, and 3. the in-class demo.

We coordinated with Yuhan on how the Raspberry Pi and her website would interact. The Raspberry Pi is now always ready to accept messages from the web. Users can set an alarm time and a ringtone/song on our AWS-hosted website. At the scheduled time, the server sends the specified ringtone to the Raspberry Pi, which parses the message, executes a shell command to wake the robot up, plays the song, and runs. (A rough sketch of the Pi-side handling appears below the screenshots.)

(Screenshot: setting the alarm time on the web app)

(Screenshot: message received from the server)
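
A rough sketch of the Pi-side handling described above; the message format, script name, and audio player are assumptions for illustration, not necessarily what our final code uses:

    import json
    import subprocess

    def handle_alarm_message(raw):
        # Hypothetical message format; the real field names may differ.
        msg = json.loads(raw)
        ringtone = msg["ringtone"]

        # Launch the chase program (script name is a stand-in for ours).
        subprocess.Popen(["python3", "run_robot.py"])

        # Play the user-specified ringtone, assuming a command-line
        # player such as mpg123 is installed on the Pi.
        subprocess.Popen(["mpg123", ringtone])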

This week we also integrated the “stop action” algorithm we developed last week into our main algorithm. Now the robot is fully functional. On the hardware side, the power bank we ordered for the Raspberry Pi was delivered, so the Pi can now be fully mounted on top of the robot. We have not yet found a way to fix the camera in a better and more elegant manner.

Next week, we will start testing, optimizing the algorithms, and measuring success rates.

Team Status Update for 11/7

Echo & Peizhi

This Week:

  • Installed heatsinks on our Raspberry Pi 3 and mounted a 3.5-inch screen in its shroud.
  • Implemented a wall-detection algorithm of our own, driven by the sensor data
  • Optimized the obstacle detection and avoidance algorithm

Problems:

  • Purchased the wrong product: we expected a power bank, but it turned out to be a power adaptor
  • The captured video does not fit on the 3.5-inch screen, even with a 100×150 display window

Next week:

  • Integration for web application and hardware platform
  • Further optimization; mount the Pi and camera onto the base
  • Purchase a power bank

We are on schedule.

Yuhan

This week:

  • implemented backend with MongoDB
    • web app data is pulled from and stored to the cloud
  • implemented communication between the web app and the Pi with SQS (message queue)
    • messages are sent from the web app backend (JavaScript, scheduled by node-cron), then received and deleted on the Pi (Python); a minimal sketch of the receive side appears after this list
    • not yet tested on the Pi; used a local machine instead
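
A minimal sketch of the Pi-side receive-and-delete step, assuming a boto3 client and a hypothetical queue URL:

    import boto3

    # Hypothetical queue URL; the real one is configured on the backend.
    QUEUE_URL = "https://sqs.us-east-2.amazonaws.com/123456789012/alarm-queue"

    sqs = boto3.client("sqs", region_name="us-east-2")

    def poll_once():
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,      # long polling to reduce empty responses
        )
        for msg in resp.get("Messages", []):
            print(msg["Body"])       # hand the alarm payload to the robot code
            # Delete so the same alarm is not delivered twice.
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])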

Problems: Refer to my personal update.

Next week:

  • prepare for interim demo on Wednesday
    • deploy current implementation to EC2
    • clear and prepare sample alarm and schedule data
  • continue testing communication between Pi and web app
    • test code snippet on Pi
    • handle connection errors gracefully, especially for receiving messages
  • support Echo and Peizhi in the integration of the code snippet into the main program running on Pi
    • figure out a way to host the web app with minimal downtime and secure access to EC2, MongoDB, and SQS for Echo and Peizhi, for easier development and debugging

I am on schedule. For the plans for the next few weeks, refer to my personal update.

Peizhi Yu’s Status Update for 11/7

This week, we mainly dealt with: 1. mounting the hardware pieces together, 2. implementing the stop action for the robot, 3. adjusting the size of the camera display on the LCD screen, and 4. fixing the sensor data error we encountered last week.

We started off by adding heatsinks to the Raspberry Pi and putting it into a case. One mistake we made: the power pack we ordered was not what we expected, so we need to reorder a new one, which dragged us a bit behind our intended schedule. (Without the power pack, the Raspberry Pi cannot be fully mounted on top of the robot, but everything else is in place by now.)

Next, we spent more time than intended implementing the final stop action for the robot: when the robot reaches a wall, it should turn off. There is a “wall detection signal” in the sensor packet, which we thought could be used to accurately detect the wall, yet that built-in signal does not work as expected. So we needed a way for the robot to distinguish between an obstacle and a wall: if an obstacle is detected, the robot should rotate until the obstacle is out of its view and continue moving; if a wall is encountered, the robot should shut down. By looking at the sensor values returned from the robot, we found that when a wall is detected, 4 out of the 6 sensor values are greater than 100. The best approach was to take the median of all 6 sensors: if the median is greater than 50, we say a wall is detected.
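
The resulting wall test is small; a sketch of the rule described above, using the thresholds we measured:

    from statistics import median

    WALL_MEDIAN_THRESHOLD = 50

    def is_wall(sensor_values):
        """sensor_values: the 6 front light-bumper readings from the Create 2.

        Facing a wall, we observed 4 of the 6 readings exceed 100, so the
        median of all 6 is a robust wall indicator; a single small obstacle
        only drives one or two sensors high.
        """
        return median(sensor_values) > WALL_MEDIAN_THRESHOLD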

Last week, the problem we encountered was that if we called the get_sensor function excessively, the robot would sometimes glitch and return unintended values, which messed up our entire program. This problem was solved by contacting iRobot Create 2’s technical support: we followed the instructions provided, and our robot seems to be working fine now.


Next week, we will have the robot fully assembled, with the Raspberry Pi mounted on top, to give our product a cleaner and more elegant look. We will also coordinate with Yuhan on connecting the Raspberry Pi to her website to set up the alarm time and download the user-specified ringtone.