Product Pitch

Team B1 : ChaseMe Alarm Clock

Can’t wake up on time for your Zoom meetings? Our team is building a no-snooze alarm clock that you have to catch to turn it off.

All you have to do is set an alarm through our web application. When it is time to wake up, your Roomba’s alarm will start ringing. As you approach and try to catch it, it recognizes you through face detection and moves away from you. The alarm and movement only stop when you back it into a wall. It is environmentally aware, so you don’t have to worry about it bumping into and damaging your furniture. You can also customize your ringtone on our web application.

If bright light, loud noise, and solving puzzles do not do it for you, try catching our robot chicken.


Yuhan Xiao’s Status Update for 12/5

Over Thanksgiving break I decided to keep using NoSQL instead of switching to a SQL database. NoSQL leaves room for future development: for example, building a “big ringtone” (one long ringtone made up of 2–4 original ringtones, since a single ringtone is limited to 16 notes by the Create 2 Open Interface) would be much harder with a fixed schema. Its dynamic schema also lets us change direction during development. Besides, we don’t need any advanced filtering or querying: the most “advanced” query is probably filtering alarms by a user ID to get that user’s private alarm data; otherwise it is usually just “get all ringtones” or “get all alarms”.
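As a concrete illustration of why the dynamic schema helps (the field names below are hypothetical, not our actual schema), a document store lets an ordinary ringtone and a composite “big ringtone” coexist in the same collection without a migration:

```python
# Hypothetical documents illustrating the dynamic-schema argument: a plain
# ringtone and a composite "big ringtone" can live in the same collection.
plain_ringtone = {
    "name": "Beep Beep",
    "public": True,
    "notes": [(69, 16), (71, 16)],  # (MIDI note, duration) pairs, capped at 16
}

big_ringtone = {
    "name": "Morning Medley",
    "public": False,
    "parts": ["Beep Beep", "Rooster"],  # references 2-4 original ringtones
}

# A fixed SQL schema would need new tables or columns for the "parts" field;
# a document store simply accepts the extra key.
print(sorted(big_ringtone.keys()))  # ['name', 'parts', 'public']
```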

Deliverables completed this week:

  • continued testing communication between Pi and web app
    • implemented a logging feature for myself to keep track of the messages sent from web app to Pi
  • implemented user login
    • users can now view only their own private alarms, but can see both public ringtones and their own private ones
  • implemented alarm deletion
    • tested by validating the logs after manually creating and deleting alarms
  • implemented ringtone deletion
  • deployed website using pm2
    • alive on http://ec2-3-129-61-132.us-east-2.compute.amazonaws.com:4000/
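The logging feature could be as simple as the following sketch (the file path and message format here are assumptions, not our exact implementation): each message from the web app is appended to a log file together with a receive timestamp, so the scheduling reliability can be validated later.

```python
# Sketch of the Pi-side message log (path and line format are assumptions):
# append each message received from the web app with a UTC receive timestamp.
import os
import tempfile
from datetime import datetime, timezone

def log_message(message, log_path):
    stamp = datetime.now(timezone.utc).isoformat()
    line = f"{stamp} {message}\n"
    with open(log_path, "a") as f:
        f.write(line)
    return line  # returned so the caller can inspect what was recorded

log_path = os.path.join(tempfile.gettempdir(), "webapp_to_pi.log")
entry = log_message("SET_ALARM 13:00", log_path)
print(entry.endswith("SET_ALARM 13:00\n"))  # True
```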

Next Week:

  • complete final video + report
  • minor UI/UX touch-ups

Echo Gao’s Status Update for 12/5

This week, we did our last demo in class and finished testing. Test results against our metrics:

  • Delay from the web app sending a signal to the Raspberry Pi receiving it < 1 s: we tested this latency by setting an alarm for 1:00 at 12:59:59 and checking whether the message reached the Raspberry Pi within the next second. 
  • From the robot receiving the message to the robot activating < 5 s: initializing the camera stream and CV algorithm on the Raspberry Pi takes around 8.3 s, which exceeds this target.
  • Delay from facial recognition to chase starting < 0.25 s: achieved.
  • Fast image processing, ML pipeline FPS > 10: achieved around 13 FPS running our algorithm on the Raspberry Pi.
  • Accurate human detection, false positives: none occurred during testing.
  • Accurate human detection, false negatives: occur under poor lighting conditions.
  • Effective chase duration > 30 s, overall linear chase distance > 5 m: achieved.
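The < 1 s web-app-to-Pi latency check above can be scripted against logged timestamps; a sketch (the timestamps below are illustrative, not real measurements):

```python
# Sketch: verify the web-app-to-Pi latency metric from a pair of logged
# timestamps. The example values are made up for illustration.
from datetime import datetime

def within_latency(sent_iso, received_iso, budget_s=1.0):
    sent = datetime.fromisoformat(sent_iso)
    received = datetime.fromisoformat(received_iso)
    return (received - sent).total_seconds() < budget_s

print(within_latency("2020-12-05T12:59:59", "2020-12-05T12:59:59.600000"))  # True
print(within_latency("2020-12-05T12:59:59", "2020-12-05T13:00:01"))         # False
```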

We also settled on our schedule and division of work for the final video, and filmed many runs of interaction with the robot.

Next Week:

Finish the final video and start writing the final paper.

Team Status Update for 12/5

Echo & Page:

  • Finished testing our completed version of the project against our metrics.
  • Filmed videos for final video
  • Decided on division of labor on the final video

We have finished the project on time and have started working on the final video and report.

Yuhan:

  • implemented a logging feature
  • implemented user login, so users can now view only their own private alarms, but can see both public ringtones and their own private ones
  • implemented alarm & ringtone deletion
  • deployed website using pm2

I am currently on schedule.

The website is currently alive at http://ec2-3-129-61-132.us-east-2.compute.amazonaws.com:4000/

Peizhi Yu’s Status Update for 12/5

This week, we did our last demo in class and finished testing. Test results against our metrics:

  • Delay from the web app sending a signal to the Raspberry Pi receiving it < 1 s: we tested this latency by setting an alarm for 1:00 at 12:59:59 and checking whether the message reached the Raspberry Pi within the next second. 
  • From the robot receiving the message to the robot activating < 5 s: initializing the camera stream and CV algorithm on the Raspberry Pi takes around 8.3 s, which exceeds this target.
  • Delay from facial recognition to chase starting < 0.25 s: achieved.
  • Fast image processing, ML pipeline FPS > 10: achieved around 13 FPS running our algorithm on the Raspberry Pi.
  • Accurate human detection, false positives: none occurred during testing.
  • Accurate human detection, false negatives: occur under poor lighting conditions.
  • Effective chase duration > 30 s, overall linear chase distance > 5 m: achieved.

We also settled on our schedule and division of work for the final video, and filmed many runs of interaction with the robot.

Next Week:

Finish the final video and start writing the final paper.

Yuhan Xiao’s Status Update for 11/21

This week I worked on debugging my current web app implementation to make sure the audio plays correctly when the user is editing the ringtone.

I modified the code snippet on the Pi to start testing the reliability of the alarm-scheduling functionality. Specifically, I am testing whether the node-cron package works as intended. I will run the experiment for at least one week, so results will be available by the next update at the earliest.
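To quantify the week-long node-cron experiment, a small analysis script could compute the worst firing delay from the log. The log format below (one `fired|scheduled` timestamp pair per line) is an assumption for illustration, not our actual format:

```python
# Sketch: compute the worst node-cron firing delay from a log where each
# line is "<actual-fire-time>|<scheduled-time>" (format is an assumption).
from datetime import datetime

def max_delay_seconds(log_text):
    worst = 0.0
    for line in log_text.strip().splitlines():
        fired_s, scheduled_s = line.split("|")
        fired = datetime.fromisoformat(fired_s)
        scheduled = datetime.fromisoformat(scheduled_s)
        worst = max(worst, (fired - scheduled).total_seconds())
    return worst

sample = (
    "2020-11-21T13:00:01|2020-11-21T13:00:00\n"
    "2020-11-21T14:00:03|2020-11-21T14:00:00\n"
)
print(max_delay_seconds(sample))  # 3.0
```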

Following up on the last update, I researched the possible SQL database options a bit more. For now I am leaning toward spinning up a local MySQL database on the EC2 instance for storing the ringtone data.

I am currently on schedule.

This week (11/16 – 11/22):

  • continue testing communication between Pi and web app
  • modify the communication code snippet on the RPi so it writes to files instead of printing to stdout (to keep a record of messages and when they are received, to validate the reliability of the alarm scheduling)
  • research on database change

Next week (11/23 – 11/29):

  • continue testing communication between Pi and web app
  • implement database change
  • implement alarm and ringtone deletion
  • implement ringtone creation by importing MIDI files
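For the MIDI-import item, the imported notes ultimately have to fit the Create 2 Open Interface Song command (opcode 140), which caps a song at 16 notes. A sketch of the packing (the helper name and truncation policy are my own illustrative choices):

```python
# Sketch: pack (MIDI note, duration) pairs into a Create 2 OI Song command.
# Opcode 140 and the 16-note cap come from the Create 2 OI spec; the helper
# name and the truncate-to-16 policy are illustrative assumptions.
SONG_OPCODE = 140
MAX_NOTES = 16

def pack_song(song_number, notes):
    notes = notes[:MAX_NOTES]  # enforce the OI's 16-note limit per song
    payload = [SONG_OPCODE, song_number, len(notes)]
    for midi_note, duration_64ths in notes:
        payload.extend([midi_note, duration_64ths])
    return bytes(payload)

cmd = pack_song(0, [(69, 32), (71, 32), (72, 64)])  # A4, B4, C5
print(len(cmd))  # 3 header bytes + 2 bytes per note = 9
```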

Peizhi Yu’s Status Update for 11/21

This week, I exhaustively tested the success rate and the limitations of our obstacle-dodging algorithm.

  • Due to limitations of the sensors on the robotic base, our algorithm can’t deal with transparent obstacles like a water bottle.
  • It works with an 80% success rate of going around the obstacle without touching it. The obstacle’s diameter should be less than 15 cm; otherwise it is recognized as a wall.

I identify the problems as the following:

  1. The robot uses optical sensors based on reflections, so a transparent obstacle is simply not detected.
  2. The algorithm we are using essentially makes the robot turn when it comes close to an object and continue moving forward when none of the sensors have readings. The problem with this approach is that the 6 sensors cover only around 120 degrees of the front. Although I added 0.25 s of extra turning, the larger the obstacle is, the likelier the robot is to scrape against it.
  3. Simply adding more rotation time will not fix the problem, because it makes our robot look kind of silly.