Yuhan Xiao’s Status Update for 12/5

Over Thanksgiving break I decided to keep using NoSQL instead of switching to a SQL database. The advantage of NoSQL is that its dynamic schema leaves room for future development. For example, if we want to create a “big ringtone” – one long ringtone consisting of 2-4 original ringtones (since the Create 2 Open Interface limits each original ringtone to 16 notes) – that would be much harder to support with a fixed schema. The dynamic schema also lets us change direction during development. Besides, we don’t need any advanced filtering or querying: the most “advanced” filter is probably selecting alarms by a specific user ID to get a user’s private alarm data; otherwise it is usually just “get all ringtones” or “get all alarms”.
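As a sketch of why the dynamic schema helps (all field names here are illustrative, not our actual schema), an original ringtone and a future “big ringtone” could live in the same collection without any migration:

```javascript
// Two document shapes coexisting in one hypothetical "ringtones"
// collection -- NoSQL does not force them into a single fixed schema.
const originalRingtone = {
  name: "morning-beeps",
  // the Create 2 Open Interface caps one song at 16 notes
  notes: [[72, 16], [74, 16], [76, 32]], // [midiNote, duration] pairs
};

const bigRingtone = {
  name: "morning-medley",
  // a "big ringtone" just references 2-4 original ringtones by name
  parts: ["morning-beeps", "evening-chimes"],
};

// A schema-free validity check that handles both shapes.
function isValidRingtone(doc) {
  if (Array.isArray(doc.notes)) return doc.notes.length <= 16;
  if (Array.isArray(doc.parts)) return doc.parts.length >= 2 && doc.parts.length <= 4;
  return false;
}
```

With a fixed SQL schema, the second shape would have needed a new join table and a migration up front.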

Deliverables completed this week:

  • continued testing communication between Pi and web app
    • implemented a logging feature for myself to keep track of the messages sent from web app to Pi
  • implemented user login
    • users can now view only their own private alarms, but both public ringtones and their own private ringtones
  • implemented alarm deletion
    • tested by validating the logs after manually creating and deleting alarms
  • implemented ringtone deletion
  • deployed website using pm2
    • live at http://ec2-3-129-61-132.us-east-2.compute.amazonaws.com:4000/
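The login visibility rule above boils down to two filters, sketched here as pure functions (field names like `ownerId` and `isPublic` are illustrative, not our actual schema):

```javascript
// Visibility rules after login (field names are illustrative):
// - alarms: a user sees only their own
// - ringtones: a user sees public ones plus their own private ones
function visibleAlarms(alarms, userId) {
  return alarms.filter((a) => a.ownerId === userId);
}

function visibleRingtones(ringtones, userId) {
  return ringtones.filter((r) => r.isPublic || r.ownerId === userId);
}
```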

Next Week:

  • complete final video + report
  • minor UI/UX touch-ups

Team Status Update for 12/5

Echo & Page:

  • Finished testing our completed version of the project against our metrics.
  • Filmed footage for the final video
  • Decided on the division of labor for the final video

We have finished the project on time and have started working on the final video and report.

Yuhan:

  • implemented a logging feature
  • implemented user login, so users can now view only their own private alarms, but both public ringtones and their own private ringtones
  • implemented alarm & ringtone deletion
  • deployed website using pm2

I am currently on schedule.

The website is currently live at http://ec2-3-129-61-132.us-east-2.compute.amazonaws.com:4000/

Peizhi Yu’s Status Update for 12/5

This week, we did our last in-class demo and finished off testing. We found the following test results against our metrics:

  • Delay from web app sending a signal to Raspberry Pi receiving it < 1s: we tested this by setting an alarm for 1:00 at 12:59:59 and checking whether the message was received by the Raspberry Pi within the next second.
  • From robot receiving the message to robot activating < 5s: the Raspberry Pi takes around 8.3s to initialize the camera stream and CV algorithm.
  • Delay from facial recognition to chase starting < 0.25s: achieved.
  • Fast image processing, ML pipeline FPS > 10: achieved around 13 FPS running our algorithm on the Raspberry Pi.
  • Accurate human detection, false positives: never occurred during testing.
  • Accurate human detection, false negatives: occurred under poor lighting conditions.
  • Effective chase duration > 30s, chase overall linear distance > 5m: achieved.
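The first latency check above amounts to a timestamp comparison between the send time logged by the web app and the receive time logged by the Pi. A minimal sketch (the helper name is mine; our actual test read the timestamps from the logs):

```javascript
// Sketch of the "< 1s delivery" check: compare when the web app sent
// the alarm message with when the Pi logged its receipt.
function deliveredWithin(sentMs, receivedMs, budgetMs = 1000) {
  const delay = receivedMs - sentMs;
  return delay >= 0 && delay < budgetMs;
}
```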

We also decided our schedule and division of work for the final video, and filmed many runs of interacting with the robot.

Next Week:

Finish the final video, start writing the final paper.

Yuhan Xiao’s Status Update for 11/21

This week I worked on debugging my current web app implementation, to make sure the audio plays correctly when the user is editing the ringtone.

I modified the code snippet on the Pi to start testing the reliability of the alarm-scheduling functionality. Specifically, I am testing whether the node-cron package works as intended. I am running the experiment for at least one week, so the results will be out by the next update at the earliest.
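node-cron takes standard 5-field cron expressions, so the scheduling under test boils down to translating an alarm time into one. A sketch (the helper name is mine, shown only to illustrate what node-cron is given):

```javascript
// Convert an alarm time (24h clock) into the 5-field cron expression
// that node-cron accepts, e.g. 13:00 -> "0 13 * * *".
// (Helper name is hypothetical, not from our actual code.)
function toCronExpression(hour, minute) {
  return `${minute} ${hour} * * *`;
}

// With node-cron it would be used roughly like this (not run here):
// const cron = require("node-cron");
// cron.schedule(toCronExpression(13, 0), () => sendAlarmMessage());
```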

Per the last update, I researched the possible SQL database options a bit more. For now I am leaning toward spinning up a local MySQL database on the EC2 instance to store the ringtone data.

I am currently on schedule.

This week (week of 11/16 – 11/22):

  • continue testing communication between Pi and web app
  • modify communication code snippet on RPi so it writes to files instead of printing to stdout (to keep a record of messages & when they are received, to validate reliability of the alarm scheduling)
  • research on database change

Next week (week of 11/23 – 11/29):

  • continue testing communication between Pi and web app
  • implement database change
  • implement alarm and ringtone deletion
  • implement ringtone creation by importing MIDI files

Peizhi Yu’s Status Update for 11/21

This week, I exhaustively tested the success rate and the limitations of our obstacle-dodging algorithm.

  • Due to limitations of the sensors on the robotic base, our algorithm cannot deal with transparent obstacles like a water bottle.
  • It works with an 80% success rate of not touching the obstacle and going around it. The obstacle should have a diameter of less than 15cm, otherwise it is recognized as a wall.

I identify the problems as the following:

  1. The robot uses an optical sensor based on reflections, so a transparent obstacle will not be detected by the sensors.
  2. The algorithm we are using essentially makes the robot turn when it comes close to an object, and continue moving forward when none of the sensors have readings. The problem with this approach is that only around 120 degrees of the robot’s front is covered by the 6 sensors. Although I added 0.25s of extra turning, the larger the obstacle, the likelier the robot is to scrape against it.
  3. Simply adding more extra rotation time will not fix the problem, because it would make our robot look kind of silly.
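The turn-when-close behavior in point 2 can be sketched as a single decision over the six front readings (threshold value and names are illustrative; our actual robot code differs):

```javascript
// Sketch of the avoidance logic in point 2: six front-facing proximity
// sensors covering ~120 degrees; turn while any reading exceeds a
// threshold, otherwise drive forward. Threshold is a made-up value.
const OBSTACLE_THRESHOLD = 200; // hypothetical raw sensor units

function nextAction(sensorReadings) {
  const blocked = sensorReadings.some((r) => r > OBSTACLE_THRESHOLD);
  // Anything outside the ~120 degree arc is a blind spot, which is why
  // a fixed 0.25s of extra turning still clips large obstacles.
  return blocked ? "turn" : "forward";
}
```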

Echo Gao’s Status Update for 11/21

This week I mainly worked on the hardware, namely how to fix the camera onto the robot so that it keeps the person in its view. I used SolidWorks to design the basic structure of the camera bracket and then imported the file for 3D printing. The 3D printer I used at home is a Longer 3D with transparent resin. It is not as accurate as the one at school, but due to COVID-19 I decided to work from home. The main constraint of this printer is that the shaft diameter and attachment points cannot be adjusted based on their locations on the object. So on my first test print, the shaft diameter was too small and the entire structure got distorted and shifted. On my second test print, I raised this parameter, but then the supports became harder to peel off. Each test print takes around 1-3 hours depending on the support settings. After peeling off the supports, I used sandpaper and various other tools to sculpt the two handles on the bracket so that they fit the holes on our USB camera.

Yuhan Xiao’s Status Update for 11/14

This week I helped Echo and Peizhi connect with and expand on the code snippet I wrote last week. The code snippet can receive and delete messages containing alarm data from the web app server in real time. For now it is live at http://3.137.164.67:3000/ (only when I spin up the server). Echo and Peizhi expanded the snippet so it can initiate the robot sequence when a message is received.

I helped debug authentication and connection issues with AWS SQS when running the snippet on the Pi. I also helped modify API endpoints according to the requirements of the Create 2 Open Interface, so the data received on the Pi can be used directly for ingestion.
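The receive-then-delete pattern the snippet uses against SQS looks roughly like this. This is a sketch with the client injected so it works with any SQS-like interface; the method names mirror SQS’s ReceiveMessage/DeleteMessage semantics, but the function and parameter names are mine:

```javascript
// Sketch of one polling pass: receive alarm messages, hand each one to
// the robot-sequence handler, then delete it from the queue. Deleting
// only after handling means unprocessed alarms stay queued for retry.
async function pollOnce(client, queueUrl, handleAlarm) {
  const { Messages = [] } = await client.receiveMessage({ QueueUrl: queueUrl });
  for (const msg of Messages) {
    handleAlarm(JSON.parse(msg.Body)); // e.g. kick off the robot sequence
    await client.deleteMessage({
      QueueUrl: queueUrl,
      ReceiptHandle: msg.ReceiptHandle,
    });
  }
  return Messages.length;
}
```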

I set up IAM users for both Echo and Peizhi, just in case they need to access web server or SQS console when they are debugging in the future.

I tried to move the web server from an Amazon Linux machine to Ubuntu in an attempt to deploy the website so it is always live, using Nginx, PM2, etc. It did not work in the end, after a few hours of debugging; it might have something to do with my project file structure. I might look into other services that help deploy a MERN stack or Node.js application, or readjust my overall file structure.

I am on schedule.

For system design changes: I am considering switching MongoDB out for Redshift/RDS/Aurora. Right now I have a fully working implementation using MongoDB, but I am concerned it might be overkill considering the simple structure and light load of the data we are working with. Hence I am considering switching from NoSQL to a SQL database. Using a database from the AWS family like Redshift might also be easier for integration. I could even just use a CSV file to store the data.

This week (week of 11/9 – 11/15):

  • prepared for interim demo on Wednesday
    • cleared and prepared sample alarm and schedule data
  • deployed current implementation to EC2 (Ubuntu)
  • tested code snippet on Pi
  • supported Echo and Peizhi in the integration of the code snippet into the main program running on Pi
  • set up access for Echo and Peizhi for easier debugging

Next week (week of 11/16 – 11/22):

  • continue testing communication between Pi and web app
  • modify communication code snippet on RPi so it writes to files instead of printing to stdout (to keep a record of messages & when they are received, to validate reliability of the alarm scheduling)
  • implement alarm and ringtone deletion
  • research on and implement database change

Week of 11/23 – 11/29:

  • continue testing communication between Pi and web app
  • implement ringtone creation by importing MIDI files

Team Status Update for 11/14

Echo & Page

Implemented a script on the Raspberry Pi and achieved communication between the back end of the web application and the Raspberry Pi. The Raspberry Pi will now receive a message from the back end at the user-specified alarm time. It will also receive and play the user-specified ringtone after activation.

Implemented wall detection in our final script. The robot will halt and get ready for the next request from the server after detecting a wall, based on our own algorithms.

A portable power supply has been tested and works, so we are now able to put everything onto the robot and start final optimization and testing.

We are on schedule.

Yuhan

This week (week of 11/9 – 11/15):

  • prepared for interim demo on Wednesday
    • cleared and prepared sample alarm and schedule data
  • deployed current implementation to EC2 (Ubuntu)
  • tested code snippet on Pi
  • supported Echo and Peizhi in the integration of the code snippet into the main program running on Pi
  • set up access for Echo and Peizhi for easier debugging

Next week (week of 11/16 – 11/22):

  • continue testing communication between Pi and web app
  • modify communication code snippet on RPi so it writes to files instead of printing to stdout (to keep a record of messages & when they are received, to validate reliability of the alarm scheduling)
  • implement alarm and ringtone deletion
  • research on and implement database change

I am on schedule.

Problems: refer to personal update.

Echo Gao’s Status Update for 11/14

This week, we finished: 1. integrating the Raspberry Pi with the web app, 2. adding the final stop motion (wall encountered) to our entire program, and 3. the in-class demo.

We coordinated with Yuhan on how the Raspberry Pi and her website would interact. The Raspberry Pi is now always ready to accept messages from the web. Users can set up an alarm time and a ringtone/song on our AWS website. At the scheduled time, the server sends the specified ringtone to our Raspberry Pi. The Pi parses the message and executes a shell command to wake the robot up, play the song, and run.

(set up time on web)

(received message from server)

This week we also integrated the “stop action” algorithm we developed last week into our main algorithm. Now the robot is fully functional. On the hardware side, the power bank we ordered for the Raspberry Pi was delivered, so the Pi can now be fully mounted on top of the robot. We have not yet found a way to fix the camera in place in a better and more elegant manner.

Next week, we will start testing, optimizing the algorithms, and measuring success rates.