Yuhan Xiao’s Status Update for 11/7

As planned last week, this week I

  1. implemented the backend of the web app, using
    1. MongoDB – for storing ringtones and alarm schedules permanently
    2. SQS – for sending data to the Pi when an alarm is created
    3. the node-cron library (Node.js) – for scheduling the message sends mentioned above at each alarm's time (and repeating the send every week)
  2. switched the frontend data store from local storage to the actual cloud database, using axios (see the sketch after this list);
  3. wrote a code snippet that receives simple data from the web app backend in real time, to be incorporated into the main program deployed on the Pi.
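
For item 2, here is a minimal sketch of the idea (not the exact code): the /add-alarm route matches the backend pseudocode further down, but the /alarms route and the payload shape are assumptions for illustration.

// Hypothetical frontend helpers: persist alarms through the backend
// instead of window.localStorage. Field names are illustrative.
import axios from "axios";

async function saveAlarm(alarm) {
  // alarm = { day: "Friday", time: "22:00", ringtone: [...] }
  const res = await axios.post("/add-alarm", alarm);
  return res.data; // e.g. "alarm created"
}

async function loadSchedule() {
  // Fetch the persisted schedule for the "alarm schedule" page.
  // The /alarms route is an assumption.
  const res = await axios.get("/alarms");
  return res.data;
}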

Code for the web app is here, and the code snippet deployed on the Pi is here (private repositories; let me know if you need access).

I tested my implementation by scheduling an alarm through my frontend for about one minute after the current time, say at 10pm. I verified that the alarm was stored permanently by checking MongoDB Atlas (by the alarm, I mean the day, time and ringtone associated with the alarm set). I also verified that the “alarm schedule” page reflects an updated schedule with the new alarm. I waited for about one minute, and a message with the alarm data was sent from the web app backend to the SQS queue at exactly 10pm. The message was then received a few seconds later by the code snippet, also running on my machine.

One change to my system design is that I will be scheduling alarms on the web app server instead of on the Pi, since I noticed the node-cron library supports jobs that repeat every week, which is exactly the frequency of a defined communication between the web app and the Pi. The library also seems to have a solid user base and well-written documentation.
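
To make the scheduling concrete, here is a minimal sketch, assuming a hypothetical sendAlarmToQueue() helper that wraps the sqs.sendMessage() call from the pseudocode below, and assuming the alarm's day is stored as 0–6 (Sunday–Saturday):

var cron = require("node-cron");

function scheduleAlarm(alarm) {
  // node-cron uses standard cron syntax: minute hour day-of-month month day-of-week.
  // For example, "30 7 * * 1" fires every Monday at 07:30 and repeats weekly.
  var parts = alarm.time.split(":"); // e.g. "07:30"
  var expr = parts[1] + " " + parts[0] + " * * " + alarm.day;
  cron.schedule(expr, function() {
    sendAlarmToQueue(alarm); // hypothetical helper wrapping sqs.sendMessage()
  });
}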

The coding turned out to be a bit more intensive than planned, so I did not get to test the snippet on the Pi, or verify that the same alarm can reliably be started at the same time one week later. But overall, I am mostly on schedule.

In the next few weeks before the final demo, I am planning to set up a testing/measuring workflow (i.e. for the reliability, robustness and latency of the communication between the web app and the Pi), and to add more operations supported by the web app (e.g. alarm/ringtone deletion, ringtone creation by importing MIDI files).

This week:

  • implemented the backend with MongoDB
    • web app data pulled from & stored to the cloud
  • implemented communication between the web app and the Pi with SQS (message queue)
    • messages are sent from the web app backend (JavaScript, scheduled by node-cron), then received and deleted on the Pi (Python)
    • did not test it on the Pi yet; used my local machine instead

Next week (week of 11/9 – 11/15):

  • prepare for the interim demo on Wednesday
    • deploy the current implementation to EC2
    • clear and prepare sample alarm and schedule data
  • continue testing communication between the Pi and the web app
    • test the code snippet on the Pi
    • handle connection errors gracefully, especially when receiving messages
  • support Echo and Peizhi in integrating the code snippet into the main program running on the Pi
    • figure out a way to host the web app with minimum downtime and maximally secure access to EC2, MongoDB and SQS for Echo and Peizhi, for easier development and debugging (e.g. maybe setting up separate user accounts for them)

Week of 11/16 – 11/22:

  • continue testing communication between Pi and web app
  • implement alarm and ringtone deletion

Week of 11/23 – 11/29:

  • continue testing communication between Pi and web app
  • implement ringtone creation by importing MIDI files

Team Status Update for 11/7

Echo & Peizhi

This Week:

  • Installed heatsinks and a 3.5-inch screen, mounted in the shroud, for our Raspberry Pi 3.
  • Implemented wall detection using sensor data and a self-developed algorithm
  • Optimized the obstacle detection and avoidance algorithm

Problems:

  • Purchased the wrong product: expected a power bank, but it turned out to be a power adapter
  • The captured video does not fit on the 3.5-inch screen, even with a 100×150 display window

Next week:

  • Integration of the web application and the hardware platform
  • Further optimization & mounting the Pi and camera onto the base
  • Purchase a power bank

We are on schedule.

Yuhan

This week:

  • implemented the backend with MongoDB
    • web app data pulled from & stored to the cloud
  • implemented communication between the web app and the Pi with SQS (message queue)
    • messages are sent from the web app backend (JavaScript, scheduled by node-cron), then received and deleted on the Pi (Python)
    • did not test it on the Pi yet; used my local machine instead

Problems: Refer to my personal update.

Next week:

  • prepare for the interim demo on Wednesday
    • deploy the current implementation to EC2
    • clear and prepare sample alarm and schedule data
  • continue testing communication between the Pi and the web app
    • test the code snippet on the Pi
    • handle connection errors gracefully, especially when receiving messages
  • support Echo and Peizhi in integrating the code snippet into the main program running on the Pi
    • figure out a way to host the web app with minimum downtime and maximally secure access to EC2, MongoDB and SQS for Echo and Peizhi, for easier development and debugging

I am on schedule. For the plans for the next few weeks, refer to my personal update.

Peizhi Yu’s Status Update for 11/7

This week, we mainly dealt with (1) mounting the hardware pieces together, (2) implementing the stop action for the robot, (3) adjusting the size of the camera display on the LCD screen, and (4) fixing the sensor data error we encountered last week.

We started off by adding heatsinks to the Raspberry Pi and putting it into a case. A mistake we made: the power pack we ordered was not what we expected, so we need to reorder one, which dragged us a bit behind our intended schedule. (Without the power pack, the Raspberry Pi cannot be fully mounted on top of the robot, but everything else is in place by now.) Next, we spent a long, unplanned stretch of time implementing the final stop action for the robot: when the robot hits the wall, it should turn off. There is a “wall detection signal” in its sensor packet, which we thought could be used to detect the wall accurately, yet that built-in signal does not work as expected. So we needed to figure out a way for the robot to distinguish between an obstacle and a wall: if an obstacle is detected, the robot should rotate until the obstacle is out of its view and continue moving; if a wall is encountered, the robot should shut down. By looking at the sensor values returned from the robot, we found that when a wall is detected, 4 out of the 6 sensor values are greater than 100. The best approach was to take the median of all 6 sensor values: if the median is greater than 50, we declare that a wall is detected.

Last week, the problem we encountered was that if we called the get_sensor function excessively, the robot would sometimes blow up and return unintended values, which messed up our entire program. This problem was solved by contacting iRobot Create 2’s technical support; we followed the instructions provided, and our robot seems to be working fine now.

Next week, we will have our robot fully implemented, with the Raspberry Pi mounted on top of the robot, to give our product a cleaner and more elegant look. We will also coordinate with Yuhan on making the connection between the Raspberry Pi and her website, to set up alarm times and download user-specified ringtones.

Echo Gao’s Status Update for 11/7

This week, we mainly dealt with (1) mounting the hardware pieces together, (2) implementing the stop action for the robot, (3) adjusting the size of the camera display on the LCD screen, and (4) fixing the sensor data error we encountered last week.

We started off by adding heatsinks to the Raspberry Pi and putting it into a case. A mistake we made: the power pack we ordered was not what we expected, so we need to reorder one, which dragged us a bit behind our intended schedule. (Without the power pack, the Raspberry Pi cannot be fully mounted on top of the robot, but everything else is in place by now.) Next, we spent a long, unplanned stretch of time implementing the final stop action for the robot: when the robot hits the wall, it should turn off. There is a “wall detection signal” in its sensor packet, which we thought could be used to detect the wall accurately, yet that built-in signal does not work as expected. So we needed to figure out a way for the robot to distinguish between an obstacle and a wall: if an obstacle is detected, the robot should rotate until the obstacle is out of its view and continue moving; if a wall is encountered, the robot should shut down. By looking at the sensor values returned from the robot, we found that when a wall is detected, 4 out of the 6 sensor values are greater than 100. The best approach was to take the median of all 6 sensor values: if the median is greater than 50, we declare that a wall is detected.

Last week, the problem we encountered was that if we called the get_sensor function excessively, the robot would sometimes blow up and return unintended values, which messed up our entire program. This problem was solved by contacting iRobot Create 2’s technical support; we followed the instructions provided, and our robot seems to be working fine now.

Next week, we will have our robot fully implemented, with the Raspberry Pi mounted on top of the robot, to give our product a cleaner and more elegant look. We will also coordinate with Yuhan on making the connection between the Raspberry Pi and her website, to set up alarm times and download user-specified ringtones.

Yuhan Xiao’s Status Update for 10/31

This week I did some research on (1) how our remote web app server can communicate with the Pi (how alarm data is passed from one to the other), and (2) how alarm data is stored. As a result of my research, I decided to make some adjustments to our current system design: (1) AWS SQS is used to send data from the remote server to the Pi, and (2) cron jobs will be scheduled on the Pi instead of on the remote web app server.

My reason for change (1) is that MongoDB alone is not sufficient for real-time communication between the Pi and the remote server. Sure, when the user creates a new alarm on the frontend interface, the data can be sent to the backend and stored in MongoDB immediately, but MongoDB does not know if and which alarm is created in real time, and hence cannot pass the newly created alarm data to the Pi in real time. This means that instead of the Python Requests library on the Pi, we will use the AWS SQS Python SDK; on the remote server, the AWS SQS JavaScript (Node.js) SDK will be used.

While change (1) is absolutely necessary, change (2) is more up for debate and can be reverted later on. I made this change because it removes the potential latency between the web app and the Pi whenever an alarm starts. We did not initially plan to schedule cron jobs on the Pi because we wanted to make sure the Pi has sufficient computing power to run the CV code efficiently; I checked that a cron job does not use many resources (nor does the connection with SQS), and since only one cron job is scheduled per regular alarm, the Pi should probably do fine.

I have some pseudocode for this approach at the end of this update.

Since I spent more time on research, plus time on the ethics readings and big assignments from other classes, I have only written pseudocode instead of actual code. As a result, I am running a little behind schedule this week, but I will make up for it next week.

My plan for next week is to turn the pseudocode into actual code, with the exception of the addToCron() call on the last line of the Pi pseudocode.

On the Pi:

import boto3
import json
import logging

sqs = boto3.client('sqs')
queue_url = 'SQS_QUEUE_URL'

while True:
    # Long-poll the queue for newly created alarms from the web app backend.
    resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        alarm_data = json.loads(msg['Body'])
        logging.info(alarm_data)
        # Delete the message so it is not delivered again.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])
        addToCron(alarm_data["time"], alarm_data["ringtone"])  # not implemented yet
...

On the remote server (web app backend):

// Express init
var express = require("express");
var app = express();
app.use(express.json());

// MongoDB init
var mongoose = require("mongoose");
mongoose.Promise = global.Promise;
mongoose.connect("mongodb:<LOCAL HOST>");
var alarmSchema = new mongoose.Schema({
  time: String,
  ringtone: [Number]
});
var Alarm = mongoose.model("Alarm", alarmSchema);

// SQS setup
var AWS = require('aws-sdk');
AWS.config.update({region: 'REGION'});
var sqs = new AWS.SQS({apiVersion: '2012-11-05'});

app.post("/add-alarm", (req, res) => {
  var alarmData = new Alarm(req.body);
  alarmData.save() // alarm saved for permanent storage
    .then(item => {
      var msg = buildMsg(alarmData);
      // Alarm added to the queue, which the Pi reads from.
      sqs.sendMessage(msg, function(err, data) {
        if (err) { console.error('Alarm data was not passed to the Pi', err); }
      });
      res.send("alarm created");
    })
    .catch(err => {
      res.status(400).send("unable to create the alarm");
    });
});

const buildMsg = (alarmData) => {
  var msg = {
    MessageBody: JSON.stringify(alarmData), // payload
    QueueUrl: "SQS_QUEUE_URL"
  };
  return msg;
};
...

Peizhi’s Status Report for 10/31

Hardware portion almost finished:

After replacing our broken webcam with a USB camera, we continued our testing. This week, we combined our obstacle-avoiding algorithm with our human-avoiding algorithm; our alarm clock robot is now fully functional, with all our requirements reached. The complete algorithm is as follows: in its idle stage, the robot self-rotates while playing the song, as long as no person is detected through the camera. As soon as a person is detected, the robot starts moving in the direction away from him/her while trying to keep the person in the middle of the camera frame. If obstacles are encountered, the robot immediately self-rotates and moves in a direction where no obstacles are detected by its sensors. From that point on, the robot looks through its camera again to find the person: if the person is still in view, it performs the above action; if not, the robot self-rotates until it finds a person again. As one might see, avoiding obstacles takes priority over avoiding the user. The “finish action / alarm turn-off action” will be done next week; that is, when the robot runs into a wall, the entire program finishes. A problem we encountered is that the distance information we receive from the iRobot Create 2’s sensors for detecting nearby obstacles sometimes blows up to completely inaccurate numbers; in that case, our entire program runs into undefined behavior and crashes. We have not yet found a solution to this problem.

(avoiding both human and obstacles demo)

(avoiding human only demo)

Integration with Webapp:

We are now starting to work on sockets and on how the Raspberry Pi communicates with Yuhan’s web app. Next week, we will work on how to let the web app control turning our entire program on and off, and how the web app sends the ringtone and alarm time to the Raspberry Pi.

Team Status Update for 10/31

Peizhi & Echo:

We have almost finished the entire hardware portion of our project. Right now, we are on schedule.

This week:

  • finished integrating the human avoidance algorithm with the obstacle avoidance algorithm
  • finished tuning the parameters for the robot’s moving speed (might adjust later)
  • started looking into sockets and how the Raspberry Pi communicates with the web app

Next week:

  • mount the hardware on top of the robot base
  • finish the very last “alarm goes off” algorithm for when the robot hits the wall
  • integrate with Yuhan’s web app
  • optimize our algorithm
  • find a solution for when the sensor data received from the robot blows up

Yuhan:

This week:

  • researched possible ways of real-time communication from the web app to the Pi
    • tried to work with just MongoDB and failed
    • looked at Azure IoT Hub, RabbitMQ, etc.
    • settled on AWS SQS
  • wrote pseudocode for both the Pi and the web app/remote server

Next week:

  • borrow a Pi
  • turn the pseudocode into actual code
    • test that MongoDB works on the remote server (for permanent storage)
    • test that AWS SQS works from the remote server to the Pi (for real-time communication)

Echo Gao’s Status Update for 10/31

Hardware portion almost finished:

After replacing our broken webcam with a USB camera, we continued our testing. This week, we combined our obstacle-avoiding algorithm with our human-avoiding algorithm; our alarm clock robot is now fully functional, with all our requirements reached. The complete algorithm is as follows: in its idle stage, the robot self-rotates while playing the song, as long as no person is detected through the camera. As soon as a person is detected, the robot starts moving in the direction away from him/her while trying to keep the person in the middle of the camera frame. If obstacles are encountered, the robot immediately self-rotates and moves in a direction where no obstacles are detected by its sensors. From that point on, the robot looks through its camera again to find the person: if the person is still in view, it performs the above action; if not, the robot self-rotates until it finds a person again. As one might see, avoiding obstacles takes priority over avoiding the user. The “finish action / alarm turn-off action” will be done next week; that is, when the robot runs into a wall, the entire program finishes. A problem we encountered is that the distance information we receive from the iRobot Create 2’s sensors for detecting nearby obstacles sometimes blows up to completely inaccurate numbers; in that case, our entire program runs into undefined behavior and crashes. We have not yet found a solution to this problem.

(avoiding both human and obstacles demo)

(avoiding human only demo)

Integration with Webapp:

We are now starting to work on sockets and on how the Raspberry Pi communicates with Yuhan’s web app. Next week, we will work on how to let the web app control turning our entire program on and off, and how the web app sends the ringtone and alarm time to the Raspberry Pi.

Yuhan Xiao’s Status Update for 10/24

This week I finished the remaining essential web app UI development (the “create ringtone” page). I also enabled local data to pass between components (e.g. when you click submit on the form on the “create alarm” page, the new alarm shows up on the “alarm schedule” page, etc.) by using component state and local storage. This means all pages can now load alarm/ringtone data dynamically (rough sketch below).
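
As a rough sketch of that data passing (the "alarms" localStorage key and the field names are assumptions, not the actual code):

// Hypothetical helpers behind the pages: the form writes to localStorage,
// and the "alarm schedule" page reads it back to render dynamically.
function addAlarm(alarm) {
  var alarms = JSON.parse(localStorage.getItem("alarms") || "[]");
  alarms.push(alarm); // e.g. { day: "Friday", time: "22:00", ringtone: [...] }
  localStorage.setItem("alarms", JSON.stringify(alarms));
}

function loadAlarms() {
  return JSON.parse(localStorage.getItem("alarms") || "[]");
}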

This also means I am currently on schedule and the web app is ready for backend integration next week as planned.

Scroll down to the bottom to see screenshots of all 3 pages of the web app. But since there are musical components to the web app (e.g. when you create a ringtone, you can preview the ringtone or an individual note; you can also preview a ringtone on the “alarm schedule” page, etc., like a normal built-in phone alarm would let you do), clone this GitHub repo (email me to be granted access), then run “npm install” and “npm start” to host the web app locally. I realized the EC2 instance shuts down my web app hosting after a while, so if you want to interact with the web app, you need to clone the repo.

Here are a few things that did not really go as planned:

  1. Right now the “create ringtone” page supports a narrower range of notes (6 octaves) than the Create 2 robot base can support (8 octaves). This is due to a problem with the chosen audio playing library: above the 6th octave, it stops playing audible sounds. I have not had time to investigate the reason yet, but since 6 octaves should suffice, I will look into this after backend integration, if I have the time.
  2. I wanted to make a more intuitive interface for the “create ringtone” page, but since there was not enough time, I also had to put it in the backlog and look into it after backend integration (with the database for permanent storage, and with the Pi).

My schedule(same as last week’s):

  • week 8 (this week):
    • create ringtone UI
    • pass alarm data from the “create alarm” page to the “alarm schedule” page
    • pass ringtone data from the “create ringtone” page to the “create alarm” page
  • week 9 (next week):
    • backend setup
    • connect the UI with the backend
    • connect the web app with the Pi

Echo Gao’s and Peizhi Yu’s Status Update for 10/24

Computer vision: No change; the Raspberry Pi keeps running at 10 fps whenever we test.

Hardware & communication between the Raspberry Pi and the robot base: This week, we finished implementing the idle stage action: if the robot does not detect a person from the Pi camera, it keeps self-rotating. Next, we finished the obstacle-handling algorithm: if an object is detected by the robot base’s 4 light bump sensors (right, left, front-left, front-right), the robot first self-rotates until the obstacle is out of its sensors’ detection range, then moves forward for another second. At this point, the person is probably not in the camera’s view, so the robot returns to its idle stage: it self-rotates until the camera catches the person, then performs the next move. Here, we are testing with bottle-sized obstacles scattered sparsely on the ground. We assume that during the “moving forward for another second” action, there are no other obstacles in the way; otherwise the robot would push an obstacle away instead of avoiding it.

This is an illustration of how the robot moves away to avoid obstacles (no camera & person involved):

Accident encountered: Later, when we were trying to integrate our obstacle-handling algorithm back into our main code, we realized that our Raspberry Pi camera was very likely broken. We identified it as a camera hardware problem from this error message: “camera control callback no data received from sensor”. We are now trying to verify our guess and buy a new Pi camera. Next week, we will solve this problem and start thinking about how the web app communicates with the Raspberry Pi.