Team Status Update for 10/31

Peizhi & Echo:

We have almost finished the hardware portion of our project. Right now, we are on schedule.

This week:

  • finished integrating the human-avoidance algorithm with the obstacle-avoidance algorithm
  • finished tuning the parameters for the robot’s moving speed (we might adjust them later)
  • started looking into sockets and how the Raspberry Pi communicates with the web app

Next week:

  • mount the hardware on top of the robot base
  • finish the last “alarm turns off” behavior for when the robot hits a wall
  • integrate with Yuhan’s web app
  • optimize our algorithm
  • find a solution for the case where sensor data received from the robot blows up to inaccurate values

Yuhan:

This week:

  • researched possible ways to achieve real-time communication from the web app to the Pi
    • tried to work with just MongoDB and failed
    • looked into Azure IoT Hub, RabbitMQ, etc.
    • settled on AWS SQS
  • wrote pseudocode for the code hosted on both the Pi and the web app/remote server

Next week:

  • borrow a Pi
  • turn the pseudocode into actual code
    • test that MongoDB works on the remote server (for permanent storage)
    • test that AWS SQS works from the remote server to the Pi (for real-time communication); see the sketch below
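As a rough starting point for that test, here is a minimal sketch of the Pi-side consumer, assuming boto3 and a queue we have not actually created yet; the queue URL, region, and message fields are placeholders:

```python
# Minimal sketch of the Pi polling an SQS queue for alarm commands.
# The queue URL, region, and message fields are hypothetical placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/alarm-queue"  # placeholder

def poll_once():
    """Long-poll the queue once and return the decoded command, if any."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,          # long polling keeps the Pi from busy-waiting
    )
    for msg in resp.get("Messages", []):
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return json.loads(msg["Body"])   # e.g. {"command": "start", "alarm_time": ...}
    return None
```

Long polling (WaitTimeSeconds) is the reason SQS looked attractive here: the Pi can block on the queue instead of hammering the web server.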

Echo Gao’s Status Update 10/31

Hardware portion almost finished:

After replacing our broken camera with a USB camera, we continued our testing. This week, we combined our obstacle-avoidance algorithm with our human-avoidance algorithm. Our alarm clock robot is now fully functional, with all of our requirements reached. The complete algorithm is as follows: when no person is detected through the camera, the robot self-rotates in its idle stage while playing the song. As soon as a person is detected, the robot starts moving away from him/her while trying to keep the person centered in the camera frame. If an obstacle is encountered, however, the robot immediately self-rotates and moves in a direction where its sensors detect no obstacles. From that point on, it looks through its camera again to find the person: if the person is still in view, it performs the actions above; if not, it self-rotates until it finds a person again. As one can see, avoiding obstacles takes priority over avoiding the user. The “finish action / alarm turn-off action” will be done next week: when the robot runs into a wall, the entire program finishes.

A problem we encountered is that the distance readings we receive from the iRobot Create 2’s sensors for detecting nearby obstacles sometimes blow up to completely inaccurate numbers. In that case, our program runs into undefined behavior and crashes. We have not yet found a solution to this problem.
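To make the priority ordering concrete, here is a minimal sketch of one control-loop iteration. The helper callables, the threshold values, and the sanity check on sensor readings are illustrative assumptions, not our actual implementation:

```python
# Sketch of one iteration of the combined avoidance loop. The helper callables
# stand in for our CV and motion code; the thresholds are illustrative. The
# sane() check is one candidate guard against sensor readings that blow up.
SENSOR_MAX = 4095          # assumed upper bound on a valid light-bump signal
OBSTACLE_THRESHOLD = 200   # assumed "obstacle nearby" cutoff; tune experimentally

def sane(reading):
    """Ignore readings outside the plausible range instead of crashing on them."""
    return reading if 0 <= reading <= SENSOR_MAX else None

def control_step(bot, camera, read_light_bumpers, detect_person,
                 rotate_in_place, move_away_keeping_centered):
    readings = [sane(r) for r in read_light_bumpers(bot)]
    if any(r is not None and r > OBSTACLE_THRESHOLD for r in readings):
        rotate_in_place(bot)                 # obstacle avoidance takes priority
        return
    person = detect_person(camera)
    if person is None:
        rotate_in_place(bot)                 # idle stage: spin until someone appears
    else:
        move_away_keeping_centered(bot, person)
```

Dropping or clamping out-of-range readings like this is one candidate fix for the blow-up problem described above; we still need to confirm what range the real sensors report before committing to a threshold.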

(avoiding both human and obstacles demo)

 

(avoiding human only demo)

Integration with Webapp:

Now, we are starting to work on sockets and how the Raspberry Pi communicates with Yuhan’s web app. Next week, we will work on how to let the web app turn our entire program on and off, and how the web app sends the ringtone and alarm time to the Raspberry Pi.
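One possible shape for those messages is sketched below. The field names are placeholders we have not agreed on yet, and the note encoding simply mirrors the Create 2 Open Interface song format (a MIDI note number followed by a duration in 1/64ths of a second):

```python
import json

# Hypothetical web-app-to-Pi message; field names are placeholders, not a final protocol.
sample_message = json.dumps({
    "command": "start",                     # "start" / "stop" toggles the whole program
    "alarm_time": "2020-11-02T08:00:00",    # ISO 8601 alarm time
    "ringtone": [72, 16, 76, 16, 79, 32],   # alternating (MIDI note, duration in 1/64 s)
})

def handle_message(raw, start_program, stop_program, set_alarm):
    """Pi-side dispatch; the three callables stand in for our real entry points."""
    msg = json.loads(raw)
    if msg["command"] == "stop":
        stop_program()
    else:
        set_alarm(msg["alarm_time"], msg["ringtone"])
        start_program()
```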

Yuhan Xiao’s Status Update for 10/24

This week I finished the remaining essential web app UI development (the “create ringtone” page). I also enabled local data to pass between components (e.g. when you click submit on the form on the “create alarm” page, the alarm shows up on the “alarm schedule” page) through component state and local storage. This means all pages can now load alarm/ringtone data dynamically.

This also means I am currently on schedule and the web app is ready for backend integration next week as planned.

Scroll down to the bottom to see screenshots of all three pages of the web app. Since there are musical components to the web app (e.g. when you create a ringtone you can preview the whole ringtone or an individual note, and you can also preview ringtones on the “alarm schedule” page, much like a built-in phone alarm lets you do), clone this GitHub repo (email me to be granted access), then run “npm install” and “npm start” to host the web app locally. I realized the EC2 instance shuts down my web app hosting after a while, so if you want to interact with the web app, you need to clone the repo.

Here are a few things that did not really go as planned:

  1. Right now the “create ringtone” page supports a narrower range of notes (6 octaves) than the Create 2 robot base can support (8 octaves). This is due to a problem with the audio-playing library we chose: past the 6th octave, it stops producing audible sounds. I have not had time to investigate the reason yet, but since 6 octaves should suffice, I will look into this after backend integration if I have the time.
  2. I wanted to make a more intuitive interface for the “create ringtone” page, but since there is not enough time for it, I have to put it in the backlog as well and look into it after backend integration (with the database for permanent storage and with the Pi).

My schedule (same as last week’s):

  • week 8 (this week):
    • create ringtone UI
    • pass alarm data from create alarm page to alarm schedule page
    • pass ringtone data from create ringtone page to create alarm page
  • week 9 (next week):
    • backend set up
    • connect UI with backend
    • connect web app with Pi

Echo Gao’s and Peizhi Yu’s Status Update for 10/24

Computer Vision: No change; the Raspberry Pi keeps running at 10 fps whenever we test.

Hardware & Communication between Raspberry Pi and Robot base: This week, we finished implementing the idle-stage action: if the robot does not detect a person through the Pi camera, it keeps self-rotating. Next, we finished the obstacle-handling algorithm: if an object is detected by any of the robot base’s four light bump sensors (right, left, front-left, front-right), the robot first self-rotates until the obstacle is out of the sensors’ detection range, then moves forward for another second. At this point the person is probably no longer in the camera’s view, so the robot returns to its idle stage: it self-rotates until the camera catches the person again and then performs the next move. Here, we are testing with bottle-sized obstacles scattered sparsely on the ground. We are assuming that during the “move forward for another second” action there are no other obstacles in the robot’s way; otherwise it would push the obstacles away instead of avoiding them.
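A rough sketch of that obstacle-handling step is below. It assumes pycreate2’s get_sensors() exposes the light bump signal fields under these names and that drive_direct() takes (right, left) wheel velocities in mm/s; the threshold, speeds, and serial port are illustrative, not our tuned values:

```python
import time
from pycreate2 import Create2

OBSTACLE_THRESHOLD = 200   # assumed light-bump signal cutoff; tune experimentally

def obstacle_in_range(bot):
    """True if any of the four light bump sensors we use reports a nearby object."""
    s = bot.get_sensors()
    signals = (s.light_bumper_left_signal, s.light_bumper_front_left_signal,
               s.light_bumper_front_right_signal, s.light_bumper_right_signal)
    return any(sig > OBSTACLE_THRESHOLD for sig in signals)

def avoid_obstacle(bot):
    # Self-rotate until none of the sensors see the obstacle...
    while obstacle_in_range(bot):
        bot.drive_direct(60, -60)      # spin in place
        time.sleep(0.1)
    # ...then move forward for one more second before returning to the idle stage.
    bot.drive_direct(100, 100)
    time.sleep(1.0)
    bot.drive_stop()

if __name__ == "__main__":
    bot = Create2("/dev/ttyUSB0")      # placeholder serial port
    bot.start()
    bot.safe()
    avoid_obstacle(bot)
```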

This is an illustration of how the robot moves to avoid obstacles (no camera or person involved):

Accident encountered: Later, when we were trying to integrate our obstacle-handling algorithm back into our main code, we realized that our Raspberry Pi camera was very likely broken. We identified it as a camera hardware problem from this error message: “camera control callback no data received from sensor”. We are now trying to verify our guess and buy a new Pi camera. Next week, we will solve this problem and start thinking about how the web app communicates with the Raspberry Pi.

Team Status Update for 10/24

Peizhi & Echo

We are about 1 week ahead of our schedule.

This week:

  • Obstacle detection & reaction
  • Rotation when idle/lost track

Next week:

  • Pinpoint the issue with the Pi camera (likely burnt out) and purchase a new one if necessary
  • Purchase a Raspberry Pi tool kit and mount the Pi + camera on the robot base
  • Integrate the obstacle detection script with our main Computer Vision script

Yuhan

My schedule is unchanged and posted in my individual status update.

There is no significant change to the web app system design either.

Refer to my individual update for screenshots of the web app.

The current priority/risk is integrating the Pi with the web app. I will be focused on it from next week onwards. To mitigate this risk, my plan is to go over the following test cases:

  1. send data for one alarm to Pi, and see:
    1. if the robot base can be activated within a reasonable latency/in real time;
    2. if the robot base can play the ringtone sequence correctly.
  2. schedule one alarm, and see if the robot base can be activated at scheduled time;
  3. schedule 3 alarms in a row;
  4. schedule one alarm 3 times;
  5. schedule one alarm with a specified interval, and see if the robot can be repeatedly activated reliably.

Peizhi Yu’s Status Report 10/17

Computer Vision: No change; the Raspberry Pi keeps running at 10 fps whenever we test.

Hardware & Communication between raspberry pi and Robot base:

As mentioned in last week’s report, since we found out that multithreading does not work on the Raspberry Pi 3, we decided to use a counter to keep track of time. The robot does not move while it is in the idle stage. When it first detects a person in the camera, it starts to perform the move-away action by comparing the person’s position with the center of the camera frame: if the person is on the left side, the robot goes forward-right so that the camera keeps the person in view while running away, and if the person is on the right side, it goes forward-left instead. We use a counter in this process: while the robot is in the middle of performing such a move action, it does not receive any new instructions from the Raspberry Pi. (For example, if the person first appears on the left side of the camera and suddenly jumps to the right side in the next frame, the robot will only perform the action chosen for the first frame.) We currently set the counter to 5, which means the robot receives new instructions every 0.5 seconds. The only concern is that the robot now has a small pause every 0.5 seconds, which is the time it takes to process the new instructions. Even though this does not cause any issues for now, the pause is still quite obvious and a little distracting.
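A minimal sketch of this counter-based throttling is below; the frame loop and the helper callables are illustrative placeholders, not our exact code:

```python
FRAMES_PER_COMMAND = 5   # at roughly 10 fps, this issues a new command about every 0.5 s

def run_loop(get_frame, detect_person, send_move_command):
    """Illustrative main loop; the callables stand in for our camera, CV, and motion code."""
    counter = 0
    while True:
        frame = get_frame()
        counter += 1
        if counter < FRAMES_PER_COMMAND:
            continue                       # robot keeps executing the previous instruction
        counter = 0
        person = detect_person(frame)
        if person is not None:
            send_move_command(person)      # compare person position to frame center, then move
```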

We also made sure that the robot can keep playing songs while moving by using its built-in function call, bot.playSong().

We are now starting to look into how the robot base’s sensors can be used to detect obstacles and walls. The problem we are facing right now is this: when the robot encounters an obstacle, it turns and moves in another direction, yet we need to make sure that after it turns, the camera still locks onto the person. We have not yet figured out a way to make the robot both move to avoid obstacles and move away from the person.

Schedule

We are still ahead of our original schedule, which leaves room for unanticipated difficulties.

Yuhan Xiao’s Status Update for 10/17

I set up an EC2 instance and prepared it for web app deployment.

I worked on the web app frontend and made the UI for the user to set a time and choose from a list of ringtones. Feel free to try it out here (note this might be updated to display the new UI next week). If you click on a ringtone name, it plays a short ringtone defined by an array of notes, where each note consists of a pitch and a duration. I also implemented a “fast selecting” feature (the user can choose “weekdays” to auto-select all five weekdays, and vice versa), which is pretty neat.
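For reference, a ringtone in this shape maps naturally onto the Create 2 Open Interface song format (MIDI note numbers with durations in 1/64ths of a second). The sketch below is illustrative only and shown in Python for brevity even though the web app itself is JavaScript; these field names are not the actual schema:

```python
# Illustrative ringtone data (not the real schema). Durations are in 1/64ths of
# a second to match the Create 2 Open Interface song format.
sample_ringtone = {
    "name": "demo",
    "notes": [
        {"pitch": 72, "duration": 32},   # C5 for half a second
        {"pitch": 76, "duration": 32},   # E5
        {"pitch": 79, "duration": 64},   # G5 for one second
    ],
}

def to_create2_song(ringtone):
    """Flatten into the [note, duration, note, duration, ...] list used when defining a Create 2 song."""
    flat = []
    for note in ringtone["notes"]:
        flat += [note["pitch"], note["duration"]]
    return flat
```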

I wrote a short script for the Pi to communicate with the web app on the EC2 instance and receive sample ringtone data.
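A sketch of what such a Pi-side script might look like, with a placeholder URL and endpoint rather than the real ones:

```python
# Hedged sketch of a Pi-side fetch script; the host and endpoint are hypothetical
# placeholders, not our deployed EC2 address.
import requests

WEB_APP_URL = "http://ec2-example.compute.amazonaws.com:3000"   # placeholder

def fetch_sample_ringtone():
    resp = requests.get(f"{WEB_APP_URL}/api/ringtones/sample", timeout=5)
    resp.raise_for_status()
    return resp.json()    # e.g. a list of {"pitch": ..., "duration": ...} notes

if __name__ == "__main__":
    print(fetch_sample_ringtone())
```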

I also worked on some portions of the design review report (overall & subsystem architecture, etc.).

Other work I did:

  1. I looked into using AWS Amplify (versus EC2) as a way of hosting the web app. It would let me deploy and publish the web app with one console command every time I update it. However, in the end I decided against it because (1) I am not sure if and how alarm scheduling could be implemented using Amplify, and (2) it requires using GraphQL and DynamoDB, which I am less familiar with and would therefore take longer to ramp up on.
  2. I checked out TypeScript as a possible option for the frontend language, and ultimately decided it is not worth it since the frontend framework I chose lacks documentation for TypeScript.
  3. I decided on Ant Design as the frontend framework since it covers all UI widgets I would need.
  4. I looked at a few Node packages for playing audio from MIDI files. I realized I had some misunderstandings about what MIDI is and how some of the “MIDI player” packages work. In the end, I decided to use a different package for playing audio than I had previously planned.
    1. I used this for playing audio by specifying notes: https://www.npmjs.com/package/soundfont-player
      1. It is a lighter version of https://github.com/gleitz/midi-js-soundfonts
      2. I was going to use https://github.com/grimmdude/MidiPlayerJS, but it turns out it just translates a MIDI file into a sequence of JSON data and does not play the audio
      3. I was also considering the sample-player package but decided other packages have a bigger user base
  5. I realized the user might not want to create a new ringtone every time they set an alarm, so I made a new page (not previously on the mockup for the design presentation) where users can make ringtones; the created ringtones are then available on the “create alarm” page.

Currently on schedule. My own updated timeline (before integration) is as follows:

  • week 7 (this week):
    • web app build and deployment set up
    • spike on communication between web app and Pi
    • create alarm UI
    • alarm schedule UI
  • week 8:
    • create ringtone UI
    • pass alarm data from create alarm page to alarm schedule page
    • pass ringtone data from create ringtone page to create alarm page
  • week 9:
    • backend set up
    • connect UI with backend
    • connect web app with Pi
  • week 10 (demo)

Team Status Update for 10/17

Peizhi & Echo:

We are ahead of our schedule.

This week:

  • the robot is now able to perform the “run-away-from-person” action correctly while still keeping our frame rate at a minimum of 10 fps
  • wrote the design report
  • made the robot keep playing the song while moving

Next week:

  • figure out how the robot should move when obstacles are encountered
  • implement the idle stage (when no person is detected): the robot self-rotates until a person appears in view
  • figure out how to physically mount the Raspberry Pi, robot base, USB accelerator, and all other parts together as one product

Yuhan:

The most significant risk in my part of the project is how communication between the Pi and the web app should be scheduled. The most straightforward approach, it seems, is to use the weekly “Schedule” command from the Open Interface in the Pi’s control while-loop. This way the web app only has to communicate with the Pi once (e.g. “alarm at 8am every Monday”) and the iRobot will repeat the alarm routine every week. However, I am not sure whether pycreate2 supports the Schedule command. I am also not sure how customizable the weekly alarm routine can be and whether it fits our requirements (e.g. we need the routine to include actuator commands as well as song commands).

A contingency plan is to leave it to the web server to schedule the alarms, i.e. “alarm at 8am every Monday” means the web server sends a signal to the Pi at 7:55am on 10/19, then again on 10/26, etc. A rough sketch of this approach is shown below. I think it requires more complicated logic that could get buggy more easily and be harder to maintain. Right now I am mostly focused on frontend development and then connecting it to the database, and will deal with scheduling when I get to it.
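To illustrate the contingency, here is a rough sketch of the server-side scheduling loop for a single weekly alarm, shown in Python for brevity even though the server would be Node; send_signal_to_pi and the timings are hypothetical placeholders:

```python
# Illustrative contingency sketch: the server computes the next weekly occurrence
# of an alarm and signals the Pi shortly before it fires. send_signal_to_pi is a
# hypothetical placeholder for whatever transport we end up using.
import time
from datetime import datetime, timedelta

LEAD_TIME = timedelta(minutes=5)   # signal the Pi a few minutes early, per the plan above

def next_weekly_occurrence(weekday, hour, minute, now):
    """Next datetime after `now` matching the given weekday (0 = Monday) and time."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    candidate += timedelta(days=(weekday - now.weekday()) % 7)
    if candidate <= now:
        candidate += timedelta(days=7)
    return candidate

def run_weekly_alarm(weekday, hour, minute, send_signal_to_pi):
    """Loop forever, signaling the Pi shortly before each weekly occurrence."""
    while True:
        fire_at = next_weekly_occurrence(weekday, hour, minute, datetime.now())
        wait = (fire_at - LEAD_TIME - datetime.now()).total_seconds()
        if wait > 0:
            time.sleep(wait)
        send_signal_to_pi()
        time.sleep(LEAD_TIME.total_seconds() + 60)   # step past fire_at before re-scheduling
```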

My updated timeline(same as my individual status report):

  • week 7 (this week):
    • web app build and deployment set up
    • spike on communication between web app and Pi
    • create alarm UI
    • alarm schedule UI
  • week 8:
    • create ringtone UI
    • pass alarm data from create alarm page to alarm schedule page
    • pass ringtone data from create ringtone page to create alarm page
  • week 9:
    • backend set up
    • connect UI with backend
    • connect web app with Pi
  • week 10 (demo)

Echo Gao’s Status Update for 10/17

Computer Vision: No change; the Raspberry Pi keeps running at 10 fps whenever we test.

Hardware & Communication between raspberry pi and Robot base:

As mentioned in last week’s report, since we found out that multithreading does not work on the Raspberry Pi 3, we decided to use a counter to keep track of time. The robot does not move while it is in the idle stage. When it first detects a person in the camera, it starts to perform the move-away action by comparing the person’s position with the center of the camera frame: if the person is on the left side, the robot goes forward-right so that the camera keeps the person in view while running away, and if the person is on the right side, it goes forward-left instead. We use a counter in this process: while the robot is in the middle of performing such a move action, it does not receive any new instructions from the Raspberry Pi. (For example, if the person first appears on the left side of the camera and suddenly jumps to the right side in the next frame, the robot will only perform the action chosen for the first frame.) We currently set the counter to 5, which means the robot receives new instructions every 0.5 seconds. The only concern is that the robot now has a small pause every 0.5 seconds, which is the time it takes to process the new instructions. Even though this does not cause any issues for now, the pause is still quite obvious and a little distracting.

We also made sure that the robot can keep playing songs while moving by using its built-in function call, bot.playSong().

We are now starting to look into how the robot base’s sensors can be used to detect obstacles and walls. The problem we are facing right now is this: when the robot encounters an obstacle, it turns and moves in another direction, yet we need to make sure that after it turns, the camera still locks onto the person. We have not yet figured out a way to make the robot both move to avoid obstacles and move away from the person.

Schedule

We are still ahead of our original schedule, which leaves room for unanticipated difficulties.

Team Status Update for 10/10

Peizhi & Echo:

We are ahead of our schedule. We originally planned to move on to the hardware portion starting next week, yet we already made significant progress on this part during this week.

This week:

  • finished optimizing TensorFlow Lite on the Raspberry Pi and successfully reached 10 fps (a rough sketch of the detection step is shown after this list)
  • made the Raspberry Pi control the movement of the robot base; it can now perform simple instructions such as moving in any direction and playing a song
  • the robot base was able to move according to the person detected through the Pi camera and the CV algorithm on the Pi; the algorithm needs to be optimized next week
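For context, the per-frame person-detection step with TensorFlow Lite typically looks roughly like the sketch below. The model file name, quantized uint8 input, output tensor ordering, and “class 0 = person” convention are assumptions based on common COCO-trained SSD models, not necessarily our exact setup:

```python
# Hedged sketch of per-frame person detection with TensorFlow Lite on the Pi.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detect.tflite")   # placeholder model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

def detect_person(frame):
    """Return the first detected person's bounding box in the frame, or None."""
    h, w = inp["shape"][1], inp["shape"][2]
    resized = cv2.resize(frame, (w, h))                 # assumes an 8-bit BGR frame
    interpreter.set_tensor(inp["index"], np.expand_dims(resized, axis=0))
    interpreter.invoke()
    boxes = interpreter.get_tensor(out[0]["index"])[0]
    classes = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    for box, cls, score in zip(boxes, classes, scores):
        if int(cls) == 0 and score > 0.5:               # class 0 = person in COCO SSD models
            return box                                  # [ymin, xmin, ymax, xmax], normalized
    return None
```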

Next week:

  • program the robot to perform the “run-away-from-person” action correctly while still keeping our frame rate at a minimum of 10 fps
  • program the robot to move so that it avoids all obstacles
  • try to figure out the hardware assembly, that is, how to physically mount the Raspberry Pi, robot base, USB accelerator, and all other parts together as one product