Status Report #6 (Mar 30)

Haohan Shi

This week I primarily worked on improving the text-to-speech functionality and integrating the LCD screen for next week’s demo.

In order to send instructions to the LCD screen without making our main control system wait, I designed a non-blocking structure for the LCD control class, so that the main control system can send signals to the LCD at any moment and the LCD will react accordingly.
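
Below is a minimal sketch of that kind of non-blocking structure, assuming a worker thread that drains a command queue; the class and method names here are illustrative rather than our exact code.

    import queue
    import threading

    class LCDController:
        def __init__(self):
            self.commands = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, command, *args):
            # Called from the main control system; returns immediately.
            self.commands.put((command, args))

        def _run(self):
            while True:
                command, args = self.commands.get()  # blocks only this worker thread
                self._draw(command, args)

        def _draw(self, command, args):
            pass  # dispatch to countdown, photo screen, time display, etc.

    # Usage: the main control loop never waits for the LCD to finish drawing.
    lcd = LCDController()
    lcd.send("countdown", 5)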

I also implemented the two major functions, countdown and photo taking, along with some simple control commands. I still need to add serial communication to the robot base so that the robot will follow movement instructions accordingly. In addition, some more simple commands will be added, such as displaying the current time.
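
A hedged sketch of that serial link to the robot base, assuming pyserial and a simple newline-terminated text protocol; the port name, baud rate, and command strings are assumptions, not our finalized protocol.

    import serial

    base = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

    def send_move(command):
        # Send one movement instruction, e.g. "FORWARD" or "STOP".
        base.write((command + "\n").encode("ascii"))

    send_move("FORWARD")
    send_move("STOP")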

Yanying Zhu

This week I continued working on the movement system. The edge detection system worked out well and the robot is now able to move without falling. I also adjusted the wait time and turn angle a bit to make turning smoother.

The next step is to integrate the robot’s movement system with serial commands from the Raspberry Pi.

Olivia Xu

  • In the process of integration I found it necessary to combine my individual .py files for each screen function (countdown.py, loading.py, blinking.py, etc.) into a single file, because the individual while loops were blocking everything else and I don’t want a global “interrupt” variable. This meant major code edits to get one single giant nested while loop (see the sketch after this list), plus some more self-learning of Python from scratch.
  • Completed more cut-scenes/loading scenes, fixed previous ones, and improved facial expressions for better fake-gifs; made a time display.
  • Working on the transition design from one display image to another, and on learning the specific workings, responses, and response times of the mic, speaker, API, etc.
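
For reference, a rough sketch of the single-loop structure from the first bullet, with one state variable replacing the per-file while loops; the state names and frame functions are placeholders for the real drawing code.

    import time

    def draw_countdown_frame(): pass  # placeholders for the real per-screen drawing code
    def draw_loading_frame(): pass
    def draw_blinking_frame(): pass

    def poll_for_new_state():
        # Placeholder: ask the main control system whether a new screen was
        # requested; return its name, or None to stay on the current screen.
        return None

    state = "blinking"
    while True:
        requested = poll_for_new_state()
        if requested is not None:
            state = requested          # switch screens without a global interrupt flag
        if state == "countdown":
            draw_countdown_frame()     # draw one frame per pass instead of an inner loop
        elif state == "loading":
            draw_loading_frame()
        elif state == "blinking":
            draw_blinking_frame()
        time.sleep(0.05)               # keep the loop from hogging the CPU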

Team Status

We are currently in the first integration stage, working to get the demo running smoothly, but several functions such as motion control and hotword detection still need polishing.

Our camera died suddenly during tests on Saturday, so we need to purchase another one next week. We will borrow one from a friend for Monday’s demo.

Status Report #5 (Mar 23)

Yanying Zhu

I worked on the movement system, which involves obstacle detection, path planning, and edge detection. The majority of the obstacle detection works: the robot was able to turn right and left when there was an obstacle ahead, and to back up when the obstacle was too close. The main issue was that the sensor is very sensitive to objects far away and places high requirements on the environment. I spent most of the time collecting sensor data for different obstacle positions.

The major steps next week are to work on edge detection and to improve the current path planning algorithm, which involves figuring out the best turning angle, turn speed, etc.

Olivia Xu

  • Used the Python Imaging Library to effectively display my images and text while taking up as little space and time as possible.
  • Facing issues with the image display functions blocking other threads while running and ultimately getting everything stuck.
  • Drew more images and designed layouts for combinations of images and text, such as weather and time (see the sketch after this list).
  • Finished up the facial-expression image line-up for fake-gifs on our robot face.
  • Made loading scenes.
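
A small sketch of the image-plus-text layout work, using Pillow (the Python Imaging Library); the file names, coordinates, and sample text are placeholders, and the font path assumes a stock Raspbian DejaVu install.

    from PIL import Image, ImageDraw, ImageFont

    canvas = Image.new("RGB", (320, 240), "black")           # matches our 320x240 LCD
    icon = Image.open("weather_icon.png").resize((64, 64))   # placeholder image file
    canvas.paste(icon, (16, 16))

    draw = ImageDraw.Draw(canvas)
    font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 28)
    draw.text((96, 32), "72F  Sunny", font=font, fill="white")
    draw.text((96, 80), "10:42 AM", font=font, fill="white")
    canvas.save("weather_screen.png")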


Status Report #4 (Mar 9)

Haohan Shi

This week I worked on setting up a Google Cloud application and the corresponding Python code for speech-to-text, so that we can analyze what the user said and give the correct response.
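
For reference, a minimal sketch of the kind of synchronous request involved, assuming a recent google-cloud-speech client library and 16 kHz LINEAR16 audio from our microphone; the transcribe() helper is illustrative, not our exact code.

    from google.cloud import speech

    def transcribe(audio_bytes):
        client = speech.SpeechClient()
        config = speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        )
        audio = speech.RecognitionAudio(content=audio_bytes)
        response = client.recognize(config=config, audio=audio)
        # Each result carries alternatives ranked by confidence; take the top one.
        return " ".join(r.alternatives[0].transcript for r in response.results)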

One major issue I noticed is the delay of responses coming from Google Cloud; in the Google Cloud console I’m seeing an average of ~1s latency.

The latency can be much worse when network traffic is busy in a public area such as the demo room, so our current workaround is to add a “processing” screen on the LCD so that the user knows the command is being processed.

My next focus is to combine our hotword detection and speech-to-text, so that the STT functionality can be invoked by the hotword (see the sketch below). After this, I should start working on processing the input text and finalizing all the “supported commands” of our robot.
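
A hedged sketch of that combination, assuming Snowboy’s example snowboydecoder module; the model path is a placeholder for our trained hotword model, and record_command() plus the transcribe() stub stand in for the mic recording and the STT call sketched above.

    import snowboydecoder

    def record_command():
        return b""  # hypothetical helper: record a short utterance as LINEAR16 bytes

    def transcribe(audio_bytes):
        return ""   # stands in for the Google STT call sketched earlier

    def on_hotword():
        # Fired by Snowboy when the hotword is detected; hand the following
        # utterance to speech-to-text.
        print("Heard:", transcribe(record_command()))

    detector = snowboydecoder.HotwordDetector("resources/model.pmdl", sensitivity=0.5)
    detector.start(detected_callback=on_hotword, sleep_time=0.03)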

Yanying Zhu

I mainly worked on the control side: the obstacle avoidance system of the robot. There are several research papers that demonstrate sensor-based path planning algorithms for small robots, and I aim to apply a simplified version of such an algorithm to our robot. I have created base code for the robot control system that contains the basic states: wait, scan, and move. It should be able to do simple obstacle avoidance based on that. The next step is to increase the complexity of the code by adding more states, for example backing up when the distance is too short and turning is difficult.
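
A minimal sketch of that wait/scan/move structure with the planned back-up state included; read_distance() and the thresholds are placeholders for the real proximity sensor query and the values still to be tuned.

    import time

    def read_distance():
        return 100  # placeholder for the actual proximity sensor reading (cm)

    state = "wait"
    while True:
        if state == "wait":
            state = "scan"      # idle until the next control tick
        elif state == "scan":
            d = read_distance()
            if d < 10:
                state = "backup"  # too close to turn away safely
            elif d < 30:
                state = "wait"    # obstacle ahead: stop; the real code turns here
            else:
                state = "move"
        elif state == "move":
            state = "scan"      # drive forward one interval, then rescan
        elif state == "backup":
            state = "scan"      # reverse briefly, then rescan
        time.sleep(0.1)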

Olivia Xu

Self-learned the Python Imaging Library to effectively display my images and text while taking up as little space and time as possible. Facing issues with the image display functions blocking other threads while running and ultimately getting everything stuck. Drew more images and designed layouts for combinations of images and text, such as weather and time.

Status Report #3 (Mar 2)

Haohan Shi

Our USB microphone, speaker, and RPi camera arrived this week. I mainly worked on the following tasks:

  1. Set up the USB microphone and speaker on the Raspberry Pi. The RPi uses the headphone jack and HDMI as its default sound outputs, so I consulted several guides, such as the Snowboy documentation and the Google Assistant SDK documentation, to properly set up our USB microphone and speaker as the default sound input and output devices.
  2. Set up hotword detection on the RPi with our microphone and speaker. It took me a significant amount of time to correctly install all the required libraries and packages and to fix some build errors.
  3. Hotword detection adjustment. Considering that the ambient noise will be very high during the demo, we need to make sure the program still achieves a high detection rate without giving many false alarms. I therefore adjusted the sensitivity and input gain of the microphone and tested it in a simulated environment by playing loud crowd noise while running the program.
  4. Camera input. I researched the basics of getting video input and capturing images from the camera. I also tried some simple integration by putting together hotword detection and image capture (see the sketch after this list).
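
For the camera side, a minimal capture sketch using the picamera library; the resolution and output path are placeholders, and in the integration test a capture call like this simply runs from the hotword callback.

    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (640, 480)       # placeholder resolution
    camera.capture("/home/pi/test.jpg")  # grab a single still frame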

My goal next week is to start adding command recognition to our system, and to collect more voice input for our hotword detection model.


Yanying Zhu

  1. Currently working on the movement control system, but I have not tested any of it on the robot yet. The idea is to set a time interval and read the proximity sensor values every interval, then have the robot turn to a different direction and go straight whenever an obstacle is detected in a given direction. There are six values in total across all the proximity sensors, so I still have to find an algorithm to fully integrate these data (see the sketch after this list).
  2. The future goal is that if obstacle detection works out, I can work on edge detection, which keeps the robot from falling. Another workaround is to draw lines around its moving area so that line detection, which is straightforward, can be used.
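
A rough sketch of the interval-based polling from item 1, grouping the six proximity values into left/front/right and steering toward the clearest side; the read function and the sensor grouping are assumptions.

    import time

    def read_proximity():
        # Placeholder for reading all six proximity sensors (distances in cm).
        return [100, 90, 80, 85, 95, 100]

    INTERVAL_S = 0.2

    while True:
        values = read_proximity()
        left  = min(values[0:2])   # assumed sensor layout: two per side
        front = min(values[2:4])
        right = min(values[4:6])
        if front < 30:
            action = "turn_left" if left > right else "turn_right"
        else:
            action = "forward"
        # send `action` to the motor controller here
        time.sleep(INTERVAL_S)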

Olivia Xu

  1. Got the LCD connected to the Pi. Spent a lot of time finding a solution, as it turns out Adafruit’s given library has a bug in GPIO; I used pip instead.
  2. Got sample image code running (see the sketch after this list).
  3. Drew some static images sized specifically to 320x240 pixels for the display.
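
For reference, a hedged version of the sample image code, assuming the pip-installed Adafruit ILI9341 driver (a common 320x240 SPI panel); the GPIO pin numbers and image file are placeholders for our actual wiring and assets.

    import Adafruit_ILI9341 as TFT
    import Adafruit_GPIO.SPI as SPI
    from PIL import Image

    DC, RST = 18, 23  # placeholder GPIO pins
    disp = TFT.ILI9341(DC, rst=RST, spi=SPI.SpiDev(0, 0, max_speed_hz=64000000))
    disp.begin()

    # The panel is portrait-native (240x320), so rotate our 320x240 art to fit.
    image = Image.open("face.png").rotate(90, expand=True).resize((240, 320))
    disp.display(image)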

Team Status

We are currently on schedule: camera input and speaker + microphone input/output are correctly configured, and our movement control is also in progress.

We encountered a dead Raspberry Pi this week. Luckily we still have a backup RPi, but we may consider ordering one more, which is something we need to take care of in our project management.