Status Report #3 (Mar 2)

Haohan Shi

Our USB microphone, speaker, and RPi camera arrived this week. I mainly worked on the following tasks:

  1. Set up the USB microphone and speaker on the Raspberry Pi. The RPi uses the headphone jack and HDMI as its default sound outputs, so I consulted several guides, such as the Snowboy and Google Assistant SDK documentation, to properly configure our USB microphone and speaker as the default sound input and output devices.
  2. Set up hotword detection on the RPi with our microphone and speaker. It took me a significant amount of time to correctly install all the required libraries and packages and to fix some build errors.
  3. Hotword detection adjustment. Considering that ambient noise will be very high during the demo, we need to make sure the program still achieves a high detection rate without raising many false alarms. I therefore adjusted the sensitivity and the input gain of the microphone, and tested in a simulated environment by playing loud crowd noise while running the program.
  4. Camera input. I researched the basics of getting video input and capturing images from the camera. I also tried some simple integration by combining hotword detection with image capture.
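For reference, the guides named in step 1 route the default ALSA input/output through a user config file. A minimal sketch of `~/.asoundrc`, assuming the USB microphone enumerates as card 1 and the speaker as card 0 (the actual card/device numbers should be verified with `arecord -l` and `aplay -l`):

```
# ~/.asoundrc -- card numbers below are assumptions; verify with arecord -l / aplay -l
pcm.!default {
  type asym
  capture.pcm "mic"
  playback.pcm "speaker"
}
pcm.mic {
  type plug
  slave { pcm "hw:1,0" }   # USB microphone (card 1, device 0)
}
pcm.speaker {
  type plug
  slave { pcm "hw:0,0" }   # speaker output (card 0, device 0)
}
```

After saving the file, a quick `arecord -d 3 test.wav` followed by `aplay test.wav` confirms both directions work without specifying a device.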

My goal for next week is to start adding command recognition to our system and to gather more input for our hotword detection model.


Yanying Zhu

  1. Currently working on the movement control system, though I have not tested any of it on the robot yet. The idea is to set a time interval, read the proximity sensor values every interval, and have the robot turn to a different direction and then go straight whenever an obstacle is detected in some direction. There are six values in total across all the proximity sensors, so I still have to find an algorithm that fully integrates these data.
  2. My future goal is that, if obstacle detection works out, I can work on edge detection to keep the robot from falling. Another workaround is to draw lines around its movement area, so that line detection, which is straightforward, can be used instead.
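The per-interval decision step described above can be prototyped as a pure function before it goes on the robot. This is a sketch only: the threshold, the ordering of the six readings, and the function name are illustrative assumptions (the real control code will live in the Zumo's Arduino sketch):

```python
# Hypothetical decision step for the six proximity readings the Zumo reports
# (left/right LED counts for the left, front, and right sensors).
# Threshold and grouping are guesses to be tuned on the actual robot.

THRESHOLD = 4  # Zumo proximity counts run roughly 0-6; higher means closer

def choose_move(readings):
    """readings: six ints ordered (left_l, left_r, front_l, front_r, right_l, right_r).

    Returns one of "forward", "turn_left", "turn_right".
    """
    left = max(readings[0], readings[1])
    front = max(readings[2], readings[3])
    right = max(readings[4], readings[5])
    if front >= THRESHOLD:
        # Obstacle ahead: turn toward the side that looks clearer
        return "turn_left" if left < right else "turn_right"
    if left >= THRESHOLD:
        return "turn_right"
    if right >= THRESHOLD:
        return "turn_left"
    return "forward"
```

Keeping the decision logic pure like this lets it be unit-tested on a laptop with synthetic readings before any motors move.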

Olivia Xu

  1. Got the LCD connected to the Pi. I spent a lot of time finding a solution, as it turns out Adafruit's provided library has a bug in its GPIO handling; I installed via pip instead.
  2. Got sample image code running.
  3. Drew some static images sized specifically to 320×240 pixels for the display.

Team Status

We are currently on schedule: camera input and speaker/microphone I/O are correctly configured, and our movement control is also in progress.

We encountered a dead Raspberry Pi this week. Luckily we still have a backup RPi, but we may consider ordering one more, which is something we need to take care of in our project management.

Status Report #2 (Feb 24)

Haohan Shi

Our Pololu robot kit and Raspberry Pi arrived this Tuesday, so my major accomplishments this week are:

    1. Got familiar with the Pololu Arduino library and figured out from its sample code how to read input from the proximity sensors, print to the onboard LCD screen, and control the motors.
    2. Set up serial communication between the robot kit and the Raspberry Pi. I wrote sample code for basic serial communication: the RPi runs a Python script that receives user input and sends it as a string to the Pololu, and the Pololu replies with the instruction received plus “execute”. See the video below for more information. This only tests successful communication between the two; the same method will be implemented in the actual control system.
    3. I ordered the USB microphone and speaker so that I can start working on the actual voice-control part next week. While we wait for these two to arrive, I started researching implementations of “hotword” detection on the RPi and found that Snowboy is currently a very good option. It is super lightweight and does not require an Internet connection during detection; it takes only about 5% of CPU when running on our RPi, so we have plenty of CPU left for camera image processing. I trained a personal model on “Hi Meo” and tested it locally on my MacBook, achieving a ~70% (7 out of 10) detection rate.
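The serial exchange from step 2 can be sketched as a pair of framing helpers plus a pyserial call. The framing (newline-terminated ASCII), the port name, and the baud rate below are assumptions for illustration; the report does not specify them:

```python
def encode_command(cmd: str) -> bytes:
    """Frame a command string as newline-terminated ASCII for the Pololu side."""
    return (cmd.strip() + "\n").encode("ascii")

def parse_reply(raw: bytes) -> str:
    """Strip the line terminator from the Pololu's echoed reply."""
    return raw.decode("ascii").strip()

# Usage on the Pi (requires pyserial; "/dev/ttyACM0" and 9600 baud are guesses):
# import serial
# with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as link:
#     link.write(encode_command("forward"))
#     print(parse_reply(link.readline()))  # Pololu echoes the instruction + "execute"
```

Keeping the framing in small pure functions means the protocol can be tested without the hardware attached, and the same helpers can be reused when the real control system replaces the manual user input.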

My goal for next week is to implement hotword detection on the RPi once the microphone and speaker arrive, and to start designing the block diagram of our complete system for the design review: what the RPi and the Pololu should each do for different tasks, when and how communication between the two occurs, and the requirements for each part of the system.

Olivia Xu

  1. Got to know the example features that come with Pololu’s Zumo robot and its custom library. Checked the specific object-detection sensors: the robot can detect the proximity of objects to its front, left, and right. There are visible IR sensors in the front, and it effectively turns to face its “opponent” (a person, in this case) and follows it as it moves.
  2. Worked on obstacle and edge avoidance. We need to decide how to get around objects and whether an A* algorithm is necessary. We will also need a camera and image recognition for more precise proximity detection with multiple obstacles.

Yanying Zhu

  1. Getting familiar with the Zumo Arduino library. Our team members have implemented the Raspberry Pi to Zumo serial communication script, and it turns out to work effectively. I studied the sample code (border detection) to better understand how the line sensors and proximity sensors work. The library contains many detailed sub-functions, some of which I don’t fully understand yet.
  2. The next step is to implement the robot movement system using the library functions. The motor library is relatively simple and straightforward; we can already move at different speeds and turn around. The hard part will be detecting obstacles and updating the planned path in real time.

Team Status

We are currently on schedule: we started working on the actual implementation of movement control, such as edge and obstacle detection, this week, and the basic functionality of our control system, hotword detection, is ready to test on the RPi.

Our main concern this week is the computational power of the RPi, especially for graphics. After researching face detection and video recording examples on the RPi, we noticed the video frame rate can drop as low as 7-8 FPS with OpenCV face recognition running. Since the RPi has many other functions and components to handle, we need to pay attention to the CPU usage of each component.