Status Report #10 (Apr 27)

Haohan Shi

This week I mainly worked on improving and assembling our case. The initial design of our outer case lacked proper mounting and relied only on mechanical hooks, which turned out to be fairly unstable since the 3D printing isn't accurate enough, and laser-cut materials such as wood or acrylic sheets are too heavy and thick. In addition, laser cutting cannot easily produce curved surfaces, so the initial design didn't look very nice once assembled.

For our second revision, we added mounting holes that match the screw holes on the robot platform and used hot glue to seal all the connection points.

Then I conducted several tests with my teammates to find out which parameters work best for our hotword detection:

 

Sensitivity | Gain | Distance from mic (in) | Noise | False Alarms | Hotword Success | Speech-Recognition Success
0.4 | 1 | 5 | None | 0 | 0% | N/A
0.4 | 1 | 10 | None | 1 | 66.7% | 100%
0.4 | 1 | 15 | None | 0 | 80% | 62.5%
0.4 | 1 | 25 | None | 0 | 0% | N/A
0.5 | 1 | 5 | None | 4 | 50% | 50%
0.5 | 1 | 15 | None | 3 | 10% | 100%
0.5 | 1 | 25 | None | 0 | 10% | 100%
0.4 | 3 | 5, 15 | None | 0 | 0% | N/A
0.4 | 3 | 25 | None | 0 | 71.4% | 70%
0.4 | 3 | 35 | None | 0 | 90% | 66.7%
0.4 | 3 | 45 | None | 0 | 0% | N/A
0.4 | 5 | 5, 15 | None | 1 | 0% | N/A
0.4 | 5 | 25 | None | 1 | 29.4% | 71.4%
0.4 | 3 | 5, 15 | Noisy | 0 | 0% | N/A
0.4 | 3 | 25 | Noisy | 0 | 66.7% | 58.4%
0.4 | 3 | 35 | Noisy | 0 | 38.5% | 70%
0.4 | 3 | 5, 15, 35 | Very Noisy | 0 | 0% | 0%
0.4 | 3 | 25 | Very Noisy | 0 | 9.1% | 100%

These results gave us insights we hadn't anticipated. Sensitivity 0.4 with gain 3 is currently the optimal combination, because the ambient noise may be very loud and we stand roughly 25 inches from the robot during the demo. More data still needs to be collected in noisy and very noisy environments.
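
For reference, here is a minimal sketch of how these parameters would be wired into the detector. The report doesn't name our hotword engine, so this assumes a Snowboy-style API (snowboydecoder); the model file and callback are hypothetical placeholders:

import snowboydecoder

MODEL_PATH = "meo.pmdl"  # hypothetical personal hotword model file

def on_hotword():
    # Hand off to speech recognition once the hotword fires.
    print("Hotword detected")

# Sensitivity 0.4 and audio gain 3 performed best in the tests above.
detector = snowboydecoder.HotwordDetector(MODEL_PATH, sensitivity=0.4, audio_gain=3)
# start() blocks and polls the mic; an interrupt_check callback can end the loop.
detector.start(detected_callback=on_hotword, sleep_time=0.03)
detector.terminate()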

Olivia Xu

Designed a new case for the robot using everything we learned from last week's test runs. We need a good way to mount components and more interior space, so I made a bottom with tabs that have holes for screwing onto the robot; this also allows a much larger design above. Surprisingly, weight isn't much of a problem when the speed is turned up.

Notable constraints:

  • The camera angle has to be adjusted for different table heights, so the head has to stay movable since the demos take place on different tables.
  • 3D printing is expensive, support material adds up quickly, and the supports often fail to print well. So I sliced my model into pieces that can be printed with close to zero support material.
  • 3D printers tend to break faster than the staff can service them all, and a lot of people are trying to use them during final weeks.
  • Need to make sure the weight distribution is okay with the rear protruding this much. I will place the portable battery relatively close to the front, and the head exerts its own torque.

The case still needs paint.

Yanying Zhu

During this week I worked on conducting structured tests with my teammates and preparing for the final presentation. For the movement system, the testing metric for the obstacle-avoidance algorithm and edge detection is simply letting the robot run on the table and recording the number of failures. The goal for the movement system is a 100% success rate at avoiding obstacles and preventing falls, and the test results show that we are pretty close to that goal. For speech recognition and hotword detection, we tested different sets of parameters (distance from mic, gain, sensitivity level, and noise environment) to figure out the best set of parameters to use and to document the data for future analysis. We will test facial recognition next week. Facial recognition has hardly any parameters to play with, because altering the frame rate greatly affects processing time, so we might just record the success rate and see how it performs. Finally, after we glue the case and every hardware component in place, we will run demo tests of the entire process.

Team Status

We are almost done with our project. The next thing to do is to make sure everything is configured and runs smoothly with our case and parts assembled together. In addition, we need to run more tests to check the success rate of each functionality, both the software side and the hardware side.

Status Report #9 (Apr 21)

Haohan Shi

This week I worked on implementing the facial detection on our robot. The complete procedure consists of four parts: facial detection, data gathering, training, and prediction.

The first step is to use openCV to capture the camera image and find the faces that will be used for recognition. I used openCV's built-in Haar Cascade Classifier for facial detection:

import cv2

# Load openCV's pre-trained Haar cascade for frontal faces.
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)

The pre-trained classifier turns out to work very well as long as the room is bright enough for faces to be clearly detected in the images.
The second step is to gather training data for our prediction model. Building on the previous step, I wrote a script that continuously captures 30 images and crops out the non-face part of each one, so the dataset ends up as a folder of cropped grayscale face images.
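
A condensed sketch of that capture step is below; the camera index, folder layout, and user label are illustrative placeholders rather than our exact script:

import os
import cv2

# Sketch of the capture step: save 30 cropped face images for one user.
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)  # default camera index; adjust for the Pi camera
user_id = 1                # illustrative label for the person being captured
os.makedirs("dataset", exist_ok=True)

count = 0
while count < 30:
    ret, frame = cam.read()
    if not ret:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in faceCascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        count += 1
        # Crop away everything but the face and save it for training.
        cv2.imwrite("dataset/user.%d.%d.jpg" % (user_id, count), gray[y:y+h, x:x+w])

cam.release()
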
The third step is to train the model on our dataset to learn our faces. For training, I am using a simple and effective algorithm, LBPH (Local Binary Pattern Histogram), which is also included in openCV.
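
For reference, training with the LBPH recognizer (from opencv-contrib) comes down to a few lines; the file naming below follows the hypothetical layout from the capture sketch above:

import os
import cv2
import numpy as np

# Sketch of the training step using openCV's LBPH recognizer.
recognizer = cv2.face.LBPHFaceRecognizer_create()

faces, labels = [], []
for name in os.listdir("dataset"):
    # Files are named user.<id>.<count>.jpg in the capture sketch above.
    label = int(name.split(".")[1])
    img = cv2.imread(os.path.join("dataset", name), cv2.IMREAD_GRAYSCALE)
    faces.append(img)
    labels.append(label)

recognizer.train(faces, np.array(labels))
recognizer.write("trainer.yml")  # saved model, loaded again at prediction time
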
The last step is to use the same LBPH recognizer, loaded with our training result, to predict who the face in the image belongs to. My algorithm runs prediction on 30 consecutive frames; if someone's face appears with confidence above 15 in more than 15 of those frames, that person is considered to be in front of the camera. The whole prediction process takes about 3-4 seconds, and the result is stable and meets our expectations.
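
A sketch of that voting loop under the same assumptions (note that openCV's LBPH reports confidence as a distance, where lower means a closer match; the threshold here simply mirrors the cutoff described above):

import cv2
from collections import Counter

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainer.yml")
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

votes = Counter()
for _ in range(30):  # run prediction on 30 consecutive frames
    ret, frame = cam.read()
    if not ret:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in faceCascade.detectMultiScale(gray, 1.3, 5):
        label, confidence = recognizer.predict(gray[y:y+h, x:x+w])
        if confidence > 15:  # cutoff from the report
            votes[label] += 1

cam.release()
# A person counts as present if their face passed the cutoff in >15 frames.
present = [label for label, n in votes.items() if n > 15]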

Yanying Zhu

This week I worked on finalizing the following state and the moving state. Due to the limitations of the robot's sensors, it cannot follow in a very precise way; what it can do is roughly follow a very close object in front of it (the object cannot move away suddenly), turning left or right at a set speed while avoiding falling off the table. I have implemented these two states and am waiting to integrate them with voice commands through serial.

We also integrated Meobot's movement with facial recognition: Meo slowly turns around to search for the person calling his name, then stops. There are two major issues that we are currently solving. One is the latency between when facial recognition finishes and when Meo actually stops. The other is that we are also trying to fit all the devices (Raspberry Pi, battery, microphone, etc.) on Meo's back, which greatly affects the motor speed that I previously set.

Olivia Xu

Drew more images for showing dynamic weather; I am not sure whether this will slow down performance too much. Tried 3D printing multiple times and was not fully prepared for unfortunate circumstances like the shop failing or losing parts without notification. Had to redesign the laser-cut model for the in-lab demo.

Team Status

We are almost done with the coding part. We are still waiting for our case to be printed, since the makerspace staff lost a few parts that we had previously taken in to print. The next step is to further calibrate parameters such as hotword-detection sensitivity, input gain, movement speed, etc. after we assemble everything together. Also, some procedure refinement needs to be done, such as directly asking for user input again when a command can't be understood, so that the user doesn't need to repeat the hotword for the same command.

Status Report #8 (April 13)

Haohan Shi

This week I was working on setting up object detection on the Raspberry Pi using openCV. The build process took far longer than expected: around 1.5 hours on 4 cores with an increased swap size, and the build failed 3-4 times due to race conditions. I had to rerun the build several times, which ended up corrupting the SD card.

Using a swapfile this large will allow the operating system to (potentially) run rampant writes to your SD card. Which is bad for flash-based storage as it's more prone to corrupting your SD card/filesystem. (link)

I have purchased a new SD card, but I need to reinstall all of our previous system libraries and setup; some of my test scripts for object detection were also lost. I plan to first recover our previous setup on the new SD card and then build openCV using only 1 core with a small swap size, accepting the longer build time (~6 hours).

Olivia Xu

Done with the first Meobot box prototype. The mechanism is that all parts slide together, with the two sides "clipping" onto each side of the robot body, and the front and back panels using extra extruded pieces to hold both sides together. The smallest body thickness is 0.3 mm. I will be using the Ultimaker 3 Extended printers in the HH Makerspace. For the sake of time I chose a 0.15 mm layer height: total print time 11 hours, 76 grams of plastic.

Carved out space for the camera and LCD screen in the "head" area (leaving the head empty for now; we can 3D print and super-glue it later if this model works). I need to get the prints on Monday and see what fits and what doesn't.

Yanying Zhu

This past week I worked on the "following user" feature, which is essentially the inverse of the obstacle-avoidance algorithm. Currently it works imperfectly: since it is designed to recheck the state and distance of the object ahead every compute cycle, it oscillates frequently when the object moves. I am unsure how good it can get, but I am currently calibrating the best turn speed. Another option is to redefine its behavior: face the user first without moving, then sprint to the user if the user stands still for a certain number of seconds.
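
To illustrate one way of damping the oscillation, here is a rough sketch of the recheck loop with a standard deadband added; the sensor and motor helpers are hypothetical stubs, not our actual robot API:

FOLLOW_DISTANCE = 10.0  # target gap to the followed object (sensor units)
DEADBAND = 2.0          # tolerance band added here to damp the oscillation

def read_distance():
    """Hypothetical stub for the robot's front proximity sensor."""
    return 10.0

def drive(speed):
    """Hypothetical stub for motor speed: positive = forward, negative = back."""
    pass

def follow_step():
    distance = read_distance()                 # rechecked every compute cycle
    if distance > FOLLOW_DISTANCE + DEADBAND:
        drive(1.0)                             # object moved away: close the gap
    elif distance < FOLLOW_DISTANCE - DEADBAND:
        drive(-1.0)                            # too close: back off
    else:
        drive(0.0)                             # inside the band: hold position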

The plan going forward is to further polish the robot's movement. It's also possible to add more features if we come up with interesting ideas.

Team Status

Our movement and case design are still on schedule this week: we already have the first version of the outer case, so we can print and test it next week. However, the main functionality of facial and object detection was supposed to be done this week; due to the long wasted build time and the corrupted SD card, it has been pushed back to next week.

Our takeaway from this incident is to always keep spare parts and backups for the project when possible. So far we have encountered a Raspberry Pi failure, a camera failure, and now an SD card failure. Having backup components and setups will save a lot of time when this kind of incident occurs.

Status Report #7 (April 6)

Haohan Shi

This week I worked primarily on adding additional commands for our robot and started working on the facial recognition feature on the Raspberry Pi. I also helped with the design of the final robot's appearance, such as where to put each component and the dimensions of each part.

There are three main features that I added this week. The first is time display: getting the current time in the current timezone and translating it into audio plus a corresponding LCD display. The second is weather information: we use a free API to get the temperature, current conditions, etc., and pass this information on to the LCD display. The third is the automatic/manual stop when listening for a user command or when the user specifically tells the robot to "stop moving"; this involves serial communication between the Pi and the robot.
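
For illustration, here is a sketch of the weather fetch and the serial stop command. The report doesn't name the weather API, serial port, or command format, so OpenWeatherMap, /dev/ttyACM0, and the "stop" string below are assumptions:

import requests
import serial

API_KEY = "YOUR_API_KEY"  # placeholder; the report doesn't name the API we used
URL = "https://api.openweathermap.org/data/2.5/weather"

def get_weather(city="Pittsburgh"):
    # Fetch temperature and conditions, condensed into a short LCD string.
    resp = requests.get(URL, params={"q": city, "appid": API_KEY, "units": "imperial"})
    data = resp.json()
    return "%s %.0fF" % (data["weather"][0]["main"], data["main"]["temp"])

def send_stop():
    # Assumed serial port, baud rate, and command format for the Pi-robot link.
    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as ser:
        ser.write(b"stop\n")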

Olivia Xu

This week I focused on getting a case for our robot. Out of customizability, weight, and appearance considerations, we've decided to 3D print a box. I measured around our robot with all its expected parts connected (RPi, portable charger for the RPi, LCD, camera, mic, speaker), downloaded the CAD file for the Zumo robot, and started constructing the box in SolidWorks. Knowing the approximate space above and behind the robot, I'm going to make side panels with small ledges that clip onto the robot, a connected front panel, and slide-in back and top panels. Also, the RPi needs a way to sit above the robot: we could use mounting screws, or I could add slide-in slots for more panels inside the case.

Yanying Zhu

As demoed in lab, Meobot is able to stay in a wandering state as it moves, stop randomly, and avoid obstacles and edges on the table. Serial communication with the Raspberry Pi is also set up, so Meobot stops when a serial command is received and resumes self-moving when the Raspberry Pi's processing ends. There is a small bug in the current self-moving state: while it is in the process of turning, it cannot detect edges. This can be fixed next week.

Another major thing I will work on next week is adding more features to the movement system. One possible feature is following people. We discussed following people using facial recognition before; I think I can do a basic version first, which follows people using the distance sensors.

 

Team Status

After discussion, we made the following changes to our schedule:

(Original and revised schedule charts)

We moved the facial recognition part to after the interim demo so that we have time to polish the existing features, and we will start working on the outer appearance design next week.