Month: April 2020

Abha’s Status Update for April 25

This week, I mostly worked on assembling the robot. I also helped Jade collect data for speech recognition accuracy. First, I assembled the head and body of the robot. I put all of the electronics inside them and made sure everything still worked. One minor 

Ashika’s Status Update for April 25

This week, I worked on testing all of the storytelling components of KATbot to see if they meet our metrics. For the part-of-speech testing, I created a Python script to check the accuracy, on both correct and incorrect inputs, of the part of
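The script itself isn't included in the report; a minimal sketch of how such an accuracy check might be structured, with a hypothetical `pos_accuracy` helper and a toy stand-in tagger (the real script would call the actual part-of-speech code):

```python
# Hypothetical sketch of a part-of-speech accuracy check: score the
# tagger's output against expected tags over a mix of correct and
# incorrect inputs, and report the fraction classified correctly.

def pos_accuracy(cases, tag_word):
    """cases: list of (word, expected_tag); tag_word: callable word -> tag."""
    correct = sum(1 for word, expected in cases if tag_word(word) == expected)
    return correct / len(cases)

if __name__ == "__main__":
    # Toy stand-in for the real tagger, for illustration only.
    toy_tags = {"run": "verb", "dog": "noun", "quickly": "adverb"}
    cases = [("run", "verb"), ("dog", "noun"),
             ("quickly", "adverb"), ("blorp", "noun")]
    print(pos_accuracy(cases, lambda w: toy_tags.get(w, "unknown")))  # 0.75
```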

Jade’s Status Report for Apr 25

On Monday, I tested our code ahead of the demo and then participated in the demo.

On Wednesday, I wrote testing code to measure speech recognition accuracy. The code pulled random words from the one thousand most common English words, prompted the user to say each one, and compared the speech recognition output to the original prompt word. Abha and I ran it on the Raspberry Pi with the microphone hardware attached so that we could get a good estimate of the accuracy of the full system.

Person   Date/Time    Correct   Total   Output file
Jade     4/22 12:20   81        100
Abha     4/22 12:41   91        100
Jade     4/22 12:45   89        100     op2.txt
Abha     4/22 12:48   80        100
Jade     4/22 12:52   91        100     op3.txt
Jade     4/22 1:03    91        100     op4.txt

Average accuracy: 87.2% (523/600)

So far, over six tests, the system is averaging 87% accuracy, which is slightly above our 85% accuracy metric.

I also had the program output a text file with the exact word errors so that we could analyze them.
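The testing loop described above might look something like the following sketch; `accuracy_trial` and the injected `recognize` callable are hypothetical names, since the actual script ran against the microphone hardware on the Pi:

```python
import random

def accuracy_trial(word_list, recognize, n=100, seed=None):
    """Prompt for n random words and score recognition output against them.

    word_list: candidate prompt words (e.g. the 1,000 most common English words)
    recognize: callable prompt -> recognized string; in the real script this
               records from the microphone and runs speech recognition. It is
               injected here so the sketch is self-contained and testable.
    Returns (correct_count, total, error_pairs) so the exact word errors can
    be written to a file for later analysis.
    """
    rng = random.Random(seed)
    errors = []
    correct = 0
    for _ in range(n):
        prompt = rng.choice(word_list)
        heard = recognize(prompt)
        if heard == prompt:
            correct += 1
        else:
            errors.append((prompt, heard))  # keep the exact mismatch
    return correct, n, errors
```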

On Friday, I wrote testing code to measure latency between the Pi and the laptop. It measures the time from when the user finishes giving input to when the system speaks the next line of dialog. So far I have tested this only on one laptop with text inputs, and found that the latency fell within our 4-6 second target range. We still need to test latency for the whole system with speech recognition included.
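A minimal sketch of this kind of latency measurement, assuming the dialog pipeline can be wrapped in a single `respond` callable (a simplification of the real Pi-to-laptop round trip; the names are hypothetical):

```python
import time

def measure_latency(respond, user_input):
    """Time from the end of user input to the system's next line of dialog.

    respond: callable that runs the dialog pipeline and returns the reply;
    in the full system this would cover the Pi -> laptop -> Pi round trip.
    """
    start = time.monotonic()        # the user has just finished their input
    reply = respond(user_input)     # generate (and begin speaking) the reply
    elapsed = time.monotonic() - start
    return reply, elapsed
```

Using `time.monotonic()` rather than `time.time()` avoids skew from system clock adjustments during a measurement.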

I am on schedule.

Next week I will be working on finishing measurements and writing the final paper.

Team Status Update for April 25

This week, Jade and Ashika focused on testing the audio and storytelling components respectively. They also cleaned up minor bugs in the software as they went. Abha continued to work on assembling the robot. Everyone worked on creating the final presentation for next week. There 

Abha’s Status Update for April 18

On Monday, I set up the face display and got code working on it to draw shapes and lines. This was a precursor so that I could debug issues with drawing shapes on a canvas before creating the actual face. On Wednesday, I worked with 

Ashika’s Status Update for April 18

This week, I mainly focused on tying up loose ends and helping the others with integration. As mentioned in the team status report, I modified the program to output the entire story once it is finished, with all of the new words bolded so users can see how KATbot actually filled in the template. Since I do not have the hardware, I cannot provide a picture, but with Abha’s help, I tested it on the Pi. I also made some new templates. I think I am done with the storytelling algorithm unless we get new feedback during the demo next week.
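The report doesn't show how the bolding is done; one possible sketch, assuming the story templates use placeholder slots and the output goes to a terminal (ANSI bold escapes). `render_story` and its parameters are hypothetical names:

```python
def render_story(template, fills, bold=lambda w: f"\033[1m{w}\033[0m"):
    """Fill a story template's blanks and bold the inserted words.

    template: string with {} placeholders for the blanks
    fills:    words chosen by the storytelling algorithm, in order
    bold:     wrapper applied to each filled word; the default uses ANSI
              escapes for a terminal, but any markup can be injected.
    """
    return template.format(*(bold(w) for w in fills))
```

For example, `render_story("The {} chased a {}.", ["dog", "ball"])` prints the filled words in bold on a terminal, making it easy to see which words the algorithm supplied.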

I am still on schedule. Next week will be dedicated to testing and creating the final presentation. I may also work with Abha, if time permits, to do something about the robot’s expressions in relation to the story, but this is outside the scope of the MVP.

Team Status Update for April 18

This week, all three of us worked on integration and improving our system to prepare for the demo next week. Ashika and Abha worked on improving the display, and Jade worked on improving the pi to laptop socket communication. Abha also continued to work on 

Jade’s Status Update for Apr 18

This week, I worked on improving the Pi’s main function as well as cleaning up the socket code. Specifically, I made the program close its sockets when it finishes running and avoid hanging if something happened to a socket

Abha’s Status Update for April 11

On Sunday, I cut out the cardboard pieces for the head and one of the arms. I also found a mini servo motor at home, so I attached it to the arm. I soldered the motor shield that came in last week and attached it to the Raspberry Pi. I ran the servo motor code that I wrote on Friday on the mini servo and debugged a few errors with it. I also programmed four emotions on the eyes display: happiness, sadness, anger, and worry. Finally, I made a video showcasing the progress I had made so far for the interim demo on Monday.

On Monday, I assembled the pieces for the arm. I connected the mini servo to it and was able to rotate the arm via the motor code I wrote. After the interim demo, I tested the arm out more and realized that the mini servo did not have enough torque to move the arm, contrary to what I had hoped. To get around not having servo horns for the regular servos, which do have enough torque, I hot-glued a piece of cardboard to the servo gear. I can attach the servo to the arm via the cardboard piece, which will let me move the arm with a standard servo motor.
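The status reports don't say which library drives the servos; a hedged sketch using RPi.GPIO software PWM, which is one common approach on the Raspberry Pi. A standard hobby servo expects a 50 Hz signal whose roughly 1-2 ms pulse width maps to 0-180 degrees, so the core of the code is an angle-to-duty-cycle conversion:

```python
def angle_to_duty(angle, freq_hz=50, min_ms=1.0, max_ms=2.0):
    """Map an angle in [0, 180] degrees to a PWM duty-cycle percentage.

    Assumes the common 1-2 ms pulse range at 50 Hz; some servos need
    slightly different endpoints, so these are tunable parameters.
    """
    pulse_ms = min_ms + (max_ms - min_ms) * angle / 180.0
    period_ms = 1000.0 / freq_hz
    return 100.0 * pulse_ms / period_ms

def move_arm(pin, angle, hold_s=0.5):
    """Drive a servo on the given BCM pin to an angle (Pi-only sketch)."""
    import time
    import RPi.GPIO as GPIO          # only available on the Pi itself
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    pwm = GPIO.PWM(pin, 50)          # 50 Hz servo control signal
    pwm.start(angle_to_duty(angle))
    time.sleep(hold_s)               # give the servo time to reach the angle
    pwm.stop()
    GPIO.cleanup(pin)
```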

On Wednesday, I downloaded the libraries needed for the ML and audio parts on my laptop, which took much longer than I anticipated, since the libraries were large and I ran into many issues throughout the process. After I set up my laptop with the appropriate libraries and environment, I was able to get Ashika’s ML code and Jade’s audio code working on my laptop. This is essential since the final demo will be running on my laptop and hardware.

On Friday, I downloaded the libraries needed for the audio part on the Raspberry Pi, which also took longer than anticipated. After setting up the Raspberry Pi, I attached the audio hardware to it and got the audio code working through the hardware and the Pi on my end. I also tried to get the text display code working with the Pi and the text display hardware. However, there were a few issues with the libraries and environment setup, and Ashika and I were not able to finish debugging it.

In the upcoming week, I would like to finish debugging and updating the text display code with Ashika, write code to get the face display working now that it came in on Friday, and assemble the head and arms and attach them to the body.

Team Status Update for April 11

This week, Jade created a server/client program for laptop to Raspberry Pi communication, primarily for the audio input/output. Abha worked on installing all of the necessary packages and tools, both on her laptop and on the Raspberry Pi, to run the storytelling algorithm and the audio components.
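The server/client program itself isn't shown; a minimal sketch of one possible shape, with a hypothetical host/port and a single request/response exchange (the real program keeps the connection open for ongoing audio traffic between the Pi and the laptop):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # assumed values; the real setup would use
                                 # the laptop's LAN address

def serve_once():
    """Laptop side: accept one connection, read a message, send a reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024).decode()
            conn.sendall(f"heard: {data}".encode())

def send(text, retries=50):
    """Pi side: connect, send recognized speech, receive the next dialog line."""
    for _ in range(retries):
        try:
            with socket.create_connection((HOST, PORT), timeout=2) as cli:
                cli.sendall(text.encode())
                return cli.recv(1024).decode()
        except ConnectionRefusedError:
            time.sleep(0.05)  # server not listening yet; retry briefly
    raise ConnectionError("server not reachable")

if __name__ == "__main__":
    t = threading.Thread(target=serve_once)
    t.start()
    print(send("hello"))  # prints: heard: hello
    t.join()
```

Using `with` blocks ensures every socket is closed when the program finishes, which matches the cleanup work described in the April 18 update.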