Weekly Status Reports

Team Status Update for April 18

This week, all three of us worked on integration and on improving our system to prepare for next week’s demo. Ashika and Abha worked on improving the display, and Jade worked on improving the Pi-to-laptop socket communication. Abha also continued to work on

Jade’s Status Update for Apr 18

This week I worked on improving the Pi’s main function as well as cleaning up the socket code. Specifically, I worked on making the program close its sockets when it finishes running, and on making it not hang if something happens to a socket.
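The cleanup pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not our actual main function: `run_client` and its timeout value are made-up names. The two ideas are a socket timeout (so a dead peer can’t make `recv` block forever) and a `try`/`finally` (so the socket is closed no matter how the loop exits).

```python
import socket

def run_client(host: str, port: int, timeout_s: float = 5.0) -> list[bytes]:
    """Read chunks from a server until it closes or goes quiet."""
    chunks = []
    sock = socket.create_connection((host, port), timeout=timeout_s)
    try:
        sock.settimeout(timeout_s)  # applies to every recv/send afterwards
        while True:
            try:
                data = sock.recv(1024)
            except socket.timeout:
                break  # peer went quiet; give up instead of hanging
            if not data:
                break  # peer closed the connection cleanly
            chunks.append(data)
    finally:
        sock.close()  # always runs, even if recv raised
    return chunks
```

The `finally` block is what guarantees the socket is released even when an unexpected error interrupts the loop.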

Abha’s Status Update for April 11

On Sunday, I cut out the cardboard pieces for the head and one of the arms. I also found a mini servo motor at home, so I attached it to the arm. I soldered the motor shield that came in last week and attached it to the Raspberry Pi. I ran the servo motor code that I wrote on Friday on the mini servo and debugged a few errors with it. I also programmed four emotions on the eyes display: happiness, sadness, anger, and worry. Finally, I made a video showcasing the progress I had made so far for the interim demo on Monday.

On Monday, I assembled the pieces for the arm. I connected the mini servo to it and was able to rotate the arm via the motor code I wrote. After the interim demo, I tested the arm more and realized that the mini servo did not have enough torque to move the arm, as I had hoped it would. To get around not having servo horns for the regular servos, which do have enough torque, I hot-glued a piece of cardboard to the servo gear. I can attach the servo to the arm via the cardboard piece, which will let me move the arm with a normal servo motor.
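The math behind the servo code can be sketched with a small helper. This is a hypothetical function, not our actual script, and the pulse widths are assumptions: a typical hobby servo driven at 50 Hz expects roughly a 0.5 ms pulse at 0° and a 2.5 ms pulse at 180°, but the exact range varies by servo model.

```python
PERIOD_MS = 20.0     # 50 Hz PWM period
MIN_PULSE_MS = 0.5   # assumed pulse width at 0 degrees (varies by servo)
MAX_PULSE_MS = 2.5   # assumed pulse width at 180 degrees (varies by servo)

def angle_to_duty(angle_deg: float) -> float:
    """Convert a target angle (0-180) to a PWM duty-cycle percentage."""
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("angle must be within 0-180 degrees")
    pulse_ms = MIN_PULSE_MS + (angle_deg / 180.0) * (MAX_PULSE_MS - MIN_PULSE_MS)
    return pulse_ms / PERIOD_MS * 100.0
```

On the Pi, a value like this would be handed to the PWM driver (e.g. `ChangeDutyCycle` in RPi.GPIO) to position the servo.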

On Wednesday, I downloaded the libraries needed for the ML and audio parts on my laptop, which took much longer than I anticipated, since the libraries were large and I ran into many issues throughout the process. After I set up my laptop with the appropriate libraries and environment, I was able to get Ashika’s ML code and Jade’s audio code working on my laptop. This is essential since the final demo will be running on my laptop and hardware.

On Friday, I downloaded the libraries needed for the audio part on the Raspberry Pi, which also took longer than anticipated. After setting up the Raspberry Pi, I attached the audio hardware to it and got the audio code working through the hardware and Pi on my end. I also tried to get the text display code working with the Pi and the text display hardware. However, there were a few issues with the libraries and environment setup, and Ashika and I were not able to finish debugging it.

In the upcoming week, I would like to finish debugging and updating the text display code with Ashika, write code to get the face display working (it came in on Friday), and assemble the head and arms and attach them to the body.

Team Status Update for April 11

This week, Jade created a server/client program for laptop-to-Raspberry Pi communication, primarily for the audio input/output. Abha worked on installing all the necessary packages and tools, both on her laptop and on the Raspberry Pi, to run the storytelling algorithm and the audio components.

Ashika’s Status Update for April 11

This week, for the storytelling component alone, I finally finished grammar correction for nouns and created a few more templates. I also changed the format slightly to improve sentence tokenization, which makes it easier to send the story sentence by sentence to the

Jade’s Status Report for Apr 11

On Monday I worked on cleaning up the socket interface code. I wrote read and write functions for both the server and the client.
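Shared read and write helpers like the ones described might look as follows. This is a hypothetical sketch, not our actual interface code: it assumes a simple framing scheme where every message is a 4-byte big-endian length followed by the payload, so the server and client can reuse the same two functions.

```python
import socket
import struct

def write_msg(sock: socket.socket, payload: bytes) -> None:
    """Send one length-prefixed message."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or raise if the peer closes mid-message."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def read_msg(sock: socket.socket) -> bytes:
    """Receive one length-prefixed message."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```

The length prefix matters because TCP is a byte stream: without it, a reader cannot tell where one message ends and the next begins.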

On Wednesday I worked on integrating the socket code into our program. This consisted of creating a new main file for the Pi and modifying some of the user program on the ML side. I also worked on installing the packages needed for the ML code on the Pi in a last-ditch attempt to get the ML code running there. The ML package installs did not work, so we will be running the ML code on the laptop and passing data to the Pi.

On Friday, Ashika and I worked on integrating the storytelling program with the socket code. We got the program working to the point where you can type inputs on the Pi, have them passed to the laptop, and have the laptop pass back the next line of dialogue. The storytelling program works when all inputs match, but sometimes the communication protocol gets into a bad state and hangs when there are timeout or unknown-word errors.

I am behind because integration is taking far longer than I expected. Mostly this is because I need to make sure the socket code works well and won’t hang, and I also need to make sure it works with audio inputs.

Next week I will work on making sure that the socket code won’t hang as well as ensuring that the whole audio input/output system works during the storytelling process.
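One direction for the no-hang work is to make the laptop side always send some reply, tagging errors instead of going silent, so the Pi never blocks waiting for a response that will not come. The sketch below is hypothetical (`reply_for` and its tags are illustrative names, not our protocol), but it shows the shape of the idea.

```python
def reply_for(user_input: str, vocabulary: set[str]) -> str:
    """Produce a reply for every input: either the next line or a tagged error."""
    try:
        words = user_input.split()
        unknown = [w for w in words if w not in vocabulary]
        if unknown:
            # Tagged error instead of silence, so the other side can recover
            return "ERR unknown-word " + unknown[0]
        return "OK " + " ".join(words)  # stand-in for the next dialogue line
    except Exception as exc:  # never let an error swallow the reply entirely
        return "ERR internal " + type(exc).__name__
```

With every exchange guaranteed a response, a read timeout on the Pi side becomes a genuine failure signal rather than a routine occurrence.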

Abha’s Status Report for April 4

On Sunday, I tried to debug the issues with showing an image on the eyes display through an SD card. Initially, my plan was to put an image of a cat eye on the SD card and have the eye display read it. Most of

Ashika’s Status Update for April 4

This week I started integration with Abha for the text display. On my end, I created a display window (shown above) in Python that runs on a separate thread from the main program. This window is the same size as the Pi’s display and shows the

Team Status Update for April 4

What we did:

Abha worked on the face display on the hardware side, updated the CAD files to reflect the design changes for the head, and wrote code to move motors via a Raspberry Pi. Ashika wrote code to display the sentence output on a graphic screen (which will later be shown on the text display) and created templates. Jade finished getting the audio input and output working on the Pi and started getting the ML algorithm running on the Raspberry Pi.

Major design changes:

When Abha worked on displaying eyes on the text displays, we realized that an important part of showing expressions is the shape of the mouth (smile, frown, neutral, etc.). With just the eyes changing shape, it is very hard to discern sentiment.

Since the goal of the eyes display was to change the eye shape to convey sentiment, we needed to rescope this part of the project to accomplish that goal. We will now have a full face display that shows eyes, a nose, and a mouth, and we will be able to update the eyes and mouth to better convey sentiments.
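The full-face idea could be made data-driven with a small lookup. This is a hypothetical sketch (the shape names are placeholders, not our asset names): each sentiment selects an (eyes, mouth) pair that the display code would then render.

```python
# Hypothetical sentiment -> (eye shape, mouth shape) table for the face display.
FACES = {
    "happy":   ("open",   "smile"),
    "sad":     ("droopy", "frown"),
    "angry":   ("narrow", "frown"),
    "worried": ("wide",   "wavy"),
}

def face_for(sentiment: str) -> tuple[str, str]:
    """Return the eye and mouth shapes for a sentiment, neutral by default."""
    return FACES.get(sentiment, ("open", "neutral"))
```

Keeping the mapping in one table would make it easy to add sentiments later without touching the rendering code.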

Risk management:

The risks haven’t changed since last week.

Jade’s Status Report for April 4

On Sunday I decided to tackle the choppy audio quality I was getting from the audio playback. I was using pydub’s AudioSegment to play the sound back, but it was choppy, so I investigated a few other audio processing packages, including pyglet, pygame, tkSpeak,