Weekly Status Reports

Ashika’s Status Update for Mar 7

I spent the first part of this week editing and formatting the design report. Once that was finished, I started creating a user program for the storytelling algorithm. The user interface is command line at the moment, and this is how the program works: Program 

Abha’s Status Update for March 7

The first part of my week was spent working on the design report. After that, since the SD cards had come in, I set up a Raspberry Pi with Jade’s help. Then I worked on setting up the display for the text. Although I didn’t 

Jade’s Status Report for Mar 1

This week I primarily spent time working on our design presentation. I created a new system diagram for us to use, which is displayed below. I also spent a decent amount of time practicing for the presentation.

As for what I’ve accomplished on the project, I wrote some code that takes a template Ashika has been using, parses it for all user and generated inputs, and then says it out loud to the user. This was done on my laptop. I’m still working on integrating the speech recognition with this program.
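
As a rough illustration of that flow, here is a minimal sketch. The template format is an assumption (blanks marked like [adjective]), and pyttsx3 stands in for whichever TTS package we end up using.

# Minimal sketch: read a story template aloud, announcing each blank.
# The bracketed-placeholder format and pyttsx3 are assumptions, not our actual setup.
import re
import pyttsx3

def read_template_aloud(path):
    with open(path) as f:
        story = f.read()
    engine = pyttsx3.init()
    # Split on bracketed placeholders, keeping them so each blank is announced.
    for chunk in re.split(r'(\[[a-z ]+\])', story):
        if chunk.startswith('['):
            engine.say('blank for ' + chunk.strip('[]'))  # a user/generated word goes here
        elif chunk.strip():
            engine.say(chunk)
    engine.runAndWait()

read_template_aloud('template.txt')  # hypothetical file name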

I also have been working on the design report.

I am currently behind because I wanted to finish my speech recognition and text-to-speech program this week but was busier than anticipated.

Next week, I hope to finish the speech recognition and text-to-speech program, put it on the Raspberry Pi, and get it working with the microphone and speakers.

Abha’s Status Report for Mar 1

This week, I focused on working on the design presentation/report and CADing the robot arm and body. I had a rough CAD done for the presentation. However, there were a few issues with that design, as I didn’t have time to finalize it before the

Team Status Update for Mar 1

We have no significant changes to our schedule, and the risks of our project have mostly remained the same. This week, we worked on the design presentation and report, as well as our individual tasks. For the design submissions, we created a new system architecture

Ashika’s Status Update for Mar 1

This week, I focused on two aspects of the project. First, I finished up the storytelling portion of the design report. I had already ironed out the details and researched the metrics and validation prior to the design presentation, but I edited the report in accordance with the feedback we received (i.e., being clearer about where design choices came from and detailing how we plan to keep our validation objective).

In addition, I created a program to generate synonyms or antonyms of a word given its part of speech. NLTK already has a tool that does this, but the tool by itself does not always output the words you are looking for. For example, when looking for the antonyms of ‘sad’, the tool does not output ‘happy’. Since synonym generation is crucial to the performance of the story-generation module, I created an algorithm that does a breadth-first search for any possible synonyms and antonyms. Since the output of this algorithm will be fed into FitBERT, which ranks the words by best fit, it is better to generate a longer list than to stop after just one possible word is found. I am not yet sure how many word possibilities to generate, and I will play around with the list size once I start working with FitBERT.

Even with the BFS algorithm, the model has a hard time linking some obviously related words. For example, “enormous” and “big” are not identified as synonyms of each other, even at a distance of 2 or 3. As I work on the FitBERT portion, I think I may need a different synonym generation tool to supplement this program. The most accurate option would be to consult thesaurus.com, but that slows down the program and requires Wi-Fi. I might create multiple versions of this program and test them for speed and accuracy against the actual stories we will be using, just to ensure we still meet our requirements.

Here are some examples of the inputs and outputs this program generates:

input: angry, synonyms, adjective
output: {'wild', 'furious', 'raging', 'tempestuous', 'angry'}

input: sad, antonyms, adjective
output: {'glad'}

input: quickly, synonyms, adverb
output: {'promptly', 'chop-chop', 'rapidly', 'quickly', 'speedily', 'apace', 'quick', 'cursorily'}
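
For reference, here is a minimal sketch of how a breadth-first search like this could be written on top of NLTK’s WordNet interface. This is an illustrative reconstruction, not the actual program: the traversal links (similar-to and also-see) and the search depth are assumptions.

# Illustrative sketch: BFS over WordNet, collecting lemmas (or their antonyms).
# Requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

POS_MAP = {'adjective': wn.ADJ, 'adverb': wn.ADV, 'noun': wn.NOUN, 'verb': wn.VERB}

def related_words(word, relation, pos, max_depth=2):
    results = set()
    frontier = wn.synsets(word, pos=POS_MAP[pos])
    for _ in range(max_depth):
        next_frontier = []
        for synset in frontier:
            for lemma in synset.lemmas():
                if relation == 'synonyms':
                    results.add(lemma.name().replace('_', ' '))
                else:  # collect antonyms of each synonym encountered
                    for ant in lemma.antonyms():
                        results.add(ant.name().replace('_', ' '))
            # Expand one hop outward through WordNet's relatedness links.
            next_frontier.extend(synset.similar_tos())
            next_frontier.extend(synset.also_sees())
        frontier = next_frontier
    return results

print(related_words('angry', 'synonyms', 'adjective'))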

Next week, I plan to start writing the algorithm that fills in the blank based on previous user input. I am a little ahead of schedule since I have already finished the groundwork for the other components, so the extra slack time will let me go back and fine-tune them. I will also need many more templates for this, so I will spend the first half of the week creating at least 5 more for testing.
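
Since the candidate lists above will ultimately be ranked by FitBERT, the fill-in step could look roughly like the sketch below. The masked sentence and candidate list are made-up examples, not from our actual templates.

# Rough sketch of ranking candidate words with FitBERT.
# The sentence and candidates here are illustrative only.
from fitbert import FitBert

fb = FitBert()  # loads a pretrained BERT model on first use

masked = 'The giant was so ***mask*** that he stomped his feet.'
candidates = ['wild', 'furious', 'raging', 'tempestuous', 'angry']
ranked = fb.rank(masked, options=candidates)
print(ranked[0])  # the word BERT judges the best fit for the blank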

Abha’s Status Report for Feb 22

The first thing I did this week was work with Jade to determine the communication protocols between all of the hardware in this project. After we determined them, I created the system architecture diagram for our design report/presentation and for future reference. This week, my 

Jade’s Status Update for Feb 22

This week I investigated many different text-to-speech packages. I evaluated the TTS packages on ease of installation and use, whether they required an internet connection, and the quality of the voice. A lot of the text-to-speech packages had robotic or choppy voices,

Team Status Update for Feb 22

We have no changes to the existing design and are currently on schedule. We have been digging into how, specifically, we will implement our requirements, as well as working on our design report and presentation. We finalized the communication protocols we will use between the peripherals, Raspberry Pis, and laptop. This led to finalizing the system architecture diagram as well (image above). In addition, this week, Abha worked on the robot arm design and overall robot design and began CADing, Jade worked on evaluating speech processing/TTS packages and setting up the Raspberry Pis, and Ashika worked on story template generation and parts-of-speech detection.

Currently, the most significant risks remain the same as last week. The only change is with the speech recognition and TTS system. We chose packages for speech recognition and TTS that rely on an internet connection. This poses a risk, as our robot should be able to work even when not connected to the internet. To mitigate this risk, we will rely on internet-based speech recognition and TTS by default, and should the Raspberry Pi disconnect, we will fall back to backup speech recognition and TTS packages that can operate offline. The reason we aren’t using offline packages in the first place is that they often have worse speech recognition and a very robotic, choppy synthesized voice.
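
A minimal sketch of this fallback behavior is below. The post does not name the packages, so gTTS (online) and pyttsx3 (offline) are stand-ins for whatever we ultimately choose.

# Sketch of online-first TTS with an offline fallback (package choices assumed).
import pyttsx3
from gtts import gTTS

def speak(text):
    try:
        # Preferred path: internet-based TTS with a more natural voice.
        gTTS(text=text, lang='en').save('utterance.mp3')
        # (Playing utterance.mp3 through the speakers is left out of this sketch.)
    except Exception:
        # Fallback path: offline synthesis; more robotic, but always available.
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

speak('Once upon a time...')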

Ashika’s Status Update for Feb 22

This week, I started writing the code for story creation. I wrote two Python programs: the first checks that the part of speech of a word matches the expected part of speech, given a sentence for context. This will be used for error detection on
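
A minimal sketch of that first check is below, assuming NLTK’s part-of-speech tagger (the post does not specify the tool, so the tagger choice and tag prefixes are assumptions).

# Sketch: check whether a word's POS in context matches the expected POS.
# Requires: nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')
import nltk

def pos_matches(word, sentence, expected_tag_prefix):
    # Penn Treebank prefixes: 'JJ' = adjective, 'NN' = noun, 'VB' = verb, 'RB' = adverb.
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    for token, tag in tagged:
        if token.lower() == word.lower():
            return tag.startswith(expected_tag_prefix)
    return False

print(pos_matches('happy', 'The happy dog ran home.', 'JJ'))  # True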