Rachel’s Status Report for 12/4

This week, I mainly focused on collecting more real-time data to test our glove's performance under different conditions, as well as creating slides for our final presentation. I tested the glove by varying the number of consecutive identical predictions it must see before outputting the signal, and by observing its performance at different signing rates (the user's speed). In doing so, I realized that both the number of consecutive predictions we require and the buffer we allow between signs will have to depend on our target user: a novice learner would probably want more buffer time, while someone well-versed in the ASL letters would want that buffer reduced so they can sign smoothly. I also created additional plots (such as the data points of two entirely overlapping signs) to show in our presentation why the glove performs more poorly on certain letters.
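As an illustration, the consecutive-prediction gate described above can be sketched roughly like this (the function names and the threshold of 3 are hypothetical, not our actual code):

```python
def make_debouncer(required):
    """Return a feed() function that emits a letter only after it has been
    predicted `required` times in a row; otherwise it returns None."""
    state = {"letter": None, "count": 0}

    def feed(letter):
        if letter == state["letter"]:
            state["count"] += 1
        else:
            state["letter"] = letter
            state["count"] = 1
        if state["count"] == required:
            return letter  # emit exactly once per run of identical predictions
        return None

    return feed

debounce = make_debouncer(3)
stream = ["A", "A", "B", "B", "B", "B"]
emitted = [s for s in (debounce(p) for p in stream) if s]
# "A" never reaches 3 in a row, so only "B" is emitted, exactly once
```

Raising `required` trades responsiveness for fewer spurious outputs during transitions, which is exactly the novice-vs-expert tradeoff described above.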

While collecting data for these graphs, I also found some errors in the serial communication code that would occasionally cause the script to hang, so I fixed those this past week. This coming week, I will work with the rest of my team to complete the final report and create a video for our final demo. The glove also needs to be repaired and its sensors secured, so I will help secure the sensors so that the data we get from them remains consistent (another problem we found throughout the semester). After the glove is fixed, I will re-collect data from several people so that we can train our final model. Since everything we have left is clearly outlined and matches what we expected to be doing this week, I would say we are right on track!
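Hangs like this usually come from a blocking read with no deadline. Below is a hedged sketch of a timeout-guarded read loop; the helper and the stub reader are illustrative only (our actual script reads from the Arduino, where pySerial's built-in `timeout` parameter serves the same purpose):

```python
import time

def read_packet(read_byte, timeout_s=1.0, terminator=b"\n"):
    """Accumulate bytes until the terminator arrives, but give up after
    timeout_s so a silent serial port cannot hang the script forever.
    `read_byte` returns one byte, or b"" when nothing is available."""
    deadline = time.monotonic() + timeout_s
    buf = bytearray()
    while time.monotonic() < deadline:
        b = read_byte()
        if not b:
            continue
        if b == terminator:
            return bytes(buf)
        buf += b
    return None  # timed out: the caller can retry instead of blocking

# Stub standing in for serial.Serial(...).read(1) from pySerial.
data = iter([b"4", b"2", b"\n"])
packet = read_packet(lambda: next(data, b""))
# packet is b"42"; a port that never sends the terminator yields None
```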

Team’s Status Report for 12/4

Over the last week we worked mostly on making the final presentation slides together. We also started on improvements beyond our original scope, such as adding contact sensors and, if possible, making the connection wireless over Bluetooth. There were some issues transmitting data from the Arduino to the PC, but those were resolved this week.

This week, we decided to stay with our original approach of a wired connection, since Bluetooth adds enough delay that we might not meet our latency goal. We also worked on stabilizing the sensors and sewing them down more tightly. Now that we have made these adjustments, we will be collecting data next week.

We are on schedule, since we have finished the product of our original scope; our current work consists of add-ons to improve the glove. As mentioned before, we will be collecting data from various people next week and finishing our final video and report.

Stephanie’s Status Report for 12/4

Last week, I helped make the final presentation slides and generated graphs for our machine learning models, comparing the trade-offs and accuracies from different rounds of testing for our final presentation.

This week, we're mostly making adjustments to the physical components of the glove, so there isn't much to do on the software end. I wrote a new training script, since the data we collect from now on will have a different format than before (we added more sensors). With this, we can start model training right away after data collection.
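A rough sketch of how such a script might accept both the old and new sample formats (the column counts and zero-padding strategy here are hypothetical, not our actual layout):

```python
import csv
import io

# Hypothetical layouts: old rows had 5 flex readings + a label,
# new rows add 3 contact-sensor readings before the label.
OLD_WIDTH, NEW_WIDTH = 6, 9

def load_samples(fileobj):
    """Parse CSV rows into (features, label), accepting either format.
    Old-format rows are zero-padded so all feature vectors match."""
    samples = []
    for row in csv.reader(fileobj):
        if len(row) not in (OLD_WIDTH, NEW_WIDTH):
            continue  # skip malformed rows
        *features, label = row
        feats = [float(v) for v in features]
        if len(feats) == OLD_WIDTH - 1:
            feats += [0.0] * (NEW_WIDTH - OLD_WIDTH)  # pad missing contact values
        samples.append((feats, label))
    return samples

rows = "10,20,30,40,50,A\n10,20,30,40,50,1,0,1,B\n"
data = load_samples(io.StringIO(rows))
# both rows parse to 8-element feature vectors; the first is zero-padded
```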

Next week, we'll meet up to collect data once the sensors are all tightly attached, and finish up our final report and video. I have also contacted others for data collection.


Sophia’s Status Report for 12/4

For the last two weeks, I worked on making more repairs to the glove and adding touch sensors. The ring finger sensor on the glove was not responding, so I swapped it out for a new one and stitched it down very tightly. I borrowed a sewing kit from IDeATe and sewed it down with full-thickness embroidery thread.

I also created a new protoboard to make room for pins for touch sensors.

For touch sensors, I decided to use some conductive tape.

Unfortunately, there seem to be issues with the new protoboard, and more of the flex sensors are not working. I will need to replace the thumb and fourth-finger flex sensors next and debug my work on the protoboard.

Team Status Report for 11/20

This week we implemented the audio portion of our project: when a sign is made, the letter is now spoken by the computer, and the audio output meets the rate we set in our requirements at the beginning of the semester. We also implemented the suggestion we received during interim demos to create additional models that differentiate between letters commonly mistaken for one another (R, U, V, W and M, N, S, T, E). However, this did not noticeably improve functionality. We will continue analyzing the data from those easily confused letters to see whether we can improve the accuracy; adding contact sensors may be the best solution.

The Arduino Nano BLE arrived, and we are working on integrating Bluetooth into our system so the glove can work wirelessly. There have been some issues uploading sketches to the Nano BLE, which we hope to resolve next week.

The reading from the flex sensor on the ring finger of the glove also stopped varying (it reports roughly 256 degrees no matter how much you bend it), so we replaced the flex sensor. There is also some calibration we should do in the Arduino script to improve the conversion from voltage to degrees. Now that we have repaired the glove, we will need to recollect data.

We are on schedule since the audio integration is now complete. In the final weeks, we will continue refining the system by changing the hyper-parameters of our model.  We will also create plots showing performance to add to our final presentation, paper, and video.

Stephanie’s Status Report for 11/20

This week I worked on integrating the letter-specific models with our old classifier. The original classifier is responsible for identifying the input data as one of the 26 letters. If the predicted letter is in one of the groups of similar letters that the original classifier easily confuses, notably R, U, V, W and M, N, S, T, E, the data is then passed to one of the smaller models for further classification. This gave slightly better results and produced more correct classifications.
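A minimal sketch of this two-stage routing, with stand-in objects in place of our trained classifiers (the `Fixed` stub and the `predict` interface are assumptions for illustration):

```python
# The two clusters of commonly confused letters described above.
CONFUSION_GROUPS = [{"R", "U", "V", "W"}, {"M", "N", "S", "T", "E"}]

def classify(features, main_model, group_models):
    """Run the 26-way main model first; if its prediction lands in a
    confusion group, defer to that group's specialized model."""
    letter = main_model.predict(features)
    for i, group in enumerate(CONFUSION_GROUPS):
        if letter in group:
            return group_models[i].predict(features)  # refine within the group
    return letter

class Fixed:
    """Stand-in classifier that always predicts one letter."""
    def __init__(self, letter):
        self.letter = letter
    def predict(self, _features):
        return self.letter

# Main model says "U" (in the first group); the group model corrects to "V".
result = classify([0.0], Fixed("U"), [Fixed("V"), Fixed("S")])
```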

I performed further data analysis to see whether there was any other way to improve the model. I found that some data contained invalid values (negative readings from the flex sensors) that my old outlier-removal function did not catch, so I refined it to get better data. The sensor values for U and V are quite similar, so it may be hard for a classifier to distinguish between them. R's range of values for the middle finger is quite different from the rest of the letters, so to improve future models for this specific group of letters, I may train only on the most important features, e.g., the flex sensor values.
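The refined outlier check amounts to range-filtering each sample's flex readings. A hedged sketch (the bounds and sample layout are illustrative, not our actual thresholds):

```python
def drop_invalid(samples, n_flex=5, low=0.0, high=300.0):
    """Discard samples whose flex readings fall outside a plausible range;
    negative values indicate a sensor or transmission fault.
    Each sample is a (features, label) pair with flex readings first."""
    return [
        (feats, label)
        for feats, label in samples
        if all(low <= v <= high for v in feats[:n_flex])
    ]

raw = [([10, 20, 30, 40, 50], "A"),
       ([-3, 20, 30, 40, 50], "B")]  # negative flex reading: invalid
clean = drop_invalid(raw)
# only the "A" sample survives the filter
```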

During this analysis, I also found that the ring finger's flex sensor value for W (finger straight) is quite similar to that for R, U, and V (finger bent). Upon further testing with the glove, I found that the flex sensor on the ring finger is not working as intended and gives about the same value regardless of bending angle, so we are looking to replace that sensor.

I believe we are still on schedule. In the next week, I'll be collecting new data after the flex sensor has been fixed. I'll also be doing some real-time testing and experimenting with new ideas for the smaller models (e.g., training on only the important features, or changing hyperparameters to produce models with lower accuracy on the collected data but better generalizability). After that, we'll be working on our final presentation and report.

Sophia’s Status Report for 11/20

This week I repaired the glove and worked on connecting with the Arduino Nano BLE over Bluetooth.

The flex sensor on the ring finger of the glove stopped reading values: there was no variation in the reading when the sensor was bent, so I replaced that flex sensor. We also noticed some calibration in the Arduino script we could have completed to make the transformation from voltage to bending angle more accurate. Since we had to repair the glove anyway, we will do the calibration before collecting the next set of data.
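The calibration itself lives in the Arduino sketch, but the underlying math is just a two-point linear map from voltage to angle. Here is an illustrative Python version (the voltages and the 90-degree bend are made-up example values, not measurements from our glove):

```python
def calibrate(v_flat, v_bent, angle_bent=90.0):
    """Two-point linear calibration: record the raw voltage with the
    finger flat (0 degrees) and fully bent (angle_bent degrees), then
    map any later reading onto that line."""
    slope = angle_bent / (v_bent - v_flat)  # degrees per volt
    return lambda v: (v - v_flat) * slope

# Example calibration points: 1.10 V flat, 2.30 V at a 90-degree bend.
to_angle = calibrate(v_flat=1.10, v_bent=2.30)
angle = to_angle(1.70)  # a reading halfway between the two points
# halfway in voltage maps to halfway in angle: 45 degrees
```

Per-sensor calibration like this absorbs manufacturing variation between flex sensors, which matters once sensors start getting replaced.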

The Arduino Nano 33 Sense with BLE arrived. I spent a considerable amount of time trying to upload a basic sketch to it, but for some reason it won't upload; I'll continue working with the new Nano next week. I also looked deeper into how to connect the Nano 33 BLE to the PC over Bluetooth. It turns out BLE (Bluetooth Low Energy) is not compatible with classic Bluetooth and is easiest to interface with from a mobile device; there are smartphone apps whose sole purpose is reading from the Nano BLE or other BLE devices. It is a bit more difficult to connect and read its serial output from a PC, but not impossible. One drawback of connecting this way is a slower transmission rate. Once we get transmission working, we can test just how much slower the data stream is and whether that affects our ability to meet our requirements.

Rachel’s Status Report for 11/20

I spent most of this week trying to figure out what was going wrong with the audio output for our system. When I made a gesture, it would consistently say "A" repeatedly, and nothing else would be output until the stream of "A"s finished. I thought this was because the glove was continually making predictions and the audio outputs were overlapping each other. However, after trying many different things, I figured out that the audio files themselves were corrupted. I'm not really sure what happened, but I regenerated the audio files for each letter and they're fine now, so the audio integration is complete. Currently, our system makes approximately 20 predictions per second but only outputs the audio once it gets 7 of the same prediction in a row. This number was chosen to achieve our specification of 2 gestures per second while leaving some room for transitions.
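A quick sanity check of those numbers shows why 7 consecutive predictions still fits within the 2-gestures-per-second budget:

```python
PREDICTIONS_PER_SEC = 20
CONSECUTIVE_REQUIRED = 7

# Minimum time to accumulate 7 identical predictions in a row:
min_output_time = CONSECUTIVE_REQUIRED / PREDICTIONS_PER_SEC  # 0.35 s

# Time budget per gesture at the 2-gestures-per-second spec:
budget = 1 / 2  # 0.50 s

# Slack left over for hand transitions between signs:
headroom = budget - min_output_time  # 0.15 s
```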

I also modified the script to use smaller models for the letters R, U, V, and W as well as M, N, S, T, and E. When our main model outputs any of those letters, the data gets put through another model for further classification; these smaller models only select from the group of easily confused letters. We found that this gives ever so slightly better results, and we are still doing data analysis to determine which data values to train on and whether these letters are even distinguishable from our sensor data.

I believe we are still on track, since the functionality of our glove is complete. However, we did find that one of the flex sensors was not outputting useful data (it consistently reads around 256 degrees no matter the amount of bend), so we will need to replace that sensor and recollect data. After that, all that is left is to prepare our final presentation, make the final video, and revise and extend our design review report into the final report. After Thanksgiving, I will collect data with our fixed glove and do all the final report, video, and presentation work!

Team’s Status Report for 11/13

This week we had our interim demo presentations, where we showed our glove classifying the ASL alphabet in real time and speaking each letter. There were some latency issues: the speech output for a letter would keep playing even when a different letter was being signed. There were also some issues differentiating between similar signs. We received a lot of great feedback on fixing these problems. To fix the latency, we will try to improve the handshaking protocol. To improve classification, we can build a model around the letters that are easily confused and use this more specific model any time the general model classifies a letter that falls into one of these groups of similar signs.

The professors also recommended we improve our product to work wirelessly so that it is more usable/portable. As per their feedback, we have ordered a Bluetooth-enabled Arduino Nano to replace our current Nano.

Additionally, we are working on collecting data from a wide variety of people. Each team member is taking turns bringing the glove home to collect data from roommates. We will also bring our glove to Daniel Seita, an ASL user at CMU, to help train our model.

Finally, we will begin working on the final presentation and report.

Sophia’s Status Report for 11/13

This week we had our interim demos. Based on the feedback, we decided to try to make our glove wireless in our remaining time to make it more usable. I ordered a new Arduino Nano with BLE and some batteries as power sources. Next week, I will modify the glove so that the battery can be secured on it and connected correctly to the Nano, and I will work on pairing the Nano with the computer running the script. I also plan to add touch sensors to help differentiate between the letters E, M, N, S, and T. This could require us to collect a new set of data; however, after discussing with the others, it seems we could instead add the touch-sensor readings as conditionals in the script.

We are on schedule. We adjusted our old schedule to reflect our new plans; the changes mainly extend data collection and add the modifications based on feedback.