Team’s Status Report for 12/4

In the last week, we worked mostly on making the final presentation slides together. We also started on improvements to the glove that are beyond our original scope, such as adding contact sensors and, if possible, making the connection wireless via Bluetooth. There were some issues transmitting data from the Arduino to the PC, but those were resolved this week.

This week, we decided to stick with our original approach of a wired connection, since Bluetooth adds enough delay that we might not meet our latency goal. We also worked on stabilizing the sensors and sewing them down more tightly. Now that we have made these adjustments, we will be collecting data next week.

We are on schedule, since we have finished the product as originally scoped. Our current work consists of add-ons to improve the glove. As mentioned before, next week we will be collecting data from a variety of people and finishing our final video and report.

Team’s Status Report for 11/13

This week we had our interim demo presentations. For the demo, we showed our glove classifying the ASL alphabet in real time and speaking each letter aloud. There were some issues with latency: the speech output for a letter would continue even after a different letter was being signed. There were also some issues differentiating between similar signs. We received a lot of great feedback for fixing these problems. To fix the latency, we will try to improve the handshaking protocol. To improve classification, we can build a specialized model for each group of easily confused letters and invoke it whenever the general model outputs a letter from one of those groups.
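A minimal sketch of this two-stage idea (the confusion group and the stand-in models here are hypothetical illustrations, not our actual trained models):

```python
# Hypothetical confusion groups; the real groups would come from the
# confusion matrix of the general model (e.g., M vs. N).
CONFUSION_GROUPS = [{"M", "N"}]

def classify(features, general_model, specialist_models):
    """Run the general model; defer to a specialist model whenever the
    prediction falls into a group of easily confused letters."""
    letter = general_model(features)
    for group, specialist in zip(CONFUSION_GROUPS, specialist_models):
        if letter in group:
            return specialist(features)
    return letter

# Dummy stand-in models for illustration only.
general = lambda f: "M"
specialist_mn = lambda f: "N" if f[0] > 0.5 else "M"

print(classify([0.9], general, [specialist_mn]))  # → N
```

In practice the general model and each specialist would be separately trained classifiers; the specialist sees only data from its own group, so it can focus on the features that actually separate those letters.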

The professors also recommended we improve our product to work wirelessly so that it is more usable/portable. As per their feedback, we have ordered a Bluetooth-enabled Arduino Nano to replace our current Nano.

Additionally, we are working on collecting data from a wider variety of people. Each team member is taking turns bringing the glove home to collect data from roommates. We will also bring our glove to Daniel Seita, an ASL user at CMU, to collect data for training our model.

Finally, we will begin working on the final presentation and report.

Team’s Status Report for 11/6

This week, we implemented our new data collection process (collecting a window of three snapshots) and collected data from each team member. We also integrated real-time testing with the glove so the software system can record the accuracy of each letter tested. We found that the letters with movement (J and Z) perform much better now and the classifier recognizes them more frequently. We also found that the classifier recognizes dissimilar letters well but still gets confused on similar letters (e.g., M and N); for it to discern between them better, we will need more data. Overall, using a window of three data points has improved accuracy compared to our older model. The system also recognizes about 3-4 gestures per second with the new data collection process, a rate that is more suitable for actual usage since the average signing rate is about 2 signs per second.
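The windowing step can be sketched like this; the two-channel snapshots are made up for illustration (our real snapshots carry many more sensor channels):

```python
from collections import deque

WINDOW = 3  # snapshots per classification, as described above

def make_windower(window=WINDOW):
    """Return a function that buffers snapshots and, once the window is
    full, flattens the last `window` snapshots into one feature vector."""
    buf = deque(maxlen=window)
    def push(snapshot):
        buf.append(list(snapshot))
        if len(buf) < window:
            return None  # not enough history yet
        return [v for snap in buf for v in snap]
    return push

push = make_windower()
push([1, 2])
push([3, 4])
print(push([5, 6]))  # → [1, 2, 3, 4, 5, 6]
```

Because the buffer slides by one snapshot at a time, each new reading yields a fresh window, so the classifier still runs at the sensor rate while seeing a short history of motion.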

We also met with an ASL user and an interpreter. Both gave useful feedback on ethics, use cases, and the capabilities of the glove.

In terms of schedule, we are on track. Next week we will meet with the ASL users we have contacted to collect data from them; ideally, we will get people with a range of hand sizes. We will also start refining our classifier to better distinguish similar letters and begin implementing the audio output.


Team’s Status Report for 10/30

This week, we collected data from each of our team members and integrated the glove with the software system so that classification can be done in real time. We found that some letters we expected to have high accuracy performed poorly in real time; namely, the letters with movement (J and Z) did not do well. We also found that different letters performed poorly for each group member.

After our meeting with Byron and Funmbi, we had a number of things to try. To see whether our issue was with the data we had collected or with the placement of the IMU, we did some analysis on our existing data and also moved the IMU from the wrist to the back of the palm. We found that the gyroscope and accelerometer data for the letters with movement are surprisingly not variable, which means that during real-time testing the incoming data was likely different from the training data and thus classified poorly. With the IMU on the back of the hand, the model reaches 98% accuracy on data collected from Rachel alone; we will test it in real time this coming week.
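The per-channel check behind this analysis can be sketched roughly as follows (the sample values are made up; low variance on a gyroscope or accelerometer channel for J or Z would indicate the recorded data does not actually capture the motion):

```python
import statistics

def channel_variances(samples):
    """Given equal-length sensor readings for one letter, return the
    population variance of each channel across the samples."""
    channels = zip(*samples)
    return [statistics.pvariance(ch) for ch in channels]

# Two fake samples with two channels: the first channel never moves,
# the second varies between readings.
print(channel_variances([[0.0, 1.0], [0.0, 3.0]]))  # → [0.0, 1.0]
```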

We also found that our system can currently classify about 8.947 gestures per second, though this number will change once we incorporate the audio output. This rate is also higher than actual usage requires, since people cannot sign that fast.

We are also in contact with a couple of ASL users who the office of disabilities connected us with.

We are still on schedule. This week we will work on parsing the letters (rather than continually classifying them) and, ideally, collect data from a variety of people with different hand sizes. We will also experiment with capturing data over a time interval to see if that yields better results, improve the construction of the glove by sewing down the flex sensors more tightly (so that they fit the glove better), and do a deeper dive into our data and models to understand why they perform the way they do. Finally, we hope to meet with the ASL users we are in contact with.

Team’s Status Report for 10/23

This week, we focused on maintaining the glove as well as moving toward real-time usage of the glove. Some of the connections on our first iteration of the glove came loose as we made gestures that required more movement, so we went back and secured those connections.

We did some analysis on the fake data we had generated and the real data we collected from the glove to understand why our models trained on real data outperformed our models trained on fake data despite the smaller dataset. The conclusion is that there is a much stronger correlation between the finger and IMU data in the real data, which makes sense since we had a hard time modeling the IMU data when attempting to mimic it.

We also added “random” and “space” categories to the classes our model can output. The “space” class is meant to eventually let users spell words and have the full word output as audio (rather than just the individual letters). The “random” class was added to suppress the audio output while the user transitions between signs. These additions let us move on to using our glove in real time, which is our next step.
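A rough sketch of how these two classes could drive the word output (the buffering logic here is illustrative, not our final implementation):

```python
def make_speller():
    """Return a function that accumulates classified letters.
    "random" labels are ignored (suppressing audio during transitions);
    a "space" label flushes the buffer as a completed word."""
    buf = []
    def feed(label):
        if label == "random":
            return None            # transition between signs: say nothing
        if label == "space":
            word = "".join(buf)
            buf.clear()
            return word or None    # hand the completed word to audio output
        buf.append(label)
        return None
    return feed

feed = make_speller()
for label in ["c", "random", "a", "t", "space"]:
    word = feed(label)
print(word)  # → cat
```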

We are on schedule and plan to integrate the glove with our software system for real-time use this week. We are also going to collect data from people with a range of hand sizes to see how well our glove generalizes.

Team’s Status Report for 10/9

This week, the team worked on finishing the glove fabrication, writing the design review report, and testing on real data. Sophia built the glove prototype with the system on a protoboard so that we could start testing early while waiting on the PCB order. While the construction is not yet up to the team’s standard, we were able to get a preliminary set of data. Rachel implemented the code for communicating between the Arduino and the computer and recorded a set of ASL letters with an ample number of samples for each letter. She also made suggestions for improving the glove’s data collection, which the team will work on next week. This first set of data was then passed to Stephanie, who did preliminary training and testing using the best-performing models from the generated data. The results look quite promising, with high accuracy metrics that meet our requirements. Overall, we are ahead of schedule, as the individual tasks all took less time than expected.
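The PC side of the Arduino link can be sketched like this; the comma-separated line format, port name, and baud rate are assumptions for illustration, not necessarily what our firmware sends:

```python
def parse_snapshot(line: bytes):
    """Parse one snapshot assumed to arrive as a comma-separated
    ASCII line of sensor readings (flex sensors + IMU)."""
    text = line.decode("ascii", errors="ignore").strip()
    return [float(x) for x in text.split(",") if x]

# With pyserial (not run here), reading a snapshot would look like:
#   port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
#   snapshot = parse_snapshot(port.readline())

print(parse_snapshot(b"101.0,98.5,0.02,-0.4\r\n"))  # → [101.0, 98.5, 0.02, -0.4]
```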

Next week, we will spend the first half finishing the design review report and the rest refining the glove system and collecting data. Since we will be meeting during lab times, we will work together on stabilizing the glove parts and getting data from Sophia and Stephanie. If time permits, we will collect data from others and finalize the ML models to use, which would put us ahead of schedule.

Team’s Status Report for 9/25

This week we worked on designing the PCB as well as generating fake data for choosing an ML model. Sophia took the lead on the PCB design since she has the most experience building circuits and is currently taking a course specifically on PCB design. There were some obstacles in getting the PCB designed correctly, but Sophia was able to work around them. Stephanie and Rachel both worked on creating fake data: they researched the sensors and generated fake data based on that research using two slightly different approaches. We decided that making these datasets separately would give us a little more variance in the data, so we can more confidently choose a model for our use case. For more details on the PCB design or fake data generation, please read our individual status reports.

For next week, we plan to have the PCB completely designed and sent out for manufacturing. We will also test and settle on the ML model for our project; Stephanie has already done some preliminary testing and all of our candidate models seem sufficient, but more robust testing is necessary. From the software perspective, we are right on schedule with determining the ML model. From the circuits perspective, however, it seems our parts order was not seen until recently, and we also do not know how long the PCB will take to print, so these things could set us back a bit. While we wait for the parts to arrive, we can solidify the ML model and prepare our design presentation for the following week.

Team’s Status Report for 9/18

This week, the team worked on planning out a schedule for our project as well as cutting down its scope. Initially, we had wanted our glove to recognize all the letters of the ASL alphabet as well as a few common phrases, but after closer evaluation and more thorough planning, we determined that we do not have enough time for such a large scope.

Another change is that our glove will no longer be wireless. A glove that cannot reliably transmit data to the computer would be detrimental to our project, and wiring the glove to the computer does not change its fundamental function.

We also spent a lot of time thinking about the requirements for our product, originally from an implementation perspective and then shifting to a user-experience perspective. After shifting perspectives, we had to redo our research so that our requirement decisions were well informed.

For next week, we plan on ordering parts (with extras as backups in case something breaks), determining the ML model we will use to classify each gesture, and creating a dummy dataset to identify which model best fits our use case. We will also give our proposal presentation this week.