Sophia’s Status Report for 10/23

This week I focused on maintenance for the glove. Some of the connections between the flex sensors and the board hosting the Arduino Nano came loose due to a poor crimping job, so I redid those connections. I also ordered embroidery thread to better secure the flex sensors, since the stitches currently keeping them in place are coming loose.

I also finalized the Gerber, BOM, and CPL files required to order the connective PCB. I want to order the PCB from JLCPCB, but I want to hold off on ordering a little longer. The connective PCB design that I plan to order is the same size as the perf board, so it does not add much advantage over the current setup. I did some research on possibly making the PCB smaller, and in the next week I plan to make a second, smaller design and then order the connective PCB and the new design in one batch to save on shipping costs.

In addition to creating a new PCB design, I plan to reach out to the office of disabilities to get in contact with some fluent ASL users and interpreters with whom we can test our device.

I think we are on schedule, but I’m not sure how smoothly replacing the perf board with the PCB will go if we decide to use it.

 

Team’s Status Report for 10/23

This week, we focused on maintaining the glove as well as moving toward real-time usage of the glove. Some of the connections on our first iteration of the glove came loose as we made gestures that required more movement, so we went back and secured those connections.

We did some analysis comparing the fake data we had generated with the real data we collected from the glove, to gain some clarity as to why our models trained on the real data outperformed our models trained on the fake data despite having a smaller dataset. The conclusion is that there is a much stronger correlation between the finger and IMU data in the real data, which makes sense, since we had a hard time modeling the IMU data when we were attempting to mimic it.

We also added “random” and “space” categories to the classes that our model can output. The “space” class is meant to help us eventually allow users to spell words and have those full words output as audio (rather than just the individual letters). The “random” class was added to help suppress the audio output as the user transitions between signs. These additions will allow us to move on to using our glove in real time, which is our next step.
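To make the role of these two classes concrete, here is a minimal sketch (the function name and structure are illustrative, not our actual pipeline) of how a stream of per-frame letter predictions could be turned into whole words:

```python
def assemble_words(predictions):
    """Turn a stream of per-frame class predictions into words.

    "random" frames (transitions between signs) are ignored, and a
    "space" prediction flushes the letters buffered so far as one word.
    In the real system the finished words would be sent to audio output.
    """
    words, buffer = [], []
    for label in predictions:
        if label == "random":
            continue  # suppress output while the hand is mid-transition
        if label == "space":
            if buffer:
                words.append("".join(buffer))
                buffer = []
        else:
            buffer.append(label)
    if buffer:  # flush any trailing letters
        words.append("".join(buffer))
    return words
```

A real version would also need to de-duplicate consecutive frames of the same letter, which this sketch leaves out.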

We are on schedule and plan on putting the glove together with our software system for real-time use this week. We are also going to collect data from people with a range of hand sizes to see how well we can get our glove to generalize.

Stephanie’s Status Report for 10/23

There has been a change of plans from the last two weeks. My original plan was to collect more data for model training; however, our glove needed more fabrication work to ensure the sensors are well attached. We also plan to enhance the glove’s data collection process. Our first set of data (collected by Rachel) required pressing buttons to mark the time window during which the data would be read in, so this week we are trying to integrate real-time data collection and interpretation. We collected more data for the hand gestures in between each letter gesture (these gestures are mostly random, since they are just transitions from one letter to another). A ‘space’ letter was also added in case the models cannot categorize the ‘random’ gestures well. The first round of testing shows promising results: with the random forest model, which has had the highest accuracy so far, the accuracy for recognizing these two new labels is quite high.

I also found that with these two new labels added, the neural network’s accuracy increased by 10%. This is an interesting finding, and I plan to look further into why this is the case. Before this, I had done a lot of tweaking and validating with different hyperparameters, but the network’s accuracy seemed to be capped around 80%; with this new dataset, its accuracy reached around 88%, and understanding why could help our future classification work.
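For reference, the random forest training loop looks roughly like the following. The data here is a random stand-in with a shape like ours (5 flex-sensor angles plus 6 IMU channels per sample, letters plus the two new labels); the real dataset and exact hyperparameters aren’t reproduced in this report:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in dataset: 600 samples x 11 features (5 flex + 6 IMU channels),
# with 23 classes standing in for the letters plus "space" and "random".
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 11))
y = rng.integers(0, 23, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

On this random stand-in the accuracy is of course near chance; the point is only the shape of the pipeline that produced the ~88% figure on our real data.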

I would say we are still on schedule, since we planned a lot of time for the software implementation. As for next week, I’ll be looking more into the models and working on making our data collection run more smoothly, and, if possible, starting to collect data from others.

Rachel’s Status Report for 10/23

I did not get as much work done as I had planned this week due to a heavy workload from other classes. I had originally planned to do some data analysis on our real and generated data as well as come up with/test multiple methods of detecting what is a real gesture.

I spent most of this week doing some analysis on the data we had collected compared to the fake data we had generated, to try to understand why our real data performed so much better than our fake data. I did this by plotting several combinations of three axes (x, y, and size) for the letter L; I chose the letter L because its sign has a good variety of finger angles. To begin with, we had more fake data than real data to train our models on, which is interesting considering the real data trained a model with higher accuracy. Based on my plots, I believe the reason for the better results with the real data is a more consistent correlation between the finger angles and the IMU data. For example, the plot comparing Middle Finger vs. AccelerometerZ vs. GyroscopeY for the fake data is very scattered and does not imply any correlation. The real data, on the other hand, has a much clearer pattern (e.g., bigger AccelerometerZ values correlate with larger GyroscopeY values). Below are those two comparisons. This makes sense, because when generating the fake data we were unsure how to model the IMU data, since we weren’t able to find good documentation on the values the sensors would output for given movements or orientations.
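The correlation pattern described above can also be checked numerically rather than only visually. The toy data below is fabricated purely to mimic the observation (AccelerometerZ and GyroscopeY moving together in the real data, uncorrelated in the fake data); it is not our dataset:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length 1-D arrays."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
accel_z = rng.normal(size=200)
# "Real-data-like" pair: GyroscopeY tracks AccelerometerZ with some noise.
gyro_y_real = 0.8 * accel_z + rng.normal(scale=0.3, size=200)
# "Fake-data-like" pair: independent noise, no relationship.
gyro_y_fake = rng.normal(size=200)

r_real = pearson(accel_z, gyro_y_real)  # close to 1
r_fake = pearson(accel_z, gyro_y_fake)  # close to 0
```

Running a check like this per letter and per channel pair would quantify what the scatter plots show.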

Since I am in possession of the glove, I also collected some extra data for a “space” character as well as some random gestures, so that the system does not try to identify letters while the user is transitioning between letters.

I would say we are on schedule, since glove construction took much less time than anticipated. The next step is to integrate the glove with the software system so that it can interpret the collected data in real time; I will be working on this with Stephanie this upcoming week.

Team’s Status Report for 10/9

This week, the team worked on finishing up the glove fabrication, writing the design review report, and testing on real data. Sophia worked on building the glove prototype with the system built on a protoboard so that we can start testing earlier while we wait on the PCB order. While the construction is not yet perfect by the team’s standards, we were able to get a preliminary set of data. Rachel took on the code implementation for communication between the Arduino and the computer and recorded a set of ASL letters with an ample number of samples for each letter. She also made some suggestions for improving the glove’s data collection, which the team will work on in the next week. This first set of data was then passed on to Stephanie, who did some preliminary training and testing using the best-performing models that had been trained on the generated data. The results look quite promising and showed high accuracy metrics that meet our requirements. Overall, we are ahead of schedule, as the individual tasks all took less time than expected.

Next week, we will spend the first half finishing up the design review report and the rest refining the glove system and collecting data. Since we will be meeting up during lab times, we will work together on stabilizing the glove parts and collecting data from Sophia and Stephanie. If time permits, we will get data from others and finalize the ML models to use, which would put us ahead of schedule.

Stephanie’s Status Report for 10/9

This week, Sophia finished building the glove and Rachel was able to get our first set of real data. Since I had already performed validation tests on the models tuned with the generated fake data, I decided to use these tuned models on the real data after preprocessing it. Surprisingly, the results were overall better than those on the fake data. This shows that our fake data was not well generated; one possibility is that we included too much variance when generating sensor values. However, though the accuracy metrics were quite different, the trends remained the same: the random forest classifier achieved the highest accuracy while the perceptron had the lowest. I also did some extra tuning on the neural net, but there wasn’t any significant improvement in accuracy, likely because our data isn’t high-dimensional. One thing I would like to add is that this set of real data is only from Rachel, so there is a possibility of overfitting, which could explain the high accuracy metrics.

In terms of schedule, we are actually ahead. We were able to get data from both types of sensors. We do need to work on getting consistent data and improving the craftsmanship of the glove, since Rachel mentioned some parts came undone. We will need to make sure that the sensors on the glove are stabilized before moving on to collect data from others.

Next week, I’ll be working on fixing the glove with the team and gathering more training data, starting with Sophia and me. If time and resources permit, we will try to find others who can sign for us. We will also work on finishing up the design report.

Sophia’s Status Report for 10/9

This week I focused on building the glove prototype since all of our parts and sensors came in. I had started at the end of last week and wanted to finish by the end of the weekend, but unfortunately it took longer than expected.


I soldered the connections on a protoboard. I made sure to place all the components so that the Arduino’s USB port connects the USB cable parallel with the arm. I also soldered 90-degree male pinouts so that the flex sensors can be removed from the Arduino, and the Arduino and IMU breakout sit on female pinouts so they can be easily removed later as well.

My sewing skills are not very good. It was difficult to sew the flex sensors onto the glove in the perfect position so that they remained aligned along the finger when the fingers bent. Some of the stitches will probably have to be redone in the future.

I also wrote the Arduino sketch to read and output all the values from the sensors at the same time.

Since we built a working prototype on protoboard, we’re not sure if we even need to order a PCB. However, we could make the hardware a lot smaller if we do design one.

Rachel’s Status Report for 10/9

This week, I presented our Design Review to the class and started writing the Design Review written report. I also got the glove that Sophia built to communicate with the Python program and made sure that the data is read in correctly. Throughout the process, I found that some of the connections were slightly loose, so for letters that require movement (such as the letter “j”), the connection would sometimes come undone, giving -63.0 as the angle read from the flex sensor. This is something we will need to improve upon. We could also make the sensors fit more tightly against the glove so that they are more sensitive to bending. I also collected data for Stephanie to test the ML models on. However, since we were not able to find time to meet this week, this data is only from me making the gestures while varying my poses slightly, so we cannot be sure that the ML model will generalize just yet; that would require data from more people.
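Until the loose connections are fixed in hardware, the Python side could at least filter out those -63.0 readings. The sketch below assumes a comma-separated serial line of 5 flex angles followed by 6 IMU values; the helper name and line layout are illustrative, not our exact code:

```python
DISCONNECT_ANGLE = -63.0  # value a flex sensor reports when its wire comes undone

def parse_reading(line):
    """Parse one serial line of 5 flex angles + 6 IMU values.

    Returns the list of 11 floats, or None if any flex sensor reported
    the disconnect value, so loose-wire samples don't pollute the dataset.
    """
    values = [float(v) for v in line.strip().split(",")]
    if len(values) != 11:
        return None  # malformed or partial line
    if any(v == DISCONNECT_ANGLE for v in values[:5]):
        return None
    return values
```

In the real program, each line read from the serial port would pass through a check like this before being logged or fed to the model.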

Since putting together the glove and implementing the serial communication took less time than expected, we are actually ahead of schedule. This is good because there are still some things we need to fix with the current system (e.g., wire connections and recognition of a gesture’s beginning and end), which may take longer than the time we had allotted for software and hardware integration and adjustments.

For collecting this preliminary set of data, I used a key press to indicate that I am done making a gesture, but we do not want this to be part of the user experience in the final product, so next week I will work on figuring out how the system will recognize that a new gesture has arrived. I will also work on developing the data normalization algorithm.
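One candidate approach for replacing the key press is to declare a gesture “arrived” once the sensor readings hold steady for a few consecutive samples. The window size and tolerance below are assumptions that would need tuning on real glove data:

```python
import numpy as np

def find_steady_windows(samples, window=5, tol=3.0):
    """Return start indices where the glove has 'settled' into a gesture.

    A gesture is considered arrived when every sensor channel varies by
    less than `tol` across `window` consecutive samples. `samples` is a
    sequence of per-frame sensor vectors (e.g. flex angles + IMU values).
    """
    samples = np.asarray(samples, float)
    hits = []
    for i in range(len(samples) - window + 1):
        chunk = samples[i:i + window]
        if np.all(chunk.max(axis=0) - chunk.min(axis=0) < tol):
            hits.append(i)
    return hits
```

This would not handle letters like “j” that are themselves motions, so a final design would likely need a separate rule or model for moving signs.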

Team’s Status Report for 10/2

This week, we worked on the design review, refining the fake data generation algorithms, and beginning the fabrication of the glove system. Next week we give our design review presentation, so Rachel focused on putting together the slides, raising questions for discussion about details of our proposed solution and testing plans, and completing additional research on the performance of different ML algorithms. Also, since last week our scope has changed: instead of sensing just 5 gestures, we are back to sensing the ASL signs for all letters in the alphabet. Stephanie worked on changing her data generation algorithm to include the ASL letters. She also experimented with different parameters to see if tuned models would perform better than the default models. Sophia made the necessary edits to the PCB and is planning to order it by the end of the weekend. She also nearly completed building the circuit on a perf board so that the team can start testing with the glove next week.

Next week, we will work on figuring out how the model will determine when a gesture starts and ends by doing more research on past projects. Sophia will also complete a preliminary fabrication of the glove by the end of the weekend so that we can begin working with real data streaming in from the sensors. Schedule-wise, we are a little behind due to slow orders; however, we will have a prototype of the glove to work with, which will allow us to make larger strides despite not having the PCB. We are confident we can make up time in the schedule this upcoming week now that we no longer face the barrier of having no sensors to use.

Stephanie’s Status Report for 10/2

This week, my team and I worked collaboratively on the design review slides. Since we changed the number of gestures to recognize from 5 common signs to 21 ASL letters, we had to make sure the slides reflect our new scope. With the expanded scope, I worked on changing my data generation algorithm to include the ASL letters. I also examined the best-performing models in more depth, trying different parameters to see if they can perform better than the default models.
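A minimal sketch of the template-plus-noise idea behind generating fake per-letter sensor data is shown below. The two angle templates are made up for illustration only, and real generation would cover all the letters and the IMU channels as well:

```python
import numpy as np

# Hypothetical per-letter flex-angle templates (degrees, one per finger),
# perturbed with Gaussian noise to create many fake samples per letter.
TEMPLATES = {
    "A": [5.0, 85.0, 85.0, 85.0, 85.0],  # thumb out, four fingers curled
    "B": [80.0, 5.0, 5.0, 5.0, 5.0],     # thumb tucked, fingers straight
}

def generate_fake_data(n_per_letter, noise_std=5.0, seed=0):
    """Return (X, y): noisy samples around each template, with letter labels."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for letter, template in TEMPLATES.items():
        samples = rng.normal(loc=template, scale=noise_std,
                             size=(n_per_letter, len(template)))
        X.append(samples)
        y += [letter] * n_per_letter
    return np.vstack(X), y
```

The `noise_std` knob is where the "too much variance" issue mentioned in the 10/9 reports would show up: set it too high and the fake classes blur together in a way the real data does not.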

We are a bit behind schedule because our orders did not all arrive until Friday, putting us behind on building the glove and getting real data. We may have to speed up and do some extra work next week to ensure we can get consistent data from both types of sensors. This setback is quite minor in my opinion, since we have already started on glove building, and pre-determining the ML models can save us time in the future.

Next week, we will have the glove built and will be able to get real data. I will work on processing that data to ensure it is suitable for model training. We will also sign some gestures to obtain a preliminary dataset. Using this data, I’ll test the models that performed best on the generated data to find which one does well on the real data, and then do further fine-tuning to improve their accuracies.