Team Status Report for 4/27/2024

The most significant risk is the integration of the PCB. Because we collected training data from the glove while the Arduino was attached to the protoboard, the IMU readings the model expects reflect that configuration. Right now, we hope that mounting the PCB case more securely on the glove will fix the accuracy issues. The contingency plan is to reattach the protoboard, which we know works.

 

There are no major changes at this stage.

 

There are no schedule changes at this stage.

 

The ML model had a unit test in which we evaluated performance by measuring accuracy on a held-out test set. We were content with the results, as the model consistently achieved above 97% accuracy on different subsets of the data.
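
As a rough illustration of that test, below is a minimal sketch of the evaluation in Python. The model architecture shown (a small scikit-learn MLP) and the file names are placeholders, since the report does not pin down those details; the point is simply holding out a test split and checking accuracy on it.

```python
# Minimal sketch of the accuracy unit test (model choice and file names are placeholders).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X = np.load("features.npy")  # hypothetical per-frame feature vectors
y = np.load("labels.npy")    # hypothetical word labels

# Hold out a separate test set so accuracy is measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```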

 

The ML model also had a unit test for latency using Python's time module. Per-prediction inference time was consistently negligible relative to our latency budget, so we were happy with the performance.
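
A minimal sketch of that timing check is below, using time.perf_counter; the model and frame variables stand in for our trained classifier and a single sensor frame.

```python
# Sketch of the latency unit test: average inference time over many runs.
import time

def average_inference_time(model, frame, n_runs=1000):
    start = time.perf_counter()
    for _ in range(n_runs):
        model.predict([frame])
    return (time.perf_counter() - start) / n_runs  # seconds per prediction
```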

 

We also had some unit tests for the data that entailed looking over the flex sensor readings and comparing them to what we expected. If they deviated too much, which was the case for a few words, we took the time to recollect the data for that word or simply removed outlier points.
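
The sketch below shows one way such a check could look: flag samples whose flex readings sit too far from the per-word mean. The 3-sigma cutoff is an illustrative assumption rather than the exact rule we applied.

```python
# Hedged sketch of the flex-sensor sanity check (threshold is illustrative).
import numpy as np

def find_outliers(readings, z_thresh=3.0):
    """readings: (n_samples, n_sensors) array of normalized flex values for one word."""
    mean = readings.mean(axis=0)
    std = readings.std(axis=0) + 1e-9             # avoid divide-by-zero
    z = np.abs((readings - mean) / std)
    return np.where(z.max(axis=1) > z_thresh)[0]  # sample indices to recollect or drop
```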

 

Overall accuracy was tested by having each of us sign every word in our selected vocabulary 12 times in random order and counting how many predictions were correct. Initially, performance was rather poor at around 60% accuracy. However, after looking at the mistakes and developing a smaller model for easily confused words, we were able to bump performance up to around 89%.
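
The idea behind the smaller model can be sketched as a simple two-stage check: if the main model outputs a word from a known confusion set, a secondary model trained only on those words makes the final call. The word set and model names here are hypothetical.

```python
# Two-stage prediction sketch; CONFUSED_WORDS and both models are placeholders.
CONFUSED_WORDS = {"want", "help"}  # hypothetical easily-confused pair

def predict_word(main_model, confusion_model, frame):
    word = main_model.predict([frame])[0]
    if word in CONFUSED_WORDS:
        # Defer to the specialist model trained only on the confusable words.
        word = confusion_model.predict([frame])[0]
    return word
```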

 

Overall latency was tested at the same time by calculating the time difference from the initial sensor reading to the system prediction. With our original classification heuristic, the system took around 2 seconds per word, which was far too slow. By reducing the number of consecutive predictions the heuristic requires, we brought overall latency down to around 0.7 seconds, which is closer to our use-case requirement.
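
A sketch of that heuristic is below: a word is only emitted once it has been predicted some number of frames in a row, and lowering that count is what reduced end-to-end latency. The count of 5 is illustrative.

```python
# Consecutive-prediction heuristic sketch (n_required is illustrative).
from collections import deque

class ConsecutiveVote:
    def __init__(self, n_required=5):
        self.n_required = n_required
        self.recent = deque(maxlen=n_required)

    def update(self, prediction):
        self.recent.append(prediction)
        if len(self.recent) == self.n_required and len(set(self.recent)) == 1:
            self.recent.clear()
            return prediction  # confident enough to send to audio output
        return None            # keep waiting
```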

 

As for sensor testing, we would take a look at the word prior to data collection and then manually inspect the data values we were getting to ensure they corresponded with what we expected based on the handshape. For example, if the sign required the index finger and thumb to be the only two fingers bent, but the data vector showed the middle finger's flex sensor value was significantly low, we would stop data collection and inspect the circuit connections. Oftentimes a flex sensor had come a little loose and we would have to stick it back in to get normal readings. In addition, after data collection we would compare the feature plots for all three of us for each sign and note any significant discrepancies and why we were getting them. Most often these were due to our hand sizes and finger lengths being different, which is to be expected, but occasionally a discrepancy would indicate that someone was moving too much or too little during data collection, which told us we should recollect data for that person.

 

In terms of haptic feedback unit testing, we wrote a simple Python script and Arduino sketch to test sending a byte from the computer to the Arduino over Bluetooth and to check whether the Arduino could read that byte and produce a haptic pulse. Once this simple unit test confirmed the behavior, it was easy to integrate into our actual system code.
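
For reference, the computer side of that test can be as small as the sketch below, which writes one byte to a BLE characteristic with Bleak. The address and characteristic UUID are placeholders, and the Arduino sketch that reads the byte and pulses the motor is not shown.

```python
# Python half of the haptic unit test (address and UUID are hypothetical).
import asyncio
from bleak import BleakClient

GLOVE_ADDRESS = "AA:BB:CC:DD:EE:FF"
HAPTIC_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"

async def send_pulse():
    async with BleakClient(GLOVE_ADDRESS) as client:
        await client.write_gatt_char(HAPTIC_CHAR_UUID, bytes([1]))  # 1 = pulse

asyncio.run(send_pulse())
```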



Team Status Report for 4/20/2024

The most significant challenge we are facing in our project is the ML algorithm's detection of our chosen set of words. Accuracy is currently acceptable, but prediction is inconsistent: accuracy varies depending on the user, or the model recognizes the word but not enough times in a row to satisfy the classification heuristic. In addition, it is common for double-handed words in ASL to incorporate some type of motion. This is significant because our model currently predicts from a single frame rather than a window of frames, which is probably hurting accuracy. We are looking into mitigating this by incorporating some type of windowed sampling of the frame data, but the tradeoff may be an increase in latency.
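
One form the windowed sampling could take is sketched below: buffer the last N frames and classify on the concatenated window instead of a single frame. The window length and feature layout are assumptions, and the extra buffering is exactly where the latency tradeoff comes from.

```python
# Sketch of frame windowing (window length N is illustrative).
import numpy as np
from collections import deque

N = 10
window = deque(maxlen=N)

def classify_window(model, frame):
    window.append(frame)
    if len(window) < N:
        return None                     # not enough history yet
    features = np.concatenate(window)   # stack N frames into one feature vector
    return model.predict([features])[0]
```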

 

In terms of changes to the existing design of the system, we have decided to nix the idea of using the 26 letters in the BSL alphabet. This change was necessary because for a significant portion of the BSL letters, the determining factor is touch, and our glove doesn't have any touch sensors. Instead, we consulted a 2017 research paper that identified the top 50 most commonly used ASL words and split them into five categories: pronoun, noun, verb, adjective, and adverb. From each of these categories, we picked 2-3 double-handed words to create a set of 10 words.

 

No changes have been made to our schedule. 



Team Status Report for 4/6/2024

One of the most significant risks is the performance of the ML model given our set of sensors. While collecting data this week, we were concerned about the large set of letters in the BSL alphabet that have extremely similar hand positions but differ based on touch, which our gloves cannot detect. We have not yet developed the model, so we are unsure whether this will be a problem. If it is, we will test a variety of ML models to see if we can boost performance. Worst case, we will shrink the set of letters we hope to detect to something our gloves can more feasibly distinguish.

As mentioned above, we will evaluate the performance of the ML model through next week and adjust the vocabulary requirement based on feasible performance. We also added a calibration phase into the software before the ML model to help with adaptability to different users. In addition, we are in the process of ordering a PCB for the circuitry component, which we will integrate during the last week if possible.

Please look here for updated Gantt Chart: Gantt

In terms of verification of the sensor integration, we have added a calibration phase at the start of data collection for each letter. This entails the user holding their hands in their most flexed state and most relaxed state for five seconds. We then calculate the maximum and minimum sensor values collected during this phase and check that they appear as expected; the difference between them forms a per-sensor delta that is used to normalize future data into the range 0-1. In terms of verification of the ML model, we will examine the performance of ML models on various test data in offline settings, collecting relevant data on accuracy and inference speed. In terms of verification of the circuit that integrates the sensors with the compute unit, we look at the sensor/IMU data collected as the person is signing, and sometimes we notice discrepancies that cause us to reexamine the connections on our circuit. For example, one time we had to change the pin one of the V_out wires was in because it was rather loose and affecting the data we were getting.
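
A minimal sketch of that calibration and normalization step, with illustrative variable names, is below.

```python
# Calibration sketch: per-sensor min/max from the calibration phase, then 0-1 scaling.
import numpy as np

def calibrate(samples):
    """samples: (n_frames, n_sensors) raw readings from the flexed/relaxed phase."""
    return samples.min(axis=0), samples.max(axis=0)

def normalize(frame, lo, hi):
    delta = np.maximum(hi - lo, 1e-9)              # guard against a zero delta
    return np.clip((frame - lo) / delta, 0.0, 1.0)
```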

With regard to overall validation, we will look to test real-time performance of the glove. For accuracy, we will have the user sign out a comprehensive list of the letters and evaluate the performance based on accuracy. We will also measure the time from signing to speaker output using Python’s timing modules.

We also plan on conducting user testing. We will have several other people wear the gloves and then complete a survey that evaluates their comfort, the visual appeal, and the weight of the gloves. They will also be allowed to make any additional comments about the design of it as well.

Team Status Report for 3/30/2024

The most significant risk that we face right now is the timeline for collecting the data and the robustness of the ML model. We hope to start collecting data next week which gives us ample time to collect more data if the initial dataset needs to be increased. Ricky also has plans for testing multiple different architectures and hyperparameters if the performance is insufficient. The other risk involves just keeping the readings consistent. This risk is being mitigated through some newly implemented calibration techniques and the introduction of a PCB.

The major change to the design is that, for now, we will stick with the laptop as the main speaker component. This is due to our choice of Arduino, which limits the speaker output we can drive from the chip. We will circle back to this idea if time permits after testing and model tuning.

We will be proceeding with the original schedule. Ricky has adjusted a bit of his timeline to reflect some of the delays in Bluetooth integration, and Ria has added the PCB creation timeline, but it remains relatively similar to the original schedule.

Team Status Report for 3/23/2024

One significant risk that we face right now is the development of the synchronization algorithm/heuristic. We were able to establish a Bluetooth connection from a laptop to the gloves via the Bleak library in Python. We are currently doing some preliminary research into maintaining two Bluetooth connections to the laptop at once. Our contingency plan, if the original plan proves infeasible, is to connect the gloves together and send one stream of data to the laptop. We are also exploring different strategies for generating audio output from the glove itself. We seem to be slightly blocked by our choice of Arduino here, so our contingency plan is maintaining speaker functionality from the laptop.
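
A sketch of the two-connection idea with Bleak is below: open a client for each glove and subscribe to their notifications concurrently with asyncio. The addresses, characteristic UUID, and 30-second streaming window are placeholders.

```python
# Concurrent two-glove BLE streaming sketch (addresses and UUID are hypothetical).
import asyncio
from bleak import BleakClient

LEFT, RIGHT = "AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"
DATA_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"

async def stream(address, tag):
    def on_data(_, data: bytearray):
        print(tag, list(data))          # hand each frame to the rest of the pipeline
    async with BleakClient(address) as client:
        await client.start_notify(DATA_CHAR_UUID, on_data)
        await asyncio.sleep(30)         # stream for a fixed time in this sketch

async def main():
    await asyncio.gather(stream(LEFT, "L"), stream(RIGHT, "R"))

asyncio.run(main())
```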

There have been no major updates to the design as of right now. We will closely monitor the situation with the speaker and Bluetooth in the upcoming week to see if any adjustments need to be made.

We are on schedule as we have started working on the construction of the second glove. We have essentially finished prototype 1 as well, making it ready for presentation during the interim demo.

Team Status Report for 3/16/2024

The major risk right now is that the data is insufficient to distinguish the various signs for our single-gloved design. To address these issues, we plan on adding additional feature extraction based on our heuristics which could increase the complexity of the data for training. We also have included all 9 DOF of the IMU even though they might not all be needed for feature distinction. This will be something else we look at as part of the risk mitigation. We are also slightly concerned about distinguishing signs that rely less on bending the fingers and more on one finger touching the rest. Our contingency plan is to incorporate 1-2 touch sensors into our design, which if needed, should be a simple addition circuitry-wise. Below are some pictures of the glove fabrication process. 

The only major change is that we have decided to pursue a PCB design to simplify the op-amp and Arduino circuitry on the glove.

We have pushed everything back one week; we will shoot to have the first prototype done by around Mar. 21.

Team Status Report for 3/9/2024

Our team spent our meeting time reviewing each other's writing on the design report and discussing more details about the content we are including in it. After receiving feedback on our design review slides, we realized there were still a lot of things to churn out, so we spent our time ironing out the gaps in our design and transferring that into writing for the report. Resistor values, how we will digitize our data, and more detailed testing and validation plans are a few of the important things we refined. These changes did not incur any additional cost.

The most significant risk to our project right now is time management and not meeting deadlines. We now have a clear plan for how we want to execute our design, and we have to stay diligent, keep moving forward, and meet all of the deadlines. It sounds trivial to just say "we have to stick to our schedule," but trusting our design process (adjusting as necessary) and being very methodical is what will allow us to do that. From a technical standpoint, a potential risk for our project is our sensors not giving our ML model enough information. In that case, we would have to order more sensors and wait for those to arrive. We will experiment with our model to find out whether this is an issue, but until then we will simply remain aware of this risk.

We have not made any changes to our schedule, and we hope to make significant progress on Rapid Prototype I with all our parts finally in one place.

 

ABET Questions: 

A was written by Somya, B was written by Ria, C was written by Ricky 

Part A: Our project heavily requires the consideration of a global context, because at its heart is the pursuit of better communication, a goal that is universal regardless of what language is spoken. Even though there are hundreds of spoken languages, there will always be the challenge of making communication between deaf and non-deaf persons as easy as possible. We quickly learned that sign language is also not a universal language; there are somewhere between 138 and 300 variants of sign language in existence. This should factor into the design of our product in the sense that it should be as adaptive and as sensitive as possible. The ultimate end goal would be for any deaf person to slip on these gloves, sign in their own version of sign language, and communicate with someone who may not even speak that language. In essence, it would work much like Google Translate, but with an additional layer of recognizing global needs: the language barrier being bridged extends not just to which country's language is spoken, but to the form of the language itself.

Part B: From our previous analysis of cultural factors, we have adapted our design to be a bit more equitable for users by having the speaker mounted on the glove rather than on the computer in the final design. We understand that it would be cumbersome for a deaf speaker to prop up a laptop or take out a phone when trying to communicate, so we want to make that process as seamless as possible. We also want to use a haptic feedback system instead of an LED feedback system for the same reason: maintaining eye contact with the person you are conversing with is something people should not have to give up just to use our product.

Part C: Our product doesn't directly require a heavy consideration of environmental factors, but it does in a few more indirect ways. Right now, since our main goal is simply to develop the glove and have it work, cost is on the back burner: we are perfectly fine with spending $200+ on the different components and are focused less on what the materials are than on whether they work. Later down the line, we do want to consider material choice and cost more closely, because we want our product to be as accessible as possible. This ties into a discussion of natural resources, because we want our product to be easy to manufacture without a complicated, environmentally taxing production process. Our final choice of glove material should also be biodegradable and resistant enough to everyday wear and tear that the user doesn't need to keep buying replacements, thus reducing environmental waste.



Team Status Report for 2/24/2024

Right now the most significant risks revolve around getting our parts in time so we can begin construction of the glove and collecting data. We were unfortunately unable to place our orders in time for this week so we will hope to get our parts next week and start assembly of the glove. Meanwhile, development of the major software components (communication, ML) has started.

No major changes have been made to the system. We did buy multiple computing units but we will not choose one of the two until we can test their performance. 

There is a slight pushback in the timeframe to make the first glove but everything else is on schedule.

Team Status Report for 2/17/2024

The most significant risks right now are whether parts will arrive promptly and the potential pitfalls of relying on Bluetooth for communication. We can't do anything about the parts, so we will proceed by working on as many software components as possible. To mitigate the dependence on Bluetooth, we bought an additional chip that allows for Wi-Fi connectivity, which will serve as an alternative if Bluetooth doesn't work out.

There were several changes to the requirements. In our design, we decided to use a laptop to handle the machine learning prediction and speaker output. We moved away from using a smaller computing unit because we believe the main focus of our project should be the wireless, two-glove nature of our design. We have also decided to use the British Sign Language alphabet as our target set, because it provides a standardized set of 26 double-handed gestures that can be used together. We also reduced our accuracy goal to 85%, reflecting the increased complexity of the double-gloved design as well as past projects' results (Gesture Glove achieved 75% real-time accuracy with a single glove). We are also looking into relaxing the latency requirement due to the potential slowdown caused by Bluetooth transmission; we haven't nailed down an exact number yet but are researching it. There are no major cost updates as we order our initial parts.

We are on schedule. No major changes to schedule.

Part A (by Somya)

Our product will enhance the welfare and safety of the deaf community because of its usability. By providing a discreet pair of gloves that can translate sign language to speech, it will facilitate communication between the deaf and non-deaf populations. As such, the product will minimize the need for separate structures to be established for deaf people to go about their day-to-day tasks conveniently. The better the communication between the deaf and non-deaf communities, the more integrated and less alienated the former will be. In terms of safety, our product could be very useful for deaf users communicating in an emergency arising anywhere there may be no one who understands sign language. In such situations, timely communication is of utmost importance, and our product will be designed with this use case in mind.

Part B (by Ria) 

Our product will be impactful for the deaf community and potentially those who are hard of hearing. Society is progressing towards providing disabled communities with tools they can use to seamlessly navigate daily life. Our goal is to extend this mission to users who speak sign language and want to communicate with people who don't. This is a step towards enabling better conversations between the deaf community and those who can hear.

Not only does this product attempt to solve a complex problem, but it also raises awareness about the nuanced challenges that the deaf communities face every day. By learning more about their needs and struggles, we are aiming to spark conversations – pun intended – about how we can go even further. This product can be useful in education, video games, and even during emergency situations. We hope that this project adds a few cobblestones to the road being paved towards a more inclusive society. 

Part C (by Ricky)

Our product is not meant to be marketed or developed for significant profit. I would argue that there are not many economic factors that concern us, because our main purpose isn't to make a product cheaper or easier to produce. We would urge potential investors against trying to profit from our product because it is meant to bridge the communication gap between deaf and non-deaf individuals. Inevitably, there will be a cost associated with purchasing our product, but we hope it will remain limited to the cost of parts. Due to its lightweight design, we hope that distribution of the product will be relatively seamless and inexpensive. In short, our product is not intended to be profitable; it is meant to help a disadvantaged community.

Team Status Report for 2/10/2024

The major risk we are facing right now concerns quickly finalizing our design. Over the week, we hashed out exact details about what our product will look like, including specific parts we are interested in ordering. This is a small risk because we are close to finalizing our design and intermediate prototypes. The risk will continue to be managed as we enter design week and fully finalize design ideas and part ordering. There are minimal contingency plans, as the plan is simply to finalize our design.

 

Instead of having the data processing unit be in a separate unit, we have decided to run our processing algorithms on a PC. The rationale behind this change is that having our device work in a self-contained manner (i.e. without the user needing to carry around their computer) is not a top priority of ours—our main goal is to show proof of concept of the gloves working. 

 

We also made some soft deadlines that we want to finalize by next week for the prototypes we have planned. Each prototype should be standalone and work, and each one builds on the last. Below are the details of each rapid prototype as well as when we anticipate finishing them.

 

  • RP 1 due March 20:
    • Phase 1:
      • Create one glove, sensor detection is reliable, IMU data is gathered
      • Wired connection to laptop, data sent directly to the laptop where we can monitor sensor data
      • No speaker, no Bluetooth
    • Phase 2:
      • Test Bluetooth capabilities, add a battery, and create a glove that can transmit data through Bluetooth
      • Maintain same performance as phase 1 glove if possible
    • Phase 3:
      • Train the ML model and finish the product 
      • Add speaker and haptic feedback
  • RP 2 due April 3:
    • Phase 1a:
      • Duplicate glove into second glove 
      • Figure out how to send both gloves' data to the laptop via Bluetooth
    • Phase 1b:
      • Make PCB only if we are somewhat ahead of schedule
    • Phase 2:
      • More gathering of data and training
  • RP 3 due April 14:
    • Phase 1a:
      • Expand vocabulary
    • Phase 1b:
      • Experiment with making this a distributed system using some form of communication protocol (need to iron this out)