Somya’s Status Report for 4/27/2024

This past week I gave the final presentation, so a good portion of my time was spent working on the slides and preparing for the talk itself. In addition, I finished the haptic unit tests and made the edits to Ricky's Bluetooth code. Once Ricky updates his code with my edits, we will verify tomorrow that the haptic pulses are synchronized with real-time sign detection. We decided to implement the haptic feedback this way because it wouldn't have made sense for me to download all of the ML packages just to test a feature that is logically separate from the ML detection, and this split saved us time. One issue I had to debug was that when sending a byte signal from the computer, the byte would be sent over and over again (as verified by debug statements in the Arduino code), so the pulse would fire repeatedly instead of just once. To fix this, I had the Arduino send an ACK back to tell the Python code to stop signaling.
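
As a rough illustration of that fix, here is a minimal sketch of the send-and-acknowledge pattern on the Python side, written against pyserial with a hypothetical port name and hypothetical command/ACK byte values (our real code runs over the Bluetooth link inside Ricky's script):

```python
import serial  # pyserial; our real code sends over the Bluetooth link in Ricky's script

# Hypothetical port name and byte values, for illustration only.
PULSE_CMD = b'\x01'   # command telling the Arduino to fire one haptic pulse
ACK = b'\x06'         # byte the Arduino writes back once the pulse has started

def send_pulse(conn: serial.Serial, max_reads: int = 50) -> bool:
    """Send the pulse command once, then wait for the Arduino's ACK.

    Without the ACK, the Python side kept re-sending the byte and the glove
    pulsed repeatedly instead of just once.
    """
    conn.write(PULSE_CMD)
    for _ in range(max_reads):
        if conn.read(1) == ACK:   # read blocks up to conn.timeout seconds
            return True           # Arduino confirmed; stop signaling
    return False                  # no ACK; caller can retry or report an error

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 9600, timeout=0.1) as conn:
        print("pulse acknowledged:", send_pulse(conn))
```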

 

My progress is on schedule. 

 

This week, I hope to hash out the script/layout of the final video as well as complete my action items for the final report. We also plan on meeting tomorrow to do a final full system test now that all parts of our final product deliverable are in place.



Team Status Report for 4/27/2024

The most significant risk is the integration of the PCB. Because we collected training data from the glove while the Arduino was attached to the protoboard, some of the IMU data is tuned to that particular physical setup. Right now, we are hoping that by fixing the PCB case more securely onto the glove we can resolve the accuracy issues. The contingency plan is to reattach the protoboard, which we are sure works.

 

There are no major changes at this stage.

 

There are no schedule changes at this stage.

 

The ML model had a unit test where we evaluated performance by measuring accuracy on a separate test set. We were content with the results, as the model consistently achieved above 97% accuracy on different subsets of the data.
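
For context, the test looked roughly like the following sketch; the random forest and the synthetic features here are placeholders, since the real model and feature pipeline live in Ricky's training code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative stand-ins: X is a matrix of per-frame sensor features, y the word labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 11))        # e.g. 5 flex readings plus IMU features per frame
y = rng.integers(0, 10, size=600)     # 10-word vocabulary

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```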

 

The ML model also had a unit test for latency using Python's timing module. This test almost always reported a negligibly small amount of time per prediction, so we were happy with the performance.
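
A minimal sketch of that timing check, using time.perf_counter and a stand-in classifier (the real measurement wrapped our trained model's predict call):

```python
import time
import numpy as np
from sklearn.dummy import DummyClassifier

# Placeholder model and frame; the real measurement wrapped our trained classifier.
model = DummyClassifier(strategy="most_frequent").fit(np.zeros((10, 11)), np.zeros(10))
frame = np.zeros((1, 11))

repeats = 1000
start = time.perf_counter()
for _ in range(repeats):
    model.predict(frame)
elapsed = (time.perf_counter() - start) / repeats
print(f"average per-frame inference time: {elapsed * 1e3:.3f} ms")
```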

 

We also had some unit tests for the data that entailed looking over the flex sensor readings and comparing them to what we expected. If they deviated too much, which was the case for a few words, we took the time to recollect the data for that word or simply removed outlier points.
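
A small sketch of that sanity check, with made-up readings and assumed acceptable ranges standing in for the real per-word expectations:

```python
import pandas as pd

# Illustrative frames; real data is loaded from the per-word CSV files.
df = pd.DataFrame({"index_flex": [510, 505, 498, 950],
                   "middle_flex": [810, 805, 790, 802]})
expected = {"index_flex": (400, 600), "middle_flex": (700, 900)}  # assumed ADC ranges

for col, (lo, hi) in expected.items():
    bad = (df[col] < lo) | (df[col] > hi)
    if bad.mean() > 0.05:     # too many deviating frames: recollect this word
        print(f"{col}: {int(bad.sum())} outlier frames; recollect or drop them")
    else:
        df = df[~bad]         # a few stray points can simply be removed
```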

 

Overall accuracy was tested by having each of us sign every word in our selected vocabulary 12 times in random order and counting how many times the prediction was correct. Initially, performance was rather poor at around 60% accuracy. However, after looking at the mistakes and developing a smaller model for easily confused words, we were able to bump performance up to around 89%.
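
The two-stage idea and the randomized 12-repetition test look roughly like the sketch below; the vocabulary list, the confusable subset, and the two predict functions are illustrative stand-ins, not our actual words or models:

```python
import random

# Illustrative vocabulary and confusable subset; the real lists came from our error analysis.
VOCAB = ["you", "help", "want", "more", "go", "like", "what", "name", "yes", "no"]
CONFUSABLE = {"want", "more", "like"}

def main_model_predict(frame):      # stand-in for the full classifier
    return frame["label"]

def small_model_predict(frame):     # stand-in for the sub-model trained only on CONFUSABLE
    return frame["label"]

def classify(frame):
    word = main_model_predict(frame)
    # If the main model lands in the easily-confused subset, defer to the smaller model.
    return small_model_predict(frame) if word in CONFUSABLE else word

# Accuracy test: each vocabulary word signed 12 times, presented in random order, then tallied.
trials = [w for w in VOCAB for _ in range(12)]
random.shuffle(trials)
correct = sum(classify({"label": w}) == w for w in trials)
print(f"accuracy: {correct / len(trials):.2%}")
```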

 

Overall latency was tested at the same time by calculating the time difference from the initial sensor reading until the system prediction. With our original classification heuristic, it took around 2 seconds per word, which was far too slow. By reducing the number of consecutive predictions the heuristic requires, we were able to bring overall latency down to around 0.7 seconds, which is closer to our use-case requirement.
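
A sketch of that consecutive-prediction heuristic, with an assumed streak length (the real value was tuned to reach the ~0.7 second latency):

```python
from collections import deque

class StreakClassifier:
    """Emit a word only after the same per-frame prediction appears n times in a row."""
    def __init__(self, n: int):
        self.n = n
        self.recent = deque(maxlen=n)

    def update(self, prediction: str):
        self.recent.append(prediction)
        if len(self.recent) == self.n and len(set(self.recent)) == 1:
            self.recent.clear()
            return prediction   # confident enough to speak the word and pulse
        return None             # keep waiting for more agreeing frames

# Usage with an assumed streak length of 3:
clf = StreakClassifier(3)
print([clf.update(p) for p in ["help", "help", "hello", "help", "help", "help"]])
# -> [None, None, None, None, None, 'help']
```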

 

As for sensor testing, we would take a look at the word prior to data collection, and then manually inspect the data values we were getting to ensure they corresponded with what we expected based on the handshape. For example, if the sign required the index finger and thumb to be the only two fingers bent, but the data vector showed the middle finger's flex sensor value was significantly low, we would stop data collection and inspect the circuit connections. Oftentimes a flex sensor had come a little loose and we would have to stick it back in to get normal readings. In addition, after data collection we would compare the feature plots for all three of us for each sign, and make note of any significant discrepancies and why we were getting them. Most often it was due to our hand sizes and finger lengths being different, which is to be expected, but occasionally there was a feature discrepancy indicating someone was moving too much or too little during data collection, which let us know we should recollect data for that person.
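
The feature-plot comparison was essentially an overlay of the three signers' traces for the same sign, along the lines of this sketch with made-up data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative per-signer traces for one sign; real traces come from each person's CSV.
frames = np.arange(50)
traces = {
    "Somya": 500 + 30 * np.sin(frames / 8),
    "Ria":   520 + 28 * np.sin(frames / 8),
    "Ricky": 480 + 35 * np.sin(frames / 8),
}

for name, trace in traces.items():
    plt.plot(frames, trace, label=name)
plt.xlabel("frame")
plt.ylabel("index-finger flex reading (ADC counts)")
plt.title("One sign, three signers: look for large discrepancies")
plt.legend()
plt.show()
```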

 

In terms of haptic feedback unit testing, we wrote a simple Python script and Arduino script to test sending a byte from the computer to the Arduino over Bluetooth, and whether the Arduino was able to read that byte and create a haptic pulse. Once we confirmed this behavior worked through this simple unit test, it was then easy to integrate this into our actual system code.
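
The Python half of that unit test was only a few lines; a sketch of it, with the same hypothetical port and command byte as in the earlier sketch:

```python
import serial  # pyserial; hypothetical port name and command byte

PULSE_CMD = b'\x01'

# Minimal manual unit test: send one byte, then have the tester confirm the pulse fired.
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as conn:
    conn.write(PULSE_CMD)
    felt_it = input("Did the glove pulse exactly once? (y/n) ")
    print("PASS" if felt_it.strip().lower() == "y" else "FAIL")
```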



Somya’s Status Report for 4/20/2024

This week I got the haptic feedback to work. Now, I am able to send a signal via Bluetooth from a Python script to the Arduino at the start and end of calibration, as well as when a signed word/letter has been spoken out. The Arduino switches on the type of signal it receives and produces the appropriate haptic feedback pulse. My next steps are to measure latency, experiment with which type of signal is most user-friendly, and see how far apart the sender and receiver can be for this mechanism to still work in a timely manner.
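
On the Python side, the signal types boil down to a handful of one-byte codes; the specific byte values and pulse patterns below are hypothetical stand-ins:

```python
import serial  # pyserial stand-in; the real transport is the Bluetooth link to the Nano BLE

# Hypothetical one-byte codes; the Arduino switches on these and plays a different pulse pattern.
CAL_START   = b'\x10'   # e.g. one short pulse
CAL_END     = b'\x11'   # e.g. two short pulses
WORD_SPOKEN = b'\x20'   # e.g. one long pulse once the word has been spoken aloud

def notify(conn: serial.Serial, code: bytes) -> None:
    """Tell the glove which event just happened."""
    conn.write(code)
```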

My progress is on schedule. 

This week I am giving the final presentation, so my main goal is focusing on that. In addition, we have our poster and final paper deliverable deadlines quickly approaching, so I plan to dedicate a significant amount of time to those. I also want to help with the overall look of what we will showcase on demo day; one of our focus points from early on was making sure our gloves are as user-friendly and unobtrusive as possible, which is why we're designing a case to hold the battery and PCB, as well as experimenting with different gloves.



Team Status Report for 4/20/2024

The most significant challenge we are facing in our project is the ML algorithm's detection of our chosen set of words. Currently we have decent accuracy, but prediction is inconsistent: accuracy changes depending on the user, or the model recognizes the word but not enough times in a row to meet the criteria of the classification heuristic. In addition, it is common for double-handed words in ASL to incorporate some type of motion. This is significant because our model currently predicts from a single frame rather than a window of frames. This is probably affecting accuracy, so we are looking into mitigating it by incorporating some type of windowed sampling of the frame data, but the tradeoff may be an increase in latency.

 

In terms of changes to the existing design of the system, we have decided to drop the idea of using the 26 letters of the BSL alphabet. This change was necessary because, for a significant portion of the letters in the BSL alphabet, the determining factor is touch, and our glove doesn't have any touch sensors. Instead, we found a 2017 research paper that identified the top 50 most commonly used ASL words and split them into five categories: pronoun, noun, verb, adjective, and adverb. From each of these categories, we picked 2-3 double-handed words to create a set of 10 words.

 

No changes have been made to our schedule. 



Somya’s Status Report for 4/6/2024

This week I finished collecting data for all BSL letters. We found that collection over USB was much faster than over Bluetooth, so this process was quite a time sink until we made the switch halfway through. In addition, I looked into the haptic feedback circuit and will place an order for the linear vibration motor on Tuesday. The circuitry doesn't look too complicated; the only thing I'm worried about is defining what haptic feedback even means in the context of our product. Ideally, we would want different feedback to be generated based on how confident the system is that the signed word was transmitted correctly: one type of pulse above 90% confidence, another below 30%, and a "maybe" range in between. I'm not sure if this is even feasible, so I plan to bring it up at our next meeting.
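
If we do go this route, the mapping from model confidence to pulse type could look like this sketch (the thresholds are the tentative ones mentioned above, not final, and the pulse patterns are placeholders):

```python
# Tentative thresholds from the idea above; nothing here is final.
def pulse_type(confidence: float) -> str:
    if confidence > 0.90:
        return "confirm"   # e.g. one short pulse: word almost certainly transmitted
    if confidence < 0.30:
        return "retry"     # e.g. one long pulse: word probably not recognized, please re-sign
    return "maybe"         # e.g. double pulse: borderline, user may want to re-sign

print(pulse_type(0.95), pulse_type(0.55), pulse_type(0.10))
```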

 

My progress is on schedule. 

 

This upcoming week, in addition to implementing haptic feedback, I want to look into how we can augment our data as we transition to BSL. After this initial round of data collection, we are finding that a lot of the signs have very similar degrees of bending, and since we have decided not to add any additional type of sensor, e.g. touch sensors, this will likely lead to lower accuracy than ideal. Addressing this might involve going through the individual CSV files for letters that are very similar, quantifying just how similar or different they are, and manipulating the data post-collection in some way to make them more distinct. Once Ricky trains the model over the weekend, I'll have a better idea of how to accomplish this more specifically.
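
One simple way to quantify how similar two letters are is to compare the mean feature vectors of their CSV files; a sketch, assuming each CSV holds one sensor reading per column and one frame per row (file names are hypothetical):

```python
import numpy as np
import pandas as pd

def mean_feature_vector(path: str) -> np.ndarray:
    """Average each sensor column over all frames in a letter's CSV (assumed layout)."""
    return pd.read_csv(path).mean().to_numpy()

def letter_distance(path_a: str, path_b: str) -> float:
    """Euclidean distance between two letters' mean feature vectors; small means easily confused."""
    return float(np.linalg.norm(mean_feature_vector(path_a) - mean_feature_vector(path_b)))

# e.g. compare letter_distance("m.csv", "n.csv") against letter_distance("a.csv", "y.csv")
```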



Somya’s Status Report for 3/30/2024

This past week, I made some changes to the glove and helped figure out some bugs we were having with the Bluetooth. One thing we're noticing in our sensor data is that we get some discrepancies depending on who is using the glove. This is to be expected, as we all have different hand sizes as well as slight variations in the way we make each sign. I'm trying to come up with ways we can make the data more consistent besides post-processing cleanup. In our weekly meeting, we discussed adding a calibration phase as well as normalization of the data, which should definitely help, but I still think securing the sensors at more points than they are now will also make a difference. I had a few stacked midterms this past week, so while my progress is still on schedule, I didn't make as much progress as I would have liked. This upcoming week, however, I should be able to dedicate a lot more time to capstone, especially with the interim demo around the corner.
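
A sketch of what the calibration-plus-normalization could look like, assuming the calibration phase captures an open hand and a closed fist per user (the exact procedure is still being decided):

```python
import numpy as np

def calibrate(open_hand: np.ndarray, closed_fist: np.ndarray):
    """Per-user lower/upper bounds from a short calibration phase."""
    return np.minimum(open_hand, closed_fist), np.maximum(open_hand, closed_fist)

def normalize(frame: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Map raw flex readings into [0, 1] so different hand sizes look alike to the model."""
    return np.clip((frame - lo) / np.maximum(hi - lo, 1e-6), 0.0, 1.0)

# Illustrative ADC values for five flex sensors:
lo, hi = calibrate(np.array([300, 310, 305, 295, 300]), np.array([800, 790, 810, 805, 795]))
print(normalize(np.array([550, 400, 700, 300, 795]), lo, hi))
```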

 

More specifically, this upcoming week I would like to add the haptic feedback code to our data collection scripts. Our current plan for MVP is to have the LED on the Arduino blink when the speaker (either on the computer or the external speaker) outputs the signed letter/word and, more importantly, outputs it correctly. I think we should color-code the output based on the success of the transmission: red for didn't go through, yellow for possible but the user might want to re-sign, and green for successful transmission. I also want to order some vibrating motors, because for our final prototype we want this type of feedback so the user doesn't have to constantly look down at their wrist. Finally, I want to bring up changing/adding to what position we deem to be "at rest." Right now, we just have the user holding up their unflexed hand as at rest, and the model is pretty good at recognizing this state, but this isn't really practical: people's at-rest position is typically with their hands at their sides or folded in their lap, or moving around but not actually signing anything. The model sort of falls apart with this broader notion of at rest, and I think adding it to our training data will make our device more robust.



Somya’s Status Report for 3/23/2024

I finished fabricating the second glove and worked on debugging the transmission of Bluetooth data from the Arduino to the laptop running the Python script that collects the data and writes it to a CSV file. In addition, I brainstormed various ways we can remove the laptop as a required component of our final demo.
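
The collection script is essentially a read-parse-write loop; a simplified sketch with a hypothetical port name and an assumed five-value line format:

```python
import csv
import serial  # pyserial; hypothetical port name and an assumed five-value line format

HEADER = ["thumb", "index", "middle", "ring", "pinky"]

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as conn, \
        open("glove_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(HEADER)
    for _ in range(500):                                   # a few seconds of frames
        line = conn.readline().decode(errors="ignore").strip()
        values = line.split(",")
        if len(values) == len(HEADER):                     # skip partial or garbled lines
            writer.writerow(values)
```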

My progress is on schedule, but we are slightly behind on testing the double glove because we are waiting for the second Arduino BLE compute unit to arrive. Once it does arrive, however, we should have all the moving parts in place to begin testing the integration of data from both gloves immediately. In the meantime we are collectively working on other features like the speaker and haptic feedback, as well as cleaning up noisy data to improve the ML model.

This next week, I hope to finish the circuit with both the speaker and haptic feedback, as well as be fully finished collecting data for the BSL double-handed alphabet so I can see what issues the synchronization of the two input streams brings and start debugging that. 


Ria’s Status Report for 3/23/2024

This week I focused on creating the PCB layout for our circuit thus far. I started off by drafting the circuit with each flex sensor and op amp. Then we finalized together which board we are using, after determining whether Bluetooth would meet our latency needs. We ultimately decided to stick with the Arduino Nano BLE Sense Rev2, and I added its headers to the schematic. Finally, I added headers for a battery, the speaker, and the haptic motor.

The next thing I did this week was solder five more flex sensors to red/white wire pairs so that they are ready to attach to the second glove. Now that our first glove works, we decided to parallelize the tasks needed to duplicate it. The next thing I'll focus on for the following week is getting the speaker and the haptic motor driver to work on one glove.



Team Status Report for 3/9/2024

        Our team spent our meeting time reviewing each other's writing on the design report and discussing more details about the content we are including in it. After receiving feedback on our design review slides, we realized that there were a lot of things we still had to churn out, so we spent our time ironing out all of the gaps in our design and transferring that into writing for the report. Things like resistor values, how we will digitize our data, and more detailed testing and validation are a few of the important things we refined. These changes did not incur any additional cost.

        The most significant risk right now to our project is not meeting deadlines and failing to manage our time well. We now have a clear plan of how we want to execute our design, and we have to stay diligent, move forward, and meet all of the deadlines. It sounds trivial to just say "we have to stick to our schedule," but trusting our design process (adjusting as necessary) and being very methodical is what will allow us to do that. From a technical standpoint, a potential risk for our project is our sensors not giving our ML model enough data. In that case, we would have to order more sensors and wait for those to arrive. We will experiment with our model to find out whether this is the case, but until then we will simply stay aware of this risk.

        We have not made any changes to our schedule, and hope to make significant progress on Rapid Prototype I with all our parts finally in one place. 

 

ABET Questions: 

A was written by Somya, B was written by Ria, C was written by Ricky 

Part A: Our project heavily requires the consideration of a global context, because at its heart is the pursuit of better communication, which is a goal that is universal regardless of what language is spoken. There is some grounding in the fact that even though there are hundreds of languages, there will always be the challenge of making communication between deaf and non-deaf persons as easy as possible. We quickly learned that sign language is also not a universal language; there are somewhere between 138 and 300 variants of sign language in use. This information should be factored into the design of our product in the sense that it should be as adaptive and as sensitive as possible. The ultimate end goal would be for any deaf person to slip on these gloves, sign in their own variant of sign language, and communicate with someone who may not even speak that language. In essence, it would work much like Google Translate, but with an additional layer of recognizing global needs: the language barrier extends not just to a country's spoken language but to the form of the sign language itself.

Part B: From our previous analysis of cultural factors, we have adapted our design to be a bit more equitable for users by mounting the speaker on the glove rather than relying on the computer in the final design. We understand that it would be cumbersome for a deaf user to have to prop up a laptop or take out a phone when trying to communicate, so we want to make that process as seamless as possible. We also want to use a haptic feedback system instead of an LED feedback system for the same reason: maintaining eye contact with the person you are conversing with is something people should not have to give up just to use our product.

Part C: Our product doesn't directly require heavy consideration of environmental factors, but it does in a few more indirect ways. Right now, since our main goal is just to develop the glove and have it work, cost is more on the back burner: we are perfectly fine with spending $200+ on all the different components and aren't too focused on the materials themselves, only that they work. Later down the line, though, we do want to consider material choice and cost more closely, because we want our product to be as accessible as possible. This ties into a discussion of natural resources, because we want our product to be easy to manufacture and not require a complicated, environmentally taxing production process. Our choice of final material for the glove should also be biodegradable and resistant enough to everyday wear and tear that the user doesn't need to keep buying replacements, thus reducing environmental waste.



Somya’s Status Report for 3/9/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I finished the circuit diagram for the sensor wiring (with two sensors shown instead of five for ease of viewing); this schematic is one that I will keep adding to and modifying as our design needs change.

In addition, I did the mathematical calculations for determining the options for what resistor we want to use in series with each flex sensor. This is quite important to get right because, after looking at Gesture Glove's challenges and our TA's advice, one of the biggest issues we anticipate is flex sensor sensitivity. To address this, we need as wide an output voltage range as possible, so that if someone signs an 'a' versus a 'b' the voltage outputs aren't something like "1.200V" and "1.215V". As such, we decided that our ideal voltage range would be 1V-4V. This creates the below inequality:
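
(Reconstructing it here as a sketch: with supply voltage V_in, the fixed resistor R on the ground side of the divider, and the flex sensor's resistance R_flex on top, the target range gives 1 V ≤ V_out = V_in · R / (R + R_flex) ≤ 4 V, where V_in is assumed to be 5 V.)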

The tricky thing about having a variable resistor (i.e., the flex sensor) is that the value satisfying both equations that can be formed from the above inequality is negative. So, the best thing you can do in practice is form a range of resistor values and experiment with multiple resistors within that range to see which ones produce the widest output range. Through my calculations, I found this range to be . As such, I tested five of the most common resistor values within this range: 2.7kΩ, 3.3kΩ, 4.7kΩ, 10kΩ, and 47kΩ. Of course we will test these in the actual breadboard circuit, but I also worked through them manually and found that the output range reaches a maximum of ~0.8V with ~10kΩ of resistance, which isn't great. So it's looking like we will need to use an op-amp. I looked at Spectra Symbol's data sheet, which suggests the LM358 or LM324 op-amps. Below is what the circuit diagram for that would look like: . Lastly, I finished all my tasked sections for the Design Report that was due last Friday.
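
A rough sketch of the resistor comparison described above, assuming a 5 V supply, the fixed resistor on the ground side of the divider, and nominal flex-sensor resistances of roughly 25 kΩ flat and 100 kΩ fully bent (illustrative numbers, not our measured ones):

```python
# Voltage-divider output swing for each candidate pull-down resistor (all values assumed).
V_IN = 5.0
R_FLAT, R_BENT = 25e3, 100e3                      # nominal flex-sensor resistances, ohms
candidates = [2.7e3, 3.3e3, 4.7e3, 10e3, 47e3]    # the five resistors tested above

def v_out(r_fixed: float, r_flex: float) -> float:
    return V_IN * r_fixed / (r_fixed + r_flex)

for r in candidates:
    swing = abs(v_out(r, R_FLAT) - v_out(r, R_BENT))
    print(f"R = {r / 1e3:5.1f} kΩ: output swing ≈ {swing:.2f} V")
```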

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule. 

 

What deliverables do you hope to complete in the next week? 

Now that all of our parts have arrived, I hope to accomplish two tasks in particular. First, I want to do some flex sensor unit testing by building a simple voltage divider circuit and seeing which of the five selected pull-down resistors gives the widest range of V_out. I want to do these unit tests first so that if further sensitivity is required by way of an op-amp, I know that before building the entire circuit to accommodate the five flex sensors. Second, after I've determined the pull-down resistor through educated trial and error, I would like to have the five-flex-sensor circuit built by Wednesday. That way we can check whether the Arduino code for reading out the sensors works, and possibly even start the data collection process this week. All our moving parts in preparation for the model training are really close to being done, so hopefully the time we've spent preparing everything pays off and data collection goes smoothly. I also want to bring up making a PCB with the team at Monday's meeting.