Ricky’s Status Report for 4/6/2024

Most of the week, I focused on data collection and everything data-related. At the start of the week, we tried to begin collecting data, but ran into some issues with the left glove, ranging from loose wire connections to parts of the flex sensors catching in the string. I helped iron those out and weighed in on additional changes. In parallel, I adjusted the scripts to handle the two streams of data coming from the gloves. Once everything was set, we collected data from both Somya and me, covering the full BSL alphabet: 26 letters plus an additional sign for the resting state. With around 1,200 points per letter per signer, that came to a total dataset of 64,800 points (2 signers x 27 classes x 1,200 points). I also handled cleaning and visualizing the data before we train a model on it next week.
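As a rough illustration of that cleanup and visualization pass (the file name, column names, and plot choice here are assumptions for the sketch, not our exact script):

```python
# Hypothetical sketch of the cleanup/visualization step; the file name and
# column names are placeholders, not our actual collection script's.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("collected_data.csv")  # one row per sample: sensor columns + label

# Drop rows with missing sensor readings (e.g., from dropped packets).
df = df.dropna()

# Sanity-check class balance: each of the 27 classes should have ~2,400 rows.
print(df["label"].value_counts())

# Visualize one flex-sensor channel per letter to spot outliers.
df.boxplot(column="flex_0", by="label", rot=90)
plt.tight_layout()
plt.show()
```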

Ideally, we would also have collected data from Ria, but due to time constraints, we were unable to. I believe the best course of action is to proceed with this slightly smaller dataset and collect more if performance is not good enough. With that in mind, I am on schedule.

Next week, I will develop a new ML model with the data we collected, evaluate its performance, and make additional changes as needed. I foresee potential classification issues with several letters of the BSL alphabet, which I plan to discuss with my partners on Monday. We might need to curate our vocabulary more rigorously, or find another solution, if the ML model does not reach our desired accuracy metrics. I also hope to run verification tests next week to check both the accuracy and speed of the ML model and of the overall classification system.

Team Status Report for 4/6/2024

One of the most significant risks is the performance of the ML model given our set of sensors. While collecting data this week, we grew concerned about the many letters in the BSL alphabet that have extremely similar hand positions and differ mainly by touch, which our gloves cannot detect. We have not yet developed the model, so we are unsure whether this will be a problem. If it is, we will test a variety of ML models to see if we can boost performance. Worst case, we will shrink the set of letters we aim to detect to one that our gloves can more feasibly distinguish.

As mentioned above, we will evaluate the performance of the ML model over the next week and adjust the vocabulary requirement based on what is feasible. We also added a calibration phase to the software, ahead of the ML model, to help the system adapt to different users. In addition, we are in the process of ordering a PCB for the circuitry; we will integrate it during the last week if possible.

Please see the updated Gantt chart here: Gantt

For verification of the sensor integration, we have added a calibration phase at the start of data collection for each letter. The user holds their hands in the most flexed state and then the most relaxed state for five seconds each. We compute the maximum and minimum sensor values collected during this phase and check that they look as expected; the difference between them forms a delta that we then use to normalize all subsequent data into the range 0 to 1. For verification of the ML model, we will evaluate candidate models on held-out test data in offline settings, collecting data on both accuracy and inference speed. For verification of the circuit that connects the sensors to the compute unit, we inspect the sensor/IMU data collected while a person is signing; discrepancies there prompt us to reexamine the connections on our circuit. For example, we once had to move one of the V_out wires to a different pin because it was loose and corrupting the data we were getting.
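A minimal sketch of that calibration-based normalization (the function names and array shapes are ours for illustration; the actual script may differ):

```python
import numpy as np

def calibrate(samples: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Given raw readings from the five-second calibration phase
    (shape: [n_samples, n_sensors]), return per-sensor min and delta."""
    lo = samples.min(axis=0)
    hi = samples.max(axis=0)
    return lo, hi - lo

def normalize(reading: np.ndarray, lo: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Map a raw reading into [0, 1] using the calibrated delta."""
    return np.clip((reading - lo) / delta, 0.0, 1.0)
```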

With regard to overall validation, we will test the real-time performance of the gloves. For accuracy, we will have the user sign a comprehensive list of the letters and score the predictions against the intended letters. We will also measure the time from signing to speaker output using Python’s timing modules.
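The latency measurement could look something like the sketch below, using time.perf_counter; the classify and speak functions are placeholders for our actual pipeline stages:

```python
import time

def measure_sign_to_speech(reading, classify, speak):
    """Time the path from a received sensor reading to finished speech.
    `classify` and `speak` stand in for the real pipeline stages."""
    start = time.perf_counter()
    letter = classify(reading)
    speak(letter)
    return time.perf_counter() - start
```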

We also plan on conducting user testing. We will have several other people wear the gloves and then complete a survey evaluating their comfort, the visual appeal, and the weight of the gloves. They will also be able to leave any additional comments about the design.

Somya’s Status Report for 3/30/2024

This past week, I made some changes to the glove and helped track down some bugs we were having with Bluetooth. One thing we’re noticing in our sensor data is discrepancies depending on who is using the glove. This is to be expected, as we all have different hand sizes and slight variations in how we make each sign. I’m trying to come up with ways to make the data more consistent beyond post-processing cleanup. In our weekly meeting, we discussed adding a calibration phase and normalizing the data, which should definitely help, but I still think securing the sensors at more points than they are now will also make a difference. I had a few stacked midterms this past week, so while my progress is still on schedule, I didn’t get as much done as I would have liked. This upcoming week, however, I should be able to dedicate a lot more time to capstone, especially with the interim demo around the corner.

 

More specifically, this upcoming week I would like to add the haptic feedback code to our data collection scripts. Our current plan for MVP is to have the LED on the Arduino blink when the speaker (either on the computer or the external speaker) outputs the signed letter/word and, more importantly, outputs it correctly. I think we should color-code the LED based on the success of the transmission: red if it didn’t go through, yellow if it went through but the user may want to re-sign, and green for a successful transmission (a sketch of this mapping follows below). I also want to order some vibrating motors, because for our final prototype we want this type of feedback so the user doesn’t have to constantly look down at their wrist. Finally, I want to bring up changing, or adding to, what position we deem to be “at rest”. Right now, we just have the user holding up their unflexed hand as at rest, and the model is pretty good at recognizing this state, but this isn’t really practical: a person’s at-rest position is typically with their hands at their side or folded in their lap, or moving around without actually signing anything. The model mostly falls apart with these notions of at rest, and I think adding them to our training data will make our device more robust.
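As a rough sketch of the proposed color coding on the laptop side (the thresholds and the interface are hypothetical, not decided yet):

```python
# Hypothetical mapping from classifier outcome to LED status; the 0.8
# threshold and the None convention are placeholders, not final decisions.
from typing import Optional

def feedback_status(confidence: Optional[float]) -> str:
    """Return the LED color to send back to the Arduino."""
    if confidence is None:   # nothing received / transmission failed
        return "red"
    if confidence < 0.8:     # low-confidence prediction: suggest re-signing
        return "yellow"
    return "green"           # confident, successful transmission
```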



Team Status Report for 3/30/2024

The most significant risk we face right now is the timeline for collecting the data and the robustness of the ML model. We hope to start collecting data next week, which gives us ample time to collect more if the initial dataset needs to grow. Ricky also plans to test multiple architectures and hyperparameters if performance is insufficient. The other risk is keeping the sensor readings consistent; we are mitigating it through newly implemented calibration techniques and the introduction of a PCB.

The major change to the design is that, for now, we will stick with the laptop as the main speaker component. This is due to our choice of Arduino, which limits speech output directly from the chip. We will circle back to this idea if time permits after testing and model tuning.

We will proceed with the original schedule. Ricky has adjusted parts of his timeline to reflect the delays in Bluetooth integration, and Ria has added the PCB creation timeline, but overall it remains close to the original schedule.

Ricky’s Status Report for 3/30/2024

This week I restructured our codebase. Originally, the Bluetooth and USB scripts lived in the same file; for ease of use, I refactored the code so that the two paths are separated into their own modules. I also spent the bulk of the week fixing our Bluetooth system. When running the bleak package, I realized that its performance was very inconsistent on my Windows laptop, and further research suggested that Bleak’s Windows support is not fully mature. As a result, I had to look into other Bluetooth packages, and eventually found simplepyble. I integrated it and verified that it properly connects to our gloves and the Arduino. However, further tests showed that it was incompatible with the pyttsx3 package: the former requires a multi-threaded setting, while the latter requires a single-threaded one. I eventually found another route for speech, the win32com.client package with its Dispatch object, and it worked. I integrated the Bluetooth, speaker, and ML systems and did some performance benchmarking. Quick tests put the time from sign to speech at around 0.6 seconds over USB and 0.82 seconds over Bluetooth. There are some hyperparameters we can adjust to improve time performance at the risk of accuracy, which we will look into later. I also helped assemble the circuit for the second glove. Please see our public repository for the codebase or future reference: TStar111/EchoSign (github.com)
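For reference, the win32com speech path works roughly like this (a minimal sketch using the standard Windows SAPI voice; our integrated version wraps this inside the larger pipeline):

```python
# Minimal sketch of text-to-speech via win32com; requires pywin32 on Windows.
import win32com.client

speaker = win32com.client.Dispatch("SAPI.SpVoice")
speaker.Speak("A")  # speaks the classified letter aloud
```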

I am right on schedule.

Next week, the bulk of the time will be spent on collecting data. If time permits, I will start training and benchmarking the ML model.

 

Ria’s Status Report for 3/30/2024

This week, I started by deciding whether the onboard speaker was a feasible endeavor for our team. Ricky mentioned a library called Talkie that does text-to-speech on an Arduino. I considered using that library on the glove to output speech through a speaker, after passing the audio signal through an amplifier circuit that I built. Unfortunately, when I tried to run the test script I wrote, I found that Talkie does not support the BLE board we have. I had assumed that since the BLE board shares its architecture with other boards Talkie supports, it would be worth testing, but the library’s source code branches on the specific board in use, and our BLE board is not on that list. Rather than modifying the library’s source code, we decided that for MVP the best use of our time was to keep the speech output coming from the computer.

My next task for this week was figuring out how to power the board. The 3.7 V LiPo batteries are the ideal size for our portable product, but the BLE board accepts 4.5 V to 21 V on Vin. I considered whether a larger, bulkier battery could work, but could not find one that was both cheap and light enough to mount on a wearable device. I decided to look into a step-up DC-DC (boost) converter instead. The capacitors on these boards are a bit bulky, but they will do the trick, and the MT3608 chip can output a voltage in the range we need. I found the schematic for this board, added it to our custom PCB, ordered the part, and plan to test wirelessly powering our board on Monday.
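As a sanity check on the converter’s output setting: assuming the MT3608’s typical 0.6 V feedback reference from its datasheet, the output is set by the feedback divider as Vout = 0.6 * (1 + R1/R2). The quick calculation below (resistor values are illustrative, not our final schematic) picks values for a 5 V rail:

```python
# Illustrative check of the MT3608 feedback divider, assuming the datasheet's
# typical 0.6 V reference; resistor values are examples, not our PCB's.
V_REF = 0.6             # volts
r1, r2 = 22_000, 3_000  # ohms (top and bottom of the divider)
v_out = V_REF * (1 + r1 / r2)
print(f"Vout = {v_out:.2f} V")  # ~5.0 V, within the BLE board's 4.5-21 V Vin range
```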

Somya’s Status Report for 3/23/2024

I finished fabricating the second glove and worked on debugging the transmission of the Bluetooth data from the Arduino to the laptop, which runs the Python script used to collect the data and compress it into a CSV file. In addition, I brainstormed ways to remove the laptop as a required component of our final demo.
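The core of a collection script like ours is conceptually simple; a minimal sketch of the USB-serial variant (the port name, baud rate, and column layout are illustrative, not our exact script):

```python
# Sketch of logging sensor readings to CSV; port, baud rate, and columns
# are placeholders, not our actual collection script's values.
import csv
import serial  # pyserial

with serial.Serial("COM3", 9600, timeout=1) as port, \
        open("collected_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["flex_0", "flex_1", "flex_2", "flex_3", "flex_4", "label"])
    for _ in range(1200):  # ~1,200 samples per letter
        line = port.readline().decode().strip()  # e.g. "512,498,601,455,530"
        if line:
            writer.writerow(line.split(",") + ["A"])
```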

My progress is on schedule, but we are slightly behind on testing the double glove because we are waiting for the second Arduino BLE compute unit to arrive. Once it does arrive, however, we should have all the moving parts in place to begin testing the integration of data from both gloves immediately. In the meantime, we are collectively working on other features, like the speaker and haptic feedback, as well as cleaning up noisy data to improve the ML model.

Next week, I hope to finish the circuit with both the speaker and haptic feedback, and to fully finish collecting data for the two-handed BSL alphabet so I can see what issues synchronizing the two input streams brings and start debugging them.


Ria’s Status Report for 3/23/2024

This week I focused on creating the PCB layout for our circuit so far. I started by drafting the circuit with each flex sensor and op amp. Then we had to finalize which board we are using, which we did together (determining whether Bluetooth would meet our latency needs). We ultimately decided to stick with the Arduino Nano BLE Sense Rev2, and I added its headers to the schematic. Finally, I added headers for a battery, the speaker, and the haptic motor.

The other thing I did this week was solder five more flex sensors to red/white wire pairs so they are ready to attach to the second glove. Now that our first glove works, we decided to parallelize the tasks needed to duplicate it. Next week I’ll focus on getting the speaker and the haptic motor driver working on one glove.



Ricky’s Status Report for 3/23/2024

At the start of the week, I handled training the model on the dataset we collected the previous week. I was able to train a simple NN architecture to a high test accuracy (see below for the training visualization). Afterward, I went back and tested real-time performance; I didn’t test rigorously, but I was able to replicate a high level of performance on the glove as well. I also quickly checked the speed of the ML classification step and found that inference takes a negligible amount of time, which lets me meet my design requirements of above 90% accuracy and an inference time under 25 ms. I also added a classification heuristic on top of the ML model: a wrapper that checks for repeated occurrences of the same letter before committing to a prediction (a sketch of the idea follows below). I added speaker capabilities via Python’s pyttsx3 library as well. Finally, I spent a significant portion of the week debugging Bluetooth; we faced limitations because our laptops do not run Linux, but we eventually identified the Bleak library as a reliable option for Bluetooth support.
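A minimal sketch of that debouncing wrapper (the class name, threshold, and interface are illustrative, not the exact implementation):

```python
# Illustrative sketch of the repeated-prediction heuristic; the threshold
# and interface are placeholders, not our exact wrapper.
class DebouncedClassifier:
    def __init__(self, model, repeats_needed: int = 5):
        self.model = model
        self.repeats_needed = repeats_needed
        self.last_letter = None
        self.count = 0

    def predict(self, reading):
        """Return a letter only after the model has predicted it
        `repeats_needed` times in a row; otherwise return None."""
        letter = self.model.predict(reading)
        self.count = self.count + 1 if letter == self.last_letter else 1
        self.last_letter = letter
        return letter if self.count >= self.repeats_needed else None
```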

My progress is on schedule, though I anticipate that I will have less time next week due to upcoming exams.

Next week, I will assist with the development of the second glove as needed. I also hope to establish simultaneous Bluetooth communication between both gloves and the laptop, and to figure out what we plan to do about the speaker. If time permits, I will start collecting data from the two-glove setup.

Team Status Report for 3/23/2024

One significant risk we face right now is the development of the synchronization algorithm/heuristic. We were able to establish a Bluetooth connection from a laptop to a glove via the Bleak library in Python, and we are doing preliminary research into maintaining two Bluetooth connections to the laptop at once. If that proves infeasible, our contingency plan is to connect the gloves to each other and send a single stream of data to the laptop. We are also exploring strategies for generating audio output from the glove itself; we seem to be somewhat blocked by our choice of Arduino, so our fallback is keeping speaker functionality on the laptop.
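Bleak’s asyncio-based API should in principle allow holding both connections concurrently; a minimal sketch of the idea (the MAC addresses and characteristic UUID below are placeholders, not our devices’ actual values):

```python
# Sketch of two simultaneous BLE connections with Bleak; addresses and the
# characteristic UUID are placeholders.
import asyncio
from bleak import BleakClient

LEFT, RIGHT = "AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"
SENSOR_UUID = "0000XXXX-0000-1000-8000-00805f9b34fb"  # placeholder UUID

def on_data(glove):
    def handler(_char, data: bytearray):
        print(glove, data)  # hand off to the synchronization logic here
    return handler

async def main():
    async with BleakClient(LEFT) as left, BleakClient(RIGHT) as right:
        await left.start_notify(SENSOR_UUID, on_data("left"))
        await right.start_notify(SENSOR_UUID, on_data("right"))
        await asyncio.sleep(30)  # stream from both gloves for 30 seconds

asyncio.run(main())
```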

There have been no major updates to the design as of right now. We will closely monitor the situation with the speaker and Bluetooth in the upcoming week to see if any adjustments need to be made.

We are on schedule, having started construction of the second glove. We have also essentially finished prototype 1, making it ready to present at the interim demo.