Team Status Report for 3/16/2024

The major risk right now is that the data is insufficient to distinguish the various signs with our single-glove design. To address this, we plan to add feature extraction based on our heuristics, which should enrich the training data (see the sketch below). We have also included all 9 DOF of the IMU even though they may not all be needed to distinguish features; this is something else we will look at as part of the risk mitigation. We are also slightly concerned about distinguishing signs that rely less on bending the fingers and more on one finger touching the others. Our contingency plan is to incorporate 1-2 touch sensors into our design, which, if needed, should be a simple addition circuitry-wise. Below are some pictures of the glove fabrication process.
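A rough sketch of what that heuristic feature extraction could look like (the sensor layout, derived features, and names below are illustrative assumptions, not our final implementation):

```python
import numpy as np

def extract_features(flex, imu):
    """Hypothetical heuristic feature extraction.

    flex: array of 5 flex-sensor readings (one per finger)
    imu:  array of 9 IMU readings (accel x/y/z, gyro x/y/z, mag x/y/z)
    """
    flex = np.asarray(flex, dtype=float)
    imu = np.asarray(imu, dtype=float)

    # Differences between adjacent fingers help separate signs that differ
    # in relative (rather than absolute) finger bend.
    adjacent_diffs = np.diff(flex)

    # Overall "curl" of the hand and its spread.
    total_bend = flex.sum()
    bend_spread = flex.max() - flex.min()

    # Magnitude of each IMU triad, as a coarse orientation/motion summary.
    accel_mag = np.linalg.norm(imu[0:3])
    gyro_mag = np.linalg.norm(imu[3:6])
    mag_mag = np.linalg.norm(imu[6:9])

    return np.concatenate([
        flex, adjacent_diffs,
        [total_bend, bend_spread, accel_mag, gyro_mag, mag_mag],
        imu,
    ])
```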

The only major change is that we have decided to pursue a PCB design for the glove to simplify the op-amp and Arduino circuitry on the glove.

We have pushed everything back one week; we are aiming to have the first prototype done by around Mar. 21.

Ricky’s Status Report for 3/9/2024

This week, I primarily worked on the design document. I was responsible for large chunks of the text on machine learning training/testing and on data collection and labeling. I touched upon the use-case requirements, some design trade studies, and big parts of the system implementation and testing. I also finalized what would be appropriate for EDA: specifically, I want to graph the average value of each sensor for each letter and calculate relevant summary statistics such as the mean, median, and variance.
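As a sketch of that planned EDA, assuming a labeled CSV with one row per sample and a "letter" column (the file and column names are placeholders):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the labeled dataset: one row per sample, one column per sensor reading.
df = pd.read_csv("labeled_data.csv")
sensor_cols = [c for c in df.columns if c != "letter"]

# Summary statistics per letter and sensor (mean, median, spread).
summary = df.groupby("letter")[sensor_cols].agg(["mean", "median", "std"])
print(summary)

# Bar chart of the average value of each sensor for each letter.
df.groupby("letter")[sensor_cols].mean().plot(kind="bar", figsize=(12, 5))
plt.ylabel("Average sensor reading")
plt.title("Mean sensor values per letter")
plt.tight_layout()
plt.show()
```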

The software development is on schedule, but the actual data collection component is a bit behind. I will have to wait for the glove to be built before I can catch up, and I will keep working on the software until then.

Next week, I will hopefully help assemble the glove and begin data collection. I will also implement the EDA described above and finish the last two ML model training modes.

Ricky’s Status Report for 2/24/2024

This week, I primarily focused on implementing the script that will train an NN architecture and save the model for future inference use. This implementation has a variety of features, including variable choices of input data, cross-validation training to identify the optimal step size, and use of the wandb package to track training performance across epochs. I have also implemented the SVM model trainer; this was fairly straightforward, with a heavy dependence on the sklearn package. I also tuned the data collection script to automatically collect all 26 letters in one shot. Finally, I did some research into pre-training data analysis based on my “Estimation, Detection, and Learning” class and added some potential action items for preparing a clean dataset.
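For reference, a minimal version of what the sklearn-based SVM trainer might look like (the file name, column names, and hyperparameter grid are assumptions for illustration, not the exact script):

```python
import joblib
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load the labeled sensor data (placeholder file and label column).
df = pd.read_csv("labeled_data.csv")
X, y = df.drop(columns=["letter"]).values, df["letter"].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Cross-validate over a small hyperparameter grid, with feature scaling.
pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(
    pipe,
    {"svc__C": [0.1, 1, 10], "svc__kernel": ["rbf", "linear"]},
    cv=5,
)
grid.fit(X_train, y_train)

print("Best params:", grid.best_params_)
print("Held-out accuracy:", grid.best_estimator_.score(X_test, y_test))

# Save the fitted model for later inference.
joblib.dump(grid.best_estimator_, "svm_model.joblib")
```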

I am on schedule as of right now.

Next week, I hope to put significant work into the design document. In addition, I hope to flesh out a robust testing module, finalize dataset analysis, and begin collecting data with the glove (dependent on the glove’s progress).

Team Status Report for 2/24/2024

Right now, the most significant risks revolve around getting our parts in time so we can begin constructing the glove and collecting data. We were unfortunately unable to place our orders in time this week, so we hope to receive our parts next week and start assembling the glove. Meanwhile, development of the major software components (communication, ML) has started.

No major changes have been made to the system. We did buy two computing units, but we will not choose between them until we can test their performance.

The timeframe for building the first glove has slipped slightly, but everything else is on schedule.

Team Status Report for 2/17/2024

The most significant risks right now are whether parts will arrive promptly and our reliance on Bluetooth for communication. We can't do anything about the parts, so we will proceed by working on as many software components as possible. To mitigate the dependence on Bluetooth, we bought an additional chip that supports Wi-Fi connectivity, which will serve as an alternative if Bluetooth doesn't work out.

There were several changes to the requirements. In our design, we decided to use a laptop to handle the machine learning prediction and speaker output. We moved away from using a smaller computing unit because we believe the main focus of our project should be the wireless, two-glove nature of our design. We have also decided to use the British Sign Language alphabet as our goal set, because it provides a standardized set of 26 two-handed gestures that work well together. We also reduced our accuracy goal to 85%, to reflect the increased complexity of the two-glove design as well as past projects' results (Gesture Glove achieved 75% real-time accuracy with a single glove). We are also looking into relaxing the latency requirement due to the potential slowdown caused by Bluetooth transmission; we haven't nailed down an exact number yet but are researching it. There are no major cost updates as we order our initial parts.

We are on schedule. No major changes to schedule.

Part A (by Somya)

Our product will enhance the welfare and safety of the deaf community because of its usability. A discreet pair of gloves that translates sign language to speech will facilitate communication between the deaf and non-deaf populations. As such, the product will minimize the need for separate structures to be established for deaf people, allowing them to go about their day-to-day tasks more conveniently. The better the communication between the deaf and non-deaf communities, the more integrated and less alienated the former will be. In terms of safety, our product could be very useful for deaf people communicating in an emergency that arises somewhere with no one around who understands sign language. In such situations, timely communication is of the utmost importance, and our product will be designed with this use case in mind.

Part B (by Ria) 

Our product will be impactful for the deaf community and potentially for those who are hard of hearing. Society is progressing towards providing various disabled communities with tools they can use to navigate daily life seamlessly. Our goal is to extend this mission to users who sign and want to communicate with people who don't. This is a step towards establishing better conversations between the deaf community and those who can hear.

Not only does this product attempt to solve a complex problem, but it also raises awareness about the nuanced challenges that the deaf communities face every day. By learning more about their needs and struggles, we are aiming to spark conversations – pun intended – about how we can go even further. This product can be useful in education, video games, and even during emergency situations. We hope that this project adds a few cobblestones to the road being paved towards a more inclusive society. 

Part C (by Ricky)

Our product is not meant to be marketed as something developed for significant profit. I would argue that few economic factors concern us, because our main purpose isn't to make a product cheaper or easier to produce. We would urge potential investors against trying to profit from our product, since it is meant to bridge the communication gap between deaf and non-deaf individuals. Inevitably, there will be a cost associated with purchasing our product, but we hope it will remain limited to the cost of parts. Due to its lightweight design, we hope that distributing the product will be relatively seamless and inexpensive. In short, our product is not intended to be profitable; instead, it is meant to help a disadvantaged community.

Ricky’s Status Report for 2/17/2024

My main task for this week was to prepare slides for the design presentation. For this, I added more input on which ML models we would use, how much data we would need, how to collect the data, and the general data flow. I also set up GitHub and added the general file structure for the different scripts we will need: gathering data, training the model, testing the model, and the general runtime routine. I wrote the first draft of the script responsible for collecting the data via Bluetooth, and I revamped our entire Gantt chart to reflect the changes to our development cycle (prototypes). I am continuing to think about how we will store the saved models so that different models can be loaded based on the demo goals (NN, SVM, vocabulary).
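One possible way the model storage could be organized so that a model is selected by type and vocabulary at demo time (the directory layout and function names here are hypothetical, not a settled design):

```python
import joblib
from pathlib import Path

MODEL_DIR = Path("saved_models")  # hypothetical location in the repo

def save_model(model, model_type, vocabulary):
    """Save a trained model keyed by type (e.g. 'nn', 'svm') and vocabulary name."""
    MODEL_DIR.mkdir(exist_ok=True)
    joblib.dump(model, MODEL_DIR / f"{model_type}_{vocabulary}.joblib")

def load_model(model_type, vocabulary):
    """Load the model that matches the chosen demo configuration."""
    return joblib.load(MODEL_DIR / f"{model_type}_{vocabulary}.joblib")

# Example: pick the model at demo time based on the demo goals.
# clf = load_model("svm", "bsl_alphabet")
```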

My progress is on schedule.

Next week, I hope to draft the scripts for training the various ML models, finalize where we will store the models, and decide how to reuse them once they have been thoroughly trained. I will also set up the testing routine to evaluate the models' performance and explore automatic searching for optimal hyperparameters. Finally, I will assist in developing the physical glove and add my input on how the data coming from the glove should be formatted.

Ricky’s Status Report for 2/10/2024

This week, I primarily focused on researching how the ML model will be incorporated into the design. The first major issue I looked into was the logistics of collecting the training data. Recall that we will be collecting our own dataset to train the model. Past similar projects don't offer much information about this, so I came up with the idea of using a Python script to automate collecting sensor readings while another person signs a specific word. Then, after exporting to an Excel file, we can go back and systematically label all the data with the corresponding word. This is the plan for now and will be adjusted based on its performance when we get to that stage. I also considered additional processing of the input before the ML model. Past projects seemed to preprocess the flex sensor output into a predetermined range of angle values; I believe this is worth looking into and will add it to our design, keeping the raw input as a backup plan. For output processing, I also took inspiration from previous projects: they used a general heuristic of requiring repeated predictions of the same word before producing speaker output. I believe this idea will also help us cut down false positives while the signer is in a transition state. In summary, I was able to narrow down the complexity and logistics of using the ML model for prediction.
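A minimal sketch of that repeated-prediction heuristic (the class name, repeat count, and the speak() hook are illustrative assumptions, not a finalized implementation):

```python
from collections import deque

class PredictionDebouncer:
    """Emit a word only after it has been predicted N times in a row,
    to suppress false positives while the signer is transitioning."""

    def __init__(self, required_repeats=5):
        self.required_repeats = required_repeats
        self.recent = deque(maxlen=required_repeats)

    def update(self, predicted_word):
        self.recent.append(predicted_word)
        if (len(self.recent) == self.required_repeats
                and len(set(self.recent)) == 1):
            self.recent.clear()    # avoid immediately re-emitting the same word
            return predicted_word  # stable prediction: safe to send to the speaker
        return None

# Example usage (predictions would come from the ML model each frame):
# debouncer = PredictionDebouncer(required_repeats=5)
# word = debouncer.update(model_prediction)
# if word is not None:
#     speak(word)  # hypothetical speaker-output function
```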

My progress is on schedule. I am confident that I will wrap up the planning and design logistics soon.

Next week, I hope to present my ML findings to my partners and start working on the design slides focused on the ML model. I will also start on a Python script for data collection based on the glove compute we decide on, and I will need to research Bluetooth modules for Python.

Ria’s Status Report for 2/10/2024

I spent this week doing a ton of research on what compute to use for our design, as well as some lower-level ideas for the software implementation. I presented all of these in our team meetings so that we could develop our design into a more complete product.

I first looked at the macro level of the project and found gloves that we could iterate on. I found some very basic cotton ones on Amazon, and we decided to use those for now.

Then I researched a few designs for how the compute would be set up. The fully fleshed-out idea would ideally have two boards, one on each hand, doing all of the compute and all interactions with the world. But since this is a really complex design, we decided to break our project up into rapid prototypes, with the first one being a single working glove communicating with a computer via Bluetooth. Thinking through the processes involved and breaking the problem down into subcomponents helped me better visualize the future of the class and the product.

I proposed a few boards: the Jetson Nano, Arduino Nano 33, and Pi Zero. The information each of us brought to the conversation helped us prepare for next week, when we will finalize a design.

Somya’s Status Report for 2/10/2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I mainly researched the sensors we plan on using for the gloves. There were a number of factors to consider, including cost, durability, and sensitivity. I compiled a list of sensors used by various research groups and previous capstone projects for similar gesture recognition applications, and then pitched various ideas in one of our team meetings. We have decided to start with the traditional one-flex-sensor-per-finger approach, namely Spectra Symbol's SpectraFlex sensors, an improved design that is lightweight, thinner, and aims to minimize drift (i.e., an unexpected change in sensor output even after the input has returned to its initial state).

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule. 

 

What deliverables do you hope to complete in the next week? 

I plan on helping complete the slides for the upcoming Design Review proposal, as well as researching further into how we plan to address sensitivity issues with the flex sensors, since this was mentioned in the feedback we received on our initial proposal. In addition, I would like to have our sensors ordered by the end of the week and to start thinking about the pinout of the sensors to the Arduino.

Team Status Report for 2/10/2024

The major risk we are facing right now concerns quickly finalizing our design. Over the week, we hashed out exact details about what our product will look like, including specific parts we are interested in ordering. This is a small risk because we are close to finalizing our design and intermediate prototypes, and it will continue to be managed as we enter design week and fully finalize design ideas and part ordering. There are minimal contingency plans, since the plan is simply to finalize our design.

 

Instead of housing the data processing in a separate unit, we have decided to run our processing algorithms on a PC. The rationale behind this change is that having our device work in a self-contained manner (i.e., without the user needing to carry around their computer) is not a top priority of ours; our main goal is to show a proof of concept of the gloves working.

 

We also set some soft deadlines, to be finalized by next week, for the prototypes we have planned. Each prototype should stand alone and work, and each one builds on the last. Below are the details of each rapid prototype as well as when we anticipate finishing it.

 

  • RP 1 due March 20:
    • Phase 1:
      • Create one glove; sensor detection is reliable and IMU data is gathered
      • Wired connection to the laptop; data is sent directly to the laptop, where we can monitor sensor readings (see the monitoring sketch after this list)
      • No speaker, no Bluetooth
    • Phase 2:
      • Test Bluetooth capabilities, add a battery, and create a glove that can transmit data over Bluetooth
      • Maintain the same performance as the Phase 1 glove if possible
    • Phase 3:
      • Train the ML model and finish the product
      • Add speaker and haptic feedback
  • RP 2 due April 3:
    • Phase 1a:
      • Duplicate the glove to create a second glove
      • Figure out how to send both gloves' data to the laptop via Bluetooth
    • Phase 1b:
      • Make a PCB only if we are somewhat ahead of schedule
    • Phase 2:
      • More data gathering and training
  • RP 3 due April 14:
    • Phase 1a:
      • Expand the vocabulary
    • Phase 1b:
      • Experiment with making this a distributed system using some form of communication protocol (need to iron this out)
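
For the wired monitoring step in RP 1 Phase 1, a minimal sketch of what the laptop-side script could look like (pyserial-based; the port, baud rate, and data format are placeholder assumptions):

```python
import serial  # pyserial

# Hypothetical wired monitoring loop: read comma-separated sensor readings
# from the glove's Arduino over USB serial and print them for inspection.
PORT = "/dev/ttyUSB0"   # placeholder; depends on the laptop and OS
BAUD = 115200           # placeholder baud rate

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue
        # Example format assumption: 5 flex readings followed by 9 IMU values.
        values = line.split(",")
        print(values)
```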