Weekly Status Report #2 9/29

Ordering Parts (Sean & Bobbie)

This week we ordered several parts to begin creating our physical test environment and directional hearing aid.  For our test environment, we ordered dual speakers along with a USB hub to serve as a power source and a USB 7.1-channel audio adapter to output our test recordings to individual speakers.  We also ordered a mannequin torso to mount our directional hearing aid on. The total budget spent on the test environment so far is $140.65. (Bobbie)


For our directional hearing aid, we purchased 8 omnidirectional microphones to mount on our test torso.  We also ordered a Teensy 3.6 microcontroller, 4 Teensy Audio Boards to receive the microphone inputs, and 8 headers to attach the audio boards to the microcontroller.  The total budget spent on our hearing aid components is $118.65. It is important to note that we ordered extra parts in case of changes or issues moving forward. (Sean)

Generating Test Environment Recordings (Bobbie)

We moved forward with our test environment planning.  Per discussion with Professor Sullivan in lab, we decided to build basic MATLAB simulations to generate signals and sound delays given arbitrary speaker & microphone placements.  Below is a screenshot of sample MATLAB code written by Bobbie Chen:
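
As an illustration of the approach (this is a minimal sketch of the idea, not the code in the screenshot), the core computation delays and attenuates a test signal for each microphone based on its distance from the speaker.  The speaker and microphone positions, sampling rate, and speed of sound below are assumed values:

  % Sketch: delay and attenuate a test signal for each microphone, given
  % arbitrary speaker and microphone positions (all values are assumptions).
  fs = 48000;                              % sampling rate (Hz)
  c  = 343;                                % speed of sound (m/s)
  speaker = [1.0; 0.5; 0];                 % speaker position [x; y; z] in meters
  mics = [0    0.05 0.10 0.15;             % microphone x-coordinates
          0    0    0    0   ;             % microphone y-coordinates
          0    0    0    0   ];            % microphone z-coordinates

  t = (0:fs-1)'/fs;                        % one second of audio
  s = sin(2*pi*440*t);                     % placeholder source signal

  nMics = size(mics, 2);
  received = zeros(length(s), nMics);
  for m = 1:nMics
      d = norm(speaker - mics(:, m));      % speaker-to-mic distance (m)
      delaySamples = round(d / c * fs);    % propagation delay in samples
      gain = 1 / max(d, 0.1);              % simple 1/r attenuation
      received(delaySamples+1:end, m) = gain * s(1:end-delaySamples);
  end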

Signal Processing (Kevin)

We are continuing our research on adaptive noise-cancellation techniques to implement on the Teensy microcontroller.  We have read several research papers to get a better understanding of the different algorithms we could attempt to use in our MATLAB simulations.  One paper of note is Adaptive Noise Cancelling: Principles and Applications (Widrow, McCool, Williams).  Looking forward, we aim to have an initial implementation of ANC in our MATLAB simulation by the end of the week.
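
To make the direction concrete, below is a minimal MATLAB sketch of the classic LMS adaptive noise canceller described in that paper, run on synthetic signals.  The filter length, step size, and the way the noise reaches the primary channel are assumed values, and this is not yet our chosen algorithm:

  % Sketch: LMS adaptive noise cancelling on synthetic data (assumed values).
  fs = 16000;
  n  = (0:2*fs-1)';
  s  = sin(2*pi*200*n/fs);                 % stand-in for the desired signal
  v  = 0.5*randn(size(n));                 % noise source
  primary   = s + filter([1 0.5], 1, v);   % primary channel: signal + noise path
  reference = v;                           % reference channel: correlated noise

  L  = 16;                                 % adaptive filter length
  mu = 0.01;                               % LMS step size
  w  = zeros(L, 1);                        % filter weights
  e  = zeros(size(primary));               % error output = cleaned signal

  for k = L:length(primary)
      x = reference(k:-1:k-L+1);           % latest L reference samples
      y = w' * x;                          % current noise estimate
      e(k) = primary(k) - y;               % subtract the estimate
      w = w + mu * x * e(k);               % LMS weight update
  end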

Circuitry and Embedded Software (Sean)

We have a general design for using hardware microphones wired directly to the Teensy board’s I/O pins. We are currently doing research on how to read Pulse Density Modulation (PDM) data from the microphones onto the Teensy board. We will start working on assembling the microphones once they arrive, and we will aim to get a signal through the circuitry.
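
As background for that work, the conversion the firmware ultimately has to perform is from PDM (a 1-bit, heavily oversampled stream) to PCM, via low-pass filtering and decimation.  Below is a conceptual MATLAB sketch of that conversion on a synthetic bitstream; the clock rate, decimation factor, and modulator are assumed values, and this is not Teensy code:

  % Sketch: conceptual PDM-to-PCM conversion (assumed clock and decimation).
  pdmRate = 3.072e6;                  % assumed PDM clock (Hz)
  t = (0:pdmRate*0.01-1)'/pdmRate;    % 10 ms of audio
  x = 0.5*sin(2*pi*1000*t);           % signal the microphone "hears"

  % 1-bit PDM stream from a first-order sigma-delta modulator.
  pdm = zeros(size(x));
  acc = 0;  prev = 0;
  for k = 1:length(x)
      acc = acc + x(k) - prev;        % integrate the quantization error
      if acc >= 0, prev = 1; else, prev = -1; end
      pdm(k) = prev;
  end

  % Recover PCM: low-pass filter and decimate (two stages of 8 = factor 64,
  % bringing 3.072 MHz down to 48 kHz).
  pcm = decimate(decimate(pdm, 8), 8);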

Goals for Next Week

For the coming week, we plan to assemble the test environment.  We also aim to begin assembling the hardware for the hearing aid, hopefully getting a loopback from microphone to earphones.  With regard to signal processing, our goal is to have an initial implementation of adaptive noise cancellation in our MATLAB simulation by the end of next week.

 

Weekly Status Report 9/23

This week we made progress on planning the schedule, ordering parts, working on the testing setup, and background research on signal processing.

Schedules

While a lot of the work for our project will be on the signal processing side, we would not be able to verify that it works without the required hardware and a way to test it. Therefore, at the beginning of our schedule we are focusing on getting the hardware and testing environment right.

We are currently on schedule. One potential risk is the TOC early next week, followed soon after by interviewing season, which could reduce the amount of work we accomplish over the next two weeks.

Ordering Parts

We assembled a list of parts needed for the hardware and test setup. These represent the minimal set of materials we need to start capturing input in a controlled test environment. The parts include the Teensy microcontroller board and audio jacks for I/O, speakers and a power supply for the test environment, and a torso mannequin to mount the mic array on.

Testing Setup

We intend to administer the Hearing In Noise Test (HINT) using a circle of speakers, so that test sounds can come from different directions.

Over the course of this week, we contacted institutions which administer the HINT. Some offered stereo recordings for administration with headphones, but unfortunately that would not be helpful for our test setup; many did not respond.

Because of this, we are planning to generate our own 7.1 sound recordings using Audacity and FFmpeg. This week we created an example of a HINT recording. However, the HINT test is an adaptive test, where signal-to-noise ratio is dynamically adjusted based on the test subject’s responses. Therefore, we are also looking into ways to automate the generation of many HINT recordings with varying signal-to-noise ratios.
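
Whether we end up doing this with FFmpeg or with MATLAB, the core mixing step is simple.  As an illustration, here is a MATLAB sketch that mixes speech and noise at one target SNR into an 8-channel (7.1) WAV; the file names, channel assignment, and target SNR are assumptions:

  % Sketch: mix speech and noise at a target SNR into an 8-channel WAV.
  % Assumes mono input files at the same sample rate, noise at least as
  % long as the speech.
  [speech, fs] = audioread('hint_sentence.wav');   % hypothetical speech file
  [noise, ~]   = audioread('babble_noise.wav');    % hypothetical noise file
  noise = noise(1:length(speech));                 % match lengths

  targetSNRdB = 5;                                 % desired signal-to-noise ratio
  noise = noise * sqrt(mean(speech.^2) / (mean(noise.^2) * 10^(targetSNRdB/10)));

  out = zeros(length(speech), 8);                  % 7.1 = 8 discrete channels
  out(:, 1) = speech;                              % speech from one speaker
  out(:, 5) = noise;                               % noise from another speaker
  audiowrite(sprintf('hint_snr_%+d_dB.wav', targetSNRdB), out, fs);

Looping this over a list of SNR values would give the set of recordings an adaptive test needs.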

Signal Processing Research

We also continued research into signal processing algorithms, mostly into adaptive nulling. We found that it has been tried unsuccessfully in the context of hearing aids, at least in part because the space constraints of traditional hearing aids were too limiting.

Planned Deliverables This Week

  • Order all parts which we have decided on
  • Generate more test environment recordings and automate the process
  • (If parts arrive) begin assembling the hardware and test environment
  • Meet with Professor Sullivan regarding I/O concerns for signal processing
  • Settle on a prototype signal processing block diagram for an adaptive nulling process

 

Introduction and Project Summary

Hi everyone! For our ECE capstone project, we are working on a directional hearing aid, using a wearable microphone array.

Inspiration

The inspiration for this project came from meeting a dance instructor, Dan, who is legally deaf and relies on a hearing aid to communicate with his students. Many conventional hearing aids simply amplify all nearby sound, making hearing in a noisy environment like a dance studio, with loud music and multiple conversations, nearly impossible. As a result, Dan usually resorts to lip reading or turning up the volume on his earpiece, which only damages his hearing further.

Solution

To improve on conventional hearing aids in noisy environments, our group proposes a directional hearing aid.

Our implementation will consist of a microphone array worn by a hearing-impaired person. The microphones will be connected to a microcontroller, which then applies signal processing algorithms to focus on the sound from the area in front of the user. We assume that the user’s torso is facing in the direction that they want to hear.

 

We chose the form of a microphone array worn on the torso because this gives us the space to place more microphones with greater separation. This allows for certain signal processing methods which are not effective in traditional small in- and on-ear hearing aids.

Problems We’re Not Trying To Solve

We picture a specific use case here, where the user is trying to hear a person in front of them in various noisy conditions like a crowded room. In particular, we do not aim to help users listen to things like music or public speakers in a crowded venue – this is because the standard solution for these is to use a hearing loop system to transmit the sounds wirelessly, directly to a user’s hearing aid.

We are also not trying to handle cases where the sound source moves around the user – it would be very difficult to predict which sounds a person actually wants to listen to in a full soundscape. With our project, we envision that the user can simply turn their body toward whatever they want to hear.

Implementation Details

Physical Implementation

We will create an array of multiple (likely 4) microphones, mounted onto the front of the user’s torso (for example, with a vest). These microphones will then interface with the Teensy 3.6 microcontroller – we chose this particular model because it is the base component for the Tympan open-source hearing aid board.

After signal processing is performed on the mic input, we will send the output to a pair of headphones. This is because we do not have the time or resources to build our own in-ear hearing aid or reverse-engineer a commercial one.

Signal Processing Implementation

Our goal is to reduce the background noise and improve the directionality of what the user can hear in real time, especially for speech. We will implement algorithms in C and run them on the Teensy microcontroller to transform the sounds from our microphones into a cleaner signal that the user can listen to on the headphones. Potential methods for doing this include multi-channel Wiener filtering and adaptive nulling, though we have not yet decided on any particular algorithm.
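
As a point of reference while we evaluate those options, the simplest directional baseline is a delay-and-sum beamformer: delay each microphone’s signal so that sound from the look direction lines up, then average the channels. Below is a MATLAB sketch on synthetic data (the final code would be C on the Teensy); the array geometry, sampling rate, and look direction are assumed values, and this is not the algorithm we have committed to:

  % Sketch: delay-and-sum beamforming toward an assumed look direction.
  fs = 48000;
  c  = 343;                            % speed of sound (m/s)
  micX = [0 0.05 0.10 0.15];           % mic x-positions across the chest (m)
  theta = 20;                          % look direction (degrees from broadside)

  % Placeholder input: a tone arriving from the look direction, plus noise.
  t = (0:fs-1)'/fs;
  tau = micX * sind(theta) / c;        % per-mic arrival-time offsets (s)
  micSignals = zeros(length(t), numel(micX));
  for m = 1:numel(micX)
      micSignals(:, m) = sin(2*pi*500*(t - tau(m))) + 0.2*randn(size(t));
  end

  % Re-align each channel to the look direction, then average.
  dSamp = round((max(tau) - tau) * fs);
  out = zeros(length(t), 1);
  for m = 1:numel(micX)
      shifted = [zeros(dSamp(m), 1); micSignals(1:end-dSamp(m), m)];
      out = out + shifted / numel(micX);
  end

A multi-channel Wiener filter or adaptive nulling stage would replace or extend this fixed combination of channels.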

Conclusion

We’re working on an interesting project with the potential to help countless hearing-impaired people around the world. If you would like to help, please contact us at seanande@andrew.cmu.edu, bobbiec@andrew.cmu.edu, and kmsong@andrew.cmu.edu!