Multichannel Wiener Filter Demos

Example 1:

Input: uniform white noise

Output:  attenuation of ~30 dB with noticeable distortion

Example 2:

Input: speech babble noise, directional in east and south-east directions

Output: attenuation of ~6-8 dB, minimal noticeable distortion

Performance here suffers because the voice activity detector has poorer performance in babble noise.
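
As a reference for the attenuation numbers above, attenuation in dB can be computed from RMS levels before and after filtering. A sketch with synthetic signals (the residual gain of 0.03 is an arbitrary example, not a measurement):

```python
import math

# How attenuation figures like "~30 dB" are computed: attenuation in dB
# from the RMS noise level before and after filtering. The signals here
# are synthetic stand-ins, and the 0.03 residual gain is illustrative.

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def attenuation_db(before, after):
    return 20 * math.log10(rms(before) / rms(after))

noise = [math.sin(0.5 * n) for n in range(1000)]
residual = [0.03 * v for v in noise]      # filter leaves ~3% of the noise
att = attenuation_db(noise, residual)     # a bit over 30 dB
```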

Weekly Status Report #11: 12/1

Kevin

Done this week

This week, I implemented and tested the beamforming algorithm described last week in Matlab.  After finishing the Matlab code, I ran preliminary tests to get an approximate measure of signal attenuation.  With one noise signal, we achieved approximately 30 dB of attenuation.  I also worked with Bobbie on translating it to C.

Goals for next week

  • Finish translation to C
  • Test C-code on pre-recorded microphone inputs (until we get real-time microphone input working)
  • Help finish wearable microphone array setup
  • Experiment with the number of filter taps and the array configuration
  • Create poster for demonstration

Bobbie

Done this week

This week I have put our old mic input problems to rest and now have new and exciting mic input problems.

I found that some of the input selection functions actually returned status codes; we were getting I2C write errors (status code 4: other unknown error). To make a long story short, after much de- and re-soldering, and purchasing a new Teensy in case our first one had some flaw, I resolved the I2C error, but mic input still did not work.

This entire search for audio input in the last few weeks came to an abrupt halt when I read all the Teensy documentation I could find and realized that the audio jack we were trying to use on the audio shield was never meant to take input (only headphone-level output).

On this realization I bought a few breakout boards for splitting out audio jack line-in into individual pins. While I did not get the lapel mic to work, using the boards and the line-in holes on the audio shield I was able to use my computer as an audio source to send audio to the Teensy.

The resulting output audio had interesting properties: a pure sine wave came through correctly, but only at the few highest ticks of my computer’s output volume; speech, microphone loopback, and music were completely inaudible; and a sine wave mixed with other audio produced a recognizable sine wave with added unrecognizable distortion from the other audio.

This suggested to me that the voltage range the Teensy expected did not match mic-level input: line-out at headphone level is a far higher voltage range than mic-level line-in. However, when I tried to adjust the audio chip to a smaller voltage range using sgtl_5000.lineInLevel(N), I got the same results.

To summarize the current mic issues:

  • A pure sine wave is playable via computer line-out at high volume settings
    • I interpret this to mean the audio shield ADC is looking in the wrong voltage range
  • But speech and music can’t be played at the same volume settings
    • Which is just strange. I didn’t measure the voltages at line-out generated by the speech/music, but I would expect them to be similar.
  • Playing sine wave + music yields the sine wave plus distortion whenever the music plays
    • I interpret this to mean the (base?) voltage needs to be high, yet the usable range around that voltage is very low.

I have not resolved the input issues yet and if I don’t in the next few days, I will probably pivot to working on code for moving 4-channel audio on and off the board via USB or SD card, to run the signal processing there and output the result via the (working) headphone jack.

I also worked with Kevin on translating his Matlab code to C.

Goals for next week

  • Get everything working

Sean

Done this week

I spent a significant amount of time, along with Bobbie, trying to get four-channel audio through the board using the audio shields. Bobbie discovered that the shields weren’t meant to take input through the audio jack.

This week I hooked up mobile power using a battery to the Teensy. I did this by connecting a lithium ion battery to an Adafruit micro-USB charging port, then connecting the charging port to the external power pins on the Teensy.

Goals for next week

  • Get everything working

 

Weekly Status Report #9: 11/17

Bobbie

Done this week

This week I worked on getting SDW-MWF to work with the Teensy Audio library. This required changing many constants, like the buffer length in each audio frame. However, I ran into problems (segfaults) when I tried to actually integrate it. I spent a good amount of time trying to debug the C code generated by Matlab Coder, until Professor Sullivan suggested just rewriting it directly.

So I rewrote the code for the Teensy Audio library as a new object (mwf.h and mwf.cpp). This also required implementing a voice activity detector, which I did using the algorithm described in “A Simple But Efficient Real-time Voice Activity Detection Algorithm” (Moattar and Homayounpour, 2009). I tested the voice activity detector against some WAV inputs, and it appears to work with good accuracy. The main time spent here was learning the bit of C++ needed to integrate with the library, and familiarizing myself with the CMSIS DSP Software Library (“arm_math.h”), a DSP library optimized for the ARM architecture. For example, a few times I wrote a loop to calculate some simple property like the mean, only to find that I could have just called arm_mean_q15(buf, len, &mean).
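
To sketch the flavor of a frame-based VAD like this (not our actual mwf.cpp code: the real algorithm also uses spectral flatness and dominant-frequency features, and every constant below is illustrative):

```python
# Simplified energy-only voice activity detector, loosely following the
# frame-by-frame structure of Moattar & Homayounpour (2009). The real
# algorithm combines energy with spectral flatness and most dominant
# frequency; this sketch tracks short-term energy against an adaptive
# noise-floor estimate. All constants are illustrative.

def frame_energy(frame):
    return sum(x * x for x in frame) / len(frame)

def vad(frames, init_frames=30, energy_margin=40.0):
    """Return one boolean per frame: True = speech detected."""
    decisions = []
    e_min = None
    for i, frame in enumerate(frames):
        e = frame_energy(frame)
        # Track the minimum energy seen so far as a noise-floor estimate.
        if e_min is None or e < e_min:
            e_min = e
        # During the first few frames, assume silence and just calibrate.
        if i < init_frames:
            decisions.append(False)
            continue
        decisions.append(e > e_min * energy_margin)
    return decisions

# Toy check: quiet frames followed by a loud burst.
quiet = [[0.01, -0.01, 0.02, -0.02]] * 40
loud = [[0.5, -0.6, 0.7, -0.4]] * 10
labels = vad(quiet + loud)
```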

I also talked to Kevin about material choice for the microphone rig, and we decided on mounting the microphones to Lexan polycarbonate, a material which is strong, stiff, and slightly flexible. We chose it over acrylic because it will not shatter under load, and over wood because it is stronger. We also decided against 3D-printed ABS because it is less flexible, and our geometry is not complex enough to warrant the time and expense.

Goals for next week

  • Acquire scrap Lexan from my shop and cut/drill it to the sizes we need
  • Work with Sean to get the 4-mic input to the MWF working on the Teensy board

 

Kevin

Done this week

This week I worked on converting the MATLAB beamforming code to C.  I am currently working on a delay-and-sum beamformer for our microphone array.  The C implementation is not finished yet, partially delayed because this weekend was spent in Ohio.  I currently have code for microphone array initialization and microphone delay calculations.  I have added screenshots of sample code below.

Goals for next week

  • Finish converting the Frost beamformer code to C
  • Finish building the microphone array in the test environment
  • Test microphone inputs on the Frost beamformer

Sean

Done this week

I received and tested the PDM microphones this week, but unfortunately translating the data uses too much CPU power. Reading from 1 mic used about 37% CPU and 2 mics used 81%, so we won’t be able to translate data from 4 microphones and still process all the audio. Therefore, we’re going forward with I2S communication via the Teensy audio shields. I found some left/right mono-to-stereo audio adapters, as well as a microphone/headphone splitter, so we can record and output data on the same I2S port. This setup will allow us to have 4 inputs and 1 output with only 2 audio shields attached to the Teensy.

I rewrote my Teensy startup code to take in I2S inputs and connect to an empty filter, which will eventually become Kevin’s and Bobbie’s algorithms.  I also implemented a simple program that records for 5 seconds, then plays that audio back. I am using this to test different microphone setups.

Goals for next week

Next week, I want to build the mic setup and test multiple input and single output via I2S. I hope Bobbie and I will be able to implement and test his noise cancelling algorithm on the Teensy board. We will initially try this with only one microphone input.

Weekly Status Report #8: 11/10

Kevin

Done this week

I started this week off by looking into converting our MATLAB code to C/C++.  I used MATLAB Coder, but quickly realized that it would not produce code we could use directly.  The biggest issue I came across was that a number of built-in MATLAB functions could not be directly converted.  After changing a lot of the MATLAB built-ins to hand-coded functions, I was left with one final error, in Coder’s conversion of our Frost beamformer.

While waiting for microphones, I also did more research on beamforming, specifically on how reverberant noise affects beamforming algorithms, which I had not read much about.  Below are some of the notes I took on the online literature, both to track the progress of this project and for the report we will have to write.

 

NOTES:

  • What is beamforming
    • Beamforming is achieved by filtering the microphone signals and combining the outputs to extract (by constructive combining) the desired signal and reject (by destructive combining) interfering signals according to their spatial location.
  • How to model reverberant noise
    • MSS (multichannel source separation): process of estimating signals from N unobserved sources, given M microphones
    • Si and xj are the source and mixture signals respectively, hji is a P-point Room Impulse Response (RIR) from source i to microphone j, P is the number of paths between each source-microphone pair, and ∆ is the delay of the pth path from source i to microphone j
  • Question: how to model each different path given an unknown room?
  • Time-domain vs. frequency-domain beamforming
    • Broadband speech signals can use either beamforming technique
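
The MSS mixing model in the notes above can be sketched as a convolutive mixture: each microphone signal is the sum, over sources, of the source convolved with its RIR. A toy example (the two-source, two-mic RIRs are tiny hand-made values, not measured responses):

```python
# Toy version of the convolutive mixing model from the notes:
# x_j = sum over sources i of (h_ji * s_i), where h_ji is the room
# impulse response (RIR) from source i to microphone j. The RIRs here
# are hand-made (a direct path plus one attenuated, delayed
# reflection), purely for illustration.

def convolve(h, s):
    """Plain FIR convolution; output length len(h) + len(s) - 1."""
    out = [0.0] * (len(h) + len(s) - 1)
    for n, hn in enumerate(h):
        for m, sm in enumerate(s):
            out[n + m] += hn * sm
    return out

def mix(sources, rirs):
    """rirs[j][i] is the RIR from source i to microphone j."""
    mics = []
    for mic_rirs in rirs:
        contribs = [convolve(h, s) for h, s in zip(mic_rirs, sources)]
        length = max(len(c) for c in contribs)
        mics.append([sum(c[n] if n < len(c) else 0.0 for c in contribs)
                     for n in range(length)])
    return mics

s1 = [1.0, 0.0, 0.0, 0.0]             # impulse source
s2 = [0.0, 1.0, 0.0, 0.0]             # delayed impulse source
rirs = [
    [[1.0, 0.0, 0.3], [0.5]],         # mic 0: RIR from s1, from s2
    [[0.5], [1.0, 0.0, 0.3]],         # mic 1: RIR from s1, from s2
]
x = mix([s1, s2], rirs)
```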

Goals for next week

  • Frost beamformer conversion to C code
  • Start building rig for microphones

Bobbie

Done this week

This week I worked primarily on documenting our existing Matlab code and using Matlab Coder to convert my SDW-MWF implementation into C code. Matlab Coder has certain limitations that make it different from running Matlab directly. For example, system objects cannot take variable length inputs, and variable types must be explicitly declared and cannot change.

The resulting code (excerpt above) is also ugly. This makes it more important than ever to have properly documented Matlab source code before generating C, as comments are preserved.

I also wrote a quick Makefile for actually compiling code which calls into the shared library. This took an embarrassing amount of time (a few hours) to figure out.

 

I also tweaked the HINT runner to interactively prompt as if a real HINT test is being run, i.e. with binary search on the example tests to find the proper range of SNRs to run it at.

On the chest mount, since our second PCB arrived and was also not correct, I did not have mics to mount. However, the chest mount we ordered did arrive and I took a look at it; it works very well for our purposes and should only require drilling a couple holes to bolt on a rigid surface like polycarbonate to get the microphones on.

Goals for next week

  • Work with Sean to get audio input from our newly ordered mics through the Teensy audio shield.
  • Process the input on the SDW-MWF C code, and evaluate maximum sampling rate for real-time operation.

 

Sean

Done this week

I ordered a few different microphones to try some simple audio input. I ordered some I2S microphones with breakout boards so we can easily get inputs to the board. These microphones have been tested with the Teensy board, and there are tutorials on how to set them up with the board. I also ordered breakout boards for the current PDM microphones we have, in case we want to go forward with those.

I soldered the board together and got audio output working through the audio shield this week. I also was able to generate a low-jitter 2 MHz clock on the board to a digital pin, which we can use to drive the microphone data. I finished most of the setup code for outputting the clock, left/right select lines, data-in lines, and audio output through the audio shield. I’ve also started on processing 4 audio inputs at once.

Goals for next week

Next week, I need to get an audio signal into the board through our microphones. I will test both types of microphones after they come in, and continue to work on C code adaptations of the Matlab algorithms in the meantime. The microphone integration is vital, and I need to get that done as soon as possible after they arrive.

I will also be working on multithreading the audio input on the Teensy board. There are a few ways this can be done, and I’m currently deciding which option to pursue.

Weekly Status Report #6 10/27

Bobbie

Done this week

I successfully reduced the distortion in the SDW-MWF implementation. During lab on Monday, Professor Sullivan helped identify the problem as a windowing issue: previously there was no overlap between windows, which caused boundary effects at the edges of each window. Using a 50% window overlap drastically reduced the distortion. The tradeoff is that we now need to process the next (overlapping) frame before producing output, which adds 5-9 ms of delay; this is within our timing budget, so it is acceptable. The new output is below.
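
As a sanity check on why 50% overlap removes the boundary effects: shifted copies of a periodic Hann window sum to a constant, so overlap-added frames blend smoothly at window edges. The frame length and window type below are illustrative assumptions, not necessarily the ones in our implementation:

```python
import math

# With a 50% hop, shifted copies of a periodic Hann window sum to a
# constant (the constant-overlap-add property), so overlap-added
# frames blend smoothly instead of producing boundary artifacts.

N = 512                    # frame length (illustrative)
hop = N // 2               # 50% overlap

# Periodic Hann window (denominator N, not N - 1).
w = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

# At every sample in the overlapped region, two windows contribute.
cola = [w[n] + w[n + hop] for n in range(hop)]
```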

For the wearable mic array mount, I ordered this phone chest mount from Amazon: https://www.amazon.com/Samsung-Holder-Record-Awesome-Action/dp/B00MYN0CGI/ref=cm_cr_arp_d_product_top?ie=UTF8

This will serve as the basic model of our wearable mount. The main problem we anticipate from this mount is rotation about the central mounting point. Ideally there should be two solid pieces upon which the mics are mounted, with straps connecting as well. This will consist of:

  1. Two rigid mount holders (metal, laser-cut acrylic, or similar)
  2. Webbing
  3. Buckles/straps

I also wrote code to generate the HINT test recordings. However, this is not completely practical to use yet, because there is no interface for quickly switching between various SNRs – this is needed because the HINT is an adaptive test, switching SNR levels based on the subject’s success rates.

Goals for next week

  • Build mount for microphones to attach to the purchased chest harness
  • Create interface for running HINT given recordings

Kevin

Done this week

One of my goals for this week was to finalize an initial physical setup for the microphone array.  I decided that for a linear array of n omnidirectional microphones with equal inter-element spacing, the distance between adjacent elements should be d = c/(2*f0), where c is the speed of sound and f0 is the midband frequency.  Because the human voice in telephony ranges from approximately 300 Hz to 3400 Hz, I chose 1800 Hz as the midband frequency.  As a result, the spacing between adjacent microphones was chosen to be 9.5 cm.  This means a 4-microphone setup gives an array about 28.5 cm long (three 9.5 cm gaps between the four microphones).

Additionally, regarding how each microphone input will be processed: the delay Tn between the desired signal arriving at the nth microphone and arriving at the first microphone (at the origin of the coordinate system) is Tn = (n-1)*d*sin(θ)/c, where d = 9.5 cm and c is the speed of sound (343 m/s).
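
Both formulas can be sanity-checked numerically; the arrival angle below is an arbitrary example value, not a design decision:

```python
import math

# Numeric check of the spacing and steering-delay formulas above.
# c and f0 come from the text; the arrival angle is illustrative.
c = 343.0                  # speed of sound, m/s
f0 = 1800.0                # chosen midband frequency, Hz

d = c / (2 * f0)           # inter-element spacing, meters (~9.5 cm)

def steering_delay(n, theta):
    """Delay of mic n (1-indexed) relative to mic 1, in seconds."""
    return (n - 1) * d * math.sin(theta) / c

theta = math.radians(30)   # example arrival angle
delays = [steering_delay(n, theta) for n in range(1, 5)]
```

The delays grow linearly across the array, which is what a delay-and-sum structure compensates for.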

For signal processing, I have an initial MATLAB program which implements Frost Beamforming to try on our microphone setup.  I have not done extensive testing yet on the adaptive beamforming code. This following week will be spent testing and modifying the beamforming code with (hopefully) real microphone input signals.

On the hardware side of things, I worked in lab with Sean to create a temporary microphone setup for testing, using microphones that were already available in the lab. The microphones we are using for now are PM: WM-55D103 condenser mics.  The circuit we built around each condenser microphone for testing is shown in the figure below.

Goals for next week

  • Start building a setup rig with the harness that Bobbie purchased to create wearable microphone array
  • Test adaptive beamforming algorithm on ideal generated signals
    • Possibly move to real-input signals

 

Sean

Done this Week

This week, we put together a circuit for the microphones Professor Sullivan gave us and successfully read an analog signal on the Teensy board. I just used simple analog-read functions from the Arduino library to read sound levels. I also started writing code using the PDM library to process the input from our surface-mount microphones.

We decided to go forward with a rechargeable battery pack for external power, pictured below. In order to prevent damaging the board from supplying USB power and external power, I cut the trace between the two pads, to isolate the voltages. I will also cut the power line of the USB cord, just in case the trace isn’t completely cut.

I found tutorials for outputting high speed clocks from the Teensy, and will talk to Professor Sullivan more about these methods.

Goals for Next Week

  • Read digital signal on board
  • Decide on mic driver clock method based on usability, jitter, and accuracy

Weekly Status Report #5 10/20

Bobbie

Done this week

This week I implemented speech-distortion weighted multi-channel Wiener filtering as described in “Multichannel Filtering For Optimum Noise Reduction In Microphone Arrays” (Florêncio and Malvar, 2001). The results are very strong in reducing noise, but also introduce a very noticeable distortion in the output signal. See the attached audio for an example.

This implementation processes the audio frame-by-frame, writing results to a buffer. For now, the buffer is simply written to file at script completion, but in an actual program it would be read from continuously for output. The 38 seconds of input were processed in ~3 seconds including file reading and writing; this works out to roughly 80 milliseconds of processing per second of audio, which is well within our time budget. The result shown involved some tweaking of the parameter P as well as the parameters of the least-mean-squares adaptive filter. I don’t see any glaring errors in my implementation of the algorithm, but the distortion levels are currently quite high and distracting. I will continue trying to reduce the distortion, because the noise reduction itself, about 24 dB, is worth keeping.
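
For reference, the core of the least-mean-squares update whose parameters (filter length and step size) get tuned looks roughly like the following. This is a generic LMS sketch with toy signals, not an excerpt of the SDW-MWF implementation:

```python
import math

# Generic least-mean-squares (LMS) adaptive filter: the loop whose
# parameters (number of taps, step size mu) are the tuning knobs.
# The signals and constants below are illustrative only.

def lms(reference, primary, num_taps=4, mu=0.05):
    """Adapt an FIR filter mapping reference -> primary; return errors."""
    w = [0.0] * num_taps
    errors = []
    for n in range(len(primary)):
        # Most recent num_taps reference samples (zero-padded at start).
        x = [reference[n - k] if n - k >= 0 else 0.0
             for k in range(num_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # filter output
        e = primary[n] - y                         # estimation error
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]
        errors.append(e)
    return errors

# Toy run: primary is a delayed, scaled copy of the reference, so the
# error should shrink as the filter converges.
ref = [math.sin(0.3 * n) for n in range(2000)]
pri = [0.8 * ref[n - 1] if n >= 1 else 0.0 for n in range(2000)]
err = lms(ref, pri)
```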

Original input:

Post-MWF output:

I also generated some working C code using Matlab Coder. This was quite straightforward: Matlab Coder was able to automatically detect the types based on a script which used the relevant functions. We have the option of using Matlab Embedded Coder to generate directly for an ARM Cortex-M target, which may run faster because it will optimize for the particular processor we are using. However, we are not over our timing budget yet, and it is useful to be able to read the C source code, so we will stick with plain source-code generation for now.

 

Goals for next week

  • Reduce the distortion in the SDW-MWF implementation to a more manageable level
  • Generate one HINT test to be run on the test environment
  • Draft a parts list and diagram for the wearable mic array mount

Kevin

Done this week

This week I looked into how our team would go from our basic Matlab implementation, which consisted of LMS noise cancellation using a reference and a primary signal, to using a microphone array.  After looking through research papers, adaptive beamforming seemed very promising.

The basic principle of adaptive beamforming is as follows.  Beamforming microphone arrays are spatial filters that take multiple microphone signals as input and combine them into a single output signal. Usually the combined output is calculated by filtering each microphone signal through a digital FIR filter and summing the outputs of all filters, as shown in the figure. The filters are designed so that their outputs add constructively when sound comes from a specific direction (the main lobe) and destructively when sound comes from all other directions. This creates a spatial filtering effect: focusing on the sound coming from the main-lobe direction while attenuating sounds from all other directions.
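
In its simplest form, with each FIR filter reduced to a pure delay, filter-and-sum becomes a delay-and-sum beamformer. A toy sketch with integer-sample delays (the delays and signals are illustrative, not our design values):

```python
import math

# Minimal delay-and-sum beamformer: advance each microphone channel by
# its known steering delay (integer samples, for simplicity) and
# average. A source matching the steering delays adds coherently;
# sound from other directions adds incoherently and is attenuated.

def delay_and_sum(mics, steer_delays):
    """mics: equal-length channels; steer_delays: integer samples."""
    length = len(mics[0])
    out = []
    for n in range(length):
        acc = 0.0
        for ch, dly in zip(mics, steer_delays):
            idx = n + dly            # advance to undo the arrival delay
            acc += ch[idx] if 0 <= idx < length else 0.0
        out.append(acc / len(mics))
    return out

L = 1000
target = [math.sin(0.2 * n) for n in range(L)]
delays = [0, 2, 4, 6]                # arrival delay at each mic
# Each mic hears the target delayed by its own arrival delay.
mics = [[target[n - d] if n - d >= 0 else 0.0 for n in range(L)]
        for d in delays]
aligned = delay_and_sum(mics, delays)
```

With the correct steering delays, the aligned output reconstructs the target; signals arriving with mismatched delays would be smeared and attenuated instead.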

Goals for next week

  • Have a working implementation of beamforming
  • Build temporary microphone circuit for testing (soldering microphones and designing circuit)
    • Test on microphones that we received in lab
  • Design actual microphone array for wearable
    • (distance between microphones, what materials for mounting, etc.)

Sean

Done this Week

This week I fixed the PCB so that we can order it through PCBWay. This included manually adding layers that weren’t generated by the Eagle software and changing file encodings. I will order it as soon as I hear back from Quinn.

I have started writing code for the Teensy to get simple inputs from the digital pins, where we will soon be inputting a signal from the microphones given to us by Professor Sullivan.

I did research on outputting a low-noise clock from the Teensy board, which can be done by clock-dividing the 16MHz I2S master clock. I also researched pulse density modulation on the Teensy board, which I found is included in the Audio library for the Teensy. Finally, I looked into power options for the Teensy. For now, we will use USB power, but the final product will need to have a portable battery pack, which we can set up with 2 or 3 AA batteries. Ideally, we’ll use a lower-profile, rechargeable battery for convenience and wearability.

Goals for Next Week

  • Get signal through board using larger microphones
  • Generate clock output on Teensy
  • Begin using PDM library on Teensy
  • Connect Teensy Audio Shield for audio output to headphones
  • Figure out what method to use for external power so as not to damage the board when it is hooked up to the USB cable and an external power source.

Schedule

We have just missed a couple milestones on our schedule.

We have not sent a signal through the mics into the hardware system yet. The main delay here has been the very small mics we initially ordered and follow-on issues with PCB design. The PCB issues have been fixed and we should be able to order this week (waiting to hear back from Quinn). In the meantime, on Monday Sean will put together the larger microphones that Professor Sullivan lent us and get an actual input signal by then.

We also have not yet created a HINT test which we can run on our test environment. This is mainly because there is not much point in creating one while the microphone-to-hardware connection to receive the signal does not exist. Bobbie will do this over the coming week.

Although these milestones were missed, we are not far behind in the project overall. This is because other non-blocked tasks in signal processing made progress when the tasks for the milestones above were blocked.

As another schedule-related concern, Sean will be travelling most of this week for interviews, and that may put his work slightly behind schedule. He will work diligently while he is gone to try to keep the project moving forward.

Weekly Status Report #4 10/13

Bobbie

Done this week

This week I generated test audio samples which would simulate the delays involved in sound arriving at our microphone array, using the Matlab script I wrote last week.

 

I also explored alternative signal processing techniques beyond adaptive nulling which could also be useful in our project. The most promising of these were Speech-distortion weighted multi-channel Wiener filtering (Ngo 2011), and the acoustic rake receiver (Dokmanic, Scheibler, and Vetterli 2015).

Over the course of this research I identified a few common useful utilities and prepared them for use in our project. This includes implementing the short-time Fourier transform and its inverse in Matlab, and finding the documentation for the Matlab “snr” function.

Goals for next week

  • Implement SDW-MWF in Matlab and evaluate its performance relative to our existing Matlab code.
  • Generate a working C implementation of the better-performing noise cancellation algorithm using Matlab Coder.

Kevin

Done this week

This week I explored the specific use case of white-noise adaptive cancellation with the implementation currently in place.  More specifically, trying to pinpoint the conditions that cause the filtering to work well vs. not well.

 

Additionally, I worked with Bobbie to come up with a quantitative way to analyze our results/ANC output and compare it to our desired signal.  We settled on using the Matlab “snr” function to evaluate the signal-to-noise ratio in our algorithm output relative to the original signal.

 

Also, I emailed Professor Cameron Riviere to meet as a group and discuss his previous experience with adaptive noise cancelling.  He is a professor from the Robotics department with whom I am currently taking a sensors and sensing class. We believe he will be a good resource for creating an effective real microphone array, as well as for feedback on our ANC implementation and on how to integrate our current basic MATLAB implementation with a microphone array.  His previous ANC experience does not involve audio-signal inputs, but we should still be able to get helpful feedback.

Goals for next week

  • Create a concrete geometric layout for how we want the microphone array with justification
  • Meet with professor Riviere to get a clearer direction on future endeavors
    • Clear concept on how to integrate our basic ANC code with multiple reference microphones.  (i.e. how to apply ANC to multiple reference signals)
    • Feedback on current ANC implementation
    • Feedback on our current testing environment

Questions to explore

  • How will multiple different noise signals affect our reference signals?
  • What do we do if our primary signal has a noise input that is much greater in magnitude (10+ dB) than the signal desired?
    • Our guess is that this is not as important a use case, because even a person without hearing impairments would have a difficult time in this scenario

Sean

Done this week

This week I noticed some crossing traces in the routing of the surface-mount microphone PCB that I designed in Eagle. The crossings look like they might not be necessary, so I want to take a look and change the routing to avoid them if possible.

I also did some research into audio on the Teensy board and found the Teensy Audio System Design Tool, which provides a GUI for drawing audio block diagrams and exporting Arduino code.

While our signal processing is very likely to be too complex to model graphically using these blocks, this will still be a useful tool that we can use to generate code and simplify getting audio in and out of the Teensy board.

Goals for next week

  • Finalize and send out the microphone mount PCB
  • Get simple input and output through the Teensy Audio Shield

 

Weekly Status Report 9/23

This week we made progress on planning the schedule, ordering parts, working on the testing setup, and background research on signal processing.

Schedules

While a lot of the work for our project will be in the signal processing side, we wouldn’t be able to verify that it works without having the required hardware and a way to test that it is working. Therefore, in the beginning of our schedule we are focusing on getting the hardware and testing environment correct.

We are currently on schedule. One potential risk is the TOC early next week, and interviewing season soon after, which could lower the amount of work that we accomplish in the two weeks.

Ordering Parts

We assembled a list of parts needed for hardware and test setup. These represent the minimal working set of materials we need to get started with actually getting input in a controlled test environment. The parts include the Teensy microcontroller board as well as audio jacks for I/O, speakers and power supply for the test environment, and a torso mannequin to mount the mic array to.

Testing Setup

We intend to administer the Hearing In Noise Test (HINT) using a circle of speakers to express the directionality.

Over the course of this week, we contacted institutions which administer the HINT. Some offered stereo recordings for administration with headphones, but unfortunately that would not be helpful for our test setup; many did not respond.

Because of this, we are planning to generate our own 7.1 sound recordings using Audacity and FFmpeg. This week we created an example of a HINT recording. However, the HINT test is an adaptive test, where signal-to-noise ratio is dynamically adjusted based on the test subject’s responses. Therefore, we are also looking into ways to automate the generation of many HINT recordings with varying signal-to-noise ratios.
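
Automating the SNR sweep mostly comes down to scaling the noise track so that each generated mix hits a requested SNR. A sketch of that arithmetic (the real recordings are assembled with Audacity and FFmpeg, not this script, and the signals below are synthetic stand-ins):

```python
import math

# Scale a noise track so that speech + g * noise has a requested
# speech-to-noise ratio in dB. This sketches only the gain arithmetic;
# the actual HINT recordings are built with Audacity and FFmpeg.

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Return (mixed signal, noise gain) at the given SNR in dB."""
    g = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + g * n for s, n in zip(speech, noise)], g

# Synthetic tones standing in for speech and babble noise.
speech = [math.sin(0.1 * n) for n in range(10000)]
noise = [math.sin(0.37 * n + 1.0) for n in range(10000)]
mixed, gain = mix_at_snr(speech, noise, snr_db=5.0)
```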

Signal Processing Research

We also continued research into signal processing algorithms, mostly in adaptive nulling. We found that it has been tried in the context of hearing aids, unsuccessfully, at least in part because the space constraints of traditional hearing aids were too limiting.

Planned Deliverables This Week

  • Order all parts which we have decided on
  • Generate more test environment recordings and automate the process
  • (If parts arrive) begin assembling the hardware and test environment
  • Meet with Professor Sullivan regarding I/O concerns for signal processing
  • Settle on a prototype signal processing block diagram for an adaptive nulling process

 

Introduction and Project Summary

Hi everyone! For our ECE capstone project, we are working on a directional hearing aid, using a wearable microphone array.

Inspiration

The inspiration for this project came from meeting a dance instructor, Dan, who is legally deaf and relies on a hearing aid to communicate with his students. Many conventional hearing aids simply amplify all nearby sound, making hearing in a noisy environment, like a dance studio with loud music and multiple conversations, near impossible. As a result, Dan usually resorts to lip reading or turning his earpiece to a higher volume, which only damages his hearing further.

Solution

To improve on conventional hearing aids in noisy environments, our group proposes a directional hearing aid.

Our implementation will consist of a microphone array worn by a hearing-impaired person. The microphones will be connected to a microcontroller, which then applies signal processing algorithms to focus on the sound from the area in front of the user. We assume that the user’s torso is facing in the direction that they want to hear.

 

We chose the form of a microphone array worn on the torso because this gives us the space to place more microphones with greater separation. This allows for certain signal processing methods which are not effective in traditional small in- and on-ear hearing aids.

Problems We’re Not Trying To Solve

We picture a specific use case here, where the user is trying to hear a person in front of them in various noisy conditions like a crowded room. In particular, we do not aim to help users listen to things like music or public speakers in a crowded venue – this is because the standard solution for these is to use a hearing loop system to transmit the sounds wirelessly, directly to a user’s hearing aid.

We are also not trying to handle cases where the sound source moves around the user – it would be very difficult to predict which sounds a person actually wants to hear in a whole soundscape. With our project, we envision that the user can simply turn their body toward whatever they want to hear.

Implementation Details

Physical Implementation

We will create an array of multiple (likely 4) microphones, mounted somehow onto the front of the user’s torso (for example, with a vest). These microphones will then interface with the Teensy 3.6 microcontroller – we chose this particular model because it is the base component for the Tympan open-source hearing aid board.

After signal processing is performed on the mic input, we will send the output to a pair of headphones. This is because we do not have the time or resources to build our own in-ear hearing aid or reverse-engineer a commercial one.

Signal Processing Implementation

Our goal is to reduce the background noise and improve the directionality of what the user can hear in real-time, especially for speech. We will implement algorithms in C and run them on the Teensy microcontroller to transform the sounds from our microphones into a better version that the user can listen to on the headphones. Potential methods for doing this include multi-channel Wiener filtering and adaptive nulling, though we have not yet decided on any particular algorithm.

Conclusion

We’re working on an interesting project with the potential to help countless hearing-impaired people around the world. If you would like to help, please contact us at seanande@andrew.cmu.edu, bobbiec@andrew.cmu.edu, and kmsong@andrew.cmu.edu!