Week of 9/22

Status Report

On Thursday 9/26, we met with Professor Sullivan to finalize our capstone project. After struggling to identify a cohesive core product, we have decided, with considerable help from Prof. Sullivan, on beat/tempo detection. Since this was our most important meeting and happened only three days ago, we do not have much to report in our first week. As such, one of our biggest risks is falling behind and not completing the project on time. However, we have taken several steps to mitigate this risk.

First, we have a week of slack built into each person's portion of the project, as well as over two weeks at the end (after the final product and optimization) for final testing and work on our demo. In total, this amounts to more than three weeks of built-in slack. In addition, each team member has become more committed to the project since finalizing our product. We each feel more motivated and focused now that we have a clear goal in sight, and we will use this motivation to keep each week's deliverables on time.

Even though we switched from chord detection to beat detection, the design change did not significantly affect other areas; it was a clean switch, with the web app and noise filtering staying intact.

However, another product risk is getting beat detection to work consistently for every input. As with any signal-processing project, noise and unreliability introduce significant problems. We will consult past research on this topic, as well as professors, to determine what range of accuracy we can expect.

Week of 9/22 – Jiahao Zhou

This week was spent finding suitable algorithms for beat detection, since we switched from chords. I have primarily settled on comb filters and autocorrelation to detect song tempo and beats. Both work in similar ways, comparing delayed versions of a signal against itself: a comb-filter algorithm detects the tempo from the total energy remaining after a few filtering passes, while autocorrelation measures how strongly the signal matches a lagged copy of itself. I will implement both in MATLAB for initial testing of which is more accurate and which is faster.
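To make the autocorrelation idea concrete, here is a minimal sketch in Python rather than MATLAB. The onset-strength envelope, its sampling rate, and the BPM search range are illustrative assumptions, not settled design choices:

```python
import numpy as np

def estimate_tempo(envelope, fs, bpm_min=60, bpm_max=180):
    """Estimate tempo (BPM) from an onset-strength envelope via autocorrelation."""
    env = envelope - envelope.mean()          # remove DC so lag 0 does not dominate
    ac = np.correlate(env, env, mode="full")  # full autocorrelation
    ac = ac[len(ac) // 2:]                    # keep non-negative lags only
    # Search only the lags corresponding to plausible beat periods.
    lag_min = int(fs * 60 / bpm_max)
    lag_max = int(fs * 60 / bpm_min)
    best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return 60.0 * fs / best_lag

# Synthetic check: impulses every 0.5 s at a 100 Hz envelope rate = 120 BPM.
fs = 100
env = np.zeros(10 * fs)
env[::fs // 2] = 1.0
print(round(estimate_tempo(env, fs)))  # → 120
```

The comb-filter variant would instead pass the envelope through a bank of feedback delays and pick the delay whose output retains the most energy; the lag-to-BPM conversion at the end is the same.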

As for detecting the tempo of the person rapping into the mic, I have decided to use the signal's energy per frame. The average energy is much easier and faster to compute than other, more computationally intensive algorithms. Since it is important for this to run in real time, we want the delay between when a person raps and the tempo-detection update to be as low as possible.

I have already begun re-familiarizing myself with MATLAB. As of right now, I am still on track, given my week of slack time. I plan to have the comb-filter and autocorrelation implementations done, and to start testing, by next week.

Week of 9/22 – Saransh Agarwal

This week was spent finalizing the details of our project. I personally researched the software stack, including:

  • Open-source audio frameworks we can use for audio input; we decided on Wad.js.
  • A Flask backend, since most of our functionality will live on the frontend and Flask provides a barebones, highly customizable framework that meets our needs.
  • A React frontend, as I have some experience using it and it is very well documented.

I am on schedule and plan to design and freeze the APIs by the end of the week, so that other parts of the project depend only on the API design, not on changes in its implementation.
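One way to freeze an API is to pin down explicit message schemas up front. The sketch below is hypothetical: the message name, fields, and units are assumptions for illustration, not the team's actual design.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class TempoUpdate:
    """Hypothetical payload the backend pushes to the frontend per analysis frame."""
    song_bpm: float      # tempo detected from the backing track
    rapper_bpm: float    # tempo detected from the live mic input
    position_ms: int     # playback position when the estimate was made

    def to_json(self) -> str:
        return json.dumps(asdict(self))

msg = TempoUpdate(song_bpm=120.0, rapper_bpm=118.5, position_ms=45000)
print(msg.to_json())
```

Once a schema like this is frozen, the React frontend and the signal-processing code can be built and tested against it independently, which is the point of fixing the design before the implementation.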

Week of 9/22

  • Spent time brainstorming how to filter out background noise from the microphone input we are going to use. At this stage of the project, we plan to build the filter in analog circuitry because of its speed.
  • Progress is on schedule
  • For the upcoming week, I plan to have components gathered and a schematic of the desired circuit drawn up.
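For sizing the analog filter, the key relationship is the first-order RC low-pass cutoff, f_c = 1/(2πRC). The component values below are hypothetical placeholders to show the calculation, not chosen parts:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def rc_gain_db(f_hz, f_cutoff_hz):
    """Magnitude response of the filter at frequency f, in dB."""
    return -10.0 * math.log10(1.0 + (f_hz / f_cutoff_hz) ** 2)

# Hypothetical values: pass vocals and roll off high-frequency hiss.
r = 3.9e3   # 3.9 kΩ
c = 10e-9   # 10 nF
fc = rc_cutoff_hz(r, c)
print(round(fc))                          # ≈ 4081 Hz cutoff
print(round(rc_gain_db(10 * fc, fc), 1))  # one decade above cutoff ≈ -20 dB
```

A single RC stage only gives -20 dB per decade; if that is not enough attenuation, the schematic could cascade stages or use an active (op-amp) topology instead.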