Shreya’s Update for 2/26

Last week, I was having trouble determining the beat amplitude in the general case. I am continuing to use the amplitude threshold-based onset beat detection algorithm. One idea is to use the BPM to isolate a subset of the audio file where beats are definitely occurring. On that subset, the algorithm appears to be working reasonably well. I am still setting up the testing sequence, but at first glance, the time stamps my script produces for the subset match what I expected. One remaining issue is overdetection: because of the wav file’s sampling rate, a single beat stays above the threshold for many consecutive samples, so it is detected multiple times. I need to figure out how to collapse these duplicate detections into a user-friendly pattern.
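
Here is a minimal sketch of the current approach, including one way the duplicate detections could be collapsed. The threshold value and subset boundaries are inputs here, and the half-beat minimum gap is an illustrative assumption, not a final choice:

```python
import numpy as np
from scipy.io import wavfile

def detect_beats(path, threshold, start_s, end_s, bpm):
    """Return beat time stamps (seconds) in [start_s, end_s) of a wav file."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:                       # mix stereo down to mono
        data = data.mean(axis=1)
    seg = np.abs(data[int(start_s * rate):int(end_s * rate)].astype(np.float64))

    # Every sample at or above the threshold is a candidate beat.
    candidates = np.nonzero(seg >= threshold)[0] / rate + start_s

    # One beat stays above the threshold for many consecutive samples, so
    # collapse runs: keep only detections at least half a beat period apart.
    min_gap = 0.5 * 60.0 / bpm
    beats = []
    for t in candidates:
        if not beats or t - beats[-1] >= min_gap:
            beats.append(t)
    return beats
```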

The next steps are to verify the subset’s beat detection again and then attempt to extend it across the entire instrumental file. Because this is only the instrumental file, further testing will be needed to understand the limits of the algorithm. We were asked a question during the presentation about what exactly those limits are, so testing whether the algorithm can handle short clips, long clips, etc. will begin next week, provided it can handle the entire wav file.
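
One possible shape for the testing sequence, assuming we score detections against the beat grid implied by the BPM; the tolerance and the first-beat offset are illustrative:

```python
def verify_against_bpm(detected, bpm, first_beat, n_beats, tol=0.05):
    """Return the fraction of expected beats matched within tol seconds."""
    period = 60.0 / bpm
    expected = [first_beat + i * period for i in range(n_beats)]
    hits = sum(any(abs(d - e) <= tol for d in detected) for e in expected)
    return hits / n_beats
```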

Currently, I believe I am still on schedule because the instrumental piece is close to being done. I think we should be able to reach the goals set on our Gantt chart.

Shreya’s Update for 2/19

At the beginning of this week, I started experimenting with exporting time stamps for an instrumental version. To determine the desired amplitudes for the beats, I visually identified the correct beat peaks from the time-domain representation attached below, then searched through the wav file for the timings of those peaks, stepping at the time-per-beat interval determined by the BPM.
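
A rough sketch of that search; the hand-picked peak amplitude and the matching tolerance are placeholder inputs:

```python
import numpy as np
from scipy.io import wavfile

def find_peak_times(path, peak_amp, bpm, tol=0.02):
    """Return times whose amplitude matches peak_amp, one per beat window."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    data = np.abs(data.astype(np.float64))
    step = int(rate * 60.0 / bpm)           # samples per beat, from the BPM
    times = []
    for start in range(0, len(data) - step, step):
        window = data[start:start + step]
        idx = np.argmin(np.abs(window - peak_amp))   # closest match to target
        if abs(window[idx] - peak_amp) <= tol * peak_amp:
            times.append((start + idx) / rate)
    return times
```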

Because I hardcoded the desired beat amplitudes last week (I used absolute peaks), this week I tried to design an algorithm that determines the beat amplitude of a particular song automatically; that amplitude would then be used to search for the time stamps in a different file. This has been slower going: I can’t rely on the beats being the highest-amplitude samples, so the estimate needs to be driven primarily by the BPM. So far this week, I have been getting more false positives than true positives.
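
One candidate I am considering for the automatic determination (a guess at an approach, not a final algorithm): take the maximum amplitude inside each BPM-length window and use the median of those maxima, so a few loud non-beat events do not dominate the estimate:

```python
import numpy as np

def estimate_beat_amplitude(data, rate, bpm):
    """Estimate a representative beat amplitude from BPM-sized windows."""
    step = int(rate * 60.0 / bpm)           # samples per beat
    maxima = [np.abs(data[i:i + step].astype(np.float64)).max()
              for i in range(0, len(data) - step, step)]
    return float(np.median(maxima))
```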
My deliverable for next week will hopefully be an accurate, automatic (not hardcoded) amplitude determination for beats, integrated into a separate file that exports time stamps. Overall, I am on schedule for the project as a whole; I have several weeks left to finish an accurate beat tracker.

Team Status Report for 2/12

The first part of this week was dedicated to the proposal presentation. Together, we determined the user requirements and the design requirements, as well as general ideas for implementation. We gave our presentation on Monday and have attached the slide deck to the website for perusal. Overall, as long as we stay reasonably close to schedule, we feel this is a viable project.

Here’s a sketch of our current project design:

Currently, we are on schedule to complete our design by the deadline with one week of slack time. In the next week, we are looking to start finalizing the packages and equipment we will use for all three portions of the project. There are several risks associated with this project that we are currently looking to mitigate:

  1. Determining which packages we need to use for beat tracking. Given our latency constraints, we are worried about using base packages like NumPy to iterate through the song, as explained further in the individual sections; a benchmark sketch follows this list.
  2. Slow delivery of the hardware components. They are currently expected to arrive on 2/17 (with 2-day shipping), which leaves only two days before the next status update to assess the feasibility of our hardware plans and decide whether we need to make any changes going forward. As a precaution, we have ordered several different piezoresistive sensors: if our currently targeted sensor is not satisfactory for our needs, the backup piezoresistors will let us continue testing while we spec out a preferable alternative (in case the backups also fall short).
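
To make the latency concern in item 1 concrete, here is a rough benchmark sketch comparing a pure-Python loop over the samples with the equivalent vectorized NumPy operation; three minutes of 44.1 kHz audio is an illustrative size:

```python
import time
import numpy as np

# Synthetic stand-in for ~3 minutes of 44.1 kHz 16-bit audio.
data = np.random.randint(-32767, 32768, size=44_100 * 180, dtype=np.int16)
threshold = 20_000

t0 = time.perf_counter()
count_loop = sum(1 for sample in data if abs(int(sample)) >= threshold)
t1 = time.perf_counter()
count_vec = int((np.abs(data) >= threshold).sum())
t2 = time.perf_counter()

assert count_loop == count_vec
print(f"python loop: {t1 - t0:.3f}s, vectorized numpy: {t2 - t1:.3f}s")
```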

No changes have been made to the project design as of yet. The schedule remains consistent, with the next weeks dedicated to determining whether our individual solutions are viable and to finalizing the packages we want to use in our design.

Here is the current version of our Gantt chart:

Shreya’s Status Report for 2/12

This week, I looked into various methods of implementing the beat tracker algorithm. I looked at the FFT of a basic example with a simple beat, as well as the FFT of a more complicated, multi-instrument song; both are attached here. It can be seen that the complicated song has more noise. A majority of algorithms do not depend on filtering out higher frequencies; instead, they look for consistency, and a popular method is to treat the spikes in the frequency content as the beats. For this game, that should hopefully be sufficient.
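
For reference, a minimal sketch of how an FFT like the ones below can be produced; the file name is a placeholder:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, data = wavfile.read("song.wav")       # placeholder file name
if data.ndim > 1:
    data = data.mean(axis=1)                # mix down to mono

spectrum = np.abs(np.fft.rfft(data))        # magnitude spectrum
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)

plt.plot(freqs, spectrum)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude")
plt.show()
```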

I put this in lower quality for the post, so it’s difficult to see. This is the basic FFT for about three minutes of audio. As can be seen, there is very little noise and the spikes are relatively clean.

Here, we can see the FFT for a complicated song (“Drive By” by Train). Again, because of the low quality of the post, the noise levels are difficult to see, but this is significantly noisier than the previous graph, something that will have to be considered in the future.

Several challenges stand out here. The examples above show the majority of each song, and although the noise is less apparent over the full song, the complicated example is far noisier. Filtering out this noise would take too long and be too complex to be practical. However, one idea I have is to take a user-generated input beat pulse and use it to detect the specific beat of the current song. This is hopefully only for the beginning stages and should be phased out as the algorithm advances. This week, I worked on implementing this with NumPy. Even just processing the data is very slow on my personal device, so I am looking into other processing tools that would better meet our user requirements.
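
A sketch of the pulse idea: build an impulse train at the candidate BPM and cross-correlate it with the song’s amplitude envelope, taking the best-scoring lag as the beat offset. This assumes a heavily downsampled envelope so the loop stays cheap; all parameters are illustrative:

```python
import numpy as np

def best_beat_offset(envelope, env_rate, bpm):
    """envelope: downsampled amplitude envelope sampled at env_rate (Hz)."""
    period = int(env_rate * 60.0 / bpm)     # envelope samples per beat
    pulse = np.zeros(len(envelope))
    pulse[::period] = 1.0                   # impulse train at the beat period
    # Score every offset within one beat; the best-aligned lag wins.
    scores = [np.dot(envelope, np.roll(pulse, lag)) for lag in range(period)]
    return int(np.argmax(scores))           # offset in envelope samples
```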

Project Summary

Rhythm games currently on the market, like Dance Dance Revolution or Guitar Hero, are expensive and hard to set up, while other games lack the external apparatus needed to create an interesting, active experience for the user. “Hit It!” is a new rhythm game that combines a dynamic game interface with a portable drum apparatus that the user hits in time with the inputs on the screen. The user can load their own song into the game, and a specially created beat map will appear on the screen, letting them play along to their favorite songs on a portable apparatus. The game is great for all ages and skill levels, allowing anyone from a child to an adult to play their favorite songs.