Rachana’s Status Report for Apr 29

Personal Accomplishments   This week I worked on the UI functionality. I had a lot of other commitments, so making progress on the recommendations was difficult. The integration of the three different modes of audio input works well: it lets me record, search, or upload a file. The MediaRecorder stream was fun to play around with.    Next, I want to work on getting recommendations based on the song features extracted from Spotify and displaying them for the user. We prefetch these songs so that if the user selects one of them, the signal parameters for the lighting engine are already available.    Testing-wise, we tested concurrency between user commands and main-queue commands,…
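A minimal sketch of how that prefetch could look, assuming a hypothetical get_signal_params coroutine that wraps the Spotify feature query and the signal-processing step; the names and structure are illustrative, not the project's actual code:

```python
import asyncio

# Hypothetical helper: fetch Spotify audio features for one track and derive
# the lighting-engine signal parameters (real API + DSP work omitted here).
async def get_signal_params(track_id: str) -> dict:
    await asyncio.sleep(0.1)  # placeholder for the real network and DSP latency
    return {"track_id": track_id, "tempo": 120.0, "energy": 0.8}

async def prefetch_recommendations(track_ids: list[str]) -> dict[str, dict]:
    """Warm a cache of signal parameters so selecting a recommended song is quick."""
    results = await asyncio.gather(*(get_signal_params(t) for t in track_ids))
    return {r["track_id"]: r for r in results}

# e.g. cache = asyncio.run(prefetch_recommendations(["track_a", "track_b", "track_c"]))
```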

Rachana’s Status Report for Apr 22

Personal Accomplishments   This week I mainly worked on aligning the signal parameters with the beat timestamps and on implementing user feedback through process-control switches. The user queue gives us real-time feedback on the lighting interface, making it more user friendly. We essentially replace the main command at a beat timestamp with the incoming user command, and then process the rest of the user commands at the right places.    I also modified our Spotify integration and feature-query system so that I can play a bit of the song, search by song name, or upload a file to test it, and the interface hands the audio to Shazam, which recognizes the song…
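A rough sketch of the queue-override idea, under the assumption that main commands are precomputed per beat timestamp and that user commands arrive on a queue; the names command_at_beat, main_schedule, and user_queue are illustrative:

```python
import queue
from typing import Optional

def command_at_beat(beat_ts: float,
                    main_schedule: dict,
                    user_queue: "queue.Queue") -> Optional[dict]:
    """Pick the lighting command for this beat: an incoming user command
    replaces the precomputed main command, otherwise fall back to the schedule."""
    try:
        user_cmd = user_queue.get_nowait()     # latest user request, if any
    except queue.Empty:
        user_cmd = None
    main_cmd = main_schedule.get(beat_ts)      # precomputed command for this beat
    return user_cmd if user_cmd is not None else main_cmd

# e.g. schedule = {0.52: {"effect": "fade"}, 1.04: {"effect": "strobe"}}
#      cmd = command_at_beat(1.04, schedule, user_queue)
```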

Rachana’s Status Report for April 8

Tests   I have been running my signal-processing parameters module on a song that can be found on both Spotify and Shazam to check that those API calls are robust. I have also run it on 15-second pitch-modulated files and on amplitude-modulated files of the same length. In addition, I can now specify the resolution of the values, i.e. whether I want 5-second averages, 3-second averages, or my default 1-second averages. These all currently work. Next I want to run tests on voiced and unvoiced components, and run beat detection on an unplugged (acoustic) file to make sure it is not falsely sensing percussive elements.  Personal Accomplishments   A lot of this week went into working…
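A minimal sketch of the configurable-resolution averaging, assuming librosa for loading and NumPy for the windowed statistics; the window length in seconds is the "resolution" knob mentioned above, and the function name and file path are illustrative:

```python
import numpy as np
import librosa

def windowed_rms(path: str, window_s: float = 1.0) -> np.ndarray:
    """Average RMS amplitude over non-overlapping windows of `window_s` seconds."""
    y, sr = librosa.load(path, sr=None)      # keep the file's native sample rate
    hop = int(window_s * sr)                 # samples per window
    n_windows = len(y) // hop
    return np.array([
        np.sqrt(np.mean(y[i * hop:(i + 1) * hop] ** 2))   # RMS of each window
        for i in range(n_windows)
    ])

# e.g. compare 1-second vs 5-second resolution on the same clip:
# one_sec  = windowed_rms("clip.wav", 1.0)
# five_sec = windowed_rms("clip.wav", 5.0)
```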

Rachana’s Status Report for April 1st, 2023

Personal Accomplishments   After a lot of deliberation, we decided to remove certain features that were not affecting our audio signals as much. I also thought a lot about the use of MFCCs, and I don't think they were adding as much value as isolating frequencies. So the newer approach to chunking these dataframes is to use peak frequencies, beat detection, and amplitudes. We also thought about isolating parts of the chorus, but quickly realized that the amplitudes already reflect those sections. Combined with the global parameters, this gives us quite a bit of information to work from. This reduction in the number of features was necessary because only a portion of what we were extracting was relevant information. We also increased the resolution of the graphs to incorporate values…
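A hedged sketch of what the per-chunk features (peak frequency, beats, amplitude) could look like with librosa; the function name and returned keys are illustrative, not the project's actual dataframe schema:

```python
import numpy as np
import librosa

def chunk_features(y: np.ndarray, sr: int) -> dict:
    """Peak frequency, beat count, and mean amplitude for one audio chunk."""
    # Peak frequency from the magnitude spectrum of the whole chunk
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    peak_freq = freqs[np.argmax(spectrum)]

    # Beat detection on the chunk
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)

    return {
        "peak_freq_hz": float(peak_freq),
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
        "n_beats": int(len(beats)),
        "mean_amplitude": float(np.mean(np.abs(y))),
    }
```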

Rachana’s Status Report for March 25th, 2023

Personal Accomplishments   I am able to split the song into chunks and run song detection on each chunk. Currently the chunks are split every 10,000 frames. The documentation for librosa is confusing, and I spent a lot of time understanding what the library was actually doing. I am able to get MFCC coefficients for a few 5-second chunks, as well as relative beat differences for those intervals.    This wins back time, since we don't need to extract features on every chunk iteration. We also want to split signal processing into another thread, since it does not depend on song selection or the Spotify audio features. We are able to process each individual chunk effectively and notice beat differences. I used asyncio functions…
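A minimal sketch of per-interval MFCCs and beat differences, assuming librosa; the interval length, number of coefficients, and function name are illustrative:

```python
import numpy as np
import librosa

def interval_mfcc_and_beat_gaps(path: str, interval_s: float = 5.0, n_mfcc: int = 13):
    """Mean MFCCs and consecutive beat differences for each interval of the song."""
    y, sr = librosa.load(path, sr=None)
    hop = int(interval_s * sr)
    results = []
    for start in range(0, len(y) - hop + 1, hop):
        seg = y[start:start + hop]
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
        _, beat_frames = librosa.beat.beat_track(y=seg, sr=sr)
        beat_times = librosa.frames_to_time(beat_frames, sr=sr)
        results.append({
            "start_s": start / sr,
            "mfcc_mean": mfcc.mean(axis=1),        # one value per coefficient
            "beat_gaps_s": np.diff(beat_times),    # relative beat differences
        })
    return results
```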

Rachana’s Status Report for 3/18

Personal Accomplishments   I went over the design report, which had a lot of feedback on the signal processing section, and I worked on the Ethics assignment, which let me examine the ethical considerations behind our project a little more. With respect to the design report, there were a few concerns about sampling rate that I had to address, the default being 44.1 kHz. Since we are chunking and never actually outputting the audio itself, we don't care about the reconstructed audio, only about the statistics we are able to generate from it. Moreover, I realized that in the design report there were far too many features I wanted to iterate over for a single chunk's data frame. I realized that…

Rachana’s Status Report (3/11 & 3/4)

Personal Accomplishments   I do not have any project updates from over Spring break. I was briefly experimenting with the chunks of a song using Python asyncio functions. Currently it iterates over multiple chunks and looks for a matching song title between two chunks; once it finds a match, it exits the loop and doesn't have to iterate over all the chunks for the selected song.    I was also looking into sampling the chunks every 30 seconds. This needs to happen over the whole course of the audio because, even if we identify the song in the first two chunks, the song might change over the course of a compilation, and we need to account for those song inflections.   …
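A rough sketch of the early-exit idea, assuming a hypothetical async detect_title coroutine standing in for the Shazam call (this is not the project's actual code):

```python
import asyncio
from typing import Optional

# Hypothetical stand-in for Shazam-based recognition of one chunk.
async def detect_title(chunk: bytes) -> Optional[str]:
    await asyncio.sleep(0.1)      # placeholder for the real API call
    return "Teenage Dream"        # assumed result, for illustration only

async def identify_song(chunks: list) -> Optional[str]:
    """Stop as soon as two consecutive chunks agree on the song title."""
    previous = None
    for chunk in chunks:
        title = await detect_title(chunk)
        if title is not None and title == previous:
            return title          # two chunks match: no need to scan the rest
        previous = title
    return previous

# e.g. asyncio.run(identify_song(list_of_chunks))
```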

Rachana’s Status Report (2/25)

Personal Accomplishments   https://drive.google.com/drive/u/0/folders/16YJbrFB_6vTO6P4SGzJ2yftce_aSI6WH  A large part of this week went into the proposal presentation slides. I presented this week, so a lot of detail needed to be fleshed out, and the presentation had to be understandable to an audience without much background on the numerous subsystems.    The genre classifier is able to output the danceability, valence, liveness, energy, tempo, and loudness attributes. I used Shazam's song-detection API, then used the returned track title to query Spotify for the audio features.    I also found that song features can be split into timbral, pitch, and rhythmic features, and a way to construct data frames with these features embedded in them. We want to be…
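A hedged sketch of the Spotify half of that pipeline using the spotipy client, assuming client credentials are set in the environment and that the query string is the track title returned by Shazam; this is a sketch, not the project's exact code:

```python
from typing import Optional
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Assumes SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET are set in the environment.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def features_for_title(title: str) -> Optional[dict]:
    """Search Spotify by track title and return its audio features."""
    hits = sp.search(q=title, type="track", limit=1)["tracks"]["items"]
    if not hits:
        return None
    feats = sp.audio_features([hits[0]["id"]])[0]
    if not feats:
        return None
    # Keep only the attributes mentioned in the report.
    keys = ("danceability", "valence", "liveness", "energy", "tempo", "loudness")
    return {k: feats[k] for k in keys}

# e.g. features_for_title("Teenage Dream")
```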

Rachana’s Status Report (2/18)

Personal Accomplishments   https://colab.research.google.com/drive/15RFAHvDop2-Yh4cbhbS3MnSKOgIHlB2V#scrollTo=1YY6_6ebt0I8  This is a simple Shazam API test to extract the song title from an uploaded song. It uses a signature-generator object, and you can parse and extract different features from the Shazam response object. In this case, it correctly identifies the song Teenage Dream in one iteration in about 10 seconds. A major part of my work was figuring out what genre detection and feature extraction from Spotify would consist of. Very early in the week I realized that an ML model for genre detection is not accurate enough. Extracting the genre incorrectly would be the first level of uncertainty, and then building on that uncertainty to extract other features like danceability, valence, loudness, and liveness based on genre would introduce more unpredictability in…
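A minimal sketch of that kind of title lookup using the shazamio package, assuming its async recognize_song coroutine and that the result dict has a "track" entry with a "title" field; the file name is illustrative:

```python
import asyncio
from typing import Optional
from shazamio import Shazam

async def song_title(path: str) -> Optional[str]:
    """Recognize a local audio file and return its track title, if found."""
    shazam = Shazam()
    result = await shazam.recognize_song(path)   # builds the signature and queries Shazam
    track = result.get("track")
    return track["title"] if track else None

# e.g. asyncio.run(song_title("teenage_dream_clip.mp3"))
```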

Rachana’s Status Report (2/11)

Personal Accomplishments   I worked on the proposal slides at the start of the week and on the genre-classification part of the project, mainly looking into the k-nearest neighbors algorithm. It currently gives me a 70-73% accuracy rate on the test dataset, which may or may not be sufficient. I also looked into a CNN approach, which gives a much better accuracy of around 92%; I will explore this over the course of next week.  https://colab.research.google.com/drive/1hctVbgbCxK8SNuVWfW2e4kA7DtC2hZNC (This is the notebook where you can see it run on the GTZAN dataset.)  On Track?   I was sick for most of the week, so I was not able to work on classes as much. According to our Gantt chart, I think I am still on schedule because we are…
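A minimal sketch of a k-nearest-neighbors baseline with scikit-learn, assuming the GTZAN clips have already been turned into a feature matrix X (e.g. mean MFCCs per clip) with genre labels y; this is not the notebook's exact code:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier

def knn_genre_accuracy(X, y, k: int = 5) -> float:
    """Train a KNN genre classifier and report held-out accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)   # fraction of correct genre predictions

# e.g. acc = knn_genre_accuracy(X, y, k=5)
```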