Alejandro’s Status Report for 4/1

This week I implemented the SNR-rejection system for our project. If the audio is rejected for having too much noise, the front end of the app displays a red alert; if the SNR requirement is satisfied, the audio is processed as usual (see here).
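The core of this check can be sketched as follows. This is a minimal illustration, not our exact implementation: the threshold value and the assumption that we have a noise-only segment to estimate the noise floor from are both placeholders.

```python
import numpy as np

# Illustrative sketch of the SNR-rejection check. The 10 dB threshold and
# the noise-estimation approach are assumptions, not our tuned values.
SNR_THRESHOLD_DB = 10.0  # assumed minimum acceptable SNR


def estimate_snr_db(signal, noise):
    """Estimate SNR in dB given the recording and a noise-only segment."""
    signal_power = np.mean(np.square(signal))
    noise_power = np.mean(np.square(noise))
    return 10.0 * np.log10(signal_power / noise_power)


def should_reject(signal, noise, threshold_db=SNR_THRESHOLD_DB):
    """Return True if the audio is too noisy (triggers the red alert)."""
    return estimate_snr_db(signal, noise) < threshold_db
```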

I also focused on testing our systems. I first created a set of monophonic piano recordings in GarageBand. The first recordings were short and simple so we could use them to see how well our systems work initially. Then I made a couple of longer, more complex recordings to test our systems on something more realistic.


I ran these files through our systems, and they seem to work fine with the rhythm and pitch processors but not with the integrator. One initial issue was that the integrator was outputting all rests. I realized this was because we had changed the code to normalize the signal before putting it through the rhythm processor, so I had to set the parameter controlling the minimum height required for a peak to a much lower value, since the normalized signal only ranges from 0 to 1.
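The fix amounts to re-scaling the peak-height threshold to match the normalized signal. A rough sketch, where the detector and the 0.1 threshold are illustrative assumptions rather than our actual parameter values:

```python
import numpy as np

# Sketch of the peak-height fix. The old threshold suited the raw signal's
# scale; after normalizing to [0, 1], the required peak height must be far
# lower. The simple local-maximum detector and 0.1 value are illustrative.


def normalize(envelope):
    """Scale a non-negative amplitude envelope to the range [0, 1]."""
    return envelope / np.max(envelope)


def detect_peaks(envelope, height=0.1):
    """Return indices of local maxima at or above `height` (normalized)."""
    e = normalize(envelope)
    peaks = []
    for i in range(1, len(e) - 1):
        if e[i] > e[i - 1] and e[i] > e[i + 1] and e[i] >= height:
            peaks.append(i)
    return peaks
```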


After fixing that, the integrator is still not very accurate: some note detections are inaccurate in terms of pitch. We also discussed with Tom that the way we are currently integrating the systems might not be the best, so we might have to re-implement the integration system.


This week I will focus on testing, trying to find out what is going wrong in our integrator and identify possible fixes. I think we are on track.


Kumar's Status Report for 4/1

This week I worked on integrating the Vexflow API and Node.js into the web app. This was needed so that we can generate a PDF of the output music score, and it required installation and back-end integration setup with Django. This was my primary focus, other than helping Aditya with the StaveNote functionality. There has been significant debugging that hasn't been resolved yet but should be this week. The code for the PDF generation was in the Vexflow tutorial but required some changes and modifications that I worked on.


Next week I will complete this debugging and further assist with the rhythm processor integration. I would say I'm slightly behind schedule, but once this bug is resolved we should be back on track.

Aditya’s Status Report for 4/1

This week I worked on using the Vexflow API to transcribe the output of the pitch and rhythm sub-processors. A lot of this involved translating the data from our own Note data structure into the StaveNote class provided by Vexflow. I also had to determine a robust method of dividing the processor outputs into smaller sets of data, because Vexflow renders stave by stave, meaning you only draw 4 beats at a time before rendering that section of the sheet music. There is a lot of in-between calculation here, as I need to determine whether the 4 beats have been completed within anywhere from 1 to 8 notes. A single calculation error means the whole chart will be off.
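The measure-splitting step can be sketched as follows. The actual drawing happens in JavaScript via Vexflow, but the grouping logic is language-independent; representing each note by its duration in beats is an assumption about our Note structure.

```python
# Sketch of grouping notes into 4-beat staves before handing each group to
# Vexflow. Durations are in beats (quarter note = 1.0); this representation
# is an assumption for illustration.


def split_into_measures(durations, beats_per_measure=4.0):
    """Group note durations (in beats) into fixed-length measures.

    Raises ValueError if a note would straddle a measure boundary, since one
    mis-sized note would throw off every measure after it.
    """
    measures, current, filled = [], [], 0.0
    for d in durations:
        if filled + d > beats_per_measure:
            raise ValueError("note crosses a measure boundary")
        current.append(d)
        filled += d
        if filled == beats_per_measure:
            measures.append(current)
            current, filled = [], 0.0
    if current:
        measures.append(current)  # trailing partial measure
    return measures
```

Raising an error on a boundary-crossing note makes the "one calculation error throws off the whole chart" failure loud instead of silent.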

By next week I hope to have the rhythm processor fully integrated, as I'm falling behind on that front due to the learning curve of the Vexflow API. I've figured out the library now, so things should be smoother from here. I also hope to be able to use a proper song, such as "Twinkle Twinkle Little Star," as input instead of brief sequences of notes.

Team Status Report for 4/1

Design Changes:

We discovered some needed changes to our design while implementing it. We noticed that even signals with a little dead air before the music begins can produce many needless rests at the beginning of the transcription. To account for this, we truncate the audio signals to remove any silent audio from before the user starts playing their instrument. We also added a feature that lets the user select the tempo at which the audio is played, as we found that attempting to automatically detect the tempo was incredibly complex and unreliable. However, to keep our target users' limited means in mind, we made this feature optional, because many will not have access to a metronome to ensure they stay on tempo.
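The truncation step can be sketched briefly. The amplitude threshold here is an illustrative assumption, not our tuned value:

```python
import numpy as np

# Sketch of the leading-silence truncation described above: drop everything
# before the first sample whose magnitude clears a small threshold. The
# 0.01 threshold is illustrative, not our actual value.


def trim_leading_silence(signal, threshold=0.01):
    """Remove samples before the first one exceeding `threshold` in magnitude."""
    above = np.flatnonzero(np.abs(signal) > threshold)
    if above.size == 0:
        return signal[:0]  # the whole clip is silence
    return signal[above[0]:]
```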

Risks:

The largest risk currently posed to our project is the error present in our calculations of each note's duration and its placement within the sheet music. We find ourselves having to modify the note durations calculated by the rhythm processor in order to fit the data into Vexflow's API. This leaves a lot of room for error in the output; for example, 3 eighth notes could be transcribed as 3 quarter notes due to compounding changes in each note's length as information travels from the back end to the front end.
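One possible mitigation is to quantize each measured duration to the nearest standard note length exactly once, relative to the user-selected tempo, rather than repeatedly re-rounding as data moves between systems. This is a sketch of that idea, not our current code; the set of allowed values is an assumption (no dotted notes or triplets):

```python
# Hypothetical single-pass quantization of note durations. Quantizing once,
# against the known tempo, avoids the compounding rounding errors that can
# turn eighth notes into quarter notes.

STANDARD_BEATS = [4.0, 2.0, 1.0, 0.5, 0.25]  # whole .. sixteenth, in beats


def quantize_duration(seconds, tempo_bpm):
    """Map a raw duration in seconds to the closest standard beat value."""
    beats = seconds * tempo_bpm / 60.0
    return min(STANDARD_BEATS, key=lambda b: abs(b - beats))
```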

Another problem is that transcribing even a very short piece tends to produce a very long output, resulting in a file that people cannot read conveniently, since you can't scroll down a computer screen while playing an instrument.


Our current status is that we are able to display the pitch of each note with very high accuracy, and we are able to accurately detect and transcribe rests, but the rhythm is currently transcribed as if every note were an eighth note.

(Image of transcription being displayed)