Design Changes:
We discovered some needed changes to our design while implementing it. We noticed that even signals with little dead air before the music begins can produce many needless rests at the beginning of the transcription. To account for this, we now truncate the audio signals to remove any silence from before the user starts playing their instrument. We also added a feature that lets the user select the tempo at which the audio is played, as we found that attempting to detect the tempo automatically was extremely complex and unreliable. However, with our target users' limited means in mind, we keep this feature optional, because many users will not have access to a metronome to ensure they stay on tempo.
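As a rough illustration, the leading-silence truncation could look something like the sketch below (using librosa; the decibel threshold and function name are placeholders rather than our exact implementation):

```python
import librosa


def trim_leading_silence(path: str, top_db: float = 30.0):
    """Drop the silent audio before the user starts playing; trailing audio is kept."""
    y, sr = librosa.load(path, sr=None)  # keep the file's native sample rate
    # librosa.effects.trim reports the non-silent interval; we only use its start
    # index so that only the leading silence is removed.
    _, (start, _end) = librosa.effects.trim(y, top_db=top_db)
    return y[start:], sr
```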
Risks:
The largest risk currently posed to our project is the error present in our calculation of each note's duration and its placement within the sheet music. We have to modify the note durations calculated by the rhythm processor so the data fits Vexflow's API, which leaves a lot of room for error in the output; for example, three eighth notes could be transcribed as three quarter notes because small changes to each note's length compound as the information passes from the back end to the front end.
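To make this risk concrete, the snapping from measured beat lengths to Vexflow duration codes looks roughly like the sketch below; the lookup table and snapping strategy are illustrative placeholders rather than our exact rhythm-processor code.

```python
# Illustrative mapping from a note's length in beats (quarter note = 1 beat)
# to the duration codes Vexflow renders.
BEAT_TO_VEXFLOW = {
    4.0: "w",    # whole note
    2.0: "h",    # half note
    1.0: "q",    # quarter note
    0.5: "8",    # eighth note
    0.25: "16",  # sixteenth note
}


def quantize_duration(beats: float) -> str:
    """Snap a measured duration to the nearest value the front end can render."""
    nearest = min(BEAT_TO_VEXFLOW, key=lambda value: abs(value - beats))
    return BEAT_TO_VEXFLOW[nearest]


# If each played eighth note is measured slightly long (0.8 beats instead of 0.5),
# the nearest renderable value becomes a quarter note, so three eighth notes
# come out as three quarter notes in the transcription.
assert quantize_duration(0.8) == "q"
```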
Another problem is that the transcription of even a very short piece tends to produce a very long output, which is inconvenient to read because a player cannot scroll down a computer screen while playing their instrument.
Our current status is that we can display the pitch of each note with very high accuracy and can accurately detect and transcribe rests, but every note's rhythm is currently treated as an eighth note.
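For reference, per-frame pitch and rest detection can be sketched roughly as below using librosa's pYIN tracker; the frequency bounds and helper function are illustrative and may differ from our actual pipeline.

```python
import librosa
import numpy as np


def frames_to_pitches(y: np.ndarray, sr: int) -> list[str | None]:
    """Return a note name per frame, or None where the frame is silent (a rest)."""
    f0, voiced, _prob = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )
    return [
        librosa.hz_to_note(freq) if is_voiced else None
        for freq, is_voiced in zip(f0, voiced)
    ]
```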
(Image of transcription being displayed)