Angela’s status report, 2022/10/29

This week, I completed the preliminary code for the note scheduler. This module parses the output of the signal processing module and converts it into discrete keypresses on the piano. It is written in Python, and I have committed the code to GitHub for my teammates to review. Along the way, I noted two areas for improvement.
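To make the module's job concrete, here is a minimal sketch of what a note scheduler like this might look like. The frame format, the threshold, and the `KeyPress` representation are my assumptions for illustration, not the actual interface of our signal processing module.

```python
from dataclasses import dataclass

@dataclass
class KeyPress:
    key: int      # piano key index (e.g. 0-87)
    time: float   # onset time in seconds

def schedule_notes(frames, frame_duration=0.05, threshold=0.5):
    """Convert per-frame volume spectra into discrete keypresses.

    `frames` is a list of lists: frames[t][k] is the volume of piano
    key k during frame t. A keypress is emitted when a key's volume
    rises above `threshold` from below it (a simple onset detector).
    """
    presses = []
    prev = [0.0] * (len(frames[0]) if frames else 0)
    for t, frame in enumerate(frames):
        for k, vol in enumerate(frame):
            if vol >= threshold and prev[k] < threshold:
                presses.append(KeyPress(key=k, time=t * frame_duration))
        prev = frame
    return presses
```

For example, a key whose volume jumps from 0.1 to 0.6 between frames would produce one `KeyPress` at that frame's onset time, and holding above the threshold does not retrigger it.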

Firstly, the way we currently separate syllables is a bit clumsy: we only look at the difference in overall volume between consecutive time periods. A more elegant measure would be a k-dimensional difference in volume between one note and the next, where each dimension is one frequency on the piano. We could also bring machine learning to bear on this. A k-nearest-neighbours classifier operating in that same k-dimensional volume space would be a natural fit, and a neural network is another option given how many inputs we have. I initially proposed a decision tree, but decided there are too many keys/frequencies for that to be practical.
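The k-dimensional difference above is just a Euclidean distance between successive volume spectra. A short sketch, assuming frames are lists of per-key volumes and using a hypothetical boundary threshold:

```python
import math

def frame_distance(frame_a, frame_b):
    """Euclidean distance between two volume spectra, where each
    dimension is the volume at one piano frequency."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)))

def syllable_boundaries(frames, threshold=1.0):
    """Mark a boundary wherever the spectral change between two
    consecutive frames exceeds `threshold` (value chosen arbitrarily
    here; it would need tuning on real data)."""
    return [t for t in range(1, len(frames))
            if frame_distance(frames[t - 1], frames[t]) > threshold]
```

This captures changes in which keys are sounding, not just changes in total loudness, which is exactly what the single-volume approach misses.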

Secondly, during my work I noted the spatial and temporal locality of the list accesses. Should we decide to port the project to C to improve timing, I will make sure to exploit this locality to optimize the runtime. There is also plenty of potential for multithreading, since we perform the same operations on different entries of the list.
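Because each list entry is processed independently, the work parallelises as a plain map. A sketch in Python, with `process_entry` as a stand-in for whatever per-entry operation the scheduler actually performs; note that Python's GIL limits CPU-bound speedup with threads, so a process pool (or the eventual C port with pthreads/OpenMP) would be needed for real gains:

```python
from concurrent.futures import ThreadPoolExecutor

def process_entry(entry):
    # Placeholder for the real per-entry work (e.g. scoring one frame).
    return sum(v * v for v in entry)

def process_all(entries, workers=4):
    # Entries are independent, so Executor.map distributes them across
    # workers and still returns results in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_entry, entries))
```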
