Team’s Status Report for 12/9

This week saw a lot of progress across our subsystems. In particular, we made significant progress integrating our subsystems, and we worked on making our eye-tracking heuristic and audio alignment model more robust.

Upon seeing that our cursor measurements were not as accurate as we would have liked, we continued iterating on our audio alignment algorithm. In particular, we tested and revised our backend algorithm in an attempt to achieve lower latency. While testing, we realized that we had recorded our latency incorrectly the first time. Our revised audio alignment system now compiles and produces the expected output on 50-frame audio recordings within 20ms, which is much faster than we thought possible!
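To avoid mis-recording latency again, the timing can be done with a monotonic clock around the alignment call. Below is a minimal sketch of that measurement; `align_snippet` is a hypothetical stand-in for our backend routine, not its actual API:

```python
import time

def align_snippet(snippet, reference):
    """Hypothetical placeholder for the backend alignment routine."""
    # ...real alignment work would happen here...
    return 0  # aligned position in the reference

def measure_latency_ms(snippet, reference):
    """Time one alignment call in milliseconds using a monotonic clock."""
    start = time.perf_counter()
    align_snippet(snippet, reference)
    return (time.perf_counter() - start) * 1000.0

latency = measure_latency_ms([60, 62, 64], [60, 62, 64, 65])
print(f"alignment latency: {latency:.2f} ms")
```

Using `time.perf_counter` (rather than wall-clock `time.time`) avoids clock adjustments skewing short measurements.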

These are the tests we’ve conducted so far. We may add tests as we add the last few features.

  1. Audio alignment with cursor < 1 bar
  2. Audio robustness for missed notes
  3. Audio robustness for wrong notes
  4. Audio robustness for time skipping
  5. Quiet environment tests, audio backend @60, 90, 120, 150, 180, 210, 240 BPM
  6. Noisy + metronome environment tests, audio backend @60, 90, 120, 150, 180, 210, 240 BPM
  7. SNR
  8. Eye tracking accuracy
  9. Eye tracking latency
  10. Eye tracking overrides
  11. Head tracking overrides
  12. Page flipping accuracy (audio, eye, audio + eye)
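The per-tempo backend tests (items 5 and 6) can be scripted as a sweep over the BPM list above; `run_backend_test` here is a hypothetical stand-in for one quiet or noisy alignment run, not our actual harness:

```python
BPMS = [60, 90, 120, 150, 180, 210, 240]

def run_backend_test(bpm, noisy=False):
    """Hypothetical stand-in for one backend alignment run at a given tempo."""
    # A real run would play a reference recording at `bpm` (optionally with
    # metronome/background noise) and check the reported cursor position.
    return {"bpm": bpm, "noisy": noisy, "passed": True}

results = [run_backend_test(bpm, noisy=n) for bpm in BPMS for n in (False, True)]
print(f"{sum(r['passed'] for r in results)}/{len(results)} runs passed")
```

Sweeping both environments over the same tempo list keeps the quiet and noisy results directly comparable.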

These tests revealed a lot of information that we used to continue adapting our scripts. In particular, we needed audio alignment to be faster and more robust. Unfortunately, dynamic time warping took too long, which inspired us to write our own MidiAlign algorithm. MidiAlign uses a two-pointer approach to align the recorded snippet to the reference MIDI file. We use the confidence scores of candidate matched sequences to determine where the user is currently playing, as well as the number of missed notes in the sequence. As a result, even if the user plays a wrong note, the function will not align to a drastically different section of the piece.
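The two-pointer idea can be sketched roughly as follows. This is a toy illustration, not our actual MidiAlign implementation: real note events carry timing, and the real algorithm also tolerates missed notes, which this sketch simplifies to wrong-note substitutions:

```python
def midi_align(snippet, reference, max_misses=3):
    """Toy two-pointer alignment sketch: try each candidate start position in
    the reference, walk both pointers forward, treat mismatches as wrong notes
    (up to max_misses), and score each candidate by fraction of notes matched.
    """
    best_conf, best_start = -1.0, None
    for start in range(len(reference)):
        i, j, matched, misses = 0, start, 0, 0
        while i < len(snippet) and j < len(reference) and misses <= max_misses:
            if snippet[i] == reference[j]:
                matched += 1
            else:
                misses += 1  # wrong note: step past it rather than bail out
            i += 1
            j += 1
        conf = matched / len(snippet)
        if conf > best_conf:
            best_conf, best_start = conf, start
    return best_conf, best_start

# A wrong note (61 where the score has 62) still aligns to the right spot.
conf, pos = midi_align([60, 61, 64, 65], [55, 57, 60, 62, 64, 65, 67])
```

Because tolerated mismatches only lower the candidate's confidence instead of disqualifying it, one wrong note cannot pull the alignment to a drastically different section.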

Another change we made was a step toward fixing our harmonic problem: we now use sound dampening to reduce interference from the 1.5x-frequency resonance of violin notes.

Overall, one of our biggest findings is that our system improves upon current solutions through its ability to align music non-linearly: users can jump back to a previous section and our algorithm will still align them accurately! The system is novel and achieves its goals well. We are very proud of our project this semester, and are excited to present it at the demos in the upcoming week!
