Rohan’s Status Report for 11/11

This week my team and I worked on our interim demo. Our demo showcased our eye-tracking system, our audio signal capture with light preprocessing, and a test version of the front-end.

After our day 1 demo, I worked on adding thresholding to the eye-tracking system while also helping implement the front-end. For thresholding, I wrote a script that outputs a stream of 1s and 0s based on whether the user is looking at the last two bars of sheet music: a 1 represents a positive page-turn signal, and a 0 represents a negative one. If the user's gaze stays on the last two bars for a specific threshold time, the eye-tracking side signals that it's time to turn the page. The threshold time I chose is based on the tempo. The script ended up working; now I need to figure out how to send this signal to the front-end.
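The dwell-time thresholding described above could be sketched roughly as follows. This is a hypothetical reconstruction, not the actual script: the sampling rate (`GAZE_HZ`), the two-beat dwell time, and the function names are all assumptions.

```python
GAZE_HZ = 60  # assumed eye-tracker sampling rate (samples per second)

def dwell_threshold_seconds(tempo_bpm: float, beats: float = 2.0) -> float:
    """Threshold time derived from the tempo: hold the gaze for `beats` beats."""
    return beats * 60.0 / tempo_bpm

def page_turn_signals(region_hits, tempo_bpm: float, gaze_hz: int = GAZE_HZ):
    """Map a stream of booleans (is the gaze inside the last two bars?)
    to a stream of 1/0 page-turn signals.

    Emits 1 once the gaze has stayed in the region for the full threshold
    time, and 0 otherwise.
    """
    needed = int(dwell_threshold_seconds(tempo_bpm) * gaze_hz)
    streak = 0
    for hit in region_hits:
        streak = streak + 1 if hit else 0  # consecutive in-region samples
        yield 1 if streak >= needed else 0
```

At 120 BPM with these assumptions, the threshold works out to one second of continuous gaze (60 samples) before a 1 is emitted.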

Additionally, I did some front-end work this week. I helped debug and fix a CSS styling issue: the .css file was not loading properly in the HTML, but Sanjana and I fixed it. I also helped Sanjana work on the music/MIDI upload page in Django, where the user uploads their MIDI file and a PDF of their sheet music. We haven't finished this page, but most of its functionality is working; it just needs formatting and styling. The biggest challenge right now is integrating the three systems together.
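One piece the upload page needs is server-side validation that the right file types land in the right fields. Below is a minimal, framework-independent sketch of that check; the field names (`midi_file`, `sheet_pdf`) and allowed extensions are assumptions, and the real version would live inside a Django form's `clean` method.

```python
import os

# Assumed field names and extensions for the MIDI/PDF upload page
ALLOWED_EXTENSIONS = {
    "midi_file": {".mid", ".midi"},
    "sheet_pdf": {".pdf"},
}

def is_valid_upload(field: str, filename: str) -> bool:
    """Return True if the uploaded filename's extension is allowed for the field."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS.get(field, set())
```

In Django this check would be raised as a `ValidationError` so the error message surfaces next to the form field.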

In terms of eye-tracking data analysis, I've been doing some tinkering. The eye-gaze stream script I wrote prints the user's gaze as values between 0 and 1, because the Tobii SDK represents the x, y coordinates of the user's gaze as a normalized vector between 0 and 1. I plan to scale this data by 1000 for more precise measurements. One important design requirement is eye-tracking latency: it must be within one beat of a given piece's tempo, so the data stream to the front-end must fit within that budget. I will have to look into faster sampling and refresh rates.
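The scaling and the latency requirement can both be written down concretely. This is a small sketch under stated assumptions (a scale factor of 1000 and a one-beat budget, both from the text; the function names are mine):

```python
def scale_gaze(x: float, y: float, factor: int = 1000):
    """Scale Tobii's normalized [0, 1] gaze coordinates to integer units
    (0-1000 by default) for more precise downstream measurements."""
    return round(x * factor), round(y * factor)

def latency_budget_s(tempo_bpm: float) -> float:
    """Design requirement: end-to-end latency must stay within one beat."""
    return 60.0 / tempo_bpm
```

At 120 BPM, for example, the whole pipeline from gaze sample to front-end signal has only half a second to work with, which is why sampling and refresh rates matter.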

So far I've made decent progress, and I am currently on schedule. Next week, I plan to look into WebSockets so that all three systems can communicate with each other; in particular, the eye-tracking system needs to be able to send data to the front-end and vice versa.
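Whatever WebSocket library ends up connecting the systems, the pieces need to agree on a message format. A minimal sketch, assuming JSON text frames (the message schema and function name below are hypothetical, not decided yet):

```python
import json
import time

def page_turn_message(signal: int, timestamp: float = None) -> str:
    """Encode a page-turn signal as a JSON text frame to send over the
    WebSocket from the eye-tracking system to the front-end."""
    return json.dumps({
        "type": "page_turn",
        "signal": signal,  # 1 = turn the page, 0 = no turn
        "ts": timestamp if timestamp is not None else time.time(),
    })
```

Keeping the schema this small means the front-end only has to parse one message type, and the same shape can carry signals in the other direction later.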
