Melina’s Status Report for 04/12/2025

Schedule

Official Project Management Schedule Updates

  • IN PROGRESS Backend Testing
    • DOING Pitch analysis testing
    • NOT STARTED CQ change analysis testing
  • IN PROGRESS Inform about Change in CQ

Pitch Analysis

There have been delays in receiving most of this week’s pitch data for testing. One important recording, a C Major scale sung by one of our opera singers, was obtained, and testing yielded ~97.83% accuracy. This task is slightly behind schedule, so I will be following up with the team to get access to more crucial test recordings, such as Happy Birthday.
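As a rough illustration of how an accuracy figure like this could be computed for a scale recording, here is a minimal sketch; the function name, note-name format, and sample data are hypothetical placeholders, not the actual pipeline:

```python
# Hypothetical sketch: score detected pitches against the expected scale notes.
EXPECTED_C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

def pitch_accuracy(detected, expected):
    """Fraction of detected notes that match the expected note at that position."""
    if not expected:
        return 0.0
    matches = sum(1 for d, e in zip(detected, expected) if d == e)
    return matches / len(expected)

# One wrong note out of eight -> 0.875
detected = ["C4", "D4", "E4", "F4", "G4", "A4", "A#4", "C5"]
print(pitch_accuracy(detected, EXPECTED_C_MAJOR))
```

In practice the comparison would run per analysis frame rather than per note, which is how a fractional percentage like 97.83% arises.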

CQ Analysis

The program now extracts CQ values at the timestamps where each detected pitch occurs. The next step will be to implement a way to select a set of CQ data by an identifier (such as the name of a repertoire piece, or the key of a scale for “name-less” warm-up recordings). The idea is to have a system where the front end passes in these identifiers and receives clean sets of data to hand to DPG for graphing.
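A minimal sketch of what this identifier-based lookup could look like, assuming CQ data is stored per recording as timestamped values (all names and values here are hypothetical placeholders):

```python
# Hypothetical sketch: look up a clean CQ series by identifier
# (e.g. a repertoire title, or a scale key for name-less warm-ups).
RECORDINGS = {
    "Caro mio ben": [(0.0, 0.48), (0.5, None), (1.0, 0.52)],
    "C major scale": [(0.0, 0.44), (0.5, 0.46)],
}

def cq_series(identifier):
    """Return (timestamp, CQ) pairs for one recording, dropping missing
    values, ready for the front end to pass to DPG for graphing."""
    raw = RECORDINGS.get(identifier, [])
    return [(t, cq) for t, cq in raw if cq is not None]

print(cq_series("Caro mio ben"))  # [(0.0, 0.48), (1.0, 0.52)]
```

An unknown identifier simply returns an empty list, which the front end can treat as “nothing to graph.”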

I conducted interviews with the vocalists this week to gain a better understanding of what changes in CQ they would find helpful to know about. The main takeaway was that they would like help in identifying “flutters” and “wobbles”. I am currently working on understanding how a CQ measurement might indicate one of these problems. There do not appear to be research papers on this specific topic, but if we take CQ measurement recordings of these problematic samples and compare them with ideal samples, there may be a pattern we can discern.
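Since there is no established formula for this yet, one possible starting heuristic is to flag windows where the CQ measurement is unusually unsteady. The sketch below is purely an assumption to explore, with placeholder window size and threshold, not validated values:

```python
# Hypothetical heuristic: flag "unsteady" CQ windows by their short-window spread.
# Window length and max_spread are placeholder assumptions, not tuned values.
def unsteady_windows(cq_values, window=10, max_spread=0.08):
    """Return start indices of windows whose CQ range exceeds max_spread."""
    flagged = []
    for i in range(max(len(cq_values) - window + 1, 0)):
        chunk = cq_values[i:i + window]
        if max(chunk) - min(chunk) > max_spread:
            flagged.append(i)
    return flagged

steady = [0.50, 0.51, 0.50, 0.52, 0.51, 0.50, 0.51, 0.52, 0.51, 0.50]
wobbly = [0.40, 0.55, 0.42, 0.57, 0.41, 0.56, 0.43, 0.55, 0.42, 0.56]
print(unsteady_windows(steady))  # []
print(unsteady_windows(wobbly))  # [0]
```

If we collect CQ recordings of known “flutter” and “wobble” samples alongside ideal samples, comparing their flagged windows could tell us whether spread (or some other statistic, such as oscillation rate) is the right signal.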

Regarding the issue with collecting audio and EGG signals at the same time, I have proposed transferring ownership of the Voce Vista access key to my computer, which has a USB-B port and may resolve the issue. Unfortunately, this might limit Tyler’s ability to collect data. For the time being, we have decided to focus on implementing the identifier-based CQ selection described above so that the front end has something tangible to demo for the vocalists.

Susanna’s Status Report for 4/12/2025

Revisiting last week’s goals

  • Use sample data to more clearly demonstrate the layered table rows in the playback view (for interim demo)
    • Done – there were some glitches here that I wasn’t expecting, but I basically just added a sine wave overlaid on the original audio signal. Tyler is working on syncing up our audio and EGG signals from our singer session on Thursday of this week, and once that’s complete I’ll simply use the average CQ data as this graph instead.
  • Enable cursor scroll
  • Expand page menus and navigation
    • Partially done – This is sort of a vague task and I should have fleshed out more of what I was expecting it to encompass. I have created menus to link to the different instructional/playback pages, but since those pages aren’t completed yet, it’s not much of an accomplishment to note here.
  • Hammer down specific parameters for data analytics views, and complete one view (eg, average CQ over time graph) using arbitrary sample data
    • Partially done – Melina and I worked on establishing explicitly what we want some of our data views to look like, particularly with allowing the user to view multiple CQ waveforms simultaneously and specifically how to display CQ per pitch over time.
    • Partly still in progress – I haven’t actually coded the sample view itself. It’s a bit of a chicken-and-egg scenario between gathering data and coding data views, but I was hesitant to dive in headfirst since I don’t actually know what form the pitch data will take once Melina has processed it.

Repertoire Library Wireframe

Additionally, I worked on the actual flow of the application as the user navigates it, with some help from our School of Music friends, since we realized that some of our initial plans were a bit hand-wavey. Specifically, how do vocalists view their library of repertoire that they’ve recorded and navigate it easily, especially given that this is a key element of what sets our application apart? My solution was to sort the repertoire at the high level by the title of the piece, and then at a lower level by the date each instance was recorded. Perhaps this is another case where it’s easier to see what I mean than for me to try to describe it. This is the Repertoire Library view, where clicking on each recorded instance will open up the Playback View that we know and love:

Verification

Our verification testing for the frontend will be done manually. My main goal is to ensure that the playback page functions given various parameters (files that have varying lengths, files that are missing either the EGG signal or the audio signal) and that unexpected inputs are handled gracefully with the app’s error handling. Specifically:

  • Input 5 songs of various lengths. Successfully play through, replay, and scroll through each song.
  • Input 2 songs without EGG signal. Successfully play just the audio.
  • Input 2 songs without audio signal. Successfully view the signal without playing audio.

Goals for next week

  • Use real EGG output data on playback page
  • Implement basic repertoire library page
  • Complete one data analytics view

Tyler’s Status Report for 4/12/2025

On Monday during class I got a scraper working so that the EGG data output is available to us; I used the pytesseract library to do so. On Tuesday I had a meeting with the manufacturers of the EGG and was able to debug and get both the EGG signal and the microphone signal working. However, we ran into an issue: the EGG’s LED sensor for electrode placement is broken, meaning we will not be able to confirm that we are getting the best electrode signal from the larynx placement while calibrating to the user. Another issue is that we are unable to use the EGG’s dual-channel feature, which we believe may be because we need a USB to USB-C adapter to plug into the EGG. Fixing this is not a priority; for now we record the mic on one laptop and the EGG signal on another laptop, then overlay them based on the audio timestamp where we start. We press the record buttons together, so it won’t be the most accurate, but it is usable for the time being. Now I am working on merging the data sheets together in a cleanly formatted way, as well as recording more data from our singers; we had one two-hour session with a singer, but we would like a variety of singers before our meeting on Wednesday.
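A minimal sketch of what merging the two laptops’ sheets could look like, assuming the sheets load into pandas with a timestamp column and a manually measured start offset; the column names and offset are placeholder assumptions:

```python
import pandas as pd

# Hypothetical sketch: combine the EGG sheet and the audio sheet from the two
# laptops by nearest timestamp, after shifting by the measured start offset.
egg = pd.DataFrame({"t": [0.00, 0.10, 0.20, 0.30],
                    "cq": [0.45, 0.47, 0.46, 0.48]})
audio = pd.DataFrame({"t": [0.02, 0.12, 0.22, 0.32],
                      "pitch_hz": [220.0, 221.0, 219.5, 220.5]})

offset = 0.02  # placeholder: audio laptop started 20 ms after the EGG laptop
audio = audio.assign(t=audio["t"] - offset)

# Nearest-timestamp join; rows more than 50 ms apart are left unmatched.
merged = pd.merge_asof(egg.sort_values("t"), audio.sort_values("t"),
                       on="t", direction="nearest", tolerance=0.05)
print(merged)
```

A nearest-match join with a tolerance is forgiving of the small residual offset from pressing record by hand, while still refusing to pair rows that are clearly from different moments.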

As far as progress, I believe that I am basically done with my parts and now I am focusing on helping Susanna and Melina with their parts and making sure our frontend design is intact.

For verification of my work, I have been comparing the measured EGG data with the EGG data that was provided, as well as with research papers, to ensure that our EGG data collection makes sense and doesn’t output implausible closed quotients. I roughly know my EGG data collection to be accurate because I watch the closed quotient move through VoceVista and it matches the output of the data scraper I designed. It does not need to be highly accurate, since a future version of VoceVista will be able to do this automatically; because we do not have access to that version yet, I designed a crude solution to meet our deadlines.
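One cheap automated complement to the eyeball check is a sanity filter that separates plausible scraped CQ readings from likely OCR glitches. This is only a sketch; the bounds are rough placeholders, not values taken from a specific paper:

```python
# Hypothetical sanity check: split scraped CQ readings into plausible values
# and likely scraper/OCR errors. Bounds are rough placeholder assumptions.
def plausible_cq(values, lo=0.2, hi=0.8):
    """Return (plausible, suspect) lists of closed-quotient readings."""
    good = [v for v in values if lo <= v <= hi]
    bad = [v for v in values if not (lo <= v <= hi)]
    return good, bad

# 4.5 and 0.07 look like scraper glitches, not real closed quotients.
good, bad = plausible_cq([0.45, 0.52, 4.5, 0.61, 0.07])
print(good)  # [0.45, 0.52, 0.61]
print(bad)   # [4.5, 0.07]
```

Logging the suspect values rather than silently dropping them would make it easier to spot systematic scraper problems.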

Team Status Report 4/12/2025

The most significant risk for the data collection portion is that we cannot get our dual-channel EGG signal to work, so we need to use two separate laptops. The risk there is that we might misalign our data, which would skew the CQ output during singing. We are mitigating this by pressing record at identical timestamps so that we can combine the EGG signal and the audio signal with minimal offset. In the worst case, we can design a clicker that reads the global time and triggers record on both machines at the exact same moment, eliminating the chance of misalignment completely.
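Another option worth noting, distinct from our current press-record-together plan: if residual misalignment turns out to be a problem, the offset between the two laptops’ recordings can be estimated after the fact by cross-correlating their audio tracks. A sketch under that assumption (synthetic signals, not our data):

```python
import numpy as np

# Hypothetical sketch: estimate the sample offset between two recordings of
# the same audio by cross-correlation, then shift before overlaying EGG data.
def estimate_offset(ref, other):
    """Return how many samples `other` lags behind `ref`."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

rng = np.random.default_rng(0)
ref = rng.standard_normal(1000)
other = np.concatenate([np.zeros(30), ref])[:1000]  # same signal, delayed 30 samples
print(estimate_offset(ref, other))  # 30
```

This would require both laptops to capture some audio (even a rough laptop-mic copy on the EGG machine would do), but it removes the dependence on human reaction time entirely.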

We have been working with singers throughout the entire process, designing the tool around their needs and presenting the information in a way they can understand. However, we have designed it with the same two singers the whole time, so we will need to run through the entire project setup and usage with a new singer to see where they may need guidance, and then either write an extensive guide on how to use our tool or make certain parts more intuitive. We want to ensure that our project is simple to understand and easy to use for singers, so testing with a wider range of singers will help us surface any issues they may have in understanding or collecting the data. Our main form of validation here will be surveying both our new singers and our existing participants to ensure that we meet our original use-case requirements.

For verification that our data collection was done properly, we will compare our measured CQ results against the ideal values reported in several research papers for different genders and vocal ranges, and check that they match the data we collected.

No changes were made to the schedule or existing design.