Melina’s Status Report for 04/12/2025

Schedule

Official Project Management Schedule Updates

  • IN PROGRESS Backend Testing
    • DOING Pitch analysis testing
    • NOT STARTED CQ change analysis testing
  • IN PROGRESS Inform about Change in CQ

Pitch Analysis

There have been delays in receiving most of this week’s pitch data for testing. One important recording, a C Major scale sung by one of our opera singers, was obtained, and testing on it yielded ~97.83% accuracy. This task is a little behind, so I will be following up with the team to get access to more crucial test recordings, such as “Happy Birthday”.

CQ Analysis

The program now extracts CQ values at timestamps corresponding to when each detected pitch occurs. The next step will be to implement a way to choose a set of CQ data based on an identifier (such as the name of a repertoire piece, or the key of a scale for “name-less” warm-up recordings). The idea is to have a system that allows the front end to pass in these identifiers and receive clean sets of data to pass on to DPG for graphing.
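
As a rough sketch of what that hand-off could look like (the structure and names here, such as `recordings` and `get_cq_set`, are hypothetical illustrations, not our actual interface), the backend could keep each cleaned CQ set keyed by its identifier:

```python
def get_cq_set(recordings, identifier):
    """Return the cleaned (timestamp_s, cq) pairs for one recording.

    recordings: dict mapping an identifier (repertoire name, or the key
    of a scale for name-less warm-ups, e.g. "C Major") to a list of
    (timestamp_s, cq) tuples.
    Raises KeyError with the known identifiers if the lookup fails, so
    the frontend can surface a useful error instead of crashing.
    """
    try:
        return recordings[identifier]
    except KeyError:
        known = ", ".join(sorted(recordings))
        raise KeyError(f"Unknown recording '{identifier}'. Known: {known}")
```

The frontend would then hand the returned list straight to DPG for graphing.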

I conducted interviews with the vocalists this week to gain a better understanding of what changes in CQ they would find helpful to know about. The main takeaway was that they would like help in identifying “flutters” and “wobbles”. I am currently working on understanding how a CQ measurement might indicate one of these problems. There do not appear to be research papers on this specific topic, but if we take CQ measurement recordings of these problematic samples and compare them with ideal samples, there may be a pattern we can discern.
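
Since there is no established method in the literature, one speculative starting point for the comparison described above is to look at short-window variability of the CQ trace: a “flutter” or “wobble” might show up as windows where the CQ oscillates much more than in an ideal sample of the same exercise. A minimal sketch (the window size and the idea itself are assumptions to be validated against real recordings):

```python
import statistics

def cq_variability(cq_values, window=10):
    """Rolling standard deviation of a CQ trace.

    Speculative heuristic: windows with much higher deviation than the
    ideal sample's windows may indicate a flutter or wobble.
    Returns one standard deviation per window position.
    """
    return [
        statistics.pstdev(cq_values[i:i + window])
        for i in range(len(cq_values) - window + 1)
    ]
```

Comparing these per-window values between a problematic sample and an ideal sample would be a first pass at discerning a pattern.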

Regarding the issue with collecting audio and EGG signals at the same time, I have proposed transferring ownership of the Voce Vista access key to my computer, which has a USB-B port; this may solve the issue. Unfortunately, it might mean limiting Tyler’s ability to collect data. For the time being, we have decided to focus on implementing the first upcoming task mentioned in this section so that the front end has something tangible to demo for the vocalists.

Susanna’s Status Report for 4/12/25

Revisiting last week’s goals

  • Use sample data to more clearly demonstrate the layered table rows in the playback view (for interim demo)
    • Done – there were some glitches here that I wasn’t expecting, but I basically just added a sine wave overlaid with the original audio signal. Tyler is working on syncing up the audio and EGG signals from our singer session on Thursday of this week, and once that’s complete I’ll simply use the average CQ data as the graph here instead.
  • Enable cursor scroll
  • Expand page menus and navigation
    • Partially done – this was a vague task, and I should have fleshed out more of what I expected it to encompass. I have created menus linking to the different instructional/playback pages, but since those pages aren’t completed yet, it’s not much of an accomplishment to note here.
  • Hammer down specific parameters for data analytics views, and complete one view (eg, average CQ over time graph) using arbitrary sample data
    • Partially done – Melina and I worked on establishing explicitly what we want some of our data views to look like, particularly with allowing the user to view multiple CQ waveforms simultaneously and specifically how to display CQ per pitch over time.
    • Partly still in progress – I haven’t actually coded the sample view itself. It’s a bit of a chicken-and-egg scenario between gathering data and coding data views, but I was hesitant to dive in head first since I don’t actually know what form the pitch data will take once Melina has processed it.

Repertoire Library Wireframe

Additionally, I worked on the actual flow of the application as the user navigates it, with some help from our School of Music friends, since we realized that some of our initial plans were a bit hand-wavy. Specifically: how do vocalists view the library of repertoire they’ve recorded and navigate it easily, especially given that this is a key element of what sets our application apart? My solution was to sort the repertoire at the high level by the title of the piece, and then at a lower level by the date each instance was recorded. Perhaps this is another case where it’s easier to see what I mean than for me to try to describe it. This is the Repertoire Library view, where clicking on each recorded instance opens up the Playback View that we know and love:

Verification

Our verification testing for the frontend will be done manually. My main goal is to ensure that the playback page functions given various parameters (files of varying lengths, files missing either the EGG signal or the audio signal) and that unexpected inputs are handled gracefully by the app’s error handling. Specifically:

  • Input 5 songs of various lengths. Successfully play through, replay, and scroll through each song.
  • Input 2 songs without EGG signal. Successfully play just the audio.
  • Input 2 songs without audio signal. Successfully view the signal without playing audio.
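
The graceful handling of missing signals that these checks exercise could be sketched as a small loader gate (a hypothetical illustration, not our actual code; `load_session` and its return keys are made-up names), where the playback page enables only the features whose data actually exists:

```python
def load_session(audio=None, egg=None):
    """Decide which playback features to enable for a session.

    audio / egg: the loaded signals, or None if that file is missing.
    Returns flags the playback page can use to disable features
    instead of crashing on a missing channel.
    """
    if audio is None and egg is None:
        raise ValueError("Session has neither audio nor EGG signal")
    return {
        "can_play_audio": audio is not None,
        "can_show_egg": egg is not None,
    }
```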

Goals for next week

  • Use real EGG output data on playback page
  • Implement basic repertoire library page
  • Complete one data analytics view

Tyler Status Report 4/12/2025

On Monday during class I was able to put together a scraper that makes the EGG data output available to us, using the pytesseract library. On Tuesday I had a meeting with the manufacturers of the EGG and was able to debug and get both the EGG signal and the microphone signal working. However, we ran into an issue: the EGG’s LED sensor for electrode placement is broken, meaning we will not be able to confirm that we are getting the best electrode signal from the larynx placement while calibrating to the user. Another issue is that we are unable to use the EGG’s dual-channel feature, which we believe may be because we need a USB to USB-C adapter to plug into the EGG. Fixing this is not a priority; for now, we record with the mic on one laptop and record the EGG signal on a different laptop, then overlay them based on the audio timestamp where we start. Since we align by pressing the record buttons together, this won’t be the most accurate, but it is usable for the time being. I am now working on merging the data sheets together in a nicely formatted way, as well as recording more data from our singers. We had one two-hour session with a singer, but we would like to get a variety of singers before our meeting on Wednesday.
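
As a rough illustration of the sheet-merging step, here is a minimal sketch (the row structure and the `time_s`/`cq` keys are assumptions, not our actual export format) that attaches the nearest EGG sample to each audio sample after applying a fixed start-time offset:

```python
def merge_sheets(audio_rows, egg_rows, offset_s=0.0):
    """Merge audio and EGG samples on offset-corrected timestamps.

    audio_rows / egg_rows: lists of dicts, each with a 'time_s' key
    (egg_rows also carry a 'cq' key).
    offset_s: seconds to shift the EGG timestamps so both clocks align.
    Returns one row per audio sample with the nearest EGG sample's CQ.
    """
    shifted = [(float(r["time_s"]) + offset_s, r) for r in egg_rows]
    merged = []
    for a in audio_rows:
        t = float(a["time_s"])
        # Nearest-neighbor match on time; fine for small sample spacing.
        nearest = min(shifted, key=lambda p: abs(p[0] - t))[1]
        merged.append({**a, "cq": nearest["cq"]})
    return merged
```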

As far as progress goes, I believe I am basically done with my parts, and I am now focusing on helping Susanna and Melina with theirs and making sure our frontend design stays intact.

For verification of my work, I have been comparing the measured EGG data with the EGG data that was provided, as well as with research papers, to ensure that our EGG data collection makes sense and doesn’t output implausible closed quotient values. I roughly know my EGG data collection to be accurate because I watch the closed quotient move through VoceVista and it matches the output of the data scraper I designed. It does not need to be extremely accurate, since a future version of VoceVista will be able to do this automatically; because we do not have access to that version yet, I designed a crude solution to meet our deadlines.

Team Status Report 4/12/2025

The most significant risk for the data collection portion is that we cannot get our dual-channel EGG signal to work, so we need to use two separate laptops. The risk there is that we might misalign our data, which would skew the CQ output during singing. We are mitigating this by pressing record at identical timestamps so that we can combine the EGG signal and the audio signal with minimal offset. Worst case, if the misalignment is really bad, we can design a clicker that takes the global time and starts both recordings at exactly the same moment, eliminating the chance of misalignment completely.
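
Another fallback, if the manual click alignment proves too coarse, would be to estimate the offset after the fact by cross-correlating the two tracks (both laptops capture some of the same room audio). This is a sketch under the assumption that both recordings share a sample rate, not something we have implemented:

```python
import numpy as np

def estimate_offset(sig_a, sig_b, sample_rate):
    """Estimate, in seconds, how much later the shared event appears
    in sig_b than in sig_a (positive = sig_b's content is delayed)."""
    a = np.asarray(sig_a, float)
    b = np.asarray(sig_b, float)
    # Normalize so differing recording levels don't bias the peak.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    delay_samples = (len(b) - 1) - int(np.argmax(corr))
    return delay_samples / sample_rate
```

The estimated offset could then be used to shift the EGG sheet’s timestamps before merging.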

We have been working with singers throughout the entire process, designing the tool around their needs and presenting the information in a way they can understand. However, because we have been designing it with the same two singers the whole time, we will need to run through the entire project setup and usage with a new singer to see where they may need guidance, then write an extensive guide on how to use our tool or make certain parts more intuitive. We want our project to be simple to understand and easy to use for singers, so testing on a wide range of singers will ensure that we can hash out any issues they may have in understanding or collecting the data. Our main form of validation here will be surveying both our new singers and our existing participants to ensure that we meet our original use-case requirements.

For verification that our data collection is proper, we will compare our measured CQ results with the ideal values reported in several research papers for different genders and vocal ranges, and see whether they match the data we collected.

No changes were made to the schedule or existing design.

3/30/2025 Tyler’s Status Report

This week, I was sadly unable to get much done because of sickness, but there is a lot to update on. For our electroglottograph data exportation, we will be able to export it similarly to our pitch data: the author of Voce Vista has told us that he can easily implement it on his end. He said we can expect it by the next update, and we will confirm an exact date soon (early next week). On the electroglottograph repair: I am in talks with the manufacturer and sent them a video of the problem on Friday. They respond quickly, so hopefully on Monday I will either have a way to fix it or will send the electroglottograph to them for repair. I was also able to get on a Zoom call with Professor Brancaccio to work on the electroglottograph, but since she has a different model, it was not very useful for debugging ours.

I think right now we are all set in terms of Voce Vista and data collection; all we need is to get our EGG fixed, and then we are set for the final presentation. Outside of the EGG, I will help Melina work through how to detect CQ change and how to display it to the user.

So far we are roughly on pace; illnesses have thrown me off a little and forced some slight adjustments to our team schedule, but we are still on track. As for my personal schedule adjustments, after deliberating with my team, we decided against implementing a database, since all of our data will be stored in Excel spreadsheets in a unique folder for each user, so it will already be organized for them to use.

Melina’s Status Report for 03/29/2025

Schedule

Official Project Management Schedule Updates

  • IN PROGRESS Backend Testing
    • DOING Pitch analysis testing
    • NOT STARTED CQ change analysis testing
  • CANCELLED Streamline Voce Vista
  • IN PROGRESS Inform about Change in CQ

Testing Pitch Analysis

We recently got a new interface for the Shure microphone, which now allows us to record clear audio. Due to illness on the team, uploading the data and recording from Voce Vista were delayed, but they became available today, so this upcoming week will focus on that.

Streamlining Voce Vista

We had previously thought that a command-line option was available to start the Voce Vista app, as it was mentioned in the documentation, but tech support confirmed that this functionality was deprecated. As a result, this task has been cancelled. This means the general user flow will be to follow instructions on the recording page to open and use Voce Vista, instead of pressing a record button in our app directly. There would also be instructions for exporting data. This process could be facilitated by instructing the user to use one or two keyboard shortcuts.

Inform about changes to CQ

Tyler and I are brainstorming what it functionally means to inform the user about changes in CQ. At the very minimum, our app graphs average CQ per pitch over time, which Susanna is handling in the frontend. What Tyler and I are considering is what additional information about CQ changes over time could be provided. We want to provide useful information beyond what a user could infer from simply looking at the original graph. There are two areas where additional analysis could come in:

1. More analysis (specifics currently undetermined) is added to CQ changes over time for the warm-up.

2. A CQ analysis is added to the repertoire recording. Currently we plan to simply list the measured CQ on top of the audio signal, something like this:

Perhaps by making use of DPG and our pitch analysis algorithm, we can graph the average CQ for each pitch detected in the repertoire. Maybe we could also show them the difference between a CQ average for a pitch in the warm-up versus the repertoire. The main question here is how this would be useful for the user. Would they be informed enough to make a reasonable decision on vocal technique based on this additional analysis? Would the resulting decision have negative consequences?
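
The per-pitch averaging floated above could be sketched like this (a hypothetical illustration, assuming pitch labels and CQ values arrive as parallel per-frame lists, which is an assumption about the data format rather than what DPG or our pipeline actually provides):

```python
from collections import defaultdict

def average_cq_per_pitch(pitches, cq_values):
    """Average the CQ samples that fall on each detected pitch.

    pitches: detected pitch label per analysis frame, e.g. "C4".
    cq_values: CQ measurement for the same frames.
    Returns {pitch: mean CQ}, ready to graph or to compare between
    a warm-up and a repertoire recording.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for pitch, cq in zip(pitches, cq_values):
        sums[pitch] += cq
        counts[pitch] += 1
    return {pitch: sums[pitch] / counts[pitch] for pitch in sums}
```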

Susanna’s Status Report for 3/29/2025

Revisiting last week’s goals:

  • Add Visual Cursor
    • I did this by inserting an additional vertical line onto the graph that displays the audio waveform (see the red line in the image below), corresponding to where the time index is while playing. The cursor is built into the runtime loop so that it moves smoothly while the audio is playing, and it’s triggered by the play/pause button.
  • Add Restart Button
    • I set up this button to return the timestamp of the given song to 0, and to pause it if it’s currently playing. It also resets the cursor position.
  • Work on parsing VoceVista output file & CQ waveform
    • I got somewhat stymied by this, because we don’t actually have an example of a CQ waveform for a full piece. Ideally, we’ll be able to have a clean version of the waveform in spreadsheet form, similar to what we have currently for the pitch detection data. Unfortunately, it will currently only export this for one cycle of the CQ waveform. Tyler has been in contact with VoceVista about this, and there should be a way of exporting a full file. However, it’s possible that this won’t actually end up working. We’re prepared for this possibility, as there’s also an option of using a python scraper to compile a sheet out of the discrete chunks of data that VoceVista gives us. As long as the data is formatted well, it should be extremely simple to display in plot form using DearPyGui, just like I did for the audio signal.
    • In the meantime, I’ve made progress on restructuring the way I initially formatted the page, to make it straightforward to insert new graph lines once the CQ data comes. This might seem like a minor change, but it took me some time to figure out how to do properly due to my unfamiliarity with DearPyGui. Basically, I turned the structure of the page into a table with one column. Within each row of the table, multiple graph lines can be inserted. The x-axis running along each row is time, and the cursor covers any graph lines in the given row. This structure works well because the table rows delineate which part of the data the cursor should be focused on, while also allowing the flexibility to add or subtract lines inside a row (e.g., EGG signal, audio signal, potential future sheet music).
  • Overall
    • Since I don’t currently have CQ data to work with, the playback view is a bit difficult to demo, but the basic version is now complete (though, I’d like to add the cursor scroll function). Additionally, we’ve had some shifts in our thinking about how to connect VoceVista to our application (see the team update), so the recording page planning is also still in flux. In the original plan, I was working on data analytics views this week, but I haven’t yet started that, so I’ve pushed that to next week. I’m also pleased with my gradual progress learning about DearPyGui in general. It was a bit of a gamble for us to use, and while I certainly have encountered outdated documentation and a lack of online resources for more granular adjustments I want to make, it’s been immensely helpful for things like automatic scaling and graph functionalities, and I’ve enjoyed getting to learn a new framework.

Goals for next week:

  • Use sample data to more clearly demonstrate the layered table rows in the playback view (for interim demo)
  • Enable cursor scroll
  • Expand page menus and navigation
  • Hammer down specific parameters for data analytics views, and complete one view (eg, average CQ over time graph) using arbitrary sample data

Team Status Report for 3/29/2025

Risks

We’re continuing to face the risk of some part of the EGG potentially being broken or damaged. As mentioned in the last report, the EGG was able to turn on, but was unable to read a signal through its electrodes, in contrast to prior tests where the electrodes had successfully been able to pick up a signal. As a troubleshooting step, we first wanted to check on the EGG by inputting a signal using our larynx simulator. The larynx simulator plugs directly into the EGG and doesn’t use the electrodes, so this would tell us whether the issue lay with our EGG or our electrodes. Unfortunately, our larynx simulator’s battery was drained, and it took us a couple days to find a replacement battery of the correct type. Once we had replaced the battery, we plugged it into the EGG, which still read no signal, meaning that the issue lay with the EGG rather than the electrodes. As our troubleshooting continues, we’re now draining the battery of our EGG in order to do a complete factory reset. We also reached out to Glottal Enterprises, the manufacturers of the EGG, and are in communication with them about the details of our issue.

Microphone Audio Interface

Initially, we thought that the EGG would act doubly as an audio interface for our microphone: that is, processing the audio signal from our microphone and acting as an intermediary between it and our computer. We made this assumption based on the microphone ports supplied in the EGG. Unfortunately, this did not turn out to be the case: our understanding is that the microphone is used to contribute to EGG data, but the EGG does not act like a normal audio interface. We weren’t able to get the audio signal to our computer via the EGG, and were forced to improvise, using a laptop microphone. This week, we borrowed an audio interface from Professor Sullivan, and tested using it to record with our microphone. We successfully connected our microphone to VoceVista using this interface.

VoceVista Software Connection

One of our concerns has been how we’re going to integrate VoceVista with the backend of our own application. VoceVista documentation mentions an option for triggering a VoceVista recording to start via the command line. This would have been an amazing way to integrate VoceVista seamlessly, preventing the user from having to manually start a VoceVista recording each time they sang. Unfortunately, we were unable to figure out how to get this to work. We reached out to VoceVista, and this turns out to not actually be a currently released feature: the documentation we were looking at was in error. There is an option instead for setting up VoceVista such that it starts recording automatically when the application is opened, which is a potential way that we can minimize the work that the user has to do to start the recording.

Melina’s Status Report for 3/22/2025

Schedule

Official Project Management Schedule Updates

  • COMPLETED Pitch Analysis
  • IN PROGRESS Streamline Voce Vista

Personal Task Updates

  • DONE Propose modification to Pitch Analysis Testing
  • DONE Add normalization support to pitch analysis of vibrato
  • DOING Draft code for streamlining Voce Vista
  • DOING Test pitch analysis accuracy

Pitch analysis has been implemented; however, modifications are expected to follow for the sake of accuracy. This will be determined once we formally test the algorithm. Currently, informal tests have been completed in which we recorded the vocalists singing a C Major scale and “Happy Birthday”. I call these tests informal because, for these recording sessions, we were not able to use the XLR microphone: when it was connected to the EGG, our laptop could not properly detect and record from it. As a result, we decided to record the vocalists with our laptop’s built-in microphone this week. While this allowed me to roughly test the pitch analysis program, I do not expect it to be a fully accurate representation of the algorithm’s accuracy. For a C Major scale, for example, the resulting analysis was as shown below:
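
For the formal tests, the accuracy number can be computed by comparing detected pitches frame-by-frame against the expected sequence. A minimal sketch (the frame-by-frame label format is an assumption about how the analysis output will look):

```python
def pitch_accuracy(detected, expected):
    """Percent of frames where the detected pitch matches the expected one.

    detected / expected: parallel lists of pitch labels, e.g. "C4".
    """
    if not expected:
        raise ValueError("expected pitch list is empty")
    matches = sum(d == e for d, e in zip(detected, expected))
    return 100.0 * matches / len(expected)
```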

We are generally on schedule, but will need CQ data to be extracted from Voce Vista this upcoming week to stay on track.

Susanna’s Status Report for 3/22/2025

This week I worked on implementing a better version of the previous audio player, and doing so in DearPyGui. I also spent some time working on the pitch analysis algorithm with Melina.

Here’s the basic window for the audio player, with a simple menu, button for file selection, and the window that’s going to be filled with the audio visualization once a file is selected:

Once the user selects an audio file (WAV or MP3), it’s turned into a waveform visualization, normalized so that all audio files are displayed on a y-axis from -1 to 1. If the waveform is longer than ten seconds, it gets split up into multiple lines, making it easier to see the details of longer recordings. This window is scrollable, with a hovering bar at the bottom containing a play/pause button that toggles playback of the audio file. The top menu, file selector button, and name of the selected file also hover so that the user has easy access to navigation.
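
The normalization and ten-second splitting can be sketched as follows (a simplified illustration assuming a mono sample array and its sample rate, separate from the DearPyGui plotting code itself):

```python
import numpy as np

def prepare_waveform(samples, sample_rate, seconds_per_line=10):
    """Normalize a mono waveform to [-1, 1] and split it into display lines.

    Returns a list of arrays, one per on-screen line, each covering at
    most seconds_per_line of audio.
    """
    samples = np.asarray(samples, dtype=float)
    peak = np.max(np.abs(samples))
    if peak > 0:
        samples = samples / peak  # scale so the loudest sample is +/-1
    chunk = seconds_per_line * sample_rate
    return [samples[i:i + chunk] for i in range(0, len(samples), chunk)]
```

Each returned chunk would then be drawn as its own line series in the scrollable window.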

The window also auto-scales when the user adjusts its dimensions:

I’m still running a bit behind due to the switch to the new framework and getting very thrown off by my sickness coming back from spring break. At this point I had planned to have started integrating the CQ data, which I haven’t done. However, this week’s progress has pretty much finished the basic audio playback (I just want to add a visual cursor and a restart button), which means I’ll be able to start adding the CQ waveform next week.

Goals for next week:

  • Add Visual Cursor
  • Add Restart Button
  • Work on parsing VoceVista output file & CQ waveform