Tyler Status Report 4/12/2025

On Monday during class I got a scraper working so that the EGG data output is available to us; I used the pytesseract library to do so. On Tuesday I met with the manufacturers of the EGG and was able to debug the device and get both the EGG signal and the microphone signal working. However, one issue we ran into is that the EGG’s LED indicator for electrode placement is broken, meaning we will not be able to confirm that we are getting the best electrode signal from the larynx placement while calibrating to the user. Another issue is that we are unable to use the EGG’s dual-channel feature, which we believe may be because we need a USB to USB-C adapter to plug into the EGG. Fixing this is not a priority; for now we record with the microphone on one laptop and record the EGG signal on a different laptop, then overlay the two based on the audio timestamp where we start. We press the record buttons together, so the alignment won’t be perfect, but it is usable for the time being.

I am now working on merging the data sheets together in a cleanly formatted way and on recording more data from our singers; we had one two-hour session with a singer, but we would like to get a variety of singers before our meeting on Wednesday.
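For reference, here is a rough sketch of the shape of that scrape; the capture region, polling rate, and CSV output are illustrative placeholders, not the exact values from my script.

```python
# Rough sketch of the pytesseract screen scrape (illustrative only: the capture
# region, polling interval, and output path are placeholders, not real values).
import csv
import time

import pytesseract
from PIL import ImageGrab  # Pillow's screen-capture helper

# Hypothetical bounding box around the part of the VoceVista window showing the CQ value
CQ_REGION = (100, 200, 300, 240)  # (left, top, right, bottom)

def read_cq_once():
    """Capture the CQ readout region and OCR it into a float, or None on failure."""
    image = ImageGrab.grab(bbox=CQ_REGION)
    text = pytesseract.image_to_string(image, config="--psm 7")  # treat as one text line
    try:
        return float(text.strip().rstrip("%"))
    except ValueError:
        return None

def scrape_cq(out_path="cq_log.csv", interval_s=0.5, duration_s=120):
    """Poll the screen every interval_s seconds and log (elapsed time, CQ) rows."""
    start = time.time()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "closed_quotient"])
        while time.time() - start < duration_s:
            cq = read_cq_once()
            if cq is not None:
                writer.writerow([round(time.time() - start, 2), cq])
            time.sleep(interval_s)

if __name__ == "__main__":
    scrape_cq()
```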

As far as progress goes, I believe I am essentially done with my parts; now I am focusing on helping Susanna and Melina with theirs and making sure our frontend design is intact.

For verification of my work, I have been comparing our measured EGG data against the EGG data that was provided to us, as well as against research papers, to ensure that our collection pipeline does not output closed quotients that make no physical sense. I know our EGG data collection is roughly accurate because I watch the closed quotient move in VoceVista and it matches the output of the data scraper I designed. We do not need it to be extremely accurate, since a future version of VoceVista will export this automatically; because we do not have access to that version yet, I designed a crude solution so we can meet our deadlines.

Team Status Report 4/12/2025

The most significant risk for the data collection portion is that we cannot get our dual-channel EGG signal to work, so we need to use two separate laptops. The risk is that we might misalign our data, which would skew the CQ values we compute during singing. We are mitigating this by pressing record at identical timestamps so that we can combine the EGG signal and the audio signal with minimal offset. Worst case, if the misalignment is really bad, we can build a clicker that reads the global time and starts both recordings at exactly the same moment, eliminating the chance of misalignment entirely.
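To make the mitigation concrete, here is a rough sketch of the overlay step; it assumes each laptop exports a CSV with an elapsed-time column measured from when its record button was pressed, and the column names and manual offset parameter are illustrative assumptions.

```python
# Sketch of combining the two laptops' exports onto one timeline (column names
# and the manual offset are assumptions for illustration).
import pandas as pd

def merge_streams(audio_csv, egg_csv, manual_offset_s=0.0):
    """Merge microphone-derived and EGG-derived rows onto a shared timeline.

    manual_offset_s lets us nudge the EGG stream if the two record buttons
    were not pressed at exactly the same instant.
    """
    audio = pd.read_csv(audio_csv).sort_values("elapsed_s")
    egg = pd.read_csv(egg_csv).sort_values("elapsed_s")
    egg["elapsed_s"] = egg["elapsed_s"] + manual_offset_s

    # Attach each audio-side row to the nearest EGG sample within 50 ms
    return pd.merge_asof(audio, egg, on="elapsed_s",
                         direction="nearest", tolerance=0.05)

# merged = merge_streams("mic_laptop.csv", "egg_laptop.csv", manual_offset_s=0.1)
```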

We have been working with singers throughout the entire process, designing the tool based on their needs and overlaying the information in a way they can understand. However, we have only been designing with the same two singers, so we will need to run through the entire project setup and usage with a new singer to see where they may need guidance, and then either write an extensive guide on how to use our tool or make certain parts more intuitive. We want our project to be simple to understand and easy to use for singers, so testing with a wide range of singers will let us catch any issues singers may have in understanding or collecting the data. Our main form of validation will be surveying both the new singers and our existing participants to ensure that we meet our original use-case requirements.

For verification that our data collection is sound, we will compare our measured CQ results against the ideal ranges reported in several research papers for different genders and vocal ranges and check whether our data falls within them.

No changes were made to the schedule or existing design.

3/30/2025 Tyler’s Status Report

This week I was unfortunately unable to get much done because of sickness, but there is a lot to update on. For our electroglottograph data export, we will be able to export it similarly to our pitch data, since the author of Voce Vista has told us he can easily implement it on his end. He said we can expect it in the next update, and we will confirm an exact date soon (early next week). As for the electroglottograph repair, I am in talks with the manufacturer and sent them a video of the problem on Friday. They respond quickly, so I hope that by Monday I will either have a way to fix it or will send the electroglottograph to them for repair. I was also able to get on a Zoom call with Professor Brancaccio to work on the electroglottograph, but since she has a different model it was not very useful for debugging ours.

I think we are all set in terms of Voce Vista and data collection; all we need to do is get our EGG fixed and then we are set for the final presentation. Outside of the EGG, I will help Melina work through how to detect CQ change as well as how to display it to the user.

I am roughly on pace so far; illnesses have thrown me off a little and forced some slight adjustments to our team schedule, but we are still on track. As for my personal schedule adjustments, after deliberating with my team we decided against implementing a database, since all of our data will be stored in Excel spreadsheets in a unique folder for each user, so it will already be organized for them to use.

Melina’s Status Report for 03/29/2025

Schedule

Official Project Management Schedule Updates

  • IN PROGRESS Backend Testing
    • DOING Pitch analysis testing
    • NOT STARTED CQ change analysis testing
  • CANCELLED Streamline Voce Vista
  • IN PROGRESS Inform about Change in CQ

Testing Pitch Analysis

We recently got a new interface for the Shure microphone, which now allows us to record clear audio. Due to illness on the team, uploading the data and recording from Voce Vista were delayed, but they became available today, so this upcoming week will focus on that.

Streamlining Voce Vista

We had previously thought that a command-line option was available to start the Voce Vista app, as it was mentioned in the documentation, but tech support confirmed that this functionality was deprecated. As a result, this task has been cancelled. The general user flow will instead be to follow instructions on the recording page to open and use Voce Vista, rather than pressing a record button in our app directly. There will also be instructions for exporting data. This process could be facilitated by instructing the user to use one or two keyboard shortcuts.

Inform about changes to CQ

Tyler and I are brainstorming what it functionally means to inform the user about changes in CQ. At a minimum, our app graphs average CQ per pitch over time, which Susanna is handling in the frontend. What Tyler and I are thinking about is what additional information about CQ changes over time could be provided. We want to provide useful information beyond what a user could simply infer from looking at the original graph. There are two areas where I could see additional analysis coming in:

1. More analysis (specifics currently undetermined) is added to CQ changes over time for the warm-up.

2. A CQ analysis is added to the repertoire recording. Currently we plan to simply list the measured CQ on top of the audio signal, something like this:

Perhaps by making use of DPG and our pitch analysis algorithm, we can graph the average CQ for each pitch detected in the repertoire. Maybe we could also show them the difference between a CQ average for a pitch in the warm-up versus the repertoire. The main question here is how this would be useful for the user. Would they be informed enough to make a reasonable decision on vocal technique based on this additional analysis? Would the resulting decision have negative consequences?
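As a rough sketch of the first idea, assuming we already have parallel per-timestamp pitch and CQ lists (these data shapes are assumptions, not the final design), the grouping could look like this:

```python
# Illustrative sketch: group CQ samples by the pitch sounding at that moment
# and average them. The parallel-list input format is an assumption.
from collections import defaultdict

def average_cq_per_pitch(pitches, cq_values):
    """pitches and cq_values are parallel lists sampled at the same timestamps."""
    buckets = defaultdict(list)
    for pitch, cq in zip(pitches, cq_values):
        if pitch != "Undefined" and cq is not None:
            buckets[pitch].append(cq)
    return {pitch: sum(vals) / len(vals) for pitch, vals in buckets.items()}

# Comparing a warm-up against the repertoire per pitch (hypothetical usage):
# warmup_avg = average_cq_per_pitch(warmup_pitches, warmup_cq)
# rep_avg = average_cq_per_pitch(rep_pitches, rep_cq)
# deltas = {p: rep_avg[p] - warmup_avg[p] for p in rep_avg if p in warmup_avg}
```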

Susanna’s Status Report for 3/29/2025

Revisiting last week’s goals:

  • Add Visual Cursor
    • I did this by inserting an additional vertical line onto the graph that displays the audio waveform (see the red line in the image below), corresponding to where the time index is while playing. The cursor is built into the runtime loop so that it moves smoothly while the audio is playing, and it’s triggered by the play/pause button. (A condensed sketch of this approach appears after this list.)
  • Add Restart Button
    • I set up this button to return the timestamp of the given song to 0, and to pause it if it’s currently playing. It also resets the cursor position.
  • Work on parsing VoceVista output file & CQ waveform
    • I got somewhat stymied by this, because we don’t actually have an example of a CQ waveform for a full piece. Ideally, we’ll have a clean version of the waveform in spreadsheet form, similar to what we currently have for the pitch detection data. Unfortunately, VoceVista will currently only export this for one cycle of the CQ waveform. Tyler has been in contact with VoceVista about this, and there should be a way of exporting a full file. However, it’s possible that this won’t actually end up working. We’re prepared for that possibility, as there’s also the option of using a Python scraper to compile a sheet out of the discrete chunks of data that VoceVista gives us. As long as the data is formatted well, it should be extremely simple to display in plot form using DearPyGui, just like I did for the audio signal.
    • In the meantime, I’ve made progress on restructuring the way I initially formatted the page to make it straightforward to insert new graph lines once the CQ data comes. This might seem like a minor change, but it ended up taking me some time to figure out how to do properly due to my unfamiliarity with DearPyGui. Basically, I turned the structure of the page into a table with one column. Within each row of the table, multiple graph lines can be inserted. The x-axis running along each row will be time, and the cursor will go over any graph lines in the given row. This structure works well because the table rows delineate which part of the data the cursor should be focused on, while also allowing the flexibility to add or subtract different lines inside of the row (e.g., EGG signal, audio signal, potential future sheet music).
  • Overall
    • Since I don’t currently have CQ data to work with, the playback view is a bit difficult to demo, but the basic version is now complete (though I’d like to add the cursor scroll function). Additionally, we’ve had some shifts in our thinking about how to connect VoceVista to our application (see the team update), so the recording page planning is also still in flux. In the original plan, I was working on data analytics views this week, but I haven’t yet started that, so I’ve pushed it to next week. I’m also pleased with my gradual progress learning about DearPyGui in general. It was a bit of a gamble for us to use, and while I certainly have encountered outdated documentation and a lack of online resources for more granular adjustments I want to make, it’s been immensely helpful for things like automatic scaling and graph functionalities, and I’ve enjoyed getting to learn a new framework.
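For illustration, here is a condensed sketch of the moving-cursor approach in DearPyGui. The wall-clock timer standing in for the real audio position, the placeholder waveform, and the tags are simplifications, not our actual page structure.

```python
# Condensed sketch of the playback cursor: a vertical drag line inside a plot,
# moved every frame. The sine "waveform" and the wall-clock timer are stand-ins
# for the real audio data and playback position.
import math
import time
import dearpygui.dearpygui as dpg

dpg.create_context()

SONG_LENGTH_S = 10.0
xs = [i / 100 for i in range(int(SONG_LENGTH_S * 100))]
ys = [math.sin(2 * math.pi * x) for x in xs]  # placeholder waveform

with dpg.window(label="Playback", width=700, height=300):
    with dpg.plot(label="Audio", width=-1, height=-1):
        dpg.add_plot_axis(dpg.mvXAxis, label="time (s)")
        with dpg.plot_axis(dpg.mvYAxis, label="amplitude"):
            dpg.add_line_series(xs, ys, label="waveform")
        # Vertical cursor line that gets repositioned every rendered frame
        dpg.add_drag_line(tag="cursor", vertical=True, default_value=0.0,
                          color=(255, 0, 0, 255))

dpg.create_viewport(title="Cursor sketch", width=720, height=340)
dpg.setup_dearpygui()
dpg.show_viewport()

start = time.time()
while dpg.is_dearpygui_running():
    elapsed = (time.time() - start) % SONG_LENGTH_S  # stand-in for the audio position
    dpg.set_value("cursor", elapsed)                 # move the cursor
    dpg.render_dearpygui_frame()

dpg.destroy_context()
```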

Goals for next week:

  • Use sample data to more clearly demonstrate the layered table rows in the playback view (for interim demo)
  • Enable cursor scroll
  • Expand page menus and navigation
  • Hammer down specific parameters for data analytics views, and complete one view (e.g., average CQ over time graph) using arbitrary sample data

Team Status Report for 3/29/2025

Risks

We’re continuing to face the risk of some part of the EGG potentially being broken or damaged. As mentioned in the last report, the EGG was able to turn on, but was unable to read a signal through its electrodes, in contrast to prior tests where the electrodes had successfully been able to pick up a signal. As a troubleshooting step, we first wanted to check on the EGG by inputting a signal using our larynx simulator. The larynx simulator plugs directly into the EGG and doesn’t use the electrodes, so this would tell us whether the issue lay with our EGG or our electrodes. Unfortunately, our larynx simulator’s battery was drained, and it took us a couple days to find a replacement battery of the correct type. Once we had replaced the battery, we plugged it into the EGG, which still read no signal, meaning that the issue lay with the EGG rather than the electrodes. As our troubleshooting continues, we’re now draining the battery of our EGG in order to do a complete factory reset. We also reached out to Glottal Enterprises, the manufacturers of the EGG, and are in communication with them about the details of our issue.

Microphone Audio Interface

Initially, we thought that the EGG would act doubly as an audio interface for our microphone: that is, processing the audio signal from our microphone and acting as an intermediary between it and our computer. We made this assumption based on the microphone ports supplied in the EGG. Unfortunately, this did not turn out to be the case: our understanding is that the microphone is used to contribute to EGG data, but the EGG does not act like a normal audio interface. We weren’t able to get the audio signal to our computer via the EGG, and were forced to improvise, using a laptop microphone. This week, we borrowed an audio interface from Professor Sullivan, and tested using it to record with our microphone. We successfully connected our microphone to VoceVista using this interface.

VoceVista Software Connection

One of our concerns has been how we’re going to integrate VoceVista with the backend of our own application. VoceVista documentation mentions an option for triggering a VoceVista recording to start via the command line. This would have been an amazing way to integrate VoceVista seamlessly, preventing the user from having to manually start a VoceVista recording each time they sang. Unfortunately, we were unable to figure out how to get this to work. We reached out to VoceVista, and this turns out to not actually be a currently released feature: the documentation we were looking at was in error. There is an option instead for setting up VoceVista such that it starts recording automatically when the application is opened, which is a potential way that we can minimize the work that the user has to do to start the recording.

Melina’s Status Report for 3/22/2025

Schedule

Official Project Management Schedule Updates

  • COMPLETED Pitch Analysis
  • IN PROGRESS Streamline Voce Vista

Personal Task Updates

  • DONE Propose modification to Pitch Analysis Testing
  • DONE Add normalization support to pitch analysis of vibrato
  • DOING Draft code for streamlining Voce Vista
  • DOING Test pitch analysis accuracy

Pitch analysis has been implemented; however, modifications are expected to follow for the sake of accuracy. This will be determined once we formally test the algorithm. Currently, informal tests have been completed in which we recorded the vocalists singing a C Major scale and “Happy Birthday”. I call these informal tests because, for these recording sessions, we were not able to use the XLR microphone: when it was connected to the EGG, our laptop could not properly detect and record from it. As a result, we decided to record the vocalists with our laptops’ built-in microphones this week. While this allowed me to roughly test the pitch analysis program, I do not expect it to be a fully accurate representation of the algorithm’s accuracy. For a C Major scale, for example, the resulting analysis was as shown below:

We are generally on schedule, but will need CQ data to be extracted from Voce Vista this upcoming week to stay on track.

Susanna’s Status Report for 3/22/2025

This week I worked on implementing a better version of the previous audio player in DearPyGui. I also spent some time working on the pitch analysis algorithm with Melina.

Here’s the basic window for the audio player, with a simple menu, button for file selection, and the window that’s going to be filled with the audio visualization once a file is selected:

Once the user selects an audio file (WAV or MP3), it’s turned into a waveform visualization (normalized so that all audio files are displayed on a y-axis from -1 to 1). If the waveform is longer than ten seconds, it gets split up into multiple lines, making it easier to see the details of longer recordings. This window is scrollable, with a hovering bar at the bottom containing a play/pause button that toggles playback of the audio file. The top menu, file selector button, and name of the selected file also hover so that the user has easy access to navigation.
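Here is a rough sketch of that preparation step, assuming WAV input read via scipy; the function name and the ten-second split length are illustrative of the behavior described above rather than the exact implementation.

```python
# Sketch of the waveform preparation (function name and details are illustrative).
import numpy as np
from scipy.io import wavfile

def load_normalized_segments(path, segment_seconds=10):
    """Load a WAV file, normalize it to [-1, 1], and split it into fixed-length rows."""
    sample_rate, samples = wavfile.read(path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    peak = np.abs(samples).max()
    if peak > 0:
        samples = samples / peak              # y-axis now spans -1 to 1
    step = segment_seconds * sample_rate
    return sample_rate, [samples[i:i + step] for i in range(0, len(samples), step)]

# rate, rows = load_normalized_segments("take1.wav")
# Each row becomes one line in the scrollable waveform view.
```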

The window also auto-scales when the user adjusts its dimensions:

I’m still running a bit behind due to the switch to the new framework and being thrown off by my sickness coming back from spring break. At this point I had planned to have started integrating the CQ data, which I haven’t done. However, my progress this week has essentially finished the basic audio playback (I just want to add a visual cursor and a restart button), which means I can work on adding the CQ waveform next week.

Goals for next week:

  • Add Visual Cursor
  • Add Restart Button
  • Work on parsing VoceVista output file & CQ waveform

Tyler Tan Status Report 3/22/2025

This week was an emotional rollercoaster for me; I ran into every problem I possibly could. To start, on Wednesday when we met with the singers we could not get recordings because the EGG battery had run out, so we could only get pitch data from my computer (the microphone adapter is also connected to the EGG). Then, on Thursday, we accidentally charged the EGG for six hours while it was turned on the entire time, so it did not actually charge and we had to cancel our scheduled meeting with the singers. On Friday, with a full EGG charge, I tested it in the morning and got everything to work except the microphone; but about two hours later, when we went to meet with the singers to record proper data, the EGG’s laryngeal electrode measuring feature could not be used. The picture above shows the front of the EGG: the laryngeal Electrode Placement LED would only signal “too low” regardless of where we put the sensors, even though the EGG Signal display was working properly.

I have emailed Professor Brancaccio, Professor Helou, and the VoceVista author Bodo Maass with questions on how to address this. So far Bodo is the only one who has responded; he said he is happy to help me troubleshoot during a VoceVista coaching session, which we will have to pay for. It might be worth doing, since I can also learn more efficient ways to transfer data out of VoceVista into our software component, as well as how to troubleshoot an EGG for future issues.

I believe I am a little behind, since this week I really wanted to record a lot of data with the singers using the EGG and microphone, and so far we only have my computer’s microphone data, not even data from the Shure mic that we purchased; it should be quick to catch back up with a short, efficient one-to-two-hour session with the singers. Another deliverable I would like to achieve is quicker setup of the EGG: right now it took me over 10 minutes to calibrate and attach everything for the EGG and set up Voce Vista, so I need a little more familiarity with everything. I would also like to start working on storing the EGG data in a more permanent and elegant solution than Excel spreadsheets in a shared Google Drive folder, and start working on a shared database.

Team Status for 3/22/2025

Risks

Right now the most significant risk that could jeopardize our project is the EGG being broken: we can get an EGG signal, but we cannot measure anything with the electrodes, and the EGG reads zero signal when we try to measure with the electrode sensor. To mitigate this, Tyler has emailed both Professor Helou and Professor Brancaccio to go over the EGG setup and troubleshooting methodology. If we can pinpoint the problem, it could be possible to either repair or replace the sensor. Another risk is getting the EGG data into a clean spreadsheet, since VoceVista doesn’t do this automatically; worst case, we can build the spreadsheets manually, since VoceVista lets you transcribe EGG data into a spreadsheet for short time periods.

No changes have been made to the existing design.

Pitch Analysis

Our pitch analysis algorithm has been implemented, but modifications are expected to be made (for the sake of accuracy) once we begin formally testing next week.

Currently, the algorithm starts by importing and processing an Excel sheet exported from Voce Vista. Using the openpyxl library, the program extracts the times and frequencies into separate lists. These lists are then processed to compute an average frequency value across every 38 cells, which translates to roughly 0.5 seconds. Next, the program uses an interval tree (from the intervaltree library) to map each averaged frequency to a pitch note, saved in a list of pitches.

To account for frequency averages that misleadingly represent the transition from one note to the next, we take a second list of averaged frequencies starting from a time slightly offset from the first list, which gives us a second list of pitches. We build the final list of pitches by comparing each value of the first pitch list against the second. If the value at index i in the first list does not match the value at index i or i-1 in the second list, then we use the value from the second list; otherwise, we use the value from the first list. Below we visually represent this attempt to account for averages of transitioning notes.

Finally, the program returns a list of times and pitches.

A “None” time or “Undefined” pitch simply indicates that the last averaged 38 cells included data from empty Excel cells. Since this represents only the last ~0.5 seconds of the audio recording, we are willing to forgo this data, as indicated above.
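For reference, a simplified sketch of this pipeline is below. The spreadsheet layout (time in the first column, frequency in the second) and the few note boundaries shown in the interval tree are assumptions for illustration; the real tree covers the full vocal range.

```python
# Simplified sketch of the pitch-analysis pipeline (the sheet layout and the
# sample note boundaries are illustrative assumptions).
from openpyxl import load_workbook
from intervaltree import IntervalTree

WINDOW = 38  # cells per average, roughly 0.5 s of Voce Vista output

# Hypothetical interval tree mapping frequency ranges (Hz) to note names;
# the real tree would cover the whole vocal range.
pitch_tree = IntervalTree()
pitch_tree[254.2:269.3] = "C4"
pitch_tree[269.3:285.3] = "C#4"
pitch_tree[285.3:302.3] = "D4"

def load_columns(xlsx_path):
    """Read times and frequencies out of the exported Voce Vista sheet."""
    ws = load_workbook(xlsx_path).active
    times, freqs = [], []
    for t, f in ws.iter_rows(min_row=2, max_col=2, values_only=True):
        if t is None or f is None:
            break
        times.append(float(t))
        freqs.append(float(f))
    return times, freqs

def averaged_pitches(times, freqs, offset=0):
    """Average frequency over WINDOW cells (starting at `offset`) and map each to a note."""
    out_times, out_pitches = [], []
    for i in range(offset, len(freqs) - WINDOW + 1, WINDOW):
        avg = sum(freqs[i:i + WINDOW]) / WINDOW
        hits = pitch_tree[avg]
        out_times.append(times[i])
        out_pitches.append(next(iter(hits)).data if hits else "Undefined")
    return out_times, out_pitches

def analyze(xlsx_path):
    """Combine the primary pass with an offset pass to smooth over note transitions."""
    times, freqs = load_columns(xlsx_path)
    t1, p1 = averaged_pitches(times, freqs)
    _, p2 = averaged_pitches(times, freqs, offset=WINDOW // 2)
    final = []
    for i, p in enumerate(p1):
        if i >= len(p2) or p == p2[i] or (i > 0 and p == p2[i - 1]):
            final.append(p)          # first-pass value agrees with the offset pass
        else:
            final.append(p2[i])      # mismatch across a transition: trust the offset pass
    return t1, final
```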