This week was an emotional rollercoaster; every problem I could run into, I ran into. To start, on Wednesday when we met with the singers we could not get EGG recordings because the EGG battery had run out, so we could only capture pitch data from my computer, since the microphone adapter is also connected to the EGG. Then, on Thursday, we charged the EGG for 6 hours, but it was turned on the entire time, so it did not actually charge and we had to cancel our scheduled meeting with the singers. Finally, on Friday, with a full EGG charge, I tested it in the morning and got everything working except the microphone. But about 2 hours later, when we went to meet with the singers to record proper data, the EGG's laryngeal electrode measurement could not be used. The picture above shows the front of the EGG: the Electrode Placement laryngeal LED would only signal "too low" regardless of where we placed the sensors, even though the EGG Signal had a proper display.

I have emailed Professor Brancaccio, Professor Helou, and the VoceVista author Bodo Maass with questions on how to address this. So far, Bodo is the only one who has responded, and he said he is happy to help me troubleshoot during a VoceVista coaching session, which we would have to pay for. It might be worthwhile, since I could also learn more efficient ways to transfer VoceVista data out of VoceVista into our software component, as well as how to troubleshoot the EGG for future issues.

I believe I am a little behind, since this week I really wanted to record a lot of data with the singers using the EGG and microphone, and so far we only have my computer's microphone data, not even the Shure mic that we purchased. However, it should be quick to catch back up with one short, efficient one-to-two-hour session with the singers. Beyond that, another deliverable I would like to achieve is a quicker EGG setup: right now it took me over 10 minutes to calibrate and attach everything for the EGG and set up Voce Vista, so I need a little more familiarity with everything. I would also like to start storing the EGG data in a more permanent and elegant way than Excel spreadsheets in a shared Google Drive folder, and begin working on a shared database to use.
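As a first step toward that database, here is a minimal sketch (my own illustration, not a finalized design) of how exported EGG rows could land in a local SQLite table instead of loose spreadsheets. The table and column names are hypothetical placeholders.

```python
import sqlite3

# Hypothetical local database for EGG exports; the schema is illustrative only.
conn = sqlite3.connect("egg_data.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS egg_samples (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        vocalist TEXT NOT NULL,        -- e.g. "soprano_1"
        recorded_on TEXT NOT NULL,     -- ISO date of the session
        time_s REAL NOT NULL,          -- timestamp within the recording
        frequency_hz REAL,             -- pitch exported from VoceVista
        cq REAL                        -- closed quotient, if available
    )
""")

def insert_rows(vocalist, recorded_on, rows):
    """rows is an iterable of (time_s, frequency_hz, cq) tuples,
    e.g. parsed from a VoceVista Excel export."""
    conn.executemany(
        "INSERT INTO egg_samples (vocalist, recorded_on, time_s, frequency_hz, cq) "
        "VALUES (?, ?, ?, ?, ?)",
        [(vocalist, recorded_on, t, f, c) for (t, f, c) in rows],
    )
    conn.commit()
```

A shared database like this would also make it easier to query warmup data across weeks for the time-series views we have planned.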
Team Status for 3/22/2025
Risks
Right now, the most significant risk that could jeopardize our project is the EGG being broken: we can get an EGG signal, but the electrode measurement reads 0 whenever we try to measure with the electrode sensor. To mitigate this, Tyler has emailed both Professor Helou and Professor Brancaccio to go over the EGG setup and troubleshooting methodology. If we can pinpoint the problem, it may be possible to either repair or replace the sensor. Another risk is getting the EGG data into a clean spreadsheet, since VoceVista doesn’t do this automatically; worst case, we can build the spreadsheets manually, because VoceVista allows you to transcribe the EGG data into a spreadsheet for short time periods.
No changes have been made to the existing design.
Pitch Analysis
Our pitch analysis algorithm has been implemented, but modifications are expected to be made (for the sake of accuracy) once we begin formally testing next week.
Currently, the algorithm starts by importing and processing an Excel sheet exported from Voce Vista. Using the openpyxl library, the program extracts the times and frequencies into separate lists. These lists are then processed to compute an average frequency value across every 38 cells, which translates to roughly 0.5 seconds. Next, the program uses an interval tree (from the intervaltree library) to map each averaged frequency to a pitch note, saved in a list of pitches.
To account for frequency averages that misleadingly represent the transition from one note to the next, we take a second list of averaged frequencies starting from a time slightly ahead of the first list, which gives us a second list of pitches. We build a final list of resulting pitches by comparing each value of the first pitch list against the second: if the value at index i in the first list does not match the value at index i or i-1 in the second list, we use the value from the second list; otherwise, we use the value from the first list. Below we visually represent this attempt to account for averages of transitioning notes.
Finally, the program returns a list of times and pitches.
A “None” time or “Undefined” pitch just indicates that the last window of 38 averaged cells included data from empty Excel cells. Since this represents only the last ~0.5 seconds of the audio recording, we are willing to forgo this data as indicated above.
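To make the flow above concrete, here is a minimal sketch of the averaging-and-offset comparison described in this section. It is an illustration under assumptions, not the exact implementation: the column layout of the export, the 38-cell window constant, and the pre-built note interval tree are stand-ins.

```python
import openpyxl
from intervaltree import IntervalTree

CELLS_PER_WINDOW = 38  # ~0.5 seconds of exported data

# The interval tree maps a frequency range to a note name, e.g.:
#   tree = IntervalTree(); tree[440 * 2**(-0.5/12):440 * 2**(0.5/12)] = "A4"

def load_columns(path, time_col=1, freq_col=2):
    """Assumes times are in the first column and frequencies in the second."""
    sheet = openpyxl.load_workbook(path).active
    times, freqs = [], []
    for row in sheet.iter_rows(min_row=2, values_only=True):
        times.append(row[time_col - 1])
        freqs.append(row[freq_col - 1])
    return times, freqs

def averaged_pitches(freqs, tree, offset=0):
    """Average every 38 cells (starting at `offset`) and map to a note name."""
    pitches = []
    for start in range(offset, len(freqs) - CELLS_PER_WINDOW + 1, CELLS_PER_WINDOW):
        window = [f for f in freqs[start:start + CELLS_PER_WINDOW] if f is not None]
        if not window:
            pitches.append("Undefined")
            continue
        avg = sum(window) / len(window)
        hits = tree[avg]                      # intervals covering this frequency
        pitches.append(next(iter(hits)).data if hits else "Undefined")
    return pitches

def resolve(first, second):
    """Prefer the offset list when the original window straddles a note change."""
    final = []
    for i, p in enumerate(first):
        if i < len(second) and p not in (second[i], second[i - 1] if i > 0 else None):
            final.append(second[i])
        else:
            final.append(p)
    return final
```

The second call to `averaged_pitches` would use a nonzero `offset` (some fraction of the window) to produce the shifted pitch list that `resolve` compares against.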
Team Status Report for 3/15/2025
Due to illness going around the team, some frontend tasks and data exportation have been delayed, but important strides were made in pitch analysis. Our first major data collection event also took place on Wednesday, when we took audio recordings of the vocalists for the purpose of testing pitch analysis. Recordings included a variety of styles including vibrato, staccato, with piano, and without piano. An important observation was made that our opera vocalists naturally lean toward singing vibrato unless specifically instructed to sing staccato. This motivated the extension of the current pitch analysis to support normalization of data in order to extract the fundamental frequency of a vibrato note.
The main challenges we face this upcoming week include extracting and exporting CQ data long-term, adding pitch analysis support for vibrato, streamlining Voce Vista, and adding frontend views for playback and CQ data. Note that streamlining Voce Vista is a new task added to the official schedule and assigned to Melina.
Melina’s Status Report for 3/15/2025
Schedule
Official Project Management Schedule Updates
- IN PROGRESS BLOCKED Pitch Analysis
- COMPLETED Match Microphone Data to EGG
- IN PROGRESS Streamline Voce Vista
Personal Task Updates
- DONE Draft code for pitch analysis
- DOING Propose modification to Pitch Analysis Testing
- DOING Add normalization support to pitch analysis of vibrato
- DOING Draft code for streamlining Voce Vista
Pitch analysis is slightly behind, but very close to being done. Other than that, tasks are on time. It is important to note that the task “Match Microphone Data to EGG” was marked complete ahead of schedule because Voce Vista already outputs the data we need for matching up time and pitch long-term, but only outputs CQ data in the short-term.
Pitch Analysis
Tyler and I collected some recordings of one of the vocalists singing scales and “Happy Birthday” in different styles (vibrato, staccato, with piano, without piano). Shortly after this, almost the whole team was feeling under the weather, so there have been some delays in getting this data uploaded to the drive. As soon as I have access to the new data, I will be able to implement the last part of pitch analysis: support for vibrato. Prof. Sullivan and I believe that if I normalize the data appropriately, I will be able to extract the fundamental frequency around which the pitch varies when a vocalist is singing vibrato. Supporting vibrato would be great for making our product useful for opera singers, who we observed to sing vibrato naturally. Even when asked to sing staccato, occasional slips into vibrato occurred, so providing vibrato support in our pitch detection is in the interest of keeping singers from becoming overly self-conscious. My goal is to have this support complete by the next status report. A modification to pitch analysis testing was also proposed to involve the vocalists and vocal coach, as described in the previous status report: “our testing would likely be modified to include having a vocalist sing a series of notes, such as ‘Happy Birthday’, and having the vocal coach verify that the pitch detected agrees with their perception of the pitch for 90% of the notes sung.” This proposal is awaiting feedback.
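One possible approach (my own sketch, not a settled design) is to smooth the exported frequency trace over roughly one vibrato cycle, so the oscillation averages out and the center frequency remains. The window size below is an assumption based on typical operatic vibrato rates of about 5 to 6 Hz.

```python
def normalize_vibrato(freqs, sample_rate_hz, vibrato_rate_hz=5.5):
    """Moving-average smoothing of a pitch trace to estimate the
    fundamental a vibrato oscillates around.

    freqs           : list of frequency samples (Hz), evenly spaced in time
    sample_rate_hz  : number of frequency samples per second in the export
    vibrato_rate_hz : assumed vibrato rate; ~5-6 Hz is typical for opera
    """
    window = max(1, round(sample_rate_hz / vibrato_rate_hz))  # one vibrato cycle
    smoothed = []
    for i in range(len(freqs)):
        lo = max(0, i - window // 2)
        hi = min(len(freqs), i + window // 2 + 1)
        chunk = [f for f in freqs[lo:hi] if f is not None]
        smoothed.append(sum(chunk) / len(chunk) if chunk else None)
    return smoothed
```

The smoothed trace could then be fed into the existing note-mapping step in place of the raw frequencies.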
Match Microphone Data to EGG
As stated above, this task is being marked as complete, and the team is instead focusing on actually extracting and exporting CQ data long-term in the same format as time and pitches. Tyler and I are working together on this, but he is taking the lead on communicating with Voce Vista support to learn how to export this data as part of his task “Integrate EGG Data”. If further support is needed to match CQ to time stamps, this task would reopen.
Streamlining Voce Vista
Adding support to streamline the use of Voce Vista would make the analysis process easier for our users. Voce Vista does not appear to support the command line, but it has an extensive list of configurable keyboard shortcuts. These shortcuts could be conditionally triggered from a Python script with the keyboard library. This week, I will add functions to trigger Voce Vista functionality, including opening the app and immediately recording upon startup, stopping a recording, and exporting time and pitch data to a specific folder in the repository.
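A minimal sketch of that idea, assuming Voce Vista is launched from a known executable path and that the shortcuts shown have been configured inside Voce Vista; the path and key combinations below are placeholders, not real defaults.

```python
import subprocess
import time

import keyboard  # third-party "keyboard" library (pip install keyboard)

# Placeholder values -- the real path and shortcuts depend on the local
# Voce Vista installation and its configured keyboard shortcuts.
VOCE_VISTA_EXE = r"C:\Program Files\VoceVista\VoceVista.exe"
RECORD_SHORTCUT = "ctrl+r"
STOP_SHORTCUT = "ctrl+shift+s"
EXPORT_SHORTCUT = "ctrl+e"

def open_and_record(startup_wait_s=10):
    """Launch Voce Vista and start recording once the window has loaded."""
    subprocess.Popen([VOCE_VISTA_EXE])
    time.sleep(startup_wait_s)          # crude wait for the app to come up
    keyboard.send(RECORD_SHORTCUT)

def stop_recording():
    keyboard.send(STOP_SHORTCUT)

def export_time_and_pitch():
    """Trigger the configured export shortcut; Voce Vista's own export
    settings decide where the spreadsheet lands."""
    keyboard.send(EXPORT_SHORTCUT)
```

Since the keyboard library sends keystrokes to whatever window has focus, the script would need to make sure Voce Vista is in the foreground before sending a shortcut.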
Susanna’s Status Report 3/15
I’ve unfortunately been knocked out with illness this entire week, and wasn’t able to make project progress. This puts me further behind on my goals. I will more thoroughly assess what needs to be done once I fully recover, hopefully in the next couple of days.
Tyler Status Report 3/15
This week, Melina and I were able to get several different recordings of the opera singers to test various features of the pitch detection. Sadly, the electroglottograph ran out of power, so we were unable to get it running for them, but next week we plan on using it with them. I had another trial run with the electroglottograph on myself so that the next time we have the opera singers it will run smoothly. With the material we got from the opera singer, we focused on segmented singing so we can ensure pitch detection works properly, as well as smooth singing so we can watch the pitch detection transition gradually. For next week, we plan to collect the necessary EGG data and start designing a way to get the EGG data into a spreadsheet in a format similar to the pitch detection output. I have emailed Bodo, the lead designer of Voce Vista, who has been quite helpful in the past, but he has not yet responded. Another option we might consider is paying 50 dollars for an hour of his time to tutor us on Voce Vista so that we can find where the EGG data is stored.
Melina’s Status Report for 3/1/2025
Schedule
Official Project Management Schedule Updates
- COMPLETE Design Report
- IN PROGRESS Pitch Analysis
- NOT STARTED Match Microphone Data to EGG
Personal Task Updates
- DONE Draft Design Review Report with the team
- DONE Ensure the team has a repository set up along with agreement on code style/organizational expectations
- DOING Draft code for pitch analysis
- TODO Propose modification to Pitch Analysis Testing
My tasks are on time, and given that nothing new is currently scheduled, I have Week 8 to complete pitch analysis.
Pitch Analysis
An initial attempt has been made at utilizing Librosa for pitch analysis. I learned how to load an audio file and extract basic information such as tempo, frequencies, and their magnitudes. The current issue is that the pitch detection algorithm outputs a lot of frequencies, some of which appear to be noise. Overall, extracting frequencies at the correct times is difficult; however, after reading Tyler’s updates, I saw that Voce Vista may already output the frequencies in a cleaner manner. This would help the algorithm more accurately map those frequencies to a note based on a standardized chart. The algorithm will have to take into account that there is some natural variance in pitch for the human voice compared to instruments. For example, the standardized chart marks G4 at 392 Hz and G#4 at 415.3 Hz, so an appropriate range has to be considered to distinguish adjacent notes. I’m currently thinking of approaching this by providing some slack in the range of frequencies that map to a note. This means our testing would likely be modified to include having a vocalist sing a series of notes, such as “Happy Birthday”, and having the vocal coach verify that the pitch detected agrees with their perception of the pitch for 90% of the notes sung. Proposing this modification for testing to the team has been noted as a new personal task.
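One way to get that slack essentially for free (a sketch of the idea, not the chosen implementation) is to map a frequency to the nearest equal-temperament note, which implicitly gives every note a band of plus or minus half a semitone around its chart value:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frequency_to_note(freq_hz, a4_hz=440.0):
    """Map a frequency to the nearest equal-temperament note name.

    Rounding to the nearest semitone gives each note a tolerance of
    +/- 50 cents around its chart frequency (e.g. G4 = 392 Hz, G#4 = 415.3 Hz).
    """
    if freq_hz is None or freq_hz <= 0:
        return "Undefined"
    midi = round(69 + 12 * math.log2(freq_hz / a4_hz))  # 69 = MIDI number of A4
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1                              # MIDI 60 -> C4
    return f"{name}{octave}"

# Example: 400 Hz is within half a semitone of G4, so it maps to "G4".
print(frequency_to_note(400.0))   # -> G4
print(frequency_to_note(415.3))   # -> G#4
```

If we later want a stricter or looser tolerance than half a semitone, the interval-tree approach lets us tune the per-note bands explicitly.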
Tyler’s Status Report 3/1/2025
Not a lot was accomplished the week before spring break, as I primarily focused my attention on the design report, but I was able to spend a couple of hours working through VoceVista. Right now I am trying to think of the best way to transfer the information we get from VoceVista into our software component. VoceVista can output files as an Excel sheet or an image, and it has a lot of capabilities in terms of the different data we can output: not only can we export all of the EGG data, we can also export pitch as well as statistical results. Now I need to work with Melina and Susanna to decide what the easiest way is for them to process this data, as well as to find easy ways to transfer the audio recordings out of VoceVista.
Progress is mostly on schedule. I feel like I could be a little more familiar with the VoceVista interface, but I will get better as the semester goes on. The focus now will be finding ways to turn the VoceVista output files into usable information.
Team Status Report for 3/1/2025
Sensor Adhesive
We had previously thought that the electrode gel, which is applied to the neck sensors before the sensors are fixed to the neck, might function as an adhesive. However, the electrode gel serves as an aid to conductivity and does not have adhesive properties. Established methods for attaching the sensors include a neck band, tape, and simply having the user hold the sensors in place. Holding the sensors is obviously cumbersome and a violation of our use-case requirement of comfort. Discussing this issue with our vocalists, we confirmed that a neck band would also be uncomfortable and could impede movements necessary for singing freely. As a result, we are purchasing tape specifically designed to adhere to skin to hold the sensors in place, which will hopefully maximize both comfort and secure sensor placement.
Schedule for SoM Meetings
We determined a schedule for our meetings with our School of Music partners following break. We’ll start with a week of practicing using the electroglottograph with vocalist users, then start gathering weekly warmup data so that our final presentation can include the data over time for five weeks for two vocalists. At the same time, we’ll start experimenting with repertoire recording, likely with piano accompaniment, in week 10.
Future Test Users
While we have the opportunity to work with two vocalists in the School of Music (a soprano and a mezzo-soprano), our hope is to ultimately test our product with a larger number of vocalists to get more meaningful data for our user feedback survey. This will depend on whether more vocalists are willing to sign up for the music course we’re currently a part of. There’s also the question of whether we’d be interested in expanding the target audience of the product slightly to include singers who are trained but aren’t necessarily vocal majors or opera singers. Even though our current use case is more specific, other singers might still be able to offer feedback on things like the ease of setting up the device. This decision, too, will depend largely on how many additional vocalists join the class.
Data Privacy
One issue we were considering is that of securing user data, since some users might consider their CQ data or vocal recordings to be private. However, with the advice of Tom and Fiona, we’ve concluded that this actually falls outside of our use case requirements: like any recording software, this application is meant to be used on personal devices, or in a controlled lab setting, and all the data is stored locally. As a result, we will not be worrying about the encryption of user data for our application.
Product Solution Meeting a Specified Need
A was written by Melina, B was written by Tyler, C was written by Susanna
Section A
Our product solution considers global factors by including users who are not in academia or do not consider themselves technologically savvy. Although our product utilizes specialized hardware and software, our solution includes a dedicated setup page that aims to facilitate the use of these technologies for users who are assumed to have no prior experience with them. The feature pages of the app will also include more information about how to interpret the CQ in the context of a controlled-exercise time-series analysis and a distinct repertoire. We have also considered that accessibility to our product beyond Pittsburgh is limited by the need to purchase EGG hardware and Voce Vista software; our product solution makes use of a shared lab that can be duplicated with these purchases in any other local region.
Section B
Some of the cultural factors our product solution takes into account are how opera singers normally sing and what the accepted practice is. We spent a lot of time researching the vocal pedagogy of opera singers to ensure that the data we output does not contradict what the user’s vocal coach instructs them to do. On top of that, we have taken into account that it is usually taboo to try to get a singer to change the form of how they sing, and have decided instead to simply present the information in a useful way so that the opera singer can decide whether or not to make changes, rather than suggesting form changes ourselves.
Section C
The physical components used in this product (electroglottograph, connector cables, microphone, etc) were created by extracting materials from natural sources. While the overall goal of the project is not directly related to environmental concerns, the overall impact of the product can be minimized by using durable and reusable components where possible. Notably, we found a preexisting electroglottograph to borrow rather than buying or building our own. This certainly saved us considerable cost and effort, but is also notable for significantly reducing the amount of new material that went into building the project. While we did need to purchase a microphone new, we purchased a high-quality model that will be able to be reused in future projects.
Susanna’s Status Report for 3/1/2025
Well, I significantly overestimated what my bandwidth would be this week before spring break. I spent a lot of time working on our design report, and given that my spring break travel started on Thursday, I didn’t end up having time for much else.
MVC vs MVVM
One thing that I did more research into this week was the model for our software architecture. Originally, I had conceived of our backend as MVC (Model View Controller) architecture due to my familiarity with this paradigm from my experience developing web applications. However, looking into it a bit more, it turns out that the standard for desktop applications is MVVM (Model View Viewmodel).
Basically, MVVM introduces complete separation of concerns between the GUI (View) and the database and program logic (Model), with the Viewmodel acting as a mediator. This will make it easy for us to develop these elements separately. Plus, it’s a reactive architecture: the Viewmodel automatically responds to changes in the Model, and likewise the View responds to changes in the Viewmodel, which will be useful for real-time updating, like the animation of the cursor that scrolls over the music. MVC is a similar paradigm, but less strictly enforced and more suited to web development. Of course, for either paradigm, it’s up to us how strict we want to be, and there are always options to customize. Tentatively, I think this will be a helpful general framework for us to follow.
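To make the separation concrete, here is a tiny sketch of the pattern in plain Python. The class names and the change-notification mechanism are illustrative, not our actual design; the View layer would be the DearPyGui code, which only ever talks to the Viewmodel.

```python
class RecordingModel:
    """Model: owns the data and knows nothing about the GUI."""
    def __init__(self):
        self._listeners = []
        self.pitches = []          # e.g. [("0.5", "G4"), ...]

    def subscribe(self, callback):
        self._listeners.append(callback)

    def add_pitch(self, time_s, note):
        self.pitches.append((time_s, note))
        for cb in self._listeners:
            cb(self.pitches)       # notify whoever is watching

class RecordingViewmodel:
    """Viewmodel: turns Model data into display-ready values and
    exposes them to the View."""
    def __init__(self, model, on_update):
        self.model = model
        self.on_update = on_update
        model.subscribe(self._model_changed)

    def _model_changed(self, pitches):
        # Format for display; the View never sees raw Model structures.
        self.on_update([f"{t}s: {note}" for t, note in pitches])

# The "View" here is just a print; in the real app it would be a
# DearPyGui callback that updates a widget.
def render(lines):
    print("\n".join(lines))

model = RecordingModel()
vm = RecordingViewmodel(model, on_update=render)
model.add_pitch("0.5", "G4")   # the View re-renders automatically
```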
Last Week’s Goals Revisited
- Write my parts for the Design Review report – COMPLETE
- Research & make a decision about DearPyGUI (ASAP) – COMPLETE
- Given the sorts of data analytics we will need in the application, and our desire for a flexible and engaging user interface, we decided that DPG is our best option.
- Better documentation for code – IN PROGRESS
- started README file, but haven’t tested it on other devices
- Record Screen File Saving – NOT STARTED
- (Optional) Playback Screen Upgrades – NOT STARTED
Goals For Next Week
- Better documentation for code
- Finish README file for setting up dependencies on different OS
- Re-implement basic menu, record screen, and playback in DearPyGui
- Record Screen (see the sketch after this list)
- User can choose to either save/discard a recording instead of auto-saving
- User can customize naming a recording upon saving
- (Optional) Playback Screen
- Add cursor that moves while playing
- Let user drag cursor to various points in the playback
- Add separation of signal into rows + scrolling
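As a starting point for the record-screen goals above, here is a minimal DearPyGui sketch of the save/discard flow with a customizable recording name. The window layout, tags, and callbacks are placeholders; the real screen will hook into our actual recording pipeline and file-saving logic.

```python
import dearpygui.dearpygui as dpg

# Hypothetical callbacks -- real audio handling and save paths live elsewhere.
def save_recording(sender, app_data, user_data):
    name = dpg.get_value("rec_name_input") or "untitled"
    print(f"Would save recording as '{name}.wav'")   # placeholder for real save
    dpg.configure_item("save_modal", show=False)

def discard_recording(sender, app_data, user_data):
    print("Recording discarded")                      # placeholder for cleanup
    dpg.configure_item("save_modal", show=False)

dpg.create_context()

with dpg.window(label="Record Screen", width=400, height=200):
    dpg.add_text("Recording finished.")

# Modal asking the user to save or discard instead of auto-saving.
with dpg.window(label="Save recording?", modal=True, show=True,
                tag="save_modal", width=300, height=120):
    dpg.add_input_text(label="Name", tag="rec_name_input",
                       default_value="warmup_take_1")
    with dpg.group(horizontal=True):
        dpg.add_button(label="Save", callback=save_recording)
        dpg.add_button(label="Discard", callback=discard_recording)

dpg.create_viewport(title="Vocal Analysis", width=450, height=250)
dpg.setup_dearpygui()
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
```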