Lucas’s Status Report for 4/26

This week, I continued to put finishing touches on our game and made some final design decisions. I did some debugging to clean up the game’s code, added a color palette, and corrected the code that fetches JSON files, ensuring that it fetches the file with the right name from the correct location in the user’s filesystem.

This coming week I’ll need to focus on making sure the game works as a unit – this’ll take lots of debugging and playtesting. I’ll also try to connect a MIDI keyboard to make the game a bit more engaging for the player, and I’ll need to finish up the final documents – the poster, video, and final report.

Michelle’s Status Report for 4/26

This week, I worked on trying to analyze the rhythm of just the vocals of pop songs. I used a vocal separation technique that builds a similarity matrix to identify repeating elements, computes a repeating spectrogram model from the median of those elements, and then extracts the repeating patterns with time-frequency masking. The separation is shown in the example below:

Vocal separation of Edith Piaf’s La Vie en Rose
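
For reference, the separation follows the same pattern as librosa’s documented vocal separation example; below is a minimal sketch of that approach (the filename, margins, and window width are placeholder assumptions, not necessarily the exact values I used):

```python
import numpy as np
import librosa

# Sketch of similarity-based vocal separation; parameters are assumptions.
y, sr = librosa.load("song.wav")  # placeholder filename
S_full, phase = librosa.magphase(librosa.stft(y))

# Compare each frame to similar frames elsewhere in the song (cosine similarity)
# and aggregate with the median to model the repeating background.
S_filter = librosa.decompose.nn_filter(
    S_full,
    aggregate=np.median,
    metric="cosine",
    width=int(librosa.time_to_frames(2, sr=sr)),
)
S_filter = np.minimum(S_full, S_filter)

# Soft time-frequency masks split the spectrogram into repeating background and vocals.
margin_bg, margin_voc, power = 2, 10, 2
mask_bg = librosa.util.softmask(S_filter, margin_bg * (S_full - S_filter), power=power)
mask_voc = librosa.util.softmask(S_full - S_filter, margin_voc * S_filter, power=power)

# Reconstruct the isolated vocal track so the rhythm detection can run on it alone.
y_vocals = librosa.istft(mask_voc * S_full * phase)
```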

Using this method, there is still some significant bleeding of the background into the foreground. Additionally, audio processing with vocal separation takes 2.5 minutes on average for a 5-minute song, compared to 4 seconds without it. Even when running the rhythm detection on a single-voice a cappella track, it does not perform as well as it does on a piano or guitar song, for example, since sung notes generally have less distinct onsets. Thus I think users who want to play the game with songs that include vocals are better off using the original rhythm detection and refining the result with the beat map editor, or building a beat map from scratch in the editor.

I am on schedule and on track with our goals for the final demo. Next week, I plan to conduct some user testing of the whole system to validate our use case requirements and ensure there are no bugs in preparation for the demo.

Lucas’s Status Report for 4/19

This week I finished up integration and improved some basic game mechanics. I completed the transition between the main menus and the game itself: a selection in the menu fetches the corresponding JSON and song and begins a game accordingly. The game also returns to the menu when complete.

I also added some basic improvements – animations on hits, a score display, and synchronization with the music. I tested on a number of different audio/JSON files generated by Michelle’s algorithm, and so far the music seems to synchronize well with the game.

In the future I want to have others playtest the game and make adjustments based on their comments, as well as add a better and more engaging UI. Next week I’ll continue to debug and improve the game’s general design, and implement any changes suggested by playtesters.

Yuhe’s Status Report for 4/19

This week, I tested and stabilized the Beat Map Editor and integrated it with our game’s core system. I improved waveform rendering sync, fixed note input edge cases, and ensured exported beat maps load correctly into the gameplay module. I worked with teammates to align beat map formats and timestamps. I also stress-tested audio playback, waveform scrolling, and note placement on both Ubuntu and Windows. Due to virtual audio issues on Ubuntu, I migrated the project to Windows, where SFML audio runs reliably using WASAPI. I updated the build system, linked SFML and spdlog, and verified playback stability across formats. The editor now delivers consistent UI latency, note saving/loading, and waveform visualization, meeting our design metrics.

What Did I Learn
To build and debug the Beat Map Editor, I learned C++17 in depth, including class design, memory safety, and STL containers. I switched from Unity to SFML to get better control and performance, and I studied SFML docs and examples to implement UI components, audio playback, waveform rendering with sf::VertexArray, and real-time input handling. I learned to parse sf::SoundBuffer and extract audio samples to draw time-synced waveforms. I also learned to serialize beatmaps using efficient C++ data structures like unordered_set for fast duplicate checking and real-time editing.

I faced many cross-platform development challenges. Ubuntu’s VM environment caused SFML audio issues due to OpenAL and ALSA incompatibilities. I debugged error logs, searched forums, and simplified my CMake files before deciding to migrate to Windows. There, I set up a native toolchain using Clang++ and statically linked dependencies with vcpkg and built-in CMake modules. Audio playback became reliable under Windows using WASAPI.

Version control is key to our project, and I picked up Git branching, rebasing, and conflict resolution to collaborate smoothly with teammates. I integrated my code with others’ systems, learning how to standardize file formats and maintain compatibility. I used YouTube tutorials, GitHub issues, and online forums to solve low-level bugs, and these informal resources were often more helpful than the official docs. I also learned many Linux and Windows commands, C++ syntax, SFML function use cases, Git tricks, and methods for resolving dependency issues from GPT. Overall, I learned engine programming, dependency management, debugging tools, and teamwork throughout this semester.

Lucas’ Status Report for 4/12

This week, I worked primarily on integration and debugging. I added an audio component to the game loop itself, allowing the music and game to play simultaneously. I focused on integrating Yuhe’s parts of the game with mine, adding a way to transition from the menus into the game itself by selecting the song the user would like to play. On a song request, the game must read which song was selected and then fetch the associated audio file and JSON representation of the game, which I still need to find a way to store efficiently.

Next week I’ll finish integration and audio synchronization, allowing for a seamless transition from menu to game and ensuring that the game that loads matches the song the user requested.

As far as testing and verification, since it’s a bit more difficult to quantitatively test a game like this, I’ll start with some user playtesting – I’ll gather feedback and try to incorporate it into the game. I’ll then measure performance, primarily FPS, and ensure that it meets the 30+ FPS target outlined in our initial design requirements. Further, I’ll likely need to play the game with a number of different audio files, ensuring that the notes and music stay synced and that there are no errors in parsing the JSON files.

Michelle’s Status Report for 4/12

This week, I continued testing and refining the rhythm analysis algorithm. I tested a second version of the algorithm that weights the standard deviation of the onset strengths more heavily when determining whether to count a peak as a note. This version is much more accurate across various tempos, as shown in the figure below; these are the results of testing a self-made single-clef piano composition. The first version had more false positives at very slow tempos and more false negatives at very fast tempos, the missed notes typically being 32nd notes or some 16th notes. The second version, tested on the same piece, performs much better, only missing a few 32nd notes at very fast tempos.
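
Roughly, the idea behind V2 looks like the sketch below; the weighting factor and the simple local-peak test are placeholder assumptions rather than the tuned values in the actual version:

```python
import numpy as np
import librosa

y, sr = librosa.load("composition.wav")  # placeholder test file
onset_env = librosa.onset.onset_strength(y=y, sr=sr)
times = librosa.times_like(onset_env, sr=sr)

# The threshold scales with the spread of the onset strengths, so the peak test
# adapts to how spiky the envelope is overall.
k = 1.5  # assumed weighting on the standard deviation
threshold = onset_env.mean() + k * onset_env.std()

note_times = [
    times[i]
    for i in range(1, len(onset_env) - 1)
    if onset_env[i] > threshold
    and onset_env[i] >= onset_env[i - 1]
    and onset_env[i] >= onset_env[i + 1]  # local peak
]
```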

The verification methodology involves creating compositions in MuseScore and generating audio files to test the processing algorithm on. This way, I have an objective ground truth for the tempo and rhythm and can easily manipulate variables such as the instrument, dynamics, time signature, etc. and see how they affect the accuracy. Additionally, I also test the algorithm on real songs, which often have more noise and more blended-sounding notes. Using a Python program, I can run the analysis on an uploaded song, then play the song back alongside an animation that blinks on the extracted timestamps and record any missed or added notes. To verify that my subsystem meets the design requirements, the algorithm must capture at least 95% of the notes of single-instrument songs between 50 and 160 BPM without adding any extra notes.
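
A simplified sketch of that comparison, with placeholder times and an assumed matching tolerance:

```python
def score_detection(detected, truth, tol=0.05):
    """Capture rate and count of extra notes, matching within +/- tol seconds."""
    captured = sum(any(abs(d - t) <= tol for d in detected) for t in truth)
    extra = sum(all(abs(d - t) > tol for t in truth) for d in detected)
    return captured / len(truth), extra

# Placeholder example: ground-truth times from the MuseScore score vs. detected onsets.
truth_times = [0.0, 0.5, 1.0, 1.5, 2.0]
detected_times = [0.01, 0.49, 1.02, 1.51, 1.99]
rate, extras = score_detection(detected_times, truth_times)
print(rate, extras)  # target: rate >= 0.95 and extras == 0 for single-instrument songs, 50-160 BPM
```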

Comparing results of V1 and V2 on a piano composition created in MuseScore

I also tested an algorithm that uses a local adaptive threshold instead of a global one. This version uses a sliding window to compare onset strengths more locally, which allows the algorithm to adapt over the course of a piece, especially when there are changes in dynamics. The tradeoff is that it can be more susceptible to noise.
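
A sketch of the sliding-window idea (the window size and scale factor here are assumptions):

```python
import numpy as np

def adaptive_peaks(onset_env, times, window=20, k=1.5):
    """Keep frames that are local peaks and exceed a threshold computed from the
    mean and standard deviation of the surrounding window of frames, rather than
    from the whole piece."""
    onset_env = np.asarray(onset_env)
    picks = []
    for i in range(len(onset_env)):
        lo, hi = max(0, i - window), min(len(onset_env), i + window + 1)
        local = onset_env[lo:hi]
        if onset_env[i] >= local.max() and onset_env[i] > local.mean() + k * local.std():
            picks.append(times[i])
    return picks

# Usage with the onset envelope and frame times from librosa:
# note_times = adaptive_peaks(onset_env, times)
```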

I am on track with the project schedule. I think the current version is sufficient for the MVP of this subsystem, so further work will be more extensive testing and stretch goals for more complex music. I have begun creating compositions with even more complex rhythms, including time signature changes, which I plan to test V2 on next week. I will also test the algorithm on pieces with drastic dynamic changes, and I plan to experiment more with the minimum note length. Since V2 produces fewer false positives, I may be able to decrease it from the current 100 ms to accommodate more complex pieces. Additionally, I want to test a version that uses the median absolute deviation instead of the standard deviation to see if it outperforms V2, since that method is less sensitive to extreme peaks.
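
For that median-absolute-deviation variant, the threshold would look roughly like this (the scale factor is an assumption):

```python
import numpy as np

def mad_threshold(onset_env, k=3.0):
    """Threshold built from the median absolute deviation rather than the standard
    deviation, so a handful of extreme peaks has little effect on it."""
    med = np.median(onset_env)
    mad = np.median(np.abs(onset_env - med))
    return med + k * mad
```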

Lucas’ Status Report for 3/29

This week I focused on finishing the integration of my game with the signal processing by adding a functioning JSON parser that allows the output from the music analysis to be played as a game. I also added a results splash screen that displays at the end of each game, with a few more detailed statistics and graphs that I plan to expand later.

Next week I’d like to finish integration to allow for a gateway between the main menus and the game itself, ensuring that each song saved by the user can be clicked on and played. I’d also like to clean up some UI and make the game more visually appealing.

Michelle’s Status Report for 3/29

This week, I continued fine-tuning the audio processing algorithm. I continued testing with piano and guitar and also started testing voice and bowed instruments. These are harder to extract the rhythm from since the articulation can be a lot more legato. If we used pitch information, it might be possible to distinguish note onsets in slurs, for example, but this is most likely out of scope for our project.

Also, there was a flaw in calculating the minimum note length from the estimated tempo: for a song that most people would consider 60 BPM, librosa would sometimes estimate 120 BPM, which is technically equivalent, but the calculated minimum note length would then be much smaller and result in a lot of “double notes”, i.e. note detections directly after one another that actually come from one more sustained note. For the game experience, I believe it is better to have more false negatives than false positives, so I think a fixed minimum note length will generalize better. A threshold of 0.1 seconds seems to work well.
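
The fixed-threshold filtering itself is simple; a rough sketch (names other than the 0.1 s gap are placeholders):

```python
MIN_NOTE_GAP = 0.1  # seconds; fixed instead of derived from the estimated tempo

def drop_double_notes(note_times, min_gap=MIN_NOTE_GAP):
    """Drop any detection that follows the previously kept note by less than
    min_gap, suppressing "double notes" produced by one sustained note."""
    kept = []
    for t in note_times:
        if not kept or t - kept[-1] >= min_gap:
            kept.append(t)
    return kept
```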

Additionally, in preparation for integrating the music processing with the game, I added some more information to the JSON output that bridges the two parts. Based on the number of notes detected at a given timestamp, the lanes from which the tiles will fall are chosen at random.

Example JSON output
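
As a rough illustration of how such an output could be assembled (the field names and number of lanes are assumptions, not our exact schema):

```python
import json
import random

def to_beatmap(onsets, num_lanes=4):
    """onsets: list of (timestamp_seconds, note_count) pairs. Each timestamp gets
    note_count distinct lanes chosen at random."""
    notes = [
        {"time": round(t, 3), "lanes": random.sample(range(num_lanes), k=min(count, num_lanes))}
        for t, count in onsets
    ]
    return json.dumps({"notes": notes}, indent=2)

print(to_beatmap([(0.52, 1), (1.04, 2), (1.57, 1)]))
```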

My progress is on schedule. Next week, I plan to finalize my work on processing the rhythm of single-instrument tracks and meet with my teammates to integrate all of our subsystems together.

Lucas’ Status Report for 3/22

This week, I continued to make the game into an actual game using the new engine. As I (maybe) said last week, the engine doesn’t cover much of what Unity did, which means a lot more tedious work drawing and keeping track of the data structures that hold information about game elements. I was able to add back features like timing, scoring, tracking multiple note blocks at the same time, and the beginnings of a JSON parser that will generate the actual beat map.

The game looks and feels a lot more like an actual game now, barebones as it is. Next week, I’ll finish up the JSON parser to fully integrate the signal processing aspect into the game and add more visual feedback to the game/UI elements that should make it more engaging for the player. I’ll also add a gateway between the main menus and the game itself, allowing for more seamless transitioning between parts of the game.

Michelle’s Status Report for 3/22

This week I continued testing my algorithm on monophonic instrumental and vocal songs with fixed or varying tempo. I ran into some upper limits with SFML in terms of how many sounds it can keep track of at a time. For longer audio files, when running the test, both the background music and the generated clicks on note onsets play perfectly for about thirty seconds before the sound starts to glitch, then goes silent and produces this error:

It seems that there is an upper bound on the number of SFML sounds that can be active at a time, and after running valgrind it looks like there are some memory leak issues too. I am still debugging this, trying to clear click sounds as soon as they finish playing and implementing suggestions from forums. However, this is only a problem with testing, since I am trying to play probably hundreds of metronome clicks in succession; it will not be a problem in the actual game, where we will only play the song and maybe a few sound effects. If the issue persists, it might be worthwhile to switch to a visual test, which would be closer to the gameplay experience anyway.

Next week I plan to try to get the test working again, try out a visual test method, and work with my team members on integration of all parts. Additionally, after a team discussion, we think it may be best to leave more advanced analysis of multi-instrumental songs as a stretch goal and focus on the accuracy of monophonic songs for now.