Team Status Report for 4/26

Summary of Individual Contributions
Michelle focused on improving rhythm detection by analyzing vocals separately from instrumentals. She implemented a vocal separation technique using a similarity matrix and median spectrogram modeling, although testing revealed significant background noise bleeding and high latency (~2.5 minutes processing for a 5-minute song). After thorough testing across multiple instruments and vocal recordings, Michelle concluded that rhythm detection on vocals is generally less accurate than on instrumental tracks. Based on this finding, the team decided not to include vocal separation in the final system. Michelle is on track for the final demo and will conduct full system validation next week.

Lucas concentrated on polishing and stabilizing the gameplay system. He performed debugging across various game components, added a cohesive color palette for better visual design, and fixed issues in JSON file loading to ensure the game correctly fetches user data. Lucas plans to spend the final week on thorough system-wide debugging, playtesting, and attempting to integrate a MIDI keyboard to enhance gameplay engagement. He is also preparing final project deliverables, including the poster, video, and report.

Yuhe worked on validating and stabilizing the Beat Map Editor. She conducted user testing with four testers (all with programming and music backgrounds) and received positive feedback on the editor’s usability and responsiveness. Yuhe manually created two new beatmaps (for Bionic Games – Short Version and Champion by Cosmonkey) and began creating a third classical piece beatmap to demonstrate genre versatility. She stress-tested the editor across dense rhythmic patterns and verified consistent note placement, waveform rendering, and file persistence on Windows using SFML.

Unit Tests Performed
1. Michelle’s Unit Tests:

  • Rhythm detection accuracy on simple, self-composed pieces (whole to 16th notes, 50–220 BPM) for piano, violin, guitar, voice.
  • Rhythm detection with dynamic variations (pianissimo to fortissimo).
  • Rhythm detection on complex, dynamic-tempo self-composed pieces (whole to 64th notes, accelerandos/ritardandos).
  • Rhythm detection accuracy (aurally evaluated) on real single-instrument pieces:
  • Bach Sonata No. 1, Bach Partita No. 1, Bach Sonata No. 2, Clair de Lune, Moonlight Sonata, Paganini Caprice 17/19.
  • Rhythm detection accuracy on real multi-instrument pieces:
  • Brahms Piano Quartet No. 1, Dvorak Piano Quintet, Prokofiev Violin Concerto No. 1, Guitar Duo, Retro Arcade Song, Lost in Dreams, Groovy Ambient Funk.
  • Rhythm detection with and without vocal separation on vocal pieces:
  • Piano Man, La Vie en Rose, Non Je ne Regrette Rien, Birds of a Feather, What Was I Made For, 小幸運.
  • Latency tests for all rhythm detection cases.

2. Yuhe’s Unit Tests:

  • UI responsiveness testing (<50ms input-to-action latency).
  • Beatmap save/load time (≤5s for standard songs up to 7 minutes).
  • Waveform synchronization testing across different sample rates and audio lengths.
  • Stress testing dense note placement, waveform scrolling, and audio playback stability.
  • Manual verification of beatmap integrity after saving/loading.

3. Lucas’s Unit Tests:

  • JSON file loading and path correctness verification.
  • Game UI color palette integration and visual consistency.
  • Basic gameplay functional debugging (note spawning, input handling, scoring).

Overall System Tests

  • Full gameplay flow validation: Music upload ➔ Beatmap auto-generation ➔ Manual beatmap editing ➔ Gameplay execution.
  • Cross-platform stability testing (Ubuntu/Windows) for Beat Map Editor and core game engine.
  • Audio playback stress testing across multiple hardware setups.
  • End-to-end latency and responsiveness validation under normal and stress conditions.
  • Early user experience feedback collection on usability and intuitiveness.

Findings and Design Changes
Vocal Separation Analysis:
Testing revealed that rhythm detection accuracy on vocals is significantly worse than on instruments. Vocal separation also introduced unacceptable latency (~2.5 minutes vs. 4 seconds without).
➔ Design Change: We decided not to incorporate vocal separation into the audio processing pipeline.

Beat Map Editor Validation:
Manual beatmap creation and user testing confirmed that the editor meets design metrics for latency, waveform rendering accuracy, and usability.
➔ Design Affirmation: Current editor architecture is robust; minor UX improvements (e.g., keyboard shortcuts) may be added post-demo.

Game Stability and File Handling:
Debugging of JSON fetching and general gameplay components improved reliability and reduced potential user-side errors.
➔ Design Improvement: Standardized file paths and error handling for smoother gameplay setup.

Yuhe’s Status Report for 4/26

This week, I focused on user testing and manual validation of the Beat Map Editor system. I conducted testing sessions with four friends, all of whom have programming and music backgrounds. Each tester independently used the editor and provided feedback based on their experience. Overall, the feedback was very positive—users found the UI intuitive, easy to navigate, and effective even without prior walkthroughs. They noted that note placement, waveform scrolling, and real-time playback behaved consistently and responsively. Some minor improvement suggestions, such as adding keyboard shortcuts or additional visual indicators, were collected for potential future refinements.

In parallel with user testing, I used the editor extensively myself to manually create and validate two new beatmaps. The first song I mapped was Bionic Games – Short Version, and the second was Champion by Cosmonkey. Both tracks are short electronic pieces with relatively complex rhythmic patterns, making them ideal candidates to stress-test the editor’s note placement, timing precision, and multi-lane support. During manual creation, I verified that waveform visualization remained accurate even in high-density sections and that all notes saved and reloaded correctly without timestamp drift or data corruption.
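
The drift check itself amounts to a simple round-trip comparison. Below is a hedged sketch of that kind of check, assuming the beatmaps are JSON files whose notes carry a time field; the field names are placeholders for whatever the editor's actual schema uses.

```python
import json

def roundtrip_ok(path_a, path_b, tol=1e-3):
    """Compare note timestamps between a saved and a reloaded beatmap.
    'notes' and 'time' are placeholder field names, not the real schema."""
    with open(path_a) as fa, open(path_b) as fb:
        a, b = json.load(fa), json.load(fb)
    ta = sorted(n["time"] for n in a["notes"])
    tb = sorted(n["time"] for n in b["notes"])
    return len(ta) == len(tb) and all(abs(x - y) <= tol for x, y in zip(ta, tb))
```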

To prepare for the final demo, I also started working on a third beatmap for a classical music piece. This addition is intended to diversify the demo’s musical range and demonstrate the editor’s ability to handle different genres with distinct rhythmic structures. Manual beatmapping of a classical piece will help further validate the editor’s precision and flexibility on audio with a less regular beat structure.

Beyond content creation, I continued running informal internal tests to validate persistent data integrity, ensuring that beatmaps save and reload perfectly between sessions. I also monitored playback stability, waveform scrolling behavior, and editing UX consistency while stress-testing dense beat sequences. The editor remains stable and performant on Windows using SFML, and no further platform migration issues were encountered. All tests and newly created beatmaps confirmed that the editor continues to meet project design metrics for responsiveness, usability, and performance.

Lucas’s Status Report for 4/26

This week, I continued to put finishing touches on our game and made some final design decisions. I did some debugging to clean up the game’s code, added a color palette, and corrected the code that fetches JSON files, ensuring that it fetches the file with the right name from the correct location in the user’s filesystem.

This coming week I’ll need to focus on making sure the game works as a unit – this’ll take lots of debugging and playtesting. I’ll also try to connect a MIDI keyboard to make the game a bit more engaging for the player. Finally, I’ll need to finish up the final documents – the poster, video, and final report.

Michelle’s Status Report for 4/26

This week, I worked on trying to analyze the rhythm of just the vocals of pop songs. I used a vocal separation technique that builds a similarity matrix to identify repeating elements, computes a repeating spectrogram model from the median, and extracts the repeating patterns with time-frequency masking. The separation is shown in an example below:

Vocal separation of Edith Piaf’s La Vie en Rose

Using this method, there is still some significant bleeding of the background into the foreground. Additionally, the audio processing with vocal separation takes on average 2.5 minutes for a 5-minute song, compared to 4 seconds without it. Even when running the rhythm detection on a single-voice a cappella track, the rhythm detection does not perform as well as it does on a piano or guitar track, for example, since sung notes generally have less distinct onsets. Thus I think users who want to play the game with songs that include vocals are better off using the original rhythm detection and refining with the beat map editor, or using the editor to build a beat map from scratch.
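
For reference, here is a minimal sketch of this style of separation, assuming the librosa implementation of repetition-based filtering (nearest-neighbor filtering with median aggregation plus soft time-frequency masks). The file name, margins, and window width are illustrative placeholders rather than the exact configuration I used.

```python
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("la_vie_en_rose.wav", sr=None)
S_full, phase = librosa.magphase(librosa.stft(y))

# Nearest-neighbor filtering: each frame is replaced by the median of its
# most similar frames, which models the repeating (background) spectrogram.
S_filter = librosa.decompose.nn_filter(
    S_full,
    aggregate=np.median,
    metric="cosine",
    width=int(librosa.time_to_frames(2, sr=sr)),
)
S_filter = np.minimum(S_full, S_filter)

# Soft masks split the mixture into background (repeating part) and vocals.
mask_bg = librosa.util.softmask(S_filter, 2 * (S_full - S_filter), power=2)
mask_vox = librosa.util.softmask(S_full - S_filter, 10 * S_filter, power=2)

vocals = librosa.istft(mask_vox * S_full * phase)
background = librosa.istft(mask_bg * S_full * phase)
sf.write("vocals.wav", vocals, sr)
```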

I am on schedule and on track with our goals for the final demo. Next week, I plan to conduct some user testing of the whole system to validate our use case requirements and ensure there are no bugs in preparation for the demo.

Weekly Status Report for 4/19

This week, our team worked on both integration and putting finishing touches on our own individual game aspects. We also worked on our final presentation, which Michelle will be presenting next week.

As far as individual progress, Lucas worked primarily on integration between the game loop and the main menus, while Yuhe worked on the beat map editor and integrated it with the rest of the game. Yuhe also migrated from Ubuntu to Windows as a result of some technical difficulties. Michelle integrated her beat detection algorithm with the rest of the game, allowing it to be used on user-uploaded files.

Going forward, the main challenge will be adding the finishing touches and making the game look professional and engaging to the user, which will have to come from playtesting and some UI related brainstorming. We also haven’t fully integrated, so we’ll need to finish that up, although most components are working together well at this point. We also may need to resolve an audio issue where the file doesn’t play correctly, although the problem has been infrequent and it’s been hard to locate the root of the issue.

Lucas’s Status Report for 4/19

This week I finished up integration and improved on some basic game mechanics. I completed the transition between the main menus and the game itself, with a selection in the menu fetching the corresponding JSON and song, and beginning a game accordingly. The game will also return to menu when complete.

I also added some basic improvements – animations on hits, score display, and synchronization with the music. I tested on a number of different audio/json files generated by Michelle’s algorithm, and thus far the music seems to synchronize well with the game.

In the future I want to have others playtest the game and make adjustments based on their comments, as well as add a better and more engaging UI. Next week I’ll continue to debug and improve the game’s general design, as well as implement any changes suggested by playtesters.

Yuhe’s Status Report for 4/19

This week, I tested and stabilized the Beat Map Editor and integrated it with our game’s core system. I improved waveform rendering sync, fixed note input edge cases, and ensured exported beat maps load correctly into the gameplay module. I worked with teammates to align beatmap formats and timestamps. I also stress-tested audio playback, waveform scrolling, and note placement on both Ubuntu and Windows. Due to virtual audio issues on Ubuntu, I migrated the project to Windows, where SFML audio runs reliably using WASAPI. I updated the build system, linked SFML and spdlog, and verified playback stability across formats. The editor now supports consistent UI latency, note saving/loading, and waveform visualization, meeting our design metrics.

What Did I Learn
To build and debug the Beat Map Editor, I learned C++17 in depth, including class design, memory safety, and STL containers. I switched from Unity to SFML to get better control and performance, and I studied SFML docs and examples to implement UI components, audio playback, waveform rendering with sf::VertexArray, and real-time input handling. I learned to parse sf::SoundBuffer and extract audio samples to draw time-synced waveforms. I also learned to serialize beatmaps using efficient C++ data structures like unordered_set for fast duplicate checking and real-time editing.
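
One common way to draw a time-synced waveform is to collapse the raw samples into a per-pixel min/max envelope before building the vertices. The sketch below illustrates that reduction conceptually in Python; the editor's actual implementation is C++ over sf::SoundBuffer samples feeding an sf::VertexArray and may differ in detail.

```python
import numpy as np

def waveform_envelope(samples, width_px):
    """Collapse raw audio samples into one (min, max) pair per pixel column,
    i.e. the downsampling step that precedes building the vertex array."""
    chunks = np.array_split(np.asarray(samples, dtype=np.float32), width_px)
    return [(float(c.min()), float(c.max())) if c.size else (0.0, 0.0)
            for c in chunks]
```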

I faced many cross-platform dev challenges. Ubuntu’s VM environment caused SFML audio issues due to OpenAL and ALSA incompatibilities. I debugged error logs, searched forums, and simplified my CMake files before deciding to migrate to Windows. There, I set up a native toolchain using Clang++ and statically linked dependencies with vcpkg and built-in CMake modules. Audio playback became reliable under Windows using WASAPI.

Version control is key to our project, and I picked up Git branching, rebasing, and conflict resolution to collaborate smoothly with teammates. I integrated my code with others’ systems, learning how to standardize file formats and maintain compatibility. I utilized YouTube tutorials, GitHub issues, and online forums to solve low-level bugs, and these informal resources were often more helpful than official docs. I also learned many Linux and Windows commands, C++ syntax, SFML function use cases, Git tricks, and methods for resolving dependency issues from GPT. Overall, I learned engine programming, dependency management, debugging tools, and teamwork throughout this semester.

Michelle’s Status Report for 4/19

This week, I integrated the rhythm analysis algorithm with Yuhe and Lucas’s game code. I also experimented with another method of determining the global threshold for onset strengths above which a timestamp should be counted as a note. This one used the median and median absolute deviation instead of the mean and standard deviation. Theoretically it would have fewer errors due to outliers because the deviations are not squared. This method performs similarly to the current method but has slightly more false positives at slower tempos. I also further tested the sliding window method. This one had an even higher number of false positives at slow tempos. I believe this might be ameliorated by having the window slide more continuously rather than jumping from the first 100 frames to the second 100 frames, for example. The issue with this is that it would increase the audio processing latency, which we want to avoid. I think the method using standard deviation (in blue below) is still the best method overall.
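
As a rough illustration of the difference between the two thresholding rules, here is a hedged sketch; the function name and constants are placeholders, and the real detector also applies peak picking rather than taking every frame above the threshold.

```python
import numpy as np
import librosa

def detect_notes(path, method="std", k=1.0):
    """Compare the two global-threshold rules on an audio file."""
    y, sr = librosa.load(path)
    env = librosa.onset.onset_strength(y=y, sr=sr)   # onset strength per frame

    if method == "std":                       # current method: mean + k*std
        thresh = env.mean() + k * env.std()
    else:                                     # alternative: median + k*MAD
        med = np.median(env)
        mad = np.median(np.abs(env - med))    # deviations are not squared
        thresh = med + k * mad

    frames = np.flatnonzero(env > thresh)
    return librosa.frames_to_time(frames, sr=sr)
```

Swapping the method argument between the two rules makes it easy to compare note counts and false positives on the same test piece.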

Next week, I plan to help with integration testing and also conduct some user testing to validate that the project meets the user requirements. I also plan to help improve the visual design of the game as needed.

I learned several new tools while working on this project. At the beginning, I learned the basics of Unity before we pivoted to Yuhe’s self-made lightweight game engine. I found it easiest to learn Unity as a beginner by watching and following along with a YouTube tutorial. By consulting documentation, I got more familiar with the numpy library, especially using it for complex plots. I learned how to use threading in Python, looking at examples on Stack Overflow, in order to create a verification program that could play a song and animate its detected notes at the same time. I also learned how to use MuseScore to compose pieces to use as tests for the rhythm analysis. For this I was able to mostly teach myself and occasionally Google any features I couldn’t find on my own.
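
For illustration, here is a stripped-down sketch of that kind of verification tool, assuming the sounddevice and soundfile libraries for playback; the real program animates notes graphically rather than printing them.

```python
import threading
import time

import sounddevice as sd   # assumed playback backend
import soundfile as sf

def verify(audio_path, note_times):
    """Play the song on a background thread while the main thread flags
    each detected note at its timestamp."""
    data, sr = sf.read(audio_path)

    def play():
        sd.play(data, sr)
        sd.wait()          # keep the thread alive until playback finishes

    player = threading.Thread(target=play, daemon=True)
    start = time.perf_counter()
    player.start()

    for t in sorted(note_times):
        # sleep until the next detected note, then flag it
        time.sleep(max(0.0, t - (time.perf_counter() - start)))
        print(f"note at {t:.3f} s")

    player.join()
```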

Team Status Report for 4/12

This week, our team made progress on finalizing and debugging our subsystems as well as starting integration. Lucas added audio playback to the game loop and worked on integrating his components with Yuhe’s main menu. Yuhe worked on the beat map editor, adding a waveform viewer and interactions for editing notes. Yuhe is also working on migrating the game to a Windows system in order to solve audio card reading issues in the Linux virtual environment. Michelle continued testing and refining her rhythm analysis algorithm, moving to a new method that yields higher accuracy, as shown below in a test on a piano piece.

After integrating the subsystems we will do some integration tests to ensure all the components are communicating with each other correctly. There are several metrics we will need to focus on, including beat map accuracy, audio and falling-tile synchronization, gameplay input latency, persistent storage validation, frame rate stability, and error handling. Both beat map alignment and input latency should be under 20ms to ensure a seamless game experience. The rhythm analysis should capture at least 95% of the notes and have no false positives. Error handling should cover issues such as unexpected file formats, file sizes that are too large, and invalid file name inputs.
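
As a sketch of how the beat map accuracy metric could be scored automatically, the hypothetical helper below greedily matches detected note times to reference annotations within the ±20 ms tolerance; the names and structure are illustrative only.

```python
def score_beatmap(detected, reference, tol=0.020):
    """Greedily match detected note times (seconds) to reference times
    within ±tol and report recall plus unmatched detections."""
    detected, reference = sorted(detected), sorted(reference)
    matched, i = 0, 0
    for t in detected:
        while i < len(reference) and reference[i] < t - tol:
            i += 1                     # reference notes now too far behind
        if i < len(reference) and abs(reference[i] - t) <= tol:
            matched += 1
            i += 1                     # each reference note matches at most once
    recall = matched / len(reference) if reference else 1.0
    false_positives = len(detected) - matched
    return recall, false_positives
```

On a passing run, recall should come out at or above 0.95 with zero false positives.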

For validation of the use case requirements, we will do some iterative user testing and collect qualitative feedback about the ease of use, difficulty of the game, accuracy of the rhythm synchronization, and overall experience. During user testing, users will upload their own choice of songs, play the game with the automated beat map, and also try out the beat map editor. We will want to validate that the whole flow is intuitive and user-friendly.

Lucas’s Status Report for 4/12

This week, I worked primarily on integration and debugging. I added an audio component to the game loop itself, allowing the music and game to play simultaneously. I focused on integrating my and Yuhe’s parts of the game, adding a way to transition from the menus into the game itself by selecting the song the user would like to play. On a song request, the game must read which song was selected and then fetch the associated audio file and JSON representation of the game, which I still need to find a way to store efficiently.

Next week I’ll finish integration and audio synchronization, allowing for a seamless transition from menu to game and ensuring that the game itself is exactly what it needs to be for the user request.

As far as testing and verification, since it’s a bit more difficult to quantitatively test a game like this, I’ll start by having some user playtesting – I’ll then gather feedback and try to incorporate it into the game. I’ll then try to measure performance, primarily FPS, and ensure that it meets the 30+ FPS outlined in our initial design requirements. Further, I’ll likely need to play the game with a number of different audio files, ensuring that the notes and music are synced and making sure that there are no errors in parsing the JSON file.