Team Status Report for 4/26

Summary of Individual Contributions
Michelle focused on improving rhythm detection by analyzing vocals separately from instrumentals. She implemented a vocal separation technique using a similarity matrix and median spectrogram modeling, although testing revealed significant background noise bleeding and high latency (~2.5 minutes processing for a 5-minute song). After thorough testing across multiple instruments and vocal recordings, Michelle concluded that rhythm detection on vocals is generally less accurate than on instrumental tracks. Based on this finding, the team decided not to include vocal separation in the final system. Michelle is on track for the final demo and will conduct full system validation next week.

Lucas concentrated on polishing and stabilizing the gameplay system. He performed debugging across various game components, added a cohesive color palette for better visual design, and fixed issues in JSON file loading to ensure the game correctly fetches user data. Lucas plans to spend the final week on thorough system-wide debugging, playtesting, and attempting to integrate a MIDI keyboard to enhance gameplay engagement. He is also preparing final project deliverables, including the poster, video, and report.

Yuhe worked on validating and stabilizing the Beat Map Editor. She conducted user testing with four testers (all with programming and music backgrounds) and received positive feedback on the editor’s usability and responsiveness. Yuhe manually created two new beatmaps (for Bionic Games – Short Version and Champion by Cosmonkey) and began creating a third classical piece beatmap to demonstrate genre versatility. She stress-tested the editor across dense rhythmic patterns and verified consistent note placement, waveform rendering, and file persistence on Windows using SFML.

Unit Tests Performed
1. Michelle’s Unit Tests:

  • Rhythm detection accuracy on simple, self-composed pieces (whole notes through 16th notes, 50–220 BPM) for piano, violin, guitar, and voice.
  • Rhythm detection with dynamic variations (pianissimo to fortissimo).
  • Rhythm detection on complex, dynamic-tempo self-composed pieces (whole to 64th notes, accelerandos/ritardandos).
  • Rhythm detection accuracy (aurally evaluated) on real single-instrument pieces: Bach Sonata No. 1, Bach Partita No. 1, Bach Sonata No. 2, Clair de Lune, Moonlight Sonata, Paganini Caprices 17 and 19.
  • Rhythm detection accuracy on real multi-instrument pieces: Brahms Piano Quartet No. 1, Dvorak Piano Quintet, Prokofiev Violin Concerto No. 1, Guitar Duo, Retro Arcade Song, Lost in Dreams, Groovy Ambient Funk.
  • Rhythm detection with and without vocal separation on vocal pieces: Piano Man, La Vie en Rose, Non Je ne Regrette Rien, Birds of a Feather, What Was I Made For, 小幸運.
  • Latency tests for all rhythm detection cases.

2. Yuhe’s Unit Tests:

  • UI responsiveness testing (<50ms input-to-action latency).
  • Beatmap save/load time (≤5s for standard songs up to 7 minutes).
  • Waveform synchronization testing across different sample rates and audio lengths.
  • Stress testing dense note placement, waveform scrolling, and audio playback stability.
  • Manual verification of beatmap integrity after saving/loading.

3. Lucas’ Unit Tests:

  • JSON file loading and path correctness verification.
  • Game UI color palette integration and visual consistency.
  • Basic gameplay functional debugging (note spawning, input handling, scoring).

Overall System Tests

  • Full gameplay flow validation: Music upload ➔ Beatmap auto-generation ➔ Manual beatmap editing ➔ Gameplay execution.
  • Cross-platform stability testing (Ubuntu/Windows) for Beat Map Editor and core game engine.
  • Audio playback stress testing across multiple hardware setups.
  • End-to-end latency and responsiveness validation under normal and stress conditions.
  • Early user experience feedback collection on usability and intuitiveness.

Findings and Design Changes
Vocal Separation Analysis:
Testing revealed that rhythm detection accuracy on vocals is significantly worse than on instruments. Vocal separation also introduced unacceptable latency (~2.5 minutes vs. 4 seconds without).
➔ Design Change: We decided not to incorporate vocal separation into the audio processing pipeline.

Beat Map Editor Validation:
Manual beatmap creation and user testing confirmed that the editor meets design metrics for latency, waveform rendering accuracy, and usability.
➔ Design Affirmation: Current editor architecture is robust; minor UX improvements (e.g., keyboard shortcuts) may be added post-demo.

Game Stability and File Handling:
Debugging of JSON fetching and general gameplay components improved reliability and reduced potential user-side errors.
➔ Design Improvement: Standardized file paths and error handling for smoother gameplay setup.

Yuhe’s Status Report for 4/26

This week, I focused on user testing and manual validation of the Beat Map Editor system. I conducted testing sessions with four friends, all of whom have programming and music backgrounds. Each tester independently used the editor and provided feedback based on their experience. Overall, the feedback was very positive—users found the UI intuitive, easy to navigate, and effective even without prior walkthroughs. They noted that note placement, waveform scrolling, and real-time playback behaved consistently and responsively. Some minor improvement suggestions, such as adding keyboard shortcuts or additional visual indicators, were collected for potential future refinements.

In parallel with user testing, I used the editor extensively myself to manually create and validate two new beatmaps. The first song I mapped was Bionic Games – Short Version, and the second was Champion by Cosmonkey. Both tracks are short electronic pieces with relatively complex rhythmic patterns, making them ideal candidates to stress-test the editor’s note placement, timing precision, and multi-lane support. During manual creation, I verified that waveform visualization remained accurate even in high-density sections and that all notes saved and reloaded correctly without timestamp drift or data corruption.

To prepare for the final demo, I also started working on a third beatmap for a classical music piece. This addition is intended to diversify the demo’s musical range and demonstrate the editor’s ability to handle different genres with distinct rhythmic structures. Manual beatmapping of a classical piece will help further validate the editor’s precision and flexibility under less beat-regular audio conditions.

Beyond content creation, I continued running informal internal tests to validate persistent data integrity, ensuring that beatmaps save and reload perfectly between sessions. I also monitored playback stability, waveform scrolling behavior, and editing UX consistency while stress-testing dense beat sequences. The editor remains stable and performant on Windows using SFML, and no further platform migration issues were encountered. All tests and newly created beatmaps confirmed that the editor continues to meet project design metrics for responsiveness, usability, and performance.

Yuhe’s Status Report for 4/19

This week, I tested and stabilized the Beat Map Editor and integrated it with our game’s core system. I improved waveform rendering sync, fixed note input edge cases, and ensured exported beat maps load correctly into the gameplay module. I worked with teammates to align beatmap formats and timestamps. I also stress-tested audio playback, waveform scrolling, and note placement on both Ubuntu and Windows. Due to virtual audio issues on Ubuntu, I migrated the project to Windows, where SFML audio runs reliably using WASAPI. I updated the build system, linked SFML, spdlog, and verified playback stability across formats. The editor now supports consistent UI latency, note saving/loading, and waveform visualization, meeting our design metrics.

What Did I Learn
To build and debug the Beat Map Editor, I learned C++17 in depth, including class design, memory safety, and STL containers. I switched from Unity to SFML to get better control and performance, and I studied SFML docs and examples to implement UI components, audio playback, waveform rendering with sf::VertexArray, and real-time input handling. I learned to parse sf::SoundBuffer and extract audio samples to draw time-synced waveforms. I also learned to serialize beatmaps using efficient C++ data structures like unordered_set for fast duplicate checking and real-time editing.

I faced many cross-platform development challenges. Ubuntu’s VM environment caused SFML audio issues due to OpenAL and ALSA incompatibilities. I debugged error logs, searched forums, and simplified my CMake files before deciding to migrate to Windows. There, I set up a native toolchain using Clang++ and statically linked dependencies with vcpkg and built-in CMake modules. Audio playback became reliable under Windows using WASAPI.

Version control is key to our project, and I picked up Git branching, rebasing, and conflict resolution to collaborate smoothly with teammates. I integrated my code with others’ systems, learning how to standardize file formats and maintain compatibility. I relied on YouTube tutorials, GitHub issues, and online forums to solve low-level bugs; these informal resources were often more helpful than official docs. I also learned many Linux and Windows commands, C++ syntax details, SFML usage patterns, Git tricks, and approaches for resolving dependency issues from GPT. Overall, I learned engine programming, dependency management, debugging tools, and teamwork throughout this semester.

Yuhe’s Status Report for 4/12

Over the past two weeks, I have made significant progress in both the development and stabilization of the Beat Map Editor, one of the core components of our rhythm game. My primary efforts were focused on implementing interactive editor features, resolving persistent audio playback issues encountered in the Ubuntu virtual environment, and initiating the migration of the project to a native Windows environment to ensure audio stability. This report details the technical progress, debugging process, cross-platform adaptations, and testing strategies aimed at meeting the engineering design specifications and use-case requirements of our project.

Beat Map Editor Development

The Beat Map Editor serves as the interactive interface for creating and modifying rhythm game charts. I designed and implemented this module with modularity and real-time user interaction in mind. The editor is built around a central BeatMapEditor class that integrates a collection of UI components such as buttons (UIButton), text input boxes (UITextBox), and a waveform visualization module (WaveformViewer).

The WaveformViewer class is responsible for rendering a real-time visual representation of the audio waveform using sf::Vertex primitives. It allows users to scroll and zoom through audio data within a fixed time window (10 seconds by default), providing an intuitive interface for note placement. I implemented the waveform extraction by sampling peaks from the left audio channel stored in an sf::SoundBuffer.
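A rough sketch of that peak-sampling step is shown below; the function name and parameters are illustrative rather than the editor’s exact implementation:

// Sketch: downsample the left channel of an sf::SoundBuffer into per-pixel peaks
// and emit them as vertical lines in an sf::VertexArray. Illustrative only.
#include <SFML/Audio.hpp>
#include <SFML/Graphics.hpp>
#include <algorithm>
#include <cstdlib>

sf::VertexArray buildWaveform(const sf::SoundBuffer& buffer,
                              unsigned widthPx, float heightPx)
{
    const sf::Int16* samples = buffer.getSamples();
    const std::size_t sampleCount = buffer.getSampleCount();
    const unsigned channels = buffer.getChannelCount();
    const std::size_t frames = sampleCount / channels;
    const std::size_t step = std::max<std::size_t>(1, frames / widthPx);

    sf::VertexArray lines(sf::Lines);
    for (unsigned x = 0; x < widthPx; ++x) {
        // Peak amplitude of the left channel within this pixel's window.
        int peak = 0;
        const std::size_t begin = x * step;
        const std::size_t end = std::min(frames, begin + step);
        for (std::size_t f = begin; f < end; ++f)
            peak = std::max(peak, std::abs(int(samples[f * channels])));

        const float half = (peak / 32768.f) * (heightPx / 2.f);
        lines.append(sf::Vertex(sf::Vector2f(float(x), heightPx / 2.f - half)));
        lines.append(sf::Vertex(sf::Vector2f(float(x), heightPx / 2.f + half)));
    }
    return lines;
}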

To facilitate note creation, I implemented two synchronized input fields: a timestamp box and a lane index box. When both fields are populated, the editor stores note data into a std::unordered_set of timestamps and serializes them into a beatmap vector. This mechanism ensures constant-time lookups to prevent duplicate entries and simplifies beatmap export functionality.
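A simplified sketch of this duplicate-guarded note storage is shown below (class and member names are illustrative, not the editor’s actual types):

// Sketch: keep placed timestamps (ms) in an unordered_set for constant-time
// duplicate checks, and keep full note data in a vector for beatmap export.
#include <algorithm>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Note { std::int64_t timeMs; int lane; };

class NoteStore {
public:
    // Rejects a note whose timestamp has already been placed.
    bool addNote(std::int64_t timeMs, int lane) {
        if (!placedTimestamps_.insert(timeMs).second)
            return false;                      // duplicate timestamp
        beatmap_.push_back({timeMs, lane});
        return true;
    }

    // Sorted copy of the notes, ready for serialization to disk.
    std::vector<Note> exportBeatmap() const {
        std::vector<Note> out = beatmap_;
        std::sort(out.begin(), out.end(),
                  [](const Note& a, const Note& b) { return a.timeMs < b.timeMs; });
        return out;
    }

private:
    std::unordered_set<std::int64_t> placedTimestamps_;
    std::vector<Note> beatmap_;
};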

I also integrated a minimal but effective UI manager class (UIManager) to streamline event routing and rendering. This modular structure lays the foundation for extensibility and paves the way for implementing future features such as drag-and-drop note placement or real-time preview playback.

Audio Subsystem Debugging (Ubuntu)

During implementation and testing, I encountered recurring failures in the audio subsystem when running the editor in an Ubuntu 22.04 virtual environment. These issues included the inability to initialize or play audio using SFML’s sf::Sound, sporadic freezing during playback, and cryptic runtime errors from the underlying OpenAL backend such as:

AL lib: (EE) alc_open_device: No such audio device
AL lib: (EE) ALCplaybackAlsa_open: Could not open playback device

These errors appeared to stem from the mismatch between the VM’s virtualized audio hardware and OpenAL Soft’s hardware requirements. Even with correct installation and runtime dependencies, the virtual sound card often failed to interface properly with SFML’s audio engine, especially after seek operations or audio buffer reloads.

To isolate the issue, I simplified the CMake configuration to exclude SFML’s audio components initially, later reintroducing them with explicit linkage:

find_package(SFML 2.5 COMPONENTS graphics window system audio REQUIRED)

Despite these adjustments, the playback remained inconsistent. I conducted multiple tests with .wav, .ogg, and .mp3 files and monitored buffer behavior. Logging showed that buffer assignment and sound object lifetimes were valid, but device-level playback would intermittently fail, especially during real-time seeking.

Windows Migration for Audio Stability

Given the limitations of virtualized audio under Ubuntu, I initiated a platform migration to a native Windows environment. On Windows, SFML’s audio subsystem integrates directly with the physical sound card through WASAPI, bypassing the OpenAL Soft backend configuration altogether.

To support this transition, I rebuilt the project’s CMake toolchain using Clang++ and reconfigured dependencies. The following Windows-based setup was finalized:

  • SFML 2.5 (precompiled binaries)

  • spdlog (via vcpkg)

  • tinyfiledialogs (compiled as a static library)

  • Python3 and pybind11 (for future audio analysis integrations)

On Windows, all audio files were loaded and played back reliably. The waveform viewer remained in sync, and no audio crashes or freezes were observed during extended test runs. This confirmed that migrating to Windows would provide the stable execution environment needed for reliable audio interactions during gameplay and editing.

Testing Strategy and Metrics (Verification of My Subsystem)

To ensure the editor meets both engineering design specifications and functional use-case goals, I have implemented and planned a suite of tests that span user interaction, timing accuracy, scoring behavior, and system stability.

Current Testing Status

I manually verified UI responsiveness by logging event triggers and action completions, confirming a latency under 50ms for input actions such as play/pause toggles, seek operations, and waveform interactions. The beat map editor’s file saving and loading functionality was profiled using a stopwatch and consistently completed under 5 seconds for standard-length songs (up to 7 minutes).
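As a rough illustration of this kind of instrumentation (not the editor’s actual logging code), a timing wrapper around an input handler might look like the following, using sf::Clock:

// Sketch: time an input action from event receipt to action completion
// and log it, flagging anything over the 50 ms budget.
#include <SFML/System.hpp>
#include <cstdio>

template <typename Action>
void timedAction(const char* name, Action&& action)
{
    sf::Clock clock;                             // starts timing on construction
    action();                                    // e.g. toggle play/pause, seek, redraw
    const float ms = clock.getElapsedTime().asSeconds() * 1000.f;
    std::printf("[latency] %s: %.2f ms%s\n", name, ms,
                ms > 50.f ? " (over 50 ms budget)" : "");
}

// Usage (hypothetical handler names):
//   timedAction("play_pause", [&] { editor.togglePlayback(); });
//   timedAction("seek",       [&] { editor.seekTo(targetSeconds); });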

Waveform rendering and scrolling behavior was tested with songs across different sample rates and durations. The system maintained accurate synchronization between the progress bar and the waveform display. This also verified correct use of SFML’s sample indexing and buffer durations.

While most manual editor features passed their respective design thresholds, a subset of functionality—such as input latency and beat map accuracy—will require automated benchmarking to validate against exact numerical thresholds.

Planned Testing and Metrics (Verification of the Entire Game)

In the few weeks before the final demo, we will expand testing as follows:

  1. Beat Map Accuracy
    Using annotated reference beat maps and a Python evaluation script, I will calculate the average temporal deviation between auto-generated and human-placed notes. A root mean squared error below 20ms will be required to meet alignment standards across the 50–220 BPM range (a minimal sketch of this metric appears after this list).

  2. Gameplay Input Latency
    I will record key press events and game state transitions using an SDL2-based input logger or high-frame-rate video capture. These measurements will quantify the time from physical input to in-game response, with a target latency below 20ms.

  3. Persistent Storage Validation
    Save and reload operations for beat maps will be tested using a hash-diff script that validates structural integrity and data completeness. Changes to the note set must persist across game sessions with no corruption.

  4. Frame Rate Stability
    Using SFML’s internal sf::Clock, I will measure delta time across multiple frames during stress test scenarios (waveform scrolling + playback + note placement). The editor must maintain at least 30 frames per second without frame drops.

  5. Error Handling
    Simulated tests with malformed audio files and corrupted beat map data will validate error-handling routines. The editor must exit gracefully or display user-friendly error dialogs instead of crashing.
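For the beat map accuracy check in item 1, the evaluation script itself will be written in Python, but the metric is simple; the C++ sketch below shows the computation on already-matched note pairs (the one-to-one matching policy here is an assumption for illustration):

// Sketch: RMS timing error between auto-generated and reference note times (ms).
#include <algorithm>
#include <cmath>
#include <vector>

double rmsTimingErrorMs(const std::vector<double>& generatedMs,
                        const std::vector<double>& referenceMs)
{
    // Assumes the two lists are already matched one-to-one and sorted by time.
    const std::size_t n = std::min(generatedMs.size(), referenceMs.size());
    if (n == 0) return 0.0;
    double sumSq = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double diff = generatedMs[i] - referenceMs[i];
        sumSq += diff * diff;
    }
    return std::sqrt(sumSq / double(n));   // passing threshold: below 20 ms
}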

Team Status Report for 3/29

This week, we made progress toward integrating the audio processing system with the game engine. We implemented a custom JSON parser in C++ to load beatmap data generated by our signal processing pipeline, enabling songs to be played as fully interactive game levels. We also added a result splash screen at the end of gameplay, which currently displays basic performance metrics and will later include more detailed graphs and stats.

In the editor, we refined key systems for waveform visualization and synchronization. We improved timeToPixel() and pixelToTime() mappings to ensure the playhead and note grid align accurately with audio playback. We also advanced snapping logic to quantize note timestamps based on BPM and introduced an alternative input method for placing notes at specific times and lanes, in addition to drag-and-drop.

On the signal processing side, we expanded rhythm extraction testing to voice and bowed instruments and addressed tempo estimation inconsistencies by using a fixed minimum note length of 0.1s. We also updated the JSON output to include lane mapping information for smoother integration.

Next week, we plan to connect the main menu to gameplay, polish the UI, and fully integrate all system components.

Yuhe’s Status Report for 3/29

This week, I focused on implementing and debugging the waveform viewer and its synchronization with the audio playback system. I refined the timeToPixel(float timestamp) and pixelToTime(int x) functions to ensure accurate coordinate transformation between time and screen space, allowing the playhead and notes to sync with real-time audio. I also began implementing the snapping grid logic by quantizing timestamps to the nearest beat subdivision (e.g., 1/4, 1/8) based on BPM and audio offset. For note placement, I encountered difficulties with SFML’s event system for implementing smooth drag-and-drop behavior. As an alternative, I developed an input-based method that allows users to specify the number of notes, their lane indices, and timestamps via a dialog box, which then injects the notes into the grid. These changes aim to improve precision and editor usability. However, ensuring waveform-playhead alignment across different zoom levels remains challenging.
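For reference, the general shape of these mappings and the snapping step is sketched below; the struct and parameter names are illustrative, and the real editor keeps this state inside the WaveformViewer:

// Sketch: map between song time and screen x within a scrolling view window,
// and snap a timestamp to the nearest beat subdivision.
#include <cmath>

struct ViewWindow {
    float startSec  = 0.f;     // left edge of the visible window, in seconds
    float lengthSec = 10.f;    // visible duration (current zoom level)
    float widthPx   = 1280.f;  // drawable width of the waveform area, in pixels
};

float timeToPixel(float timestamp, const ViewWindow& v) {
    return (timestamp - v.startSec) / v.lengthSec * v.widthPx;
}

float pixelToTime(int x, const ViewWindow& v) {
    return v.startSec + (float(x) / v.widthPx) * v.lengthSec;
}

// Snap a timestamp to the nearest beat subdivision (e.g., subdivision = 4 or 8),
// given the song's BPM and its audio offset.
float snapToGrid(float timestamp, float bpm, float offsetSec, int subdivision) {
    const float step = 60.f / bpm / float(subdivision);   // seconds per grid cell
    return offsetSec + std::round((timestamp - offsetSec) / step) * step;
}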

Yuhe’s Status Report for 3/22

This week I worked on designing and implementing key features of the Beatmap Editor for our game. In the first half of the week, I worked on the layout system, structuring the editor interface to accommodate a waveform display area, time axis, note grid overlay, and playback controls. I integrated JSON-based waveform data preprocessed externally to visualize amplitude over time using SFML’s VertexArray class. In the latter half, I worked on the note grid system that allows users to place notes along multiple lanes. Each lane corresponds to a specific input key, and note placement is quantized based on timestamp intervals to ensure rhythm accuracy. A snapping mechanism aligns notes to the closest beat subdivision, improving usability.

I think the major challenge is synchronizing the visual playhead with the music playback and managing coordinate transforms between waveform time and screen space. My goals for next week include implementing audio playback controls and note data serialization.

Yuhe’s Status Report for 3/15

This week, I finished implementing the game engine and UI components for Rhythm Genesis. I improved our existing UI elements—including buttons, textboxes, sliders, labels, and the UI manager—by refining their appearance and interactivity. In addition, I implemented advanced UI components such as a scrollable list and an upload UI to handle audio file uploads.

I also developed a custom C++ to Python bridge, which allows us to call Python module functions directly from our C++ game engine. This integration enabled me to implement a basic parse_music_file function in Python using the Librosa library. Additionally, I created a function that plays both the original song and a beep sound generated based on the extracted beat timestamps, which is useful for testing our beat detection.
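A minimal sketch of how such a bridge can be structured with pybind11’s embedding API is shown below; the Python module name and return type are assumptions for illustration, not the exact code in our engine:

// Sketch: embed a Python interpreter in the C++ engine and call
// parse_music_file from a Python module.
#include <pybind11/embed.h>
#include <pybind11/stl.h>
#include <string>
#include <vector>

namespace py = pybind11;

std::vector<double> extractBeatTimestamps(const std::string& audioPath)
{
    static py::scoped_interpreter guard{};   // embed the interpreter once per process
    py::module_ audio = py::module_::import("audio_processing");   // module name assumed
    py::object result = audio.attr("parse_music_file")(audioPath);
    return result.cast<std::vector<double>>();   // assumed to return beat times in seconds
}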

On the UI side, I completed the menu scene and upload scene, and I set up placeholders for the settings and beat map editor UI. These components provide a solid foundation for our user interface and will be further integrated as the project progresses.

On Friday, I conducted our weekly meeting with the team as usual. I demonstrated the current state of the game, showing the UI scenes and explaining my implementation of the game engine. I also helped team members set up their development environments by ensuring all necessary dependencies were installed. During the meeting, I discussed with Lucas how he might leverage our existing UI components—or use the SFML sprite system—to implement the game scenes and scoring system. I also took the opportunity to test Michelle’s audio processing algorithms within the game engine.

Overall, it was a productive week with substantial progress in both UI development and audio processing integration, setting us on a good path toward our project goals. For the next few weeks I will be working on implementing the beatmap editor for our game.

Team Report for 3/8

During the week leading up to spring break, our team worked on both the documentation and development of Rhythm Genesis. We completed the design review report collaboratively. On the development side, we transitioned from Unity to our own custom C++ engine with SFML, optimizing performance for low-power systems. Yuhe implemented sprite rendering, UI elements, and settings management, ensuring smooth interaction and persistent data storage. Lucas worked on refining the core game loop, adding a scoring system, and transitioning from randomly generated notes to a JSON-based beat map system. Michelle worked on creating an algorithm for determining the number of notes to be generated based on the onset strength of the beat.

Next steps include expanding gameplay mechanics, integrating Python’s librosa for beat map generation, and improving UI design. The team will also focus on fully integrating Lucas’ contributions into the new engine to ensure a seamless transition.

Part A: Global Factors (Michelle)

Since our rhythm game does not analyze song lyrics and only analyzes beats, onset strengths, and pitches, any song in any language will work with our game. The only translation needed to globalize our game is that of the menu text. Additionally, we plan to publish the game for free on Steam, which is available in 237 countries, so our game will be widely available internationally.

Part B: Cultural Factors (Lucas)

While our game is, at the end of the day, just a game, it addresses a couple of cultural factors, namely tradition (specifically music) and language. Our game allows users to play a game that caters to their specific heritage by letting them use the music they like. Instead of typical rhythm games that cover a small subset of music and therefore likely represent only a limited number of cultural backgrounds, our game lets users upload whatever music they’d like, which means that users from any background can enjoy the music that they feel represents them while playing.

Part C: Environmental Factors (Yuhe)
Good news—our game doesn’t spew carbon emissions or chop down trees! Rhythm Genesis is just code, happily living on a user’s device and consuming only as much electricity as their computer allows. Unlike some bloated game engines that demand high-end GPUs and turn one’s laptop into a space heater, our lightweight C++ engine keeps things simple and efficient. Plus, since a player can play with their keyboard instead of buying plastic peripherals, we’re technically saving the planet… So, while our game can’t plant trees, it is at least not making things worse. Play guilt-free—unless you miss all the notes, then that’s on you 🙂

Yuhe’s Status Report for 3/8

During the week before spring break, I worked on both the documentation and development of our rhythm game. I wrote the System Implementation, Testing, and Summary sections of our Design Review Report, detailing the architecture, subsystems, and testing methodologies.

Additionally, I focused on implementing our custom game engine in C++ using SFML, replacing Unity due to its high computational demands and unnecessary packages and features that we will not use for our 2D game. I implemented the sprite rendering system and core UI elements, including the settings menu, keybinding input, and volume control, ensuring smooth interaction and persistent data storage. The next steps involve expanding gameplay mechanics, refining beat map processing, improving UI usability, and, most importantly, integrating our game engine with Python’s Librosa audio processing library.