Yuhe’s Status Report for 4/12

Over the past two weeks, I have made significant progress in both the development and stabilization of the Beat Map Editor, one of the core components of our rhythm game. My primary efforts were focused on implementing interactive editor features, resolving persistent audio playback issues encountered in the Ubuntu virtual environment, and initiating the migration of the project to a native Windows environment to ensure audio stability. This report details the technical progress, debugging process, cross-platform adaptations, and testing strategies aimed at meeting the engineering design specifications and use-case requirements of our project.

Beat Map Editor Development

The Beat Map Editor serves as the interactive interface for creating and modifying rhythm game charts. I designed and implemented this module with modularity and real-time user interaction in mind. The editor is built around a central BeatMapEditor class that integrates a collection of UI components such as buttons (UIButton), text input boxes (UITextBox), and a waveform visualization module (WaveformViewer).

The WaveformViewer class is responsible for rendering a real-time visual representation of the audio waveform using sf::Vertex primitives. It allows users to scroll and zoom through the audio data within a fixed time window (defaulting to 10 seconds), providing an intuitive interface for note placement. I implemented the waveform extraction by sampling peaks from the left audio channel stored in an sf::SoundBuffer.
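
To illustrate the peak-sampling approach, here is a minimal sketch of the extraction step, assuming the common case of interleaved 16-bit samples as stored by sf::SoundBuffer; the function name and the one-peak-per-pixel-column layout are illustrative rather than the editor's actual code:

    #include <SFML/Audio.hpp>
    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Downsample the left channel of an sf::SoundBuffer into one peak per
    // pixel column. SFML stores samples interleaved (L, R, L, R, ...) as
    // signed 16-bit integers.
    std::vector<float> extractPeaks(const sf::SoundBuffer& buffer, std::size_t columns)
    {
        const sf::Int16* samples   = buffer.getSamples();
        const std::size_t channels = buffer.getChannelCount();
        const std::size_t frames   = static_cast<std::size_t>(buffer.getSampleCount()) / channels;
        const std::size_t step     = std::max<std::size_t>(1, frames / columns);

        std::vector<float> peaks(columns, 0.f);
        for (std::size_t col = 0; col < columns; ++col)
        {
            const std::size_t begin = col * step;
            const std::size_t end   = std::min(begin + step, frames);
            for (std::size_t i = begin; i < end; ++i)
            {
                // samples[i * channels] is the left-channel sample of frame i.
                const float s = std::abs(samples[i * channels]) / 32768.f;
                peaks[col] = std::max(peaks[col], s);
            }
        }
        return peaks; // normalized to [0, 1], one entry per column
    }

Each returned peak can then be drawn as a vertical pair of sf::Vertex points, one column per pixel of the visible window.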

To facilitate note creation, I implemented two synchronized input fields: a timestamp box and a lane index box. When both fields are populated, the editor stores note data into a std::unordered_set of timestamps and serializes them into a beatmap vector. This mechanism ensures constant-time lookups to prevent duplicate entries and simplifies beatmap export functionality.
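
The sketch below shows one way the duplicate check and the export vector could fit together. The type names are placeholders, and quantizing timestamps to integer milliseconds for the set key is an assumption on my part (raw float keys make equality checks brittle):

    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    struct Note
    {
        float timestamp; // seconds from the start of the song
        int   lane;      // lane index the note falls into
    };

    class NoteStore
    {
    public:
        // Returns false if a note already exists at this timestamp,
        // giving an O(1) duplicate check before insertion.
        bool addNote(float timestamp, int lane)
        {
            // Quantize to milliseconds so float noise doesn't defeat the set.
            const std::int64_t key = static_cast<std::int64_t>(timestamp * 1000.f);
            if (!m_timestamps.insert(key).second)
                return false;
            m_beatmap.push_back({timestamp, lane});
            return true;
        }

        const std::vector<Note>& beatmap() const { return m_beatmap; }

    private:
        std::unordered_set<std::int64_t> m_timestamps; // fast duplicate lookup
        std::vector<Note>                m_beatmap;    // export-ready order
    };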

I also integrated a minimal but effective UI manager class (UIManager) to streamline event routing and rendering. This modular structure lays the foundation for extensibility and paves the way for implementing future features such as drag-and-drop note placement or real-time preview playback.
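
One plausible shape for such a manager, with a minimal component interface assumed for illustration (the actual UIButton, UITextBox, and WaveformViewer classes may expose different signatures):

    #include <SFML/Graphics.hpp>
    #include <memory>
    #include <vector>

    // Assumed minimal interface shared by the UI components.
    class UIComponent
    {
    public:
        virtual ~UIComponent() = default;
        virtual void handleEvent(const sf::Event& event) = 0;
        virtual void draw(sf::RenderWindow& window) = 0;
    };

    // Routes events to every registered component and draws them in order.
    class UIManager
    {
    public:
        void add(std::unique_ptr<UIComponent> component)
        {
            m_components.push_back(std::move(component));
        }

        void handleEvent(const sf::Event& event)
        {
            for (auto& c : m_components)
                c->handleEvent(event);
        }

        void draw(sf::RenderWindow& window)
        {
            for (auto& c : m_components)
                c->draw(window);
        }

    private:
        std::vector<std::unique_ptr<UIComponent>> m_components;
    };

Centralizing the event and render loops this way means a new component type only needs to implement the interface to participate in the editor.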

Audio Subsystem Debugging (Ubuntu)

During implementation and testing, I encountered recurring failures in the audio subsystem when running the editor in an Ubuntu 22.04 virtual environment. These issues included the inability to initialize or play audio using SFML’s sf::Sound, sporadic freezing during playback, and cryptic runtime errors from the underlying OpenAL backend such as:

AL lib: (EE) alc_open_device: No such audio device
AL lib: (EE) ALCplaybackAlsa_open: Could not open playback device

These errors appeared to stem from the mismatch between the VM’s virtualized audio hardware and OpenAL Soft’s hardware requirements. Even with correct installation and runtime dependencies, the virtual sound card often failed to interface properly with SFML’s audio engine, especially after seek operations or audio buffer reloads.

To isolate the issue, I simplified the CMake configuration to exclude SFML’s audio components initially, later reintroducing them with explicit linkage:

find_package(SFML 2.5 COMPONENTS graphics window system audio REQUIRED)

Despite these adjustments, playback remained inconsistent. I conducted multiple tests with .wav, .ogg, and .mp3 files and monitored buffer behavior. Logging showed that buffer assignment and sound object lifetimes were valid, but device-level playback would intermittently fail, especially during real-time seeking.

Windows Migration for Audio Stability

Given the limitations of virtualized audio under Ubuntu, I initiated a platform migration to a native Windows environment. On Windows, SFML's audio subsystem reaches the physical sound card through OpenAL Soft's native WASAPI backend, avoiding the virtualized ALSA device that caused the failures above.

To support this transition, I rebuilt the project’s CMake toolchain using Clang++ and reconfigured dependencies. The following Windows-based setup was finalized:

  • SFML 2.5 (precompiled binaries)

  • spdlog (via vcpkg)

  • tinyfiledialogs (compiled as a static library)

  • Python3 and pybind11 (for future audio analysis integrations)

On Windows, all audio files were loaded and played back reliably. The waveform viewer remained in sync, and no audio crashes or freezes were observed during extended test runs. This confirmed that migrating to Windows would provide the stable execution environment needed for reliable audio interactions during gameplay and editing.

Testing Strategy and Metrics (Verification of My Subsystem)

To ensure the editor meets both engineering design specifications and functional use-case goals, I have implemented and planned a suite of tests that span user interaction, timing accuracy, scoring behavior, and system stability.

Current Testing Status

I manually verified UI responsiveness by logging event triggers and action completions, confirming a latency under 50ms for input actions such as play/pause toggles, seek operations, and waveform interactions. The beat map editor's file saving and loading functionality was profiled with a stopwatch and consistently completed in under 5 seconds for standard-length songs (up to 7 minutes).
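
As a rough sketch of how such a latency log can be produced, assuming spdlog (already in our dependency list) for the output; the handler shown is illustrative rather than the editor's actual code:

    #include <SFML/Audio.hpp>
    #include <SFML/System.hpp>
    #include <spdlog/spdlog.h>

    void togglePlayback(sf::Sound& sound)
    {
        if (sound.getStatus() == sf::Sound::Playing)
            sound.pause();
        else
            sound.play();
    }

    // Start a clock when the input event is handled, perform the action,
    // then log the elapsed time for the latency check.
    void onPlayPausePressed(sf::Sound& sound)
    {
        sf::Clock latencyClock; // starts timing on construction

        togglePlayback(sound);

        spdlog::info("play/pause handled in {} ms",
                     latencyClock.getElapsedTime().asMilliseconds());
    }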

Waveform rendering and scrolling behavior were tested with songs across different sample rates and durations. The system maintained accurate synchronization between the progress bar and the waveform display. This also verified correct use of SFML’s sample indexing and buffer durations.

While most manual editor features passed their respective design thresholds, a subset of functionality—such as input latency and beat map accuracy—will require automated benchmarking to validate against exact numerical thresholds.

Planned Testing and Metrics (Verification of the Entire Game)

In the few weeks before the final demo, we will expand testing as follows:

  1. Beat Map Accuracy
    Using annotated reference beat maps and a Python evaluation script, I will calculate the average temporal deviation between auto-generated and human-placed notes. A root-mean-squared error below 20ms will be required to meet alignment standards across the 50–220 BPM range.

  2. Gameplay Input Latency
    I will record key press events and game state transitions using an SDL2-based input logger or high-frame-rate video capture. These measurements will quantify the time from physical input to in-game response, with a target latency below 20ms.

  3. Persistent Storage Validation
    Save and reload operations for beat maps will be tested using a hash-diff script that validates structural integrity and data completeness. Changes to the note set must persist across game sessions with no corruption.

  4. Frame Rate Stability
    Using SFML’s internal sf::Clock, I will measure delta time across multiple frames during stress-test scenarios (waveform scrolling + playback + note placement); see the sketch after this list. The editor must maintain at least 30 frames per second without frame drops.

  5. Error Handling
    Simulated tests with malformed audio files and corrupted beat map data will validate error-handling routines. The editor must exit gracefully or display user-friendly error dialogs instead of crashing.
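
For item 4, here is a minimal sketch of the delta-time measurement with sf::Clock; the stress-test workload itself is elided, and in practice the measurement would run inside the editor's main loop:

    #include <SFML/System.hpp>
    #include <spdlog/spdlog.h>

    // Measure per-frame delta time and flag any frame slower than the
    // 30 FPS budget (~33.3 ms) from the design spec.
    int main()
    {
        const float budgetMs = 1000.f / 30.f; // 30 FPS floor
        sf::Clock frameClock;
        int slowFrames = 0;

        for (int frame = 0; frame < 1000; ++frame)
        {
            const float dtMs = frameClock.restart().asSeconds() * 1000.f;

            // ... waveform scrolling + playback + note placement would run here ...

            if (dtMs > budgetMs)
            {
                ++slowFrames;
                spdlog::warn("frame {} took {:.2f} ms (budget {:.2f} ms)",
                             frame, dtMs, budgetMs);
            }
        }

        spdlog::info("{} of 1000 frames exceeded the 30 FPS budget", slowFrames);
        return 0;
    }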

Yuhe’s Status Report for 3/29

This week, I focused on implementing and debugging the waveform viewer and its synchronization with the audio playback system. I refined the timeToPixel(float timestamp) and pixelToTime(int x) functions to ensure accurate coordinate transformation between time and screen space, allowing the playhead and notes to stay in sync with real-time audio. I also began implementing the snapping-grid logic by quantizing timestamps to the nearest beat subdivision (e.g., 1/4, 1/8) based on BPM and audio offset. For note placement, I encountered difficulties with SFML’s event system when implementing smooth drag-and-drop behavior. As an alternative, I developed an input-based method that lets users specify the number of notes, their lane indices, and timestamps via a dialog box, which then injects the notes into the grid. These changes aim to improve precision and editor usability. However, ensuring waveform-playhead alignment across the various zoom levels remains very challenging.
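
For reference, a compact sketch of the two transforms and the snapping logic as described; the member names are illustrative, and the subdivision parameter follows the convention that 4 snaps to quarter-beat boundaries and 8 to eighth-beat boundaries:

    #include <cmath>

    // Visible time window of the waveform viewer.
    struct WaveformView
    {
        float viewStart  = 0.f;   // left edge of the window, in seconds
        float viewLength = 10.f;  // visible duration; shrinks when zooming in
        int   viewWidth  = 1280;  // widget width in pixels

        int timeToPixel(float timestamp) const
        {
            return static_cast<int>((timestamp - viewStart) / viewLength * viewWidth);
        }

        float pixelToTime(int x) const
        {
            return viewStart + static_cast<float>(x) / viewWidth * viewLength;
        }
    };

    // Quantize a timestamp to the nearest beat subdivision, anchored at
    // the audio offset (the time of the first beat).
    float snapToGrid(float timestamp, float bpm, float offset, int subdivision)
    {
        const float step  = 60.f / bpm / subdivision;          // seconds per grid cell
        const float cells = std::round((timestamp - offset) / step);
        return offset + cells * step;
    }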

Yuhe’s Status Report for 3/15

This week, I finished implementing the game engine and UI components for Rhythm Genesis. I improved our existing UI elements—including buttons, textboxes, sliders, labels, and the UI manager—by refining their appearance and interactivity. In addition, I implemented advanced UI components such as a scrollable list and an upload UI to handle audio file uploads.

I also developed a custom C++ to Python bridge, which allows us to call Python module functions directly from our C++ game engine. This integration enabled me to implement a basic parse_music_file function in Python using the Librosa library. Additionally, I created a function that plays both the original song and a beep sound generated based on the extracted beat timestamps, which is useful for testing our beat detection.
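
A minimal sketch of the embedding side of such a bridge using pybind11; parse_music_file is the function named above, while the module name audio_analysis and the list-of-timestamps return type are assumptions for illustration:

    #include <pybind11/embed.h>
    #include <pybind11/stl.h>
    #include <string>
    #include <vector>

    namespace py = pybind11;

    // Call the Python-side parse_music_file through an embedded interpreter.
    // In the real engine the interpreter would be started once, not per call.
    std::vector<double> parseMusicFile(const std::string& path)
    {
        py::scoped_interpreter guard{}; // start (and later stop) CPython

        py::module_ analysis = py::module_::import("audio_analysis"); // placeholder name
        py::object  result   = analysis.attr("parse_music_file")(path);

        return result.cast<std::vector<double>>(); // Python list -> std::vector
    }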

On the UI side, I completed the menu scene and upload scene, and I set up placeholders for the settings and beat map editor UI. These components provide a solid foundation for our user interface and will be further integrated as the project progresses.

On Friday, I conducted our weekly meeting with the team as usual. I demonstrated the current state of the game, showing the UI scenes and explaining my implementation of the game engine. I also helped team members set up their development environments by ensuring all necessary dependencies were installed. During the meeting, I discussed with Lucas how he might leverage our existing UI components—or use the SFML sprite system—to implement the game scenes and scoring system. I also took the opportunity to test Michelle’s audio processing algorithms within the game engine.

Overall, it was a productive week with substantial progress in both UI development and audio processing integration, setting us on a good path toward our project goals. For the next few weeks, I will be working on implementing the beat map editor for our game.

Yuhe’s Status Report for 3/8

During the week before spring break, I worked on both the documentation and development of our rhythm game. I wrote the System Implementation, Testing, and Summary sections of our Design Review Report, detailing the architecture, subsystems, and testing methodologies.

Additionally, I focused on implementing our custom game engine in C++ using SFML, replacing Unity due to its high computational demands and the unnecessary packages and features we would not use for our 2D game. I implemented a sprite rendering system and core UI elements, including the settings menu, keybinding input, and volume control, ensuring smooth interaction and persistent data storage. The next steps involve expanding gameplay mechanics, refining beat map processing, improving UI usability, and, most importantly, integrating our game engine with Python’s Librosa audio processing library.

Yuhe’s Status Report for 2/22

This week, I focused on implementing the Upload UI of our game in Unity, which allows users to upload audio files and generate beat maps. I built the file selector to support MP3, WAV, and OGG formats and designed a progress bar to visually indicate the beat map generation process. Additionally, I implemented a text input field for naming the song and a confirmation message to notify users upon successful upload.

One major challenge was handling file compatibility and format validation, ensuring the system properly recognized and processed different audio formats. Next week, I plan to improve error handling, refine the UI layout, and connect all the related game scenes for a better user experience.

Yuhe’s Status Report for 2/15

Progress This Week
This week, I worked on the User Interface Layer for our game, focusing on implementing the main menu and song selection UI in Unity. Using Unity’s UI Toolkit and Canvas system, I created:

  1. A scrolling song list UI, allowing users to browse available beat maps. This was implemented using Scroll View with a vertical layout and dynamically instantiated UI elements for song entries.
  2. Main menu buttons, including Upload Music, Beat Map Editor, Settings, and Quit, which were placed inside a UI Canvas with Button components and linked to appropriate scene transitions and logic hooks.

Challenges Faced
One major challenge was version control. Unity’s built-in Plastic SCM requires using Unity Cloud for team collaboration, which made syncing my work with Lucas difficult. Granting privileges and managing team members in Unity Cloud was unnecessarily complex, and we ran into access issues when trying to pull/push changes.

Next Steps
  • Improve UI styling and transitions for a smoother user experience.

  • Work with Lucas to establish a more reliable version control workflow, possibly moving to Git with LFS for large assets.

  • Implement beat map metadata display in the song list.

Yuhe’s Status Report for 2/8

This week, I focused on game architecture design and getting things set up for development. I spent a good chunk of time figuring out how to structure the project so everything runs smoothly and making sure Unity’s UI, game logic, and beat map system all fit together.

I went through some Unity 2D game tutorials, mostly on handling input, spawning objects (for the falling notes), and setting up animations. I also read up on how Unity handles JSON files since that’s how we’ll store beat maps. On the C# side, I checked out best practices for handling game logic and timing. I also played around with Unity’s file I/O to make sure we can load and save beat maps without issues.

I also set up a GitHub repo so we can track progress and keep things organized. Next up, I’ll start working on spawning notes based on the beat map data and getting a simple scoring system running. Overall, solid progress this week and I am excited to start developing the game!