Over the past two weeks, I have made significant progress in both the development and stabilization of the Beat Map Editor, one of the core components of our rhythm game. My primary efforts were focused on implementing interactive editor features, resolving persistent audio playback issues encountered in the Ubuntu virtual environment, and initiating the migration of the project to a native Windows environment to ensure audio stability. This report details the technical progress, debugging process, cross-platform adaptations, and testing strategies aimed at meeting the engineering design specifications and use-case requirements of our project.
Beat Map Editor Development
The Beat Map Editor serves as the interactive interface for creating and modifying rhythm game charts. I designed and implemented this module with modularity and real-time user interaction in mind. The editor is built around a central `BeatMapEditor` class that integrates a collection of UI components such as buttons (`UIButton`), text input boxes (`UITextBox`), and a waveform visualization module (`WaveformViewer`).
The `WaveformViewer` class is responsible for rendering a real-time visual representation of the audio waveform using `sf::Vertex` primitives. It allows users to scroll and zoom through audio data within a fixed time window (10 seconds by default), providing an intuitive interface for note placement. I implemented the waveform extraction by sampling peaks from the left audio channel stored in an `sf::SoundBuffer`.
To facilitate note creation, I implemented two synchronized input fields: a timestamp box and a lane index box. When both fields are populated, the editor stores the note data in a `std::unordered_set` of timestamps and serializes it into a beatmap vector. This mechanism ensures constant-time lookups to prevent duplicate entries and simplifies beatmap export functionality.
I also integrated a minimal but effective UI manager class (`UIManager`) to streamline event routing and rendering. This modular structure lays the foundation for extensibility and paves the way for future features such as drag-and-drop note placement and real-time preview playback.
Audio Subsystem Debugging (Ubuntu)
During implementation and testing, I encountered recurring failures in the audio subsystem when running the editor in an Ubuntu 22.04 virtual environment. These issues included the inability to initialize or play audio using SFML's `sf::Sound`, sporadic freezing during playback, and cryptic runtime errors from the underlying OpenAL backend, such as:

```
AL lib: (EE) alc_open_device: No such audio device
AL lib: (EE) ALCplaybackAlsa_open: Could not open playback device
```
These errors appeared to stem from the mismatch between the VM’s virtualized audio hardware and OpenAL Soft’s hardware requirements. Even with correct installation and runtime dependencies, the virtual sound card often failed to interface properly with SFML’s audio engine, especially after seek operations or audio buffer reloads.
To isolate the issue, I simplified the CMake configuration to exclude SFML's audio components initially, later reintroducing them with explicit linkage.
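The staged configuration looked roughly like this (target and component names are representative sketches, not the exact project file):

```cmake
# First pass: audio excluded to confirm the rest of the editor builds.
# Second pass: audio reintroduced with explicit linkage.
find_package(SFML 2.5 COMPONENTS graphics window system audio REQUIRED)

target_link_libraries(beatmap_editor PRIVATE
    sfml-graphics
    sfml-window
    sfml-system
    sfml-audio)   # commented out during the isolation pass
```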
Despite these adjustments, playback remained inconsistent. I conducted multiple tests with `.wav`, `.ogg`, and `.mp3` files and monitored buffer behavior. Logging showed that buffer assignment and sound object lifetimes were valid, but device-level playback would intermittently fail, especially during real-time seeking.
Windows Migration for Audio Stability
Given the limitations of virtualized audio under Ubuntu, I initiated a platform migration to a native Windows environment. On Windows, SFML's audio stack reaches the physical sound card through OpenAL Soft's WASAPI backend rather than ALSA, sidestepping the virtualized device problems encountered in the VM.
To support this transition, I rebuilt the project’s CMake toolchain using Clang++ and reconfigured dependencies. The following Windows-based setup was finalized:
- SFML 2.5 (precompiled binaries)
- spdlog (via vcpkg)
- tinyfiledialogs (compiled as a static library)
- Python3 and pybind11 (for future audio analysis integrations)
On Windows, all audio files were loaded and played back reliably. The waveform viewer remained in sync, and no audio crashes or freezes were observed during extended test runs. This confirmed that migrating to Windows would provide the stable execution environment needed for reliable audio interactions during gameplay and editing.
Testing Strategy and Metrics (Verification of My Subsystem)
To ensure the editor meets both engineering design specifications and functional use-case goals, I have implemented and planned a suite of tests that span user interaction, timing accuracy, scoring behavior, and system stability.
Current Testing Status
I manually verified UI responsiveness by logging event triggers and action completions, confirming latency under 50 ms for input actions such as play/pause toggles, seek operations, and waveform interactions. The beat map editor's file saving and loading functionality was profiled with a stopwatch and consistently completed in under 5 seconds for standard-length songs (up to 7 minutes).
Waveform rendering and scrolling behavior were tested with songs across different sample rates and durations. The system maintained accurate synchronization between the progress bar and the waveform display, which also verified correct use of SFML's sample indexing and buffer durations.
While most manual editor features passed their respective design thresholds, a subset of functionality (such as input latency and beat map accuracy) will require automated benchmarking to validate against exact numerical thresholds.
Planned Testing and Metrics (Verification of the Entire Game)
In the weeks remaining before the final demo, we will expand testing as follows:
- **Beat Map Accuracy**: Using annotated reference beat maps and a Python evaluation script, I will calculate the average temporal deviation between auto-generated and human-placed notes. A root mean squared error below 20 ms will be required to meet alignment standards for BPM ranges between 50 and 220.
- **Gameplay Input Latency**: I will record key press events and game state transitions using an SDL2-based input logger or high-frame-rate video capture. These measurements will quantify the time from physical input to in-game response, with a target latency below 20 ms.
- **Persistent Storage Validation**: Save and reload operations for beat maps will be tested using a hash-diff script that validates structural integrity and data completeness. Changes to the note set must persist across game sessions with no corruption.
- **Frame Rate Stability**: Using SFML's internal `sf::Clock`, I will measure delta time across multiple frames during stress-test scenarios (waveform scrolling, playback, and note placement running simultaneously). The editor must maintain at least 30 frames per second without frame drops.
- **Error Handling**: Simulated tests with malformed audio files and corrupted beat map data will validate the error-handling routines. The editor must exit gracefully or display user-friendly error dialogs instead of crashing.