Team Status Report for 4/26

Summary of Individual Contributions
Michelle focused on improving rhythm detection by analyzing vocals separately from instrumentals. She implemented a vocal separation technique using a similarity matrix and median spectrogram modeling, although testing revealed significant background noise bleeding and high latency (~2.5 minutes processing for a 5-minute song). After thorough testing across multiple instruments and vocal recordings, Michelle concluded that rhythm detection on vocals is generally less accurate than on instrumental tracks. Based on this finding, the team decided not to include vocal separation in the final system. Michelle is on track for the final demo and will conduct full system validation next week.
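
The separation approach described above (a self-similarity matrix with median aggregation to model the repeating instrumental spectrogram, followed by soft masking) closely resembles the standard librosa vocal-separation recipe. A minimal sketch along those lines is below; the margin, power, and window parameters are illustrative placeholders, not the values Michelle actually used.

```python
import numpy as np
import librosa

# Load the mixture and compute the magnitude spectrogram.
y, sr = librosa.load("song.wav")
S_full, phase = librosa.magphase(librosa.stft(y))

# Model the accompaniment by replacing each frame with the median of its
# most similar frames (cosine similarity), which suppresses sparse vocal energy.
S_filter = librosa.decompose.nn_filter(
    S_full,
    aggregate=np.median,
    metric="cosine",
    width=int(librosa.time_to_frames(2, sr=sr)),
)
S_filter = np.minimum(S_full, S_filter)

# Soft masks split the mixture into background (instrumental) and foreground (vocal).
margin_i, margin_v, power = 2, 10, 2  # illustrative values
mask_i = librosa.util.softmask(S_filter, margin_i * (S_full - S_filter), power=power)
mask_v = librosa.util.softmask(S_full - S_filter, margin_v * S_filter, power=power)

# Reconstruct the vocal estimate and run the usual onset analysis on it.
vocals = librosa.istft((mask_v * S_full) * phase)
onset_env = librosa.onset.onset_strength(y=vocals, sr=sr)
```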

Lucas concentrated on polishing and stabilizing the gameplay system. He performed debugging across various game components, added a cohesive color palette for better visual design, and fixed issues in JSON file loading to ensure the game correctly fetches user data. Lucas plans to spend the final week on thorough system-wide debugging, playtesting, and attempting to integrate a MIDI keyboard to enhance gameplay engagement. He is also preparing final project deliverables, including the poster, video, and report.

Yuhe worked on validating and stabilizing the Beat Map Editor. She conducted user testing with four testers (all with programming and music backgrounds) and received positive feedback on the editor’s usability and responsiveness. Yuhe manually created two new beatmaps (for Bionic Games – Short Version and Champion by Cosmonkey) and began creating a third classical piece beatmap to demonstrate genre versatility. She stress-tested the editor across dense rhythmic patterns and verified consistent note placement, waveform rendering, and file persistence on Windows using SFML.

Unit Tests Performed
1. Michelle’s Unit Tests:

  • Rhythm detection accuracy on simple, self-composed pieces (whole to 16th notes, 50–220 BPM) for piano, violin, guitar, voice.
  • Rhythm detection with dynamic variations (pianissimo to fortissimo).
  • Rhythm detection on complex, dynamic-tempo self-composed pieces (whole to 64th notes, accelerandos/ritardandos).
  • Rhythm detection accuracy (aurally evaluated) on real single-instrument pieces:
      • Bach Sonata No. 1, Bach Partita No. 1, Bach Sonata No. 2, Clair de Lune, Moonlight Sonata, Paganini Caprice 17/19.
  • Rhythm detection accuracy on real multi-instrument pieces:
      • Brahms Piano Quartet No. 1, Dvorak Piano Quintet, Prokofiev Violin Concerto No. 1, Guitar Duo, Retro Arcade Song, Lost in Dreams, Groovy Ambient Funk.
  • Rhythm detection with and without vocal separation on vocal pieces:
      • Piano Man, La Vie en Rose, Non Je ne Regrette Rien, Birds of a Feather, What Was I Made For, 小幸運 (A Little Happiness).
  • Latency tests for all rhythm detection cases.

2. Yuhe’s Unit Tests:

  • UI responsiveness testing (<50ms input-to-action latency).
  • Beatmap save/load time (≤5s for standard songs up to 7 minutes).
  • Waveform synchronization testing across different sample rates and audio lengths.
  • Stress testing dense note placement, waveform scrolling, and audio playback stability.
  • Manual verification of beatmap integrity after saving/loading.

Lucas’ Unit Tests:

  • JSON file loading and path correctness verification.
  • Game UI color palette integration and visual consistency.
  • Basic gameplay functional debugging (note spawning, input handling, scoring).

Overall System Tests

  • Full gameplay flow validation: Music upload ➔ Beatmap auto-generation ➔ Manual beatmap editing ➔ Gameplay execution.
  • Cross-platform stability testing (Ubuntu/Windows) for Beat Map Editor and core game engine.
  • Audio playback stress testing across multiple hardware setups.
  • End-to-end latency and responsiveness validation under normal and stress conditions.
  • Early user experience feedback collection on usability and intuitiveness.

Findings and Design Changes
Vocal Separation Analysis:
Testing revealed that rhythm detection accuracy on vocals is significantly worse than on instruments. Vocal separation also introduced unacceptable latency (~2.5 minutes vs. 4 seconds without).
➔ Design Change: We decided not to incorporate vocal separation into the audio processing pipeline.

Beat Map Editor Validation:
Manual beatmap creation and user testing confirmed that the editor meets design metrics for latency, waveform rendering accuracy, and usability.
➔ Design Affirmation: Current editor architecture is robust; minor UX improvements (e.g., keyboard shortcuts) may be added post-demo.

Game Stability and File Handling:
Debugging of JSON fetching and general gameplay components improved reliability and reduced potential user-side errors.
➔ Design Improvement: Standardized file paths and error handling for smoother gameplay setup.

Yuhe’s Status Report for 4/26

This week, I focused on user testing and manual validation of the Beat Map Editor system. I conducted testing sessions with four friends, all of whom have programming and music backgrounds. Each tester independently used the editor and provided feedback based on their experience. Overall, the feedback was very positive—users found the UI intuitive, easy to navigate, and effective even without prior walkthroughs. They noted that note placement, waveform scrolling, and real-time playback behaved consistently and responsively. Some minor improvement suggestions, such as adding keyboard shortcuts or additional visual indicators, were collected for potential future refinements.

In parallel with user testing, I used the editor extensively myself to manually create and validate two new beatmaps. The first song I mapped was Bionic Games – Short Version, and the second was Champion by Cosmonkey. Both tracks are short electronic pieces with relatively complex rhythmic patterns, making them ideal candidates to stress-test the editor’s note placement, timing precision, and multi-lane support. During manual creation, I verified that waveform visualization remained accurate even in high-density sections and that all notes saved and reloaded correctly without timestamp drift or data corruption.

To prepare for the final demo, I also started working on a third beatmap for a classical music piece. This addition is intended to diversify the demo’s musical range and demonstrate the editor’s ability to handle different genres with distinct rhythmic structures. Manually beatmapping a classical piece will help further validate the editor’s precision and flexibility on audio with a less regular beat.

Beyond content creation, I continued running informal internal tests to validate persistent data integrity, ensuring that beatmaps save and reload perfectly between sessions. I also monitored playback stability, waveform scrolling behavior, and editing UX consistency while stress-testing dense beat sequences. The editor remains stable and performant on Windows using SFML, and no further platform migration issues were encountered. All tests and newly created beatmaps confirmed that the editor continues to meet project design metrics for responsiveness, usability, and performance.

Team Status Report for 4/19

This week, our team worked on both integration and putting the finishing touches on our individual parts of the game. We also worked on our final presentation, which Michelle will be presenting next week.

As for individual progress, Lucas worked primarily on integration between the game loop and the main menus, while Yuhe worked on the beat map editor and integrated it with the rest of the game. Yuhe also migrated from Ubuntu to Windows as a result of some technical difficulties. Michelle integrated her beat detection algorithm with the rest of the game, allowing it to be used on user-uploaded files.

Going forward, the main challenge will be adding the finishing touches and making the game look professional and engaging to the user, which will have to come from playtesting and some UI-related brainstorming. We also haven’t fully integrated everything, so we’ll need to finish that up, although most components are working together well at this point. We may also need to resolve an audio issue where the file doesn’t play correctly, although the problem has been infrequent and it’s been hard to locate the root of the issue.

Michelle’s Status Report for 4/19

This week, I integrated the rhythm analysis algorithm with Yuhe and Lucas’s game code. I also experimented with another method of determining the global threshold for onset strengths above which a timestamp should be counted as a note. This one used the median and median absolute deviation instead of the mean and standard deviation. In theory it should be less sensitive to outliers, since it relies on medians rather than squared deviations from the mean. In practice this method performs similarly to the current method but produces slightly more false positives at slower tempos. I also further tested the sliding window method, which had even more false positives at slow tempos. I believe this might be ameliorated by having the window slide more continuously rather than jumping from the first 100 frames to the second 100 frames, for example. The issue is that this would increase the audio processing latency, which we want to avoid. I think the method using the standard deviation (in blue below) is still the best method overall.
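
For reference, the two global-threshold variants compared above can be sketched roughly as follows. The multiplier k and the simple local-maximum peak picking are illustrative placeholders; the report does not state the actual values or peak-picking rule used.

```python
import numpy as np
import librosa

y, sr = librosa.load("song.wav")
onset_env = librosa.onset.onset_strength(y=y, sr=sr)

k = 1.5  # sensitivity multiplier; placeholder value

# Current method: mean plus a multiple of the standard deviation.
thresh_std = onset_env.mean() + k * onset_env.std()

# Alternative method: median plus a multiple of the median absolute deviation,
# which is less influenced by a few very strong outlier frames.
med = np.median(onset_env)
mad = np.median(np.abs(onset_env - med))
thresh_mad = med + k * mad

# Keep local maxima of the onset envelope that exceed the chosen threshold.
peaks = [
    i for i in range(1, len(onset_env) - 1)
    if onset_env[i] > thresh_mad
    and onset_env[i] >= onset_env[i - 1]
    and onset_env[i] >= onset_env[i + 1]
]
note_times = librosa.frames_to_time(peaks, sr=sr)
```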

Next week, I plan to help with integration testing and also conduct some user testing to validate that the project meets the user requirements. I also plan to help improve the visual design of the game as needed.

I learned several new tools while working on this project. At the beginning, I learned the basics of Unity before we pivoted to Yuhe’s self-made lightweight game engine. I found it easiest to learn Unity as a beginner by watching and following along with a YouTube tutorial. By consulting documentation, I got more familiar with the numpy library, especially using it for complex plots. I learned how to use threading in Python, looking at examples on Stack Overflow, in order to create a verification program that could play a song and animate its detected notes at the same time. I also learned how to use MuseScore to compose pieces to use as tests for the rhythm analysis. For this I was able to mostly teach myself, occasionally Googling any features I couldn’t find on my own.
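
As an illustration of that threading pattern, here is a minimal sketch of playing audio on a background thread while the main thread animates a cursor over the detected notes. The playback and plotting libraries (sounddevice and matplotlib) and the detected-notes file are assumptions for the example, not necessarily what was actually used.

```python
import threading
import time

import numpy as np
import sounddevice as sd
import librosa
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

y, sr = librosa.load("song.wav")
note_times = np.loadtxt("detected_notes.txt")  # hypothetical file of detected timestamps

# Play the audio on a background thread so the main thread stays free for drawing.
threading.Thread(target=lambda: (sd.play(y, sr), sd.wait()), daemon=True).start()
start = time.time()

# Draw the detected notes and sweep a cursor across them in real time.
fig, ax = plt.subplots()
ax.vlines(note_times, 0, 1, color="gray")
cursor = ax.axvline(0, color="red")
ax.set_xlim(0, len(y) / sr)

def update(_frame):
    cursor.set_xdata([time.time() - start])
    return (cursor,)

ani = FuncAnimation(fig, update, interval=30, blit=True, cache_frame_data=False)
plt.show()
```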

Team Status Report for 4/12

This week, our team made progress on finalizing and debugging our subsystems as well as starting integration. Lucas added audio playback to the game loop and worked on integrating his components with Yuhe’s main menu. Yuhe worked on the beat map editor, adding a waveform viewer and interactions for editing notes. Yuhe is also working on migrating the game to a Windows system in order to solve audio card reading issues when using the Linux virtual environment. Michelle continued testing and refining her rhythm analysis algorithm, moving to a new method that yields higher accuracy, as shown below in a test on a piano piece.

After integrating the subsystems, we will run integration tests to ensure all the components are communicating with each other correctly. There are several metrics we will need to focus on, including beat map accuracy, audio and falling-tile synchronization, gameplay input latency, persistent storage validation, frame rate stability, and error handling. Both beat map alignment and input latency should be under 20ms to ensure a seamless game experience. The rhythm analysis should capture at least 95% of the notes and have no false positives. Error handling should cover issues such as unexpected file formats, file sizes that are too large, and invalid file name inputs.

For validation of the use case requirements, we will do some iterative user testing and collect qualitative feedback about the ease of use, difficulty of the game, accuracy of the rhythm synchronization, and overall experience. During user testing, users will upload their own choice of songs, play the game with the automatically generated beat map, and try out the beat map editor. We will want to validate that the whole flow is intuitive and user-friendly.

Team Status Report for 3/29

This week, we made progress toward integrating the audio processing system with the game engine. We implemented a custom JSON parser in C++ to load beatmap data generated by our signal processing pipeline, enabling songs to be played as fully interactive game levels. We also added a result splash screen at the end of gameplay, which currently displays basic performance metrics and will later include more detailed graphs and stats.

In the editor, we refined key systems for waveform visualization and synchronization. We improved timeToPixel() and pixelToTime() mappings to ensure the playhead and note grid align accurately with audio playback. We also advanced snapping logic to quantize note timestamps based on BPM and introduced an alternative input method for placing notes at specific times and lanes, in addition to drag-and-drop.
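
The math underlying these mappings is simple. Below is a sketch in Python of the mapping and snapping logic (the editor itself is written in C++ with SFML, and the snake_case names mirroring timeToPixel()/pixelToTime(), the parameter names, and the default subdivision are illustrative assumptions):

```python
def time_to_pixel(t, view_start, pixels_per_second):
    """Map a timestamp (seconds) to an x position relative to the visible window."""
    return (t - view_start) * pixels_per_second

def pixel_to_time(x, view_start, pixels_per_second):
    """Inverse mapping: an x position back to a timestamp."""
    return view_start + x / pixels_per_second

def snap_to_grid(t, bpm, subdivisions_per_beat=4):
    """Quantize a timestamp to the nearest beat subdivision (16th notes by default)."""
    step = 60.0 / bpm / subdivisions_per_beat
    return round(t / step) * step

# Example: a click at x = 420 px with the view starting at 12.0 s and a 200 px/s zoom
t = pixel_to_time(420, view_start=12.0, pixels_per_second=200)  # 14.1 s
t_snapped = snap_to_grid(t, bpm=120)                            # 14.125 s
```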

On the signal processing side, we expanded rhythm extraction testing to voice and bowed instruments and addressed tempo estimation inconsistencies by using a fixed minimum note length of 0.1s. We also updated the JSON output to include lane mapping information for smoother integration.
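
For context, the generation side writes something like the following. The field names and the round-robin lane assignment shown here are hypothetical placeholders; the project's actual JSON schema and lane-mapping rule are not given in this report.

```python
import json

def write_beatmap(path, song_name, bpm, onset_times, num_lanes=4):
    """Serialize detected note timestamps, plus lane assignments, to a beatmap JSON file."""
    beatmap = {
        "song": song_name,
        "bpm": bpm,
        "notes": [
            # Hypothetical round-robin lane mapping; the real rule is project-specific.
            {"time": round(t, 3), "lane": i % num_lanes}
            for i, t in enumerate(onset_times)
        ],
    }
    with open(path, "w") as f:
        json.dump(beatmap, f, indent=2)

write_beatmap("example_beatmap.json", "example.wav", 120, [0.5, 1.0, 1.5, 2.0])
```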

Next week, we plan to connect the main menu to gameplay, polish the UI, and fully integrate all system components.

Team Status Report for 3/22

The team has been progressing individually on their respective parts of the game over the past week. Michelle has been continuing work on the audio processing side, deciding to focus on perfecting the processing of monophonic piano pieces, while Lucas and Yuhe are continuing work on the game itself. Lucas has continued work on the core game loop, adding back many of the key features while implementing a JSON parser to turn processed audio into a game, and Yuhe has begun working on the in-game beat map editor while also adding some neat visual tools to the game.

We haven’t had any major changes to our game’s design, other than the fact that the audio processing side will focus more on monophonic pieces than on longer, denser ones. This will probably be our biggest challenge in the coming weeks; as long as we are able to integrate the game itself with the beat map editor and menus, we should be able to devote more time and resources to figuring out more complex audio processing.

Yuhe’s Status Report for 3/22

This week I worked on designing and implementing key features of the Beatmap Editor for our game. In the first half of the week, I worked on the layout system, structuring the editor interface to accommodate a waveform display area, time axis, note grid overlay, and playback controls. I integrated JSON-based waveform data preprocessed externally to visualize amplitude over time using SFML’s VertexArray class. In the latter half, I worked on the note grid system that allows users to place notes along multiple lanes. Each lane corresponds to a specific input key, and note placement is quantized based on timestamp intervals to ensure rhythm accuracy. A snapping mechanism aligns notes to the closest beat subdivision, improving usability.

I think the major challenge is synchronizing the visual playhead with the music playback and managing coordinate transforms between waveform time and screen space. My goals for next week include implementing audio playback controls and note data serialization.

Team Status Report for 3/15

This week, we each made a lot of progress on our subsystems and started the integration process. Yuhe finished building a lightweight game engine that will much better suit our purposes than Unity, and implemented advanced UI components, a C++ to Python bridge, and a test in C++ for rhythm detection verification using SFML. Lucas worked on rewriting the gameplay code he wrote for Unity to work with the new engine, and was able to get a barebones version of the game working. Michelle worked on rhythm detection for monophonic time-varying tempo songs, which is quite accurate, and started testing fixed tempo multi-instrumental songs, which needs more work.

(Figures: Beat Detection for Time-Varying Tempo; Core Game Loop in New Game Engine)

There have been no major design changes in the past week. The most significant risk at this time to the success of our project is probably the unpredictability of the audio that the user will upload. Our design will mitigate this risk by only allowing certain file types and sizes and surfacing a user error if no tempo can be detected (i.e. the user uploaded an audio file that is not a song).
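
A rough sketch of what that validation could look like on the processing side is below; the allowed extensions, size limit, and error messages are placeholders, not the project's actual limits.

```python
import os
import librosa

ALLOWED_EXTENSIONS = {".wav", ".mp3", ".ogg"}   # placeholder whitelist
MAX_SIZE_MB = 50                                 # placeholder size limit

def validate_upload(path):
    """Reject unsupported or oversized files, and files in which no tempo can be found."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Unsupported file type: {ext}")
    if os.path.getsize(path) > MAX_SIZE_MB * 1024 * 1024:
        raise ValueError(f"File larger than {MAX_SIZE_MB} MB")

    y, sr = librosa.load(path)
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
    if tempo == 0 or len(beats) == 0:
        raise ValueError("No tempo could be detected; the file may not be a song")
    return y, sr, tempo
```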

Next steps include finishing the transition to the new game engine, refining the rhythm detection of multi-instrumental songs, and implementing an in-game beatmap editor. With integration off to a good start, the team is making solid progress towards the MVP.

Lucas’ Status Report for 3/15

Since we’ve transitioned to a new engine, I’ve spent most of the week rewriting most of my code to fit with the engine. I first spent time looking through documentation for our new libraries (SFML and spdlog), and then began experimenting with the two. I then tried to get a very barebones version of my game working, which I was able to do. This transition will be pretty time consuming, since many of the minutiae that were once handled by Unity will now have to be controlled by my code, although algorithmically everything should match up pretty closely.

I also spent time this week on the ethics assignment.

Next week, I want to try to add back the rest of the features I had working, including scoring, timing feedback, and JSON files controlling the notes instead of them being generated randomly.