Lucas’ Status Report for 3/22

This week, I continued to make the game into an actual game utilizing the new engine. Like I (maybe) said last week, the engine doesn’t cover much of what Unity did, which means there’s a lot more tedious work related to drawing and keeping track of data structures containing info about game elements. I was able to add back things like timing, scoring, tracking multiple note blocks at the same time, and the beginnings of a JSON parser that will generate the actual beat map.
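As a sketch of what that beat map parser might consume (the file layout and field names here are hypothetical, not our final format — and the real parser will live in C++; Python is used here just for brevity):

```python
import json

def parse_beat_map(text):
    """Parse a beat map JSON string into a sorted list of (time, lane) tuples.

    Assumed (hypothetical) layout:
    {"song": "...", "notes": [{"time": 1.25, "lane": 2}, ...]}
    """
    data = json.loads(text)
    notes = [(float(n["time"]), int(n["lane"])) for n in data["notes"]]
    # Sort by timestamp so the game loop can consume notes in order.
    notes.sort(key=lambda n: n[0])
    return notes

example = '{"song": "demo", "notes": [{"time": 2.0, "lane": 1}, {"time": 0.5, "lane": 3}]}'
print(parse_beat_map(example))  # [(0.5, 3), (2.0, 1)]
```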

The game looks and feels a lot more like an actual game now, barebones as it is. Next week, I’ll finish up the JSON parser to fully integrate the signal processing aspect into the game and add more visual feedback to the game/UI elements that should make it more engaging for the player. I’ll also add a gateway between the main menus and the game itself, allowing for more seamless transitioning between parts of the game.

Michelle’s Status Report for 3/22

This week I continued testing my algorithm on monophonic instrumental and vocal songs with fixed or varying tempo. I ran into an upper limit with SFML on how many sounds it can keep track of at a time. For longer audio files, when running the test, both the background music and the generated clicks on note onsets play perfectly for about thirty seconds before the sound starts to glitch, then goes silent and produces an error.

It seems that there is an upper bound on the number of SFML sounds that can be active at a time, and after running valgrind it looks like there are some memory leak issues too. I am still debugging this, trying to clear click sounds as soon as they finish playing and implementing suggestions from forums. However, this is only a problem in testing, where I am playing hundreds of metronome clicks in succession; it will not be a problem in the actual game, since we will only play the song and maybe a few sound effects. If the issue persists, it might be worthwhile to switch to a visual test, which would be closer to the gameplay experience anyway.

Next week I plan to try to get the test working again, try out a visual test method, and work with my team members on integration of all parts. Additionally, after having a discussion with my team members, we think it may be best to leave more advanced analysis of multi-instrumental songs as a stretch goal and focus on the accuracy of monophonic songs for now.

Team Status Report for 3/15

This week, we each made a lot of progress on our subsystems and started the integration process. Yuhe finished building a lightweight game engine that will much better suit our purposes than Unity, and implemented advanced UI components, a C++ to Python bridge, and a test in C++ for rhythm detection verification using SFML. Lucas worked on rewriting the gameplay code he wrote for Unity to work with the new engine, and was able to get a barebones version of the game working. Michelle worked on rhythm detection for monophonic time-varying tempo songs, which is quite accurate, and started testing fixed tempo multi-instrumental songs, which needs more work.

Beat Detection for Time-Varying Tempo
Core Game Loop in New Game Engine

There have been no major design changes in the past week. The most significant risk at this time to the success of our project is probably the unpredictability of the audio that the user will upload. Our design will mitigate this risk by only allowing certain file types and sizes and surfacing a user error if no tempo can be detected (i.e. the user uploaded an audio file that is not a song).
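The upload check described above could look something like this minimal sketch (the allowed extensions and the size cap are assumptions for illustration, not decided values):

```python
import os

ALLOWED_EXTENSIONS = {".mp3", ".wav", ".ogg", ".flac"}  # assumed list
MAX_SIZE_BYTES = 50 * 1024 * 1024                       # assumed 50 MB cap

def validate_upload(path, size_bytes):
    """Return a user-facing error message for a bad upload, or None if OK."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return f"Unsupported file type: {ext or '(none)'}"
    if size_bytes > MAX_SIZE_BYTES:
        return "File is too large"
    return None

print(validate_upload("song.mp3", 1_000_000))  # None
print(validate_upload("notes.txt", 1_000))     # Unsupported file type: .txt
```

A second check would run after analysis: if no tempo can be extracted from the file, the game surfaces a "no tempo detected" error rather than generating an empty beat map.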

Next steps include finishing the transition to the new game engine, refining the rhythm detection of multi-instrumental songs, and implementing an in-game beatmap editor. With integration off to a good start, the team is making solid progress towards the MVP.

Yuhe’s Status Report for 3/15

This week, I finished implementing the game engine and UI components for Rhythm Genesis. I improved our existing UI elements—including buttons, textboxes, sliders, labels, and the UI manager—by refining their appearance and interactivity. In addition, I implemented advanced UI components such as a scrollable list and an upload UI to handle audio file uploads.

I also developed a custom C++ to Python bridge, which allows us to call Python module functions directly from our C++ game engine. This integration enabled me to implement a basic parse_music_file function in Python using the Librosa library. Additionally, I created a function that plays both the original song and a beep sound generated based on the extracted beat timestamps, which is useful for testing our beat detection.

On the UI side, I completed the menu scene and upload scene, and I set up placeholders for the settings and beat map editor UI. These components provide a solid foundation for our user interface and will be further integrated as the project progresses.

On Friday, I conducted our weekly meeting with the team as usual. I demonstrated the current state of the game, showing the UI scenes and explaining my implementation of the game engine. I also helped team members set up their development environments by ensuring all necessary dependencies were installed. During the meeting, I discussed with Lucas how he might leverage our existing UI components—or use the SFML sprite system—to implement the game scenes and scoring system. I also took the opportunity to test Michelle’s audio processing algorithms within the game engine.

Overall, it was a productive week with substantial progress in both UI development and audio processing integration, setting us on a good path toward our project goals. For the next few weeks I will be working on implementing the beatmap editor for our game.

Michelle’s Status Report for 3/15

I started out this week by exploring how to leverage Librosa to analyze the beat of songs that have time-varying tempo. Using a standard deviation of 4 BPM, I processed an iPhone recording of high school students at a chamber music camp performing Dvorak Piano Quintet No. 2, Movement III.

When running a test that simultaneously plays the piece and a click on each estimated beat, the beats sound mostly accurate but not perfect. I then moved on to adding note onset detection in order to determine the rhythm of the piece. My current algorithm selects timestamps where the onset strength is above the 85th percentile. It then removes any timestamps that are within a 32nd note of each other, calculated from the overall tempo. This works very well for monophonic songs that have some variation in tempo. For multi-instrumental tracks, it tends to detect the rhythm of the drums if present, since these have the clearest onsets, along with some of the rhythm of the other instruments or voices.
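The selection step can be sketched in plain Python (Librosa computes the real onset envelope; the nearest-rank percentile and gap logic below is a simplified stand-in, and a 32nd note is taken as one eighth of a quarter-note beat):

```python
def select_onsets(times, strengths, bpm, percentile=85):
    """Keep onset timestamps whose strength clears a percentile cutoff,
    then drop any within a 32nd note of the previously kept one."""
    # Nearest-rank percentile without numpy.
    ranked = sorted(strengths)
    cutoff = ranked[min(len(ranked) - 1, int(len(ranked) * percentile / 100))]
    min_gap = 60.0 / bpm / 8.0  # seconds in a 32nd note at this tempo
    kept = []
    for t, s in sorted(zip(times, strengths)):
        if s >= cutoff and (not kept or t - kept[-1] >= min_gap):
            kept.append(t)
    return kept
```

The planned refinement is to make `min_gap` track the local tempo estimate rather than a single overall BPM.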

I also worked on setting up my development environment for the new game engine Yuhe built. Next week I plan to continue integrating the audio analysis with the game. I also plan to adjust the rhythm algorithm to dynamically calculate the 32nd note threshold based on the dynamic tempo, as well as experiment with different values for the standard deviation when calculating the time-varying tempo. I also would like to look into possible ways that we can improve rhythm detection in multi-instrumental songs.

Lucas’ Status Report for 3/15

Since we’ve transitioned to a new engine, I’ve spent most of the week rewriting my code to fit it. I first looked through documentation for our new libraries (SFML and spdlog), then experimented with both, and finally got a very barebones version of the game working. This transition will be pretty time-consuming, since many of the minutiae that were once handled by Unity will now have to be controlled by my code, although algorithmically everything should match up pretty closely.

I also spent time this week on the ethics assignment.

Next week, I want to add back the rest of the features I had working, including scoring, timing feedback, and JSON files controlling the notes instead of them being random.

Michelle’s Status Report for 3/8

This week, I worked on creating an algorithm for determining the number of notes to be generated based on the onset strength of the beat. Onset strength at time t is computed as max(0, S[f, t] – ref[f, t – lag]) averaged across frequency bins f, where S is the log-power Mel spectrogram and ref is S after local max filtering along the frequency axis.
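As a toy illustration of that computation (plain Python, with tiny nested lists standing in for the spectrogram frames; Librosa does the real work):

```python
def onset_strength(S, ref, lag=1):
    """Toy onset strength: for each frame t, average over frequency bins f of
    max(0, S[f][t] - ref[f][t - lag]); frames before `lag` are left at 0.

    S and ref are [frequency][time] grids — in the real pipeline, the
    log-power Mel spectrogram and its locally max-filtered reference.
    """
    n_freq, n_time = len(S), len(S[0])
    env = [0.0] * n_time
    for t in range(lag, n_time):
        diffs = [max(0.0, S[f][t] - ref[f][t - lag]) for f in range(n_freq)]
        env[t] = sum(diffs) / n_freq
    return env

# Two frequency bins, three frames: a strong attack at frame 1.
print(onset_strength([[0, 2, 1], [0, 4, 1]], [[0, 0, 0], [0, 0, 0]]))
```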

Since a higher onset strength implies a more intense beat, it can be better represented in the game by a chord. Likewise, a weaker onset strength generates a rest or a single note. Generally we want more single notes than anything else, with three-note chords being rarer than two-note chords. The percentile cutoffs can be easily adjusted later during user testing to find the best balance.
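A simplified stand-in for that mapping might look like this (the percentile cutoffs below are hypothetical tuning values, chosen only to keep single notes common and three-note chords rare):

```python
def chord_sizes(strengths, single_cut=0.40, double_cut=0.85, triple_cut=0.97):
    """Map each onset strength to a note count (0 = rest, 1 = single note,
    2/3 = chord) by ranking it within the whole song's strengths."""
    ranked = sorted(strengths)
    n = len(ranked)

    def percentile_rank(s):
        # Fraction of all strengths strictly below s.
        return sum(1 for x in ranked if x < s) / n

    sizes = []
    for s in strengths:
        p = percentile_rank(s)
        if p < single_cut:
            sizes.append(0)      # weak onset -> rest
        elif p < double_cut:
            sizes.append(1)      # single note
        elif p < triple_cut:
            sizes.append(2)      # two-note chord
        else:
            sizes.append(3)      # three-note chord
    return sizes
```

Tuning then reduces to moving the three cutoffs during user testing.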

My progress is on schedule. Next week, I plan to refactor my explorations with Librosa into modular functions to be easily integrated with the game. I will also be transitioning from working on audio analysis to working on the UI of the game.

Team Report for 3/8

During the week leading up to spring break, our team worked on both the documentation and development of Rhythm Genesis. We completed the design review report collaboratively. On the development side, we transitioned from Unity to our own custom C++ engine with SFML, optimizing performance for low-power systems. Yuhe implemented sprite rendering, UI elements, and settings management, ensuring smooth interaction and persistent data storage. Lucas worked on refining the core game loop, adding a scoring system, and transitioning from randomly generated notes to a JSON-based beat map system. Michelle worked on creating an algorithm for determining the number of notes to be generated based on the onset strength of the beat.

Next steps include expanding gameplay mechanics, integrating Python’s librosa for beat map generation, and improving UI design. The team will also focus on fully integrating Lucas’ contributions into the new engine to ensure a seamless transition.

Part A: Global Factors (Michelle)

Since our rhythm game does not analyze the lyrics of any songs and just analyzes beats, onset strengths, and pitches, any song in any language will work with our game. The only translations needed to globalize our game will be that of the text in the menu. Additionally, we plan to publish the game for free on Steam, which is available in 237 countries, so our game will be widely available internationally.

Part B: Cultural Factors (Lucas)

While our game is, at the end of the day, just a game, it will address a couple cultural factors – namely tradition (specifically music) and language. Our game will allow users to play a game that caters to their specific heritage as it allows them to use the music that they like. Instead of typical rhythm games that cover a small subset of music and therefore likely only represent a limited number of cultural backgrounds, our game will allow users to upload whatever music they’d like, which means that users from any background can enjoy the music that they feel represents them while playing the game.

Part C: Environmental Factors (Yuhe)
Good news—our game doesn’t spew carbon emissions or chop down trees! Rhythm Genesis is just code, happily living on a user’s device, consuming only as much electricity as their computer allows. Unlike some bloated game engines that demand high-end GPUs and turn one’s laptop into a space heater, our lightweight C++ engine keeps things simple and efficient. Plus, since a player can play with their keyboard instead of buying plastic peripherals, we’re technically saving the planet… So, while our game can’t plant trees, it is at least not making things worse. Play guilt-free—unless you miss all the notes, then that’s on you 🙂

Yuhe’s Status Report for 3/8

During the week before the spring break, I worked on both the documentation and development of our rhythm game. I did the System Implementation, Testing, and Summary sections of our Design Review Report, detailing the architecture, subsystems, and testing methodologies.

Additionally, I focused on implementing our custom game engine in C++ using SFML, replacing Unity due to its high computational demands and unnecessary packages and features that we will not use for our 2D game. I implemented a sprite rendering system and core UI elements, including the settings menu, keybinding input, and volume control, ensuring smooth interaction and persistent data storage. The next steps involve expanding gameplay mechanics, refining beat map processing, improving UI usability, and, most importantly, integrating our game engine with Python’s Librosa audio processing library.

Lucas’ Status Report for 3/8

This week, I continued to work on the core game loop in Unity. I added a scoring system and began implementing a JSON-controlled beat map instead of dropping randomly generated notes. I also spent considerable time debugging a few things that (mostly) turned out to be misunderstandings of Unity. Finally, I wrote up several sections of the design review.

Next week, I think I’ll have to spend most of my time integrating; switching my sections of the game to use the game engine Yuhe developed, and writing it up in C++. I’d also like to do some work on the game’s visuals, as right now it doesn’t look great and could use some UI updates.