Ivy’s Status Report for 3/6

This week, I worked on the design review presentation with the rest of the team. I created this tentative design of the UI for our web app’s main page, where users will be recording and editing their music together.

At the beginning of the week, Jackson and I tested out SoundJack and found we could communicate with one another with a latency of 60 ms through it. This was much better than either of us was expecting, so using this method (adjusting the packet size to increase speed and the number of packets sent/received to increase audio quality) as a basis for our user-to-user connection seems to be a good idea. But instead of manual adjustments, which can become really complicated with more than two people, I will be creating an automatic function that takes every user’s connectivity into account and sets the buffer parameters based on that.
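As a first pass at that automatic function, here is a minimal sketch of the idea in Python, assuming we can measure each user’s round-trip time; the thresholds and names are placeholders I made up for illustration, not final values.

    SAMPLE_RATE = 48000  # Hz

    def choose_buffer_params(rtts_ms):
        """Pick one packet size (in samples) for the session based on the
        worst measured round-trip time among all connected users."""
        worst_rtt = max(rtts_ms)
        if worst_rtt < 30:
            packet_samples = 128   # fast links: small packets, low added latency
        elif worst_rtt < 60:
            packet_samples = 256
        else:
            packet_samples = 512   # slow links: trade latency for stability
        packet_ms = packet_samples / SAMPLE_RATE * 1000
        return packet_samples, packet_ms

    # Example: three users with measured round-trip times of 25, 40, and 55 ms
    print(choose_buffer_params([25, 40, 55]))  # -> (256, ~5.3 ms per packet)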

We have settled a major concern of our project: we will reduce real-time latency so that users can hear each other while playing, and then synchronize their recordings afterwards. We have updated our Gantt chart to reflect this.

My first task will be to create the click track generator. To begin, I created an HTML/CSS form which will send the beats per measure, beat value, and tempo variables to the server when the user sets them and clicks the ‘test play’ button. A function will then generate a looped click sound from this information and play it back to the user. As for that generation step, I’m still not sure whether the sound should be created with a Python DSP library or the Web Audio API. Further research is needed, but I imagine the two implementations will not differ much, so I should be able to get the click track generator functioning by 3/9, the planned due date for this deliverable.
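Since I haven’t decided between the two yet, here is a rough sketch of what the server-side option could look like with numpy and Python’s built-in wave module; the function names, 20 ms click length, and WAV output are assumptions for illustration, and a Web Audio API version would take the same three inputs.

    import wave
    import numpy as np

    SAMPLE_RATE = 44100  # Hz

    def make_click_track(beats_per_measure, beat_value, tempo_bpm, measures=1):
        """Generate a chunk of click audio (16-bit samples) to be looped on playback."""
        # Assumes tempo_bpm counts quarter notes; beat_value 4 = quarter note, 8 = eighth note
        seconds_per_beat = (60.0 / tempo_bpm) * (4 / beat_value)
        samples_per_beat = int(seconds_per_beat * SAMPLE_RATE)
        click_len = int(0.02 * SAMPLE_RATE)  # 20 ms click
        t = np.arange(click_len) / SAMPLE_RATE

        track = np.zeros(samples_per_beat * beats_per_measure * measures)
        for beat in range(beats_per_measure * measures):
            # Accent the downbeat with a higher pitch so measures are distinguishable
            freq = 1500 if beat % beats_per_measure == 0 else 1000
            click = 0.8 * np.sin(2 * np.pi * freq * t) * np.linspace(1, 0, click_len)
            start = beat * samples_per_beat
            track[start:start + click_len] = click
        return (track * 32767).astype(np.int16)

    def write_wav(path, samples):
        with wave.open(path, "wb") as f:
            f.setnchannels(1)        # mono
            f.setsampwidth(2)        # 16-bit
            f.setframerate(SAMPLE_RATE)
            f.writeframes(samples.tobytes())

    # Example: 4/4 at 120 bpm, looped client-side for playback
    write_wav("click.wav", make_click_track(4, 4, 120))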

Ivy’s Status Report for 2/27

This week I presented our project proposal and did further research into the synchronization issue. This remains my biggest concern: whether we synchronize in real time or only after the tracks are recorded will greatly affect how our project is constructed. I want to know the advantages and technological limits of both approaches as soon as possible, so we can decide which one to focus on moving forward.

With that said, I’ve found a partial solution in the web app SoundJack. The application lets users control the speed and number of samples sent to other users, which gives them some control over the latency and greatly stabilizes their connection. It calculates and displays the latency to the user, so they can make the appropriate adjustments to decrease it. Users can then assign multiple channels to mics and choose what audio to send to each other via buses.
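To make the sample-count knob concrete, here is a quick back-of-the-envelope calculation (my own numbers, not SoundJack’s internals) of how the number of samples per packet translates into buffering delay on each hop:

    # Rough numbers only: each packet adds roughly (samples / sample rate) of
    # buffering delay per hop, on top of the network transit time.
    SAMPLE_RATE = 48000  # Hz

    for packet_samples in (128, 256, 512, 1024):
        buffer_ms = packet_samples / SAMPLE_RATE * 1000
        print(f"{packet_samples:5d} samples/packet -> ~{buffer_ms:.1f} ms of buffering")
    # Smaller packets mean less added delay but more packets per second (and more
    # sensitivity to jitter); larger packets are more stable but add latency.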

One coincidental advantage of this is that, because we will be taking care to reduce latency during recording, the finished tracks will not need many adjustments to be completely on-beat. Still, where this solution falls short is that the latency will either have to be compounded across multiple users in order for real time to keep up with digital time, or other users will hear an ‘echo’ of themselves playing. Additionally, the interfaces of all the programs I’ve looked into (SoundJack, Audiomovers) are pretty complicated and hard to understand. One common complaint I’ve seen in comments on YouTube guides is that they make recording feel more engineering-focused than music-making-focused. Perhaps our algorithm could do these speed and sample adjustments automatically, to take the burden off of the user.

Furthermore, in these video guides, the users rely on some sort of hardware device so that they are not dependent on a WiFi connection, unlike our project, which assumes users will be connecting over WiFi. So far, I’ve only read documentation and watched video guides. Since SoundJack is free software, I want to experiment with it in our lab sessions on Monday and Wednesday.

I completed the Django tutorial and have started on the metronome portion of the project. I have a good idea of what I want this part of our project to look like; however, I have less of an idea of what exactly the metronome’s output should be. One thing I know for sure is that, in order to mesh with our synchronization output, there needs to be a defined characteristic in the waveform where the beat begins. I also think that, because some people may prefer a distinguishing accent on the first beat of each measure, we need to take that into account when synchronizing.
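To sketch what that ‘defined characteristic’ might look like in practice, one simple option is a sharp click whose onset the synchronization code can find later by scanning for the first sample above an amplitude threshold; the threshold and function name below are placeholders, not a settled design.

    import numpy as np

    def find_first_onset(samples, threshold=0.5):
        """Return the index of the first sample whose absolute amplitude
        exceeds `threshold` (samples normalized to [-1, 1]), or None."""
        above = np.flatnonzero(np.abs(samples) > threshold)
        return int(above[0]) if above.size else None

    # Example: a silent lead-in followed by a loud click starting at sample 1000
    track = np.zeros(44100)
    track[1000:1020] = 0.9
    print(find_first_onset(track))  # -> 1000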

Ivy’s Status Report for 2/20

During the week, we worked on the Project Proposal presentation as a team, as well as set up the WordPress site and Gantt chart to lay out our workflow and track our progress.

Furthermore, I wrote up an outline for the oral presentation of our proposal here. One of the major concerns we have going into this project is figuring out a method to synchronize individual tracks. In our initial research, we’ve come across some papers (Carnegie Mellon Laptop Orchestra, Dannenberg 2007) and programs (ReWire, Audiomovers) that aim to do something similar. The former gives us an idea of how to sync performances to a click track, but we hope to sync performances as they are being played live as well. I will look further into the commercially available options this weekend.

We will be using Django to build our website. Since I have not used it before, I’ve been following a small tutorial that will hopefully familiarize me with its functionality and interface.
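For reference, here is a minimal sketch of what a Django view for receiving the metronome settings could eventually look like; the field names, route, and JSON response are placeholders, not our final design.

    from django.http import JsonResponse
    from django.views.decorators.http import require_POST

    @require_POST
    def set_metronome(request):
        # Placeholder field names; these would come from the form on the main page
        beats_per_measure = int(request.POST.get("beats_per_measure", 4))
        beat_value = int(request.POST.get("beat_value", 4))
        tempo = int(request.POST.get("tempo", 120))
        # Eventually this would trigger click-track generation and return audio;
        # for now it just echoes the settings back.
        return JsonResponse({
            "beats_per_measure": beats_per_measure,
            "beat_value": beat_value,
            "tempo": tempo,
        })

    # urls.py entry (placeholder route name):
    # path("metronome/", set_metronome, name="set_metronome")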

Our design hasn’t changed much from our abstract, but we’ve added some more methods of testing our final product’s viability, including testing for security, website traffic, and qualitative feedback from its users.