Team Status Report for 5/8

This week Christy sorted out cloud deployment and got our site online at www.acapella2021.com. There are still things to debug, however, namely configuring the site to serve over HTTPS, as certain functions in the Web Audio API don't work over HTTP. Unfortunately, since these are central to recording and monitoring, this also means we haven't been able to test our latency over the web yet.

Additionally, because of some changes in the upload process on our webpage, we are updating the syncing on the backend to run over all the files in a group instead of one at a time. Another remaining task is giving users the ability to download all their tracks. This can be easily implemented by letting users download each track individually, but we would also like an option to mix all the tracks down and export them as one file.
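
The mixdown option could be as simple as summing samples across tracks and clipping. Here is a minimal sketch in Python, assuming tracks are lists of signed 16-bit PCM samples (names and approach are illustrative, not our actual code):

```python
# Minimal mixdown sketch: sum corresponding 16-bit PCM samples across
# tracks, treating missing samples in shorter tracks as silence, and
# clip to the valid sample range. Names here are illustrative.

def mix_down(tracks):
    """Mix several lists of signed 16-bit PCM samples into one list."""
    length = max(len(t) for t in tracks)
    mixed = []
    for i in range(length):
        total = sum(t[i] if i < len(t) else 0 for t in tracks)
        # Clip to the signed 16-bit range to avoid wraparound distortion.
        mixed.append(max(-32768, min(32767, total)))
    return mixed
```

A real version would also rescale or normalize to avoid constant clipping when many loud tracks overlap.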

In the next two days, we will be finishing up these tasks and working on our final poster and video due Monday night.

Ivy’s Status Report for 5/8

This week I worked on final integrations and testing our web app. Since Christy figured out how to get librosa on the cloud, I've reverted back to using it for syncing. One of the changes I had to make was to have the syncing iterate through all the files attributed to the group, as we've changed our project to let all group members upload their recorded audio first, so that every track can be displayed on the webpage for users to listen to before deciding which ones to keep and sync.

Other than that, what's left is allowing users to download their finished recordings off the webpage. I'd like to have an option of merging all the tracks down to one audio file, but right now it seems easiest to download all tracks as separate audio files. Finally, I'm putting together a tutorial on how to use our website. After conferring with some test users, it seems the recording process can be a little complicated, so I think it would be nice to have an instruction manual of sorts. In the next few days, I will be working on our poster. I'd like to have our final presentation materials done before Monday as I have a final on that day :')

Ivy’s Status Report for 5/1

This week I worked some more on integrating our individual parts in preparation for the final presentation and demo video. In addition to that, I worked on our final presentation, updating our system diagram to reflect our changes.

The last thing we have left to do is cloud deployment. On Friday, Christy and I met on campus to iron out some last-minute details and get our project deployed on AWS. One of the issues she ran into while trying to deploy was that the Python librosa library I used for syncing the audio tracks will not work in deployment. From what I've found on the web, there don't seem to be many resources for solving this issue. Instead, I will rewrite that portion of my code using a different library, essentia. Essentia's documentation explicitly describes how to compile it for the web, so this should get rid of the error.
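
The piece I'm porting boils down to onset detection: finding where the first note actually starts in each recording so the tracks can be lined up. As a rough illustration of the idea (a toy, dependency-free stand-in, not how librosa or essentia actually implement it), a first onset can be found by scanning for the first frame whose energy jumps above a threshold:

```python
def first_onset(samples, frame_size=4, threshold=1000.0):
    """Toy onset detector: return the sample index of the first frame
    whose mean energy exceeds `threshold`, or None if no frame does.
    The real code uses librosa's onset detection (and will use
    essentia's equivalent); this is only a simplified illustration."""
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energy = sum(s * s for s in frame) / frame_size
        if energy > threshold:
            return start
    return None
```

Once each track's first onset is known, syncing is just trimming or padding each file so the onsets line up with the click-track timing.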

We are meeting again on Sunday so that we can get cloud deployment done by Monday. Afterwards we will test the latency by connecting with users at different locations.

Team Status Report for 5/1

This week, our group worked on the final presentation and some finishing touches and adjustments to make sure the individual parts will work together. In our presentation, we updated our schedule and system diagrams and explained several features of our site in more detail.

Cloud deployment is the last thing we need to do. We ran into a couple of problems trying to deploy with AWS. Firstly, there was a database error when trying to load the Python library librosa. There don't seem to be any resources we can consult to fix this issue, so instead we will rewrite our code with another library, essentia, which has a similar onset detection function needed for syncing the tracks up.

In the following week, we will hopefully be able to test for latency online with users in different locations. We will also be getting survey responses about the UI and performance from other people, and filming the parts needed for our final video.

Ivy’s Status Report for 4/24

This week I integrated my syncing algorithm with the new recorder. After many unsuccessful attempts to write our recorded data into a readable .wav format, I gave up and decided to implement Recorderjs. Using its built-in 'exportWAV' function, I was able to create an async function that uploads recordings to our server.

With that done, I did some more tests with the new recorder, playing a few pieces along with the metronome to iron out some bugs. One error I kept stumbling upon was that the tolerance parameter I set for detecting whether a sound should be considered 'on beat' did not work for all tempos. While it was important to account for pieces that begin on an offbeat, the parameter became impossible to meet at faster tempos. To fix this, I set a limit on how low the value could go.
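
The fix amounts to a tolerance window that scales with the beat period but is clamped to a floor. The function name and constants below are illustrative, not the exact values in our code:

```python
def on_beat_tolerance(tempo_bpm, fraction=0.25, floor_ms=50.0):
    """Tolerance window (in ms) for counting a note as 'on beat'.
    The window scales with the beat period but never drops below
    `floor_ms`, so fast tempos don't make it impossible to hit.
    (Constants here are assumptions for illustration.)"""
    beat_ms = 60000.0 / tempo_bpm
    return max(floor_ms, fraction * beat_ms)
```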

The most critical part of our project is latency reduction. Since we haven't done cloud deployment yet, some of the networking aspects cannot be tested and improved upon. For the time being, I familiarized myself with the monitoring and Django Channels code as implemented by Jackson the week prior. While reading about Channels, I began wondering if the original click generator I wrote in Python could be implemented via an AsyncWebsocketConsumer. While the click track we have now works fine, users inside a group have to listen to one instance of it being played through monitoring, rather than having the beats sent to them from the server. This might cause some confusion among users, as they have to work out who should run the metronome and what tempo to set; on the other hand, if the metronome is implemented through a WebSocket, the tempo will be updated automatically for all users when the page refreshes. Latency will affect when users hear the beats either way but, again, we've yet to test that.
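
To get a feel for what a server-driven metronome might look like, here is a sketch of just the message-handling logic such a consumer could run. The real version would live inside a Django Channels AsyncWebsocketConsumer and broadcast over a channel group; all field names here are hypothetical:

```python
import json

def handle_tempo_message(raw_message):
    """Sketch of the server-side logic a Channels consumer might run:
    parse a tempo-change message from one group member and build the
    payload to broadcast to everyone in the group. All field names
    are hypothetical, not our real protocol."""
    msg = json.loads(raw_message)
    return json.dumps({
        "type": "metronome.update",
        "tempo": int(msg["tempo"]),
        "beats_per_measure": int(msg.get("beats_per_measure", 4)),
    })
```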

Right now, getting our project deployed onto the cloud seems to be the most important thing. This Monday, I will discuss with everyone how to move forward with that.

Team Status Report for 4/24

This week we resolved the issue of uploading audio in a format that is accessible from our backend. There were problems getting our initial recorder to record in a format decodable by our backend, so we changed our implementation to initialize two recorders, one for monitoring and one for capturing audio. While having two recorders may seem like one too many, it might make things easier when it comes to reducing monitoring latency, as we can lower the sample rate of the recorder used for monitoring without affecting the sample rate of the actual recording.

Additionally, we implemented more of the track UI and set up our database, where the uploaded files are stored for the different groups. With this, we can now sync up the audio in the files based on the timing information we send with the click track. With that done, we were able to integrate some of our individual parts and fix some of the bugs that cropped up.

We are behind schedule, as most of what we have left requires cloud deployment, which has not been done yet. Since we can only test on our local machines right now, the monitoring latency is in the single-digit milliseconds, but this might not hold across multiple remote clients. If that is the case, we will have to implement some of the buffers and filters described in our Design Review Document.

Ivy’s Status Report for 4/10

This week, I worked on uploading the recorded audio in a file format recognized by Python. Chrome does not support recording in a .wav format (as a matter of fact, the only format that seems to be supported across all browsers is webm), so we have to do this ourselves. Attempts involved trying to write the audio data into a .wav file on the backend (which just resulted in noise) and trying to convert the recorded audio blob into a .wav file before passing it to the server.

After some research, I found a tutorial showing how to write our own WAV header before uploading the audio to the server. Since webm does work with PCM encoding, prepending a WAV header to our recorded audio seems to be the right way to go. However, after trying it, I'm still getting errors reading the file on the backend. I think the problem is that we need to specify our sample rate and bit depth before recording, and I am currently looking into how to set that up.
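
For reference, the standard 44-byte RIFF/WAVE header can be built in Python with the struct module. This is a sketch of the header layout the tutorial describes; the parameter defaults are assumptions, not our confirmed recorder settings:

```python
import struct

def wav_header(num_samples, sample_rate=44100, bits=16, channels=1):
    """Build the standard 44-byte RIFF/WAVE header for raw PCM data.
    `num_samples` is the frame count per channel. Defaults here are
    assumptions for illustration, not our confirmed settings."""
    byte_depth = bits // 8
    data_size = num_samples * channels * byte_depth
    byte_rate = sample_rate * channels * byte_depth
    block_align = channels * byte_depth
    return (b"RIFF"
            + struct.pack("<I", 36 + data_size)   # total size minus 8
            + b"WAVE"
            + b"fmt "
            + struct.pack("<IHHIIHH", 16, 1, channels,   # 1 = PCM
                          sample_rate, byte_rate, block_align, bits)
            + b"data"
            + struct.pack("<I", data_size))
```

If the sample rate or bit depth written here doesn't match what the browser actually recorded, the file decodes as noise, which matches the symptom we saw.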

Though I have the post-upload syncing done, not being able to get the audio in a readable format makes the entire functionality moot. I am behind right now, but I hope to get this all figured out before Monday, so we can update our Gantt chart and get ready for the demo.

Ivy’s Status Report for 3/27

I am almost finished with the audio upload to the server right now. These past couple weeks, I realized the implementation discussed in my previous status report was impractical, as I was creating and playing the click track on the server rather than on the actual webpage. To fix this, I had to rewrite my code in JavaScript, using the Web Audio API to create the metronome clicks. Unfortunately, I was unable to replicate in JavaScript the clock I had created in Python, and instead resorted to recursive setTimeout calls for the intervals between clicks. This implementation introduces inevitable delay, which causes successive ticks to drift further and further from the 'correct' timing. To fix this, I shorten the interval for every other tick, to make up for time when the previous tick arrived a few ms late. I don't like this solution much, as it only corrects the delay after it happens rather than addressing it head-on. But for the range of tempos we're aiming for, the problem doesn't seem too pronounced. If we have more time at the end, I will look for a more accurate solution.
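
The correction amounts to one tiny rule: shorten the next delay by however late the previous tick fired. A Python stand-in for the JavaScript logic (the name is made up for illustration):

```python
def next_interval_ms(beat_ms, lateness_ms):
    """Given the nominal beat period and how late the previous tick
    fired, return a shortened delay for the next tick so the average
    tempo stays on target. Never returns a negative delay.
    (A Python stand-in for our recursive setTimeout logic in JS.)"""
    return max(0.0, beat_ms - lateness_ms)
```

As noted above, this is reactive: a listener still hears the late tick, and only the following one is pulled back into place.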

I think our group's biggest concern, now that Jackson's figured out how to implement monitoring, is the UI. I don't have much experience with HTML outside of basic social media layouts, and our proposed plan is much more involved than a static webpage with some buttons.

Team Status Report for 3/13

This week, we began working on the basic functionality of our website, implementing the recording function, some basic UI, and the click generator.

Because we know little about networking, our biggest concern is still implementing the real-time monitoring. A good question regarding this was raised during our Design Review presentation, where someone asked about alternatives to our socket solution if it does not meet our latency requirement. This is a valid concern we had not considered; even though the other examples we tried could reduce latency below 100 ms, we might be limited by WebSockets and unable to do the same. One alternative is requiring the use of Ethernet, which might speed up a user's connection enough to meet the requirement, but we are not sure that alone would be enough.

Ivy’s Status Report for 3/13

This week I finished implementing the click track. I settled on creating the click track in Python: I used the playsound module and the sleep command to play a short .wav file after a delay based on the tempo.

This raised some issues, however, as the sleep command is not very accurate. While testing, I found that the beats consistently had up to 150 ms of delay. To improve on this, I created a separate clock which I could initialize to run based on the inputted tempo.
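
The clock idea can be sketched like this: each beat's deadline is computed from the start time, so sleep inaccuracy doesn't accumulate from one beat to the next (an illustration of the approach, not our exact code):

```python
def sleep_until_beat(start_time, beat_index, period_s, now):
    """Return how long to sleep so that beat `beat_index` fires at its
    absolute deadline, start_time + beat_index * period_s. Because each
    deadline comes from the start time rather than the previous beat,
    an oversleep on one beat doesn't push all later beats back."""
    deadline = start_time + beat_index * period_s
    return max(0.0, deadline - now)
```

In the real loop, `now` would come from a monotonic clock (e.g. time.monotonic()) sampled just before each sleep.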

The meter/beats-per-measure-dependent click track UI was much harder than I thought. I only knew some basic HTML going into this, so it took a while to figure out how to fetch values from other elements on the webpage. Even now I'm not sure it'll fit with the rest of the UI; since I'm unsure of the dimensions of our actual site, I made it out of <div>s. I'm a little behind right now, as I have yet to merge my code with the current version on GitHub, but I will get it done by or shortly after our lab meeting on Monday (should I end up with questions), and will begin working on track synchronization by then.

Our biggest concern is the networking aspect of our project. We are not too knowledgeable about networking, and as the concern raised during our Design Review presentation made clear, we aren't sure our proposed socket solution will meet our requirements.