Christy’s Status Report for 5/8

This week, I have been working on deploying our application on AWS. I ran into configuration issues with the Apache web server, but after correcting the configuration, our website now works over HTTP.

Another deployment issue I faced is that one of our APIs only works over HTTPS. I do not have experience with HTTPS, so I have been following tutorials. So far, I have bought a domain name from AWS using Route 53 (our domain is www.acapella2021.com), issued an SSL certificate for the domain through AWS Certificate Manager (ACM), and used a load balancer to redirect HTTP to HTTPS. This approach failed; my assumption is that the load balancer conflicts with the Apache configuration because the SSL certificate does not match the load balancer’s domain name. So the next approach I took is to issue an SSL certificate from a third-party site and install it directly in the Apache configuration. This way, we do not need the AWS load balancer to redirect our traffic. I am continuing work on this second approach to get the site running over HTTPS.
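For the second approach, installing a third-party certificate in Apache typically means pointing mod_ssl at the certificate files inside the site’s HTTPS virtual host. A minimal sketch of what that looks like (the file paths here are placeholders, not our actual configuration):

```apache
<VirtualHost *:443>
    ServerName www.acapella2021.com
    SSLEngine on
    # Placeholder paths; in practice these point at wherever the
    # third-party certificate, private key, and chain are installed.
    SSLCertificateFile      /etc/ssl/certs/acapella2021.crt
    SSLCertificateKeyFile   /etc/ssl/private/acapella2021.key
    SSLCertificateChainFile /etc/ssl/certs/acapella2021-chain.crt
</VirtualHost>
```

With this in place, Apache itself terminates TLS, so no load balancer is needed in front of it.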

It took me a while to understand the Apache configuration and the AWS load balancer. It was especially difficult because I was not sure where the errors were coming from beyond what appeared in error.log. I will also work on creating a demo video for our website.

Team Status Report for 5/8

This week Christy sorted out cloud deployment and got our site online at www.acapella2021.com. There are still things to debug, however, namely configuring HTTPS, as certain functions in the Web Audio API do not seem to work over HTTP. Unfortunately, since those functions are central to recording and monitoring, it also means that we have not yet been able to test our latency over the web.

Additionally, because of some changes to the upload process on our webpage, we are updating the syncing on the backend to run on all the files in the group instead of one at a time. Another task that remains is giving users the ability to download all their tracks. This could be implemented simply by letting users download each track individually, but we would also like an option to mix all the tracks down and export them as one file.
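The mix-down option amounts to summing the samples of every track and clamping the result to the sample range. A hypothetical sketch with only the Python standard library (the function name and the mono, 16-bit assumption are ours, not the project’s actual code):

```python
import array
import wave

def mix_tracks(in_paths, out_path):
    """Sum corresponding samples of several mono 16-bit WAVs into one file."""
    tracks = []
    params = None
    for path in in_paths:
        with wave.open(path, "rb") as w:
            if params is None:
                params = w.getparams()  # reuse the first track's format
            tracks.append(array.array("h", w.readframes(w.getnframes())))
    longest = max(len(t) for t in tracks)
    mixed = array.array("h", [0] * longest)
    for t in tracks:
        for i, sample in enumerate(t):
            # Clamp to the signed 16-bit range so summing cannot overflow.
            mixed[i] = max(-32768, min(32767, mixed[i] + sample))
    with wave.open(out_path, "wb") as w:
        w.setparams(params)
        w.writeframes(mixed.tobytes())
```

A real mixer would also scale or normalize to avoid clipping when many loud tracks overlap; the clamp above is the bare minimum.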

In the next two days, we will be finishing up these tasks and working on our final poster and video due Monday night.

Ivy’s Status Report for 5/8

This week I worked on final integration and testing of our web app. Since Christy figured out how to get librosa running on the cloud, I have reverted to using it for syncing. One change I had to make was to have the syncing iterate through all the files belonging to the group, since we have changed our project to let all group members upload their recorded audio first, so that every track can be displayed on the webpage for users to listen to before deciding which ones to keep and sync.
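The group-wide sync can be pictured as a loop over every track in the group: detect where each track’s audio actually starts, then trim so the starts line up. Below is a deliberately simplified stand-in for the librosa onset detection, using a bare amplitude threshold (the function names and threshold value are assumptions for illustration):

```python
def first_onset(samples, threshold=500):
    """Index of the first sample whose magnitude exceeds the threshold.

    A crude stand-in for real onset detection: librosa looks at spectral
    energy over time, not a single-sample amplitude cutoff.
    """
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            return i
    return 0

def sync_group(tracks, threshold=500):
    """Trim leading samples so every track's first onset lands at index 0."""
    onsets = [first_onset(t, threshold) for t in tracks]
    return [t[onset:] for t, onset in zip(tracks, onsets)]
```

Running this over all of a group’s tracks at once, rather than one at a time, is what the backend change described above amounts to.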

Other than that, what’s left is allowing users to download their finished recordings off the webpage. I’d like to have an option for merging all tracks down to one audio file, but right now it seems easiest to download all tracks as separate audio files. Finally, I’m putting together a tutorial on how to use our website. After conferring with some test users, it seems the recording process can be a little complicated, so I think it would be nice to have an instruction manual of sorts. In the next few days, I will be working on our poster. I’d like to have our final presentation materials done before Monday, as I have a final that day :’)

Jackson’s Status Report for 5/8

This week, I started working on the final video. I wrote a script and took some “b-roll” screen recordings of the app. I have not yet edited anything together, but I have all day tomorrow set aside for that. The script is mostly a clearer restatement of what is in my status reports about setting up monitoring with WebRTC. It’s a challenge to explain how all of that works without going well over time, but I did my best to keep it as concise as possible. We’ll have the video up by Monday night, along with the poster. Speaking of the poster, I’ve also begun working on that; we already have our graphics from the final presentation.

At this point, with only days left before the demo and cloud deployment still not entirely working, it will be very difficult to debug anything that breaks during deployment, and probably impossible to implement the latency improvement strategies I’ve discussed in previous status reports. With this in mind, our focus in this last week has shifted to the necessary assignments for the class rather than the “nice-to-haves” for the project itself. So for my deliverables this week, I will edit the video and do my share of the poster and final paper.

Christy’s Status Report for 5/1

Over the past week, I have been working on fixing minor bugs and deploying our project to the cloud through Amazon Web Services.

I merged our two types of recording into one. One recorder served for testing the user’s audio, and the other served for uploading it. Merging the two lets the user test their audio and then upload the recording.

Another main feature I added is assigning a leader to each recording group. The leader is whoever first created the group; other members who join via the room_key become regular team members. The leader’s role is to set the click generator, which establishes the beat and tempo for the members, and to sync all the uploaded audio files once every member has successfully uploaded their audio. These functions were added to keep consistency and improve synchronization.
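The click generator’s job reduces to scheduling clicks at a fixed interval derived from the tempo the leader sets. A hypothetical sketch of that calculation (not our actual implementation):

```python
def click_times(bpm, beats):
    """Timestamps (in seconds) of each metronome click at the given tempo."""
    interval = 60.0 / bpm  # one beat every 60/bpm seconds
    return [beat * interval for beat in range(beats)]
```

At 120 BPM, for example, clicks fall every half second, so every member recording against the leader’s click track shares the same grid.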

Deploying our project on AWS has been this week’s challenge. The application would not load on the Apache server due to the limited storage and compute capacity of the Ubuntu instance, so I tried creating an instance with more storage and computation capacity. Another challenge was loading Python packages on the Apache server.

 

Ivy’s Status Report for 5/1

This week I worked some more on integrating our individual parts in preparation for the final presentation and demo video. In addition to that, I worked on our final presentation, updating our system diagram to reflect our changes.

The last thing we have left to do is cloud deployment. On Friday, Christy and I met on campus to iron out some last-minute details and get our project deployed on AWS. One of the issues she ran into was that the Python librosa library I used for syncing the audio tracks will not work in deployment. From what I’ve found on the web, there do not seem to be many resources for solving this issue. Instead, I will rewrite that portion of my code using a different library, Essentia. Essentia’s documentation explicitly describes how to compile it for the web, so this should get rid of the error.

We are meeting again on Sunday so that we can get cloud deployment done by Monday. Afterwards, we will test the latency by connecting with users at different locations.

Team Status Report for 5/1

This week, our group worked on the final presentation and on some finishing touches and adjustments to make sure the individual parts work together. In our presentation, we updated our schedule and system diagrams and explained several features of our site in more detail.

Cloud deployment is the last thing we need to do. We ran into a couple of problems trying to deploy with AWS. First, there was a database error when trying to load the Python library librosa. There do not seem to be any resources we can consult to fix this issue, so instead we will rewrite our code with another library, Essentia, which has a similar onset detection function needed for syncing the tracks up.

In the following week, we will hopefully be able to test for latency online with users in different locations. We will also be getting survey responses about the UI and performance from other people, and filming the parts needed for our final video.

 

Jackson’s Status Report for 5/1

This week, I had again planned to test latency and packet loss using the tests I implemented a few weeks ago. The results look really good testing locally, but they won’t be meaningful until we can run them between different computers over the internet, which depends on cloud deployment. Christy has begun working on deployment, but since it’s not done yet, I don’t know what parts of our project will “break” as a result. The plan on our initial schedule was for Ivy and me to make incremental changes to the app during the week, and for Christy to deploy every weekend, beginning on 3/11. We’ve completed about as much as we can before deployment. Because of this, I’m a bit behind schedule, and I’ll try to catch up as soon as cloud deployment is done.

The code isn’t the only part of the project that needed work this past week, though. So I spent a significant amount of time on the final presentation slides. I wrote simplified explanations of the more complicated portions of the project that I worked on: WebSocket signalling, establishing peer-to-peer connections, sending audio over these peer-to-peer connections, and tests for determining end-to-end latency and packet loss rate. A real challenge was conveying all of that information as simply as possible for the presentation format. As you know, I’ve written out very detailed explanations in my status reports, and there’s a lot more to it than I could fit in a few powerpoint slides.
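The packet-loss half of those tests boils down to comparing the sequence numbers that were sent against those that arrived. A minimal sketch of that calculation (the function name and sequence-number scheme are assumptions for illustration, not our test code):

```python
def packet_loss_rate(sent_seqs, received_seqs):
    """Fraction of sent packets that never arrived, judged by sequence number."""
    received = set(received_seqs)
    lost = sum(1 for s in sent_seqs if s not in received)
    return lost / len(sent_seqs)
```

End-to-end latency works similarly in spirit: timestamp each packet on send, and compare against the arrival time on the receiving peer.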

I also made a new Gantt chart showing the schedule and division of labor as they actually happened, and our plan for the last couple weeks of the semester. This is also in the final presentation slides.

For next week, I hope to make significant progress on the poster, video, and final report. Also, if cloud deployment finishes up, I can do any last-minute debugging, determine the actual end-to-end latency and packet loss rate, and do my best to improve them with what little time we have left. But my main priority going forward will be the necessary class assignments.

Christy’s Status Report for 4/24

This week, I worked on merging Ivy’s code and mine. Ivy worked on converting the audio blob URL into a WAV file to be stored in the Django database, and I used jQuery AJAX to send the audio WAV file to the Django server. I encountered several problems while implementing the AJAX call in JavaScript because I did not fully understand its syntax; jQuery AJAX requires a different form format depending on the type of data being sent. Eventually, I figured it out, and the Django server was able to receive the file.

Last week, during our team meeting, the professor suggested that we let the user name the audio file to be uploaded, so I now require the user to name the audio file before uploading. However, in Ivy’s code the audio file is named after the time of the upload, so I will reconcile our two different approaches.
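One way to reconcile the two naming schemes is to combine them: keep the user-supplied title and append the upload time. A hypothetical helper (the function name and format are assumptions, not our current code):

```python
import re
import time

def track_filename(user_title):
    """Build a filename from the user's title plus the upload timestamp.

    Combines both naming approaches: the explicit title the professor
    suggested, and the time-of-upload naming from Ivy's code.
    """
    # Replace anything outside [A-Za-z0-9_-] so the name is filesystem-safe.
    safe = re.sub(r"[^A-Za-z0-9_-]", "_", user_title.strip()) or "track"
    return f"{safe}_{time.strftime('%Y%m%d%H%M%S')}.wav"
```

This also sidesteps collisions when two users pick the same title, since the timestamp keeps the names distinct.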

Another feature I worked on is generating a waveform for each uploaded audio file and creating the UI for the uploaded audio. For this, I looped through every uploaded track and attached a wavesurfer instance to each one, since each track must generate its own waveform. However, it seems the audio files from the Django database do not load properly into the audio element in the HTML, so currently the uploaded audio is not displaying properly. I need to fix this issue.

In addition, there has been a major change in how we record audio. Jackson implemented the audio recording functionality with the Web Audio API; however, the audio blob it generates is not suitable for storage, so Ivy reimplemented the recording functionality with the Recorder.js API. Both APIs work perfectly when the user records and tests their audio, but once the user decides to upload, Recorder.js is more suitable because it converts the recorded audio into a WAV file. My job for this weekend is

Ivy’s Status Report for 4/24

This week I integrated my syncing algorithm with the new recorder. After many unsuccessful attempts to write our recorded data into a readable .wav format, I gave up and decided to implement Recorder.js. Using its built-in ‘exportWAV’ function, I was able to create an async function that uploads recordings to our server.

With that done, I did some more tests with the new recorder, playing a few pieces along with the metronome to iron out some bugs. One error I kept stumbling upon was that the tolerance parameter I set for detecting whether a sound counts as ‘on beat’ did not work for all tempos. While it was important to account for pieces that may begin on an offbeat, the tolerance would be impossible to meet at faster tempos. To fix this, I set a limit on how low the value could go.
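The tempo problem comes from deriving the tolerance as a fraction of the beat interval: at fast tempos the interval shrinks, and a fixed fraction of it becomes an impossibly narrow window. A hypothetical sketch of the fix, clamping the tolerance to a floor (the parameter values are illustrative, not our tuned ones):

```python
def beat_tolerance(bpm, fraction=0.25, min_tolerance=0.05):
    """Tolerance window (seconds) for counting a note as 'on beat'.

    A fixed fraction of the beat interval shrinks too far at fast
    tempos, so clamp it to a minimum playable window.
    """
    interval = 60.0 / bpm
    return max(interval * fraction, min_tolerance)
```

At 60 BPM the window is a comfortable quarter second; at 600 BPM the unclamped value would be 25 ms, so the floor takes over.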

The most critical part of our project is latency reduction. Since we haven’t done cloud deployment yet, some of the networking aspects cannot yet be tested or improved. In the meantime, I familiarized myself with the monitoring and Django Channels code Jackson implemented the week prior. While reading about Channels, I began to wonder whether the original click generator I wrote in Python could be implemented as an AsyncWebsocketConsumer. The click track we have now works fine, but users in a group have to listen to a single instance of it played through monitoring, rather than having the beats sent to each of them from the server. This might cause confusion, as users have to work out who should run the metronome and what tempo to set; on the other hand, if the metronome is implemented through a WebSocket, the tempo will be updated automatically for all users when the page refreshes. Latency will affect when users hear the beats either way, but again, we’ve yet to test that.
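The server-driven metronome idea can be sketched without Channels at all: an async loop that pushes one click message per beat, sleeping until each beat’s absolute deadline rather than a fixed interval so timing errors don’t accumulate. Here `send` is a stand-in for however the consumer would broadcast to the group (e.g. a channel-layer group send); it is an assumption for illustration, not the real Channels API call:

```python
import asyncio
import time

async def run_metronome(bpm, send, beats):
    """Broadcast one 'click' message per beat at the given tempo.

    `send` is a hypothetical async callable standing in for a group
    broadcast.  Sleeping until each beat's absolute deadline (instead
    of a fixed interval) avoids cumulative drift.
    """
    interval = 60.0 / bpm
    start = time.monotonic()
    for beat in range(beats):
        await send({"type": "click", "beat": beat})
        deadline = start + (beat + 1) * interval
        await asyncio.sleep(max(0.0, deadline - time.monotonic()))
```

In a real consumer, each client’s socket would receive these messages and trigger the click sound locally, so no single user has to host the metronome.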

Right now, getting our project deployed onto the cloud seems to be the most important thing. This Monday, I will discuss with everyone how to move forward with that.