Sai’s Status Report for 5/8/2021

This past week, I integrated Carlos’ pitch detection and feedback algorithm with the user interface, and I used the metrics from the algorithm for the voice range evaluation. The algorithm outputs a note range, and I wrote code to map that note range to a voice range category (“Soprano”, “Alto”, “Bass”, etc.) to show to the user. I also wrote code that uses this category to generate pitch lessons and to send expected-pitch metrics to the pitch feedback algorithm, so that each user gets a custom lesson and evaluation based on their voice range. I designed the web app so that users can update this voice range at any time. Finally, I used the metrics from the pitch detection/feedback algorithm to show users how sharp or flat they were and how well they were able to hold each note, using charts from the Chart.js library. Pictures of this feedback can be found in the team status report. I’ve attached a picture of the example voice range feedback I got for my own voice.
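
In outline, the note-range-to-category mapping works like the following sketch (the category names, boundaries, and MIDI-number representation here are approximate textbook values for illustration, not the exact values used in the app):

    // Approximate voice categories as MIDI note ranges (E2-E4, C3-C5, F3-F5, C4-C6).
    const VOICE_RANGES = [
      { name: 'Bass',    low: 40, high: 64 },
      { name: 'Tenor',   low: 48, high: 72 },
      { name: 'Alto',    low: 53, high: 77 },
      { name: 'Soprano', low: 60, high: 84 },
    ];

    // Pick the category whose range overlaps the detected note range the most.
    function classifyVoiceRange(lowMidi, highMidi) {
      let best = VOICE_RANGES[0];
      let bestOverlap = -Infinity;
      for (const range of VOICE_RANGES) {
        const overlap = Math.min(highMidi, range.high) - Math.max(lowMidi, range.low);
        if (overlap > bestOverlap) {
          bestOverlap = overlap;
          best = range;
        }
      }
      return best.name;
    }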

Using the feedback I got from the user experience surveys last week, I made modifications to the pitch exercises so that there are now pop-up instructions and more navigation buttons (exit buttons, back buttons, etc.).

The pitch detection algorithm can return empty metrics if a recording is flawed, so I added handling for that case: if the algorithm detects a faulty recording, the web app shows an error message and tells the user to try recording again. This applies to both the pitch exercises and the voice range evaluation, and it is a much better outcome than the page simply crashing.
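
On the frontend, the guard is simple; here’s a sketch (the JSON field names, element IDs, and rendering helper are assumptions):

    // If the detection algorithm returned empty metrics, show an error
    // message instead of letting the feedback page crash.
    async function handleFeedbackResponse(response) {
      const data = await response.json();
      if (!data.metrics || data.metrics.length === 0) {
        document.getElementById('feedback-error').textContent =
          'We could not analyze that recording. Please try recording again.';
        return;
      }
      renderFeedbackCharts(data.metrics); // hypothetical helper that draws the Chart.js charts
    }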

I’ve actually been able to accomplish everything I’ve wanted to put into the web application, and am on schedule. Right now I am just hoping deployment works out.

For the upcoming week, I’ll keep cleaning up and refining the user interface, adding more instructions and navigation for the exercises, and working on the welcome page.

Team Status Report for 5/8/21

This past week, our team successfully integrated the user interface with the pitch detection and clap detection algorithms, and the web application now displays and stores pitch and rhythm feedback for a user’s audio recordings. Integration is therefore no longer a risk. The one risk we face before this upcoming Monday is not being able to deploy our web application on the EC2 instance. We have tried to deploy on the EC2 instance more than once using Amazon S3 for storage, but the web application is unresponsive at the public IP address on the HTTP/HTTPS ports. We are still trying to make this deployment work, but our risk mitigation is to serve the application on port 8000, since the web application does load there on the EC2 instance. For this option we also need SSL, because browsers only allow the microphone access our audio recording functionality depends on in a secure context. A further mitigation is to use SQL for database storage instead of S3, since S3 is not as straightforward to deploy with the EC2 instance as SQL is. This could mean a design change for our product, but only the database storage would be replaced; no other parts of the design would be affected.

Until Monday, we will be attempting to get deployment to work, working on the poster and the demo video, and making some user interface refinements. Our schedule remains the same. Below are some pictures of the integrated feedback for the exercises.

Sai’s Status Report for 5/1/21

This week, I got started on the voice range evaluation component of the web app. This was a bit tricky because I wanted to reuse the recording code I already had for the pitch lessons, but that code wasn’t written to be reusable, so a good amount of time went into generalizing it for the voice range evaluation as well as the rhythm exercises (a sketch of this refactor is at the end of this report). The voice range evaluation frontend, with its recording capability, is complete; it still needs to be integrated with the audio processing component so that the audio files can be mapped to a voice range to show the user. Funmbi and I also integrated our two parts of the web app: we combined the VexFlow music notes with the play-rhythm and recording functionality to complete the clapping exercises, and we worked on the dashboard that brings all the different components together and links them. A picture of the dashboard can be found in the team status report.

For the upcoming week, I’ll be working with Funmbi on deploying what we have so far, since that might be a bit tricky with the audio files, external APIs, and packages we are using. I will also try to finish the voice range exercise and integrate it with Carlos’s audio processing algorithms so that I can actually display feedback to users for the pitch lessons, rhythm lessons, and voice range evaluation. There’s still a good amount of work to do, so I may be behind, but hopefully the integration won’t take too long. I’ve attached photos of the voice range evaluation run-through.
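
Here’s the sketch of that recording refactor: one shared helper that every exercise type can use, with each exercise supplying its own callback for the finished recording (the helper and endpoint names are hypothetical):

    // One shared recorder factory for the pitch, rhythm, and voice range pages.
    function createRecorder(onRecordingReady) {
      let recorder = null;
      let chunks = [];
      navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
        recorder = new MediaRecorder(stream);
        recorder.ondataavailable = e => chunks.push(e.data);
        recorder.onstop = () => {
          onRecordingReady(new Blob(chunks, { type: 'audio/webm' }));
          chunks = [];
        };
      });
      return {
        start: () => recorder && recorder.start(),
        stop: () => recorder && recorder.stop(),
      };
    }

    // e.g. the voice range evaluation could then do:
    // const voiceRangeRecorder = createRecorder(blob => uploadRecording('/voice-range/', blob));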

Sai’s Status Report for 4/24/2021

This week, there were some file formatting issues with the recorded audio files submitted on the web application. I found out that the API I’m using to record a user’s audio input in the browser doesn’t produce clean, uncorrupted .wav files that can be opened or processed on the backend, but it does produce clean .webm files. I found an ffmpeg package that I can call on the server to convert the .webm files to .wav files; it has been working well and produces clean .wav files (the conversion itself is essentially a single call of the form ffmpeg -i recording.webm recording.wav). We also made some design changes to our pitch exercises: there is now a countdown when you press the record button, and we wanted breaks in between notes when listening to how the scale/interval sounds via piano notes, so I had to change the algorithm that produces the sequence of piano notes and displays/animates the solfege visuals. There is now a dashboard listing the different pitch scale/interval exercises, and new exercises can be added easily by just adding a model with its sequence of notes, durations, and solfege. I also spent some time refining the solfege animation/display, since some edge cases came up as more pitch exercises were added; I want to give the solfege display a more polished look this upcoming week using some Bootstrap features. Another minor edit I made was adding a nice logo for the web app and making the title bar more engaging and bright. I feel like I’m behind schedule, since I thought integration would happen this past week, but we will spend a lot more time on this project together to make sure the integration gets done before next week.
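
Adding a new exercise under the format described above only requires data along these lines (a sketch of the information each exercise carries; the real Django model’s field names may differ):

    // One pitch exercise: the note sequence, per-note durations (in beats),
    // and the solfege syllable shown for each note.
    const cMajorScale = {
      name: 'C Major Scale',
      notes:     ['C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'C5'],
      durations: [1, 1, 1, 1, 1, 1, 1, 2],
      solfege:   ['Do', 'Re', 'Mi', 'Fa', 'Sol', 'La', 'Ti', 'Do'],
    };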

For the upcoming week, I’ll be working on creating the voice range evaluation that will be part of the registration process, integrating with Funmbi’s features, and working with her to create the web app’s main dashboard. I will also integrate my code with Carlos’s pitch detection algorithm to generate feedback from the audio recordings, and I will try to plan out what the user experience survey questions/prompts will look like.

Below are some pictures of the new exercise format and the pitch exercise dashboard.

Sai’s Status Report for 4/10/21

This past week, I figured out how to send the audio files that a user records for pitch exercises back to the server and store them in the database through AJAX. I also realized that the Web Audio API was not the best solution for generating the piano notes a user listens to and imitates, as it produced very robotic sounds. I found another library called AudioSynth, which is very easy to use and generates much more realistic piano notes. However, I ran into a roadblock: I had failed to consider how the lessons would be generated. We planned for the scale exercises in particular to be represented in JavaScript as a dictionary with the pitch notes (A, B, C, D, etc.) as keys and their durations (eighth note, quarter note, half note, etc.) as values, in the order they are played and must be sung. The dictionary would be serialized as JSON and sent to the server so that the pitch detection and feedback algorithms can compare the user’s pitch to the expected pitch. This means the scale exercises would be hardcoded in JavaScript and sent to the server in JSON format through AJAX, but there needs to be a more efficient way of storing and generating these pitch/scale exercises, which is what I need to figure out this upcoming week.
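
Concretely, the plan amounts to something like this sketch (the endpoint name is an assumption, and Django’s CSRF token is omitted for brevity):

    // Expected exercise as a dictionary of pitch -> duration, posted as JSON.
    // Note that a plain object can't list the same pitch twice, which is one
    // reason this representation needs rethinking.
    const expectedExercise = { C: 'quarter', D: 'quarter', E: 'quarter', F: 'half' };

    fetch('/lessons/expected/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(expectedExercise),
    });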

I’ve made a significant amount of progress this past week; however, I feel that I’m behind in regards to integrating my code with the server. Getting over the roadblock will require some planning, which I plan to do this week after I generate feedback from the exercise to present for the interim demo. Until then, given the time constraints, I plan on just using hardcoded values for the lessons without integration.

Here’s the code I wrote to send audio recordings over to the server via AJAX:
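
The pattern is roughly the following sketch (the endpoint name and CSRF handling are assumptions, and the exact code may differ):

    // Package the recorded blob in a FormData object and POST it via AJAX.
    function sendRecording(blob) {
      const formData = new FormData();
      formData.append('audio', blob, 'recording.webm');

      $.ajax({
        url: '/lessons/upload/',                    // hypothetical endpoint
        method: 'POST',
        data: formData,
        processData: false,                         // don't serialize the FormData
        contentType: false,                         // let the browser set the multipart boundary
        headers: { 'X-CSRFToken': getCsrfToken() }, // Django CSRF token; helper assumed
      });
    }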

Here’s the code I used to generate piano notes for a C Major Scale:
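
In sketch form, using AudioSynth’s piano instrument (the note spacing and durations are assumptions):

    // Play a C major scale: one note every 600 ms, each ringing for ~2 seconds.
    const piano = Synth.createInstrument('piano');
    const scale = [['C', 4], ['D', 4], ['E', 4], ['F', 4], ['G', 4], ['A', 4], ['B', 4], ['C', 5]];

    scale.forEach(([note, octave], i) => {
      setTimeout(() => piano.play(note, octave, 2), i * 600);
    });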


Team Status Report for 4/10/21

This past week, we’ve noticed a few significant risks to the progress of our project. The first risk is that we won’t be well prepared to integrate our frontend code with the server. Because we’ve been spending most of our time hardcoding the exercises to make sure they look right on the frontend, we have run into a couple of roadblocks: we haven’t planned out exactly how to send the expected exercise results back to the server to compare them against the user’s actual results/recording, or whether that approach will work. The second risk is that we won’t have enough time to reach our MVP; we have to account for the time we need to spend on other classes as well, since they’ve been taking up more and more of our time these past few weeks.

To mitigate these risks, we will collaborate more to plan and lay out all the functions we will be using in the different components of the web application – the views.py file (on the server) and the HTML + JavaScript (frontend) – and how they will be linked together. As for the time restrictions we all have, at the beginning of each week we will plan out how many hours we can each spend on the capstone and make sure it’s at least 12, without other coursework taking up too much time and without spending too much time on capstone.

There have been a few design changes to our project. One is that for the music theory exercises, we will present the lessons with image files instead of VexFlow, due to the integration roadblocks mentioned above. Another is that instead of using the Web Audio API to generate piano notes, we will use a library we found online called AudioSynth, since it has more support for playing natural piano tones, whereas the Web Audio API generated very robotic ones.

Our schedule has changed slightly:

Here’s a picture of our progress on the frontend of a basic pitch exercise:

Sai’s Status Report for 4/3/2021

This week, I started thinking about how to use the MediaRecorder API with our Django web app, the best way to send the audio recording back to the Django server, and how to store it. I used the MediaRecorder functionality in the JavaScript file of our HTML + CSS + JS frontend and got the basic recording and playback feature implemented:
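
The wiring is essentially the following sketch (the element IDs are assumptions):

    // Record microphone audio with MediaRecorder and play it back in an <audio> element.
    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
      const recorder = new MediaRecorder(stream);
      let chunks = [];

      recorder.ondataavailable = e => chunks.push(e.data);
      recorder.onstop = () => {
        const blob = new Blob(chunks, { type: 'audio/webm' });
        document.getElementById('playback').src = URL.createObjectURL(blob);
        chunks = [];
      };

      document.getElementById('record').onclick = () => recorder.start();
      document.getElementById('stop').onclick = () => recorder.stop();
    });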

I have also set up the data model that stores a user’s recording for a given lesson (the Lesson model):

I plan to link this model to a form that will send the audio recording back to the server as a POST request. This still needs to be implemented in code and tested; it’s one of the tasks I plan to complete early this week.

I am still behind schedule, as I had planned to be working on the feedback page for users’ audio recordings by now. If I can finish recording a user’s audio, storing it, and passing it back to our pitch detection algorithm in the backend, I will immediately start working on the feedback generation and rendering at the end of this week, and hopefully get that done before the interim demo.


Sai’s Status Report for 3/27/2021

This past week, I got started on coding the web application’s HTML templates and integrated Bootstrap features with them. I finished implementing the login, registration, and user dashboard pages. I’ve spent too much time focusing on the visual aspect of the web application (integrating Bootstrap), trying to make each element look perfect, and I realized that this has been making me fall behind schedule. I will try to implement the basic skeleton of the web application’s essential and main features first before getting bogged down in the visual details. As a minor progress item, I also placed the order for the headset we plan on using.

Pictures of the login and register pages are uploaded in the team status report. Here’s a picture of the user dashboard page:

For the upcoming week, I hope to finish implementing a basic pitch exercise page and a basic rhythm exercise page, as well as get started on the views that control the functionality of the Django application. I will also be working on implementing the database models that will store user info.

Sai’s Status Report for 3/20/2021

This past week, I read up on and learned how to implement the APIs we will be using in our web application. I came up with a high-level algorithm to convert the timestamps of the claps from our application’s rhythm exercise into notes to display through the VexFlow API. I also came up with a high-level algorithm to compare the clap timestamps to the expected timestamps and color-code the music notes using VexFlow. I attached the code below.
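
In outline, the color-coding comparison looks like this sketch (the tolerance value and the note drawn are assumptions, not the attached code):

    // Compare each clap timestamp (ms) to the expected timestamp and build a
    // green (on time) or red (off) VexFlow note for it.
    function colorCodeClaps(expectedTimes, actualTimes, toleranceMs = 100) {
      return expectedTimes.map((expected, i) => {
        const note = new Vex.Flow.StaveNote({ keys: ['b/4'], duration: 'q' });
        const onTime = actualTimes[i] !== undefined &&
                       Math.abs(actualTimes[i] - expected) <= toleranceMs;
        const color = onTime ? 'green' : 'red';
        note.setStyle({ fillStyle: color, strokeStyle: color });
        return note;
      });
    }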

While writing the design report, I came up with explicit response time requirements for the web application based on an article I read, “Response Times: The Three Important Limits”: 0.1 seconds for page loads, transitions, clicks, and anything else that doesn’t require special feedback, and 10 seconds for special feedback. I created the Django project and added it to our team GitHub repository. I would say that I’m definitely behind schedule, due to the design report taking up a lot of my time; however, writing it definitely helped me clarify a few requirements. For the upcoming week, I plan on coding the user login and registration pages as well as the base user interface that will be a common template for all of the different web application pages.

Sai’s Status Report for 3/13/2021

This week, I started doing research for our design report. I looked through the different APIs we plan on implementing more thoroughly, taking notes on them and copying down useful snippets of code we could use. I plan to put what I learned about these APIs, and their advantages over other APIs, in the design report. I attached a PDF with my notes on the APIs below. I also found a few simple rhythm exercises we can choose from to start off our initial implementation; I attached a picture of the sheet music representing those exercises below.

For the upcoming week, I plan to refine our system diagram, following the feedback we were given about its color coding and organization. I will continue to work on the design report and get the feedback user interface finalized. I also want to get the user login and registration implemented for our web application by the end of this week.

APIs – Research_References