Aarushi’s Status Report for 4/18

I completed porting the MATLAB code to C++ for the phase vocoder, STFT, and ISTFT files. I had to refactor parts of the original MATLAB code, including variable types, for the conversion to compile successfully. I also created documentation for how the C++ files should be integrated and used. This documentation serves as communication between our team members, as a later reference for myself, and as a reference for any future user:

This is for the case where we modify a song only ONCE, not every minute; it will be easier to handle the every-minute case afterward.
 
I think the Android class we need is AudioTrack (linked).
 
What you want to do is create a function as follows (I'm going to write this in pseudocode that mixes Python, Java, MATLAB, and comments):
def TSM(song_name, original_bpm, desired_bpm):
    ratio = desired_bpm / original_bpm
    n = 1024  # STFT window size
    # audioread is a MATLAB-only function; based on my Google searches so
    # far, it does not port to C++. We want an analogous Java function that
    # is native to Android's OS, like the AudioTrack functions.
    [original_signal, sampling_rate] = audioread(song_name)
    # This can be broken up into two steps:
    #   1. sampling_rate = audioTrack.getSampleRate() for song_name
    #   2. read song_name into original_signal -- reading the song needs to
    #      output a matrix / array of the audio signal (not sure which
    #      AudioTrack function does this; this link may help)
    modified_signal = pvoc(original_signal, ratio, n)
    return (modified_signal, sampling_rate)

# calling the function
desired_bpm = 160  # equivalent to running pace
original_bpm = 140  # average BPM of "Eye of the Tiger", found online
playback_song, sampling_rate = TSM("IntroEyeoftheTiger1.wav", original_bpm, desired_bpm)
play playback_song via audioTrack.play()
So far, in actually working on integration, it looks like we may not be able to use AudioTrack after all. One idea we are tossing around is writing “audioread” ourselves in C++. I am skeptical about this: MATLAB Coder generates C++ files for MATLAB built-in functions, yet it does not do so for audioread. That Coder omits it leads me to believe that what we are searching for is NOT basic functionality. This is why blog posts I have read suggest performing the “audioread” step on the integrated device’s OS instead. However, that method has its own complications in converting types and passing signal matrices from Java to C++. Neither method is ideal, and we will need to keep experimenting with both.
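
To make the Java-to-C++ handoff concrete, here is a minimal sketch of what the Java side might look like, assuming a 16-bit PCM WAV file with a standard 44-byte header and a hypothetical JNI hook named pvoc into the ported C++ code (the library name, method signature, and helper below are illustrative assumptions, not our final design):

    import java.io.DataInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class Tsm {
        // Hypothetical JNI hook into the ported C++ phase vocoder.
        // The C++ side would receive the samples as a jfloatArray.
        static { System.loadLibrary("pvoc"); }
        public static native float[] pvoc(float[] signal, double ratio, int n);

        // Read a 16-bit PCM WAV file into normalized floats in [-1, 1].
        // Assumes a plain 44-byte header with no extra chunks.
        public static float[] readWav(String path) throws IOException {
            File file = new File(path);
            byte[] bytes = new byte[(int) file.length() - 44];
            try (DataInputStream in = new DataInputStream(new FileInputStream(file))) {
                in.skipBytes(44);   // skip the RIFF/fmt/data headers
                in.readFully(bytes);
            }
            float[] samples = new float[bytes.length / 2];
            for (int i = 0; i < samples.length; i++) {
                // WAV samples are little-endian 16-bit signed integers
                short s = (short) ((bytes[2 * i] & 0xFF) | (bytes[2 * i + 1] << 8));
                samples[i] = s / 32768f;
            }
            return samples;
        }

        public static float[] tsm(String songName, double originalBpm, double desiredBpm)
                throws IOException {
            double ratio = desiredBpm / originalBpm;
            return pvoc(readWav(songName), ratio, 1024);  // n = 1024, as in the pseudocode
        }
    }

Even with something like this, copying between the Java float[] and a C++ buffer across JNI is exactly the type-marshalling overhead described above.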

(DTCWT is still a work in progress.)

Akash’s Status Report for 4/18

This past week I worked on converting the Python implementation of the song selection algorithm to Java so that we can easily integrate this feature into the app. I had a little difficulty with this, as I haven’t coded in Java in a long time, but I was able to use the Python skeleton I wrote to guide me. Now that it is all in Java, Mayur is going to work on integrating it into the app. The next feature we could add is user song preferences, so that scores can be boosted based on how much the user likes specific songs.
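
As a rough sketch of how that boost could work (the rating scale and weight below are hypothetical placeholders, not something we have designed yet):

    // Hypothetical preference boost: scale a song's base score by how much
    // the user likes it. A rating of 3 on a 0-5 scale is neutral.
    static double boostedScore(double baseScore, int userRating) {
        double boost = 1.0 + 0.1 * (userRating - 3);  // +/-10% per rating step
        return baseScore * boost;
    }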

Team Status Report for 4/12

After our demo, our team decided that we would work on finishing up loose ends on our individual components. This included (1) porting the song selection algorithm from Python to Java, (2) porting the audio modification component to C++, and (3) writing code that breaks a song up into 60-second chunks before warping. This would allow us to begin integration the following week.
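
For item (3), the chunking itself is straightforward once the signal is an in-memory array; here is a minimal Java sketch, assuming mono samples at a known sample rate (illustrative only, not our final code):

    import java.util.Arrays;

    public class Chunker {
        // Split an audio signal into 60-second chunks before time-warping.
        // The last chunk may be shorter than 60 seconds.
        static float[][] chunkSignal(float[] signal, int sampleRate) {
            int chunkLen = 60 * sampleRate;  // samples per 60-second chunk
            int numChunks = (signal.length + chunkLen - 1) / chunkLen;
            float[][] chunks = new float[numChunks][];
            for (int i = 0; i < numChunks; i++) {
                int start = i * chunkLen;
                int end = Math.min(start + chunkLen, signal.length);
                chunks[i] = Arrays.copyOfRange(signal, start, end);
            }
            return chunks;
        }
    }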

We made this decision based on our conversation with Professor Sullivan and Jens, as they heavily re-emphasized our goals for the end of the project. Additionally, we realized that our progress was behind relative to our Gantt chart: according to the plan, we would have completed integration this week and moved on to extended features in our two remaining weeks of slack time.

Thus, we will take what we have so far and integrate it, continue advancing our individual parts in the remainder of our slack time, and then integrate again.

Mayur’s Status Report for 4/11

This week, Akash rewrote his code in Java and sent it to me. On my end, I added the song selection algorithm he sent into the code. This week highlighted one source of time sink for the future: up until now, I have only been writing code with a “proof of concept” mindset. Unfortunately, this means the other two parts of the project are much more difficult to integrate, as the code is not designed for that to happen. In the coming week, I will be refactoring the code so that integration will [hopefully] be as simple as dragging and dropping Aarushi’s code into the app. Integrating will obviously be more complicated than that, but I want to get the code as close to that state as possible so that it will be less of a hassle in the future.

Aarushi’s Status Report for 4/12

After our demo, our team decided that ensuring we could integrate is our highest priority given our timeline. According to our Gantt chart, we were to be done integrating by the end of this week, and we are grateful for having budgeted sufficient slack time to work through our project. Thus, along with working on the ethics assignment this week, I focused my energy on porting my MATLAB code to C++, which is essential for integration.

Akash’s Status Report for 4/11

This week I have been working on converting the song selection algorithm code to Java so we can integrate it into the app. During our demo, Professor Sullivan and Jens gave us some ideas about testing and other features to try with the song selection algorithm. The one I think would be cool to work on is having users enter their rankings or preferences for songs, which would act as a score booster when songs are being selected.

Team Status Report for 4/4

Next week is the demo of our project. We will be showing each of our subsystems to Professor Sullivan and Jens.

-Mayur aims to have preliminary implementations of each part of his subsystem (the phone app)

-Akash aims to have a roughly close-to-done implementation of the song selector algorithm

-Aarushi wants to have progress on the Wavelet algorithm to show

We look forward to the feedback we will receive, and we think it will help us moving forward.

Mayur’s Status Report for 4/4

This week saw a bit of a pivot for me. Originally, my plan was to add music to the app and show that it was possible to run C code (call a function) from within the codebase.

First, I did the former. Indeed, I can play “Africa” by Toto from a WAV file when the user clicks the Play button, and pause it when the user presses the Pause button. At this point, adding more music is fairly straightforward.
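
For context, here is a minimal sketch of one way to wire this up on Android, assuming the song is bundled as a raw resource named R.raw.africa (a placeholder; this mirrors the behavior described above rather than my exact code):

    import android.app.Activity;
    import android.media.MediaPlayer;
    import android.view.View;

    public class PlayerActivity extends Activity {
        private MediaPlayer player;

        // Hooked to the Play button via android:onClick.
        public void onPlayClicked(View v) {
            if (player == null) {
                // Lazily create the player from the bundled WAV resource
                player = MediaPlayer.create(this, R.raw.africa);
            }
            player.start();
        }

        // Hooked to the Pause button via android:onClick.
        public void onPauseClicked(View v) {
            if (player != null && player.isPlaying()) {
                player.pause();
            }
        }
    }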

The pivot comes from the second goal. Aarushi was able to write a vocoder in MATLAB, and she found a link about how to run MATLAB on Android. My network was really bad, and it took me 10 hours to download the MATLAB software to my computer. I am hoping to investigate the possibility of using MATLAB instead of C on Android using the suggestions at this link: https://www.mathworks.com/matlabcentral/answers/417602-can-matlab-code-is-used-in-android-studio-for-application. This way, I can explain the progress during the demo and maybe give Aarushi an easier time developing the Wavelet code.

Next week is the demo, for which my subsystem should be close to done. The idea is to have each part done to some degree (C/MATLAB code, UI, music playability) so that all that is left is to polish the parts and integrate them with the other subsystems. Hence, my goal for next week is to make sure each subpart has a preliminary implementation.

Akash’s Status Report for 4/4

This week I worked on the song selection algorithm and got something basic working in Python. I created a list of song objects; the algorithm goes through them as specified in the Design Document, scores the songs, and picks the best one to play. The next step is to get this working with actual music and to write it so it is compatible with Android Studio.
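
A rough sketch of that select-the-best-song loop, written here in Java to match the eventual port (the Song fields and the simple distance-based score are placeholders for the scoring spelled out in the Design Document):

    import java.util.List;

    public class SongSelector {
        // Placeholder song object; the real one carries more metadata.
        static class Song {
            String name;
            double bpm;
        }

        // Score every song against the runner's target BPM and return the best.
        static Song selectSong(List<Song> songs, double targetBpm) {
            Song best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (Song s : songs) {
                double score = -Math.abs(s.bpm - targetBpm);  // closer BPM = higher score
                if (score > bestScore) {
                    bestScore = score;
                    best = s;
                }
            }
            return best;
        }
    }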

Aarushi’s Status Report for 4/4

I had the phase vocoder based on an STFT working on a monophonic signal last week. This week I tried playing a snippet of a polyphonic song that might be used while running: the intro of “Eye of the Tiger”. This did not directly work. The main problem was that the song’s signal, as an array, was too long to play on MATLAB Online. Despite having been able to complete my advanced signals classes with the online version, it seemed necessary to download the software to my laptop so that I could ACTUALLY test songs. I started this process but ran into issues because my Mac is 8GB and has been low on storage/disk space for three years now. I spent a few hours moving large folders and files online but still did not have enough space, so I was unable to download the software. However, I have a new laptop coming within the week, and I will be able to test full songs on MATLAB there.

In the meantime, I still wanted to make relevant progress this week, so I passed a smaller snippet of the polyphonic signal through the STFT-based phase vocoder.

I also deconstructed and reconstructed a signal with the DTCWT, as I did with the DWT last week. However, the DWT is shift-variant: the reconstruction changes noticeably when the input signal is shifted. The DTCWT suffers drastically less shift variance. I tested this out and found the concept to hold.