E4: Final Project Video

Description: 18-500 ECE Capstone Final Project Video

Technical Note to our Professors: The app is almost fully integrated, with 2/3 of the components already combined and the last almost there. Each component works completely on its own, and the output of each component is the only input required for the next. The outputs of the song selection algorithm and the step detector sensors are the only inputs needed for the time modification algorithm, and the output of the warping algorithm is the music to be played. Although the system does not yet run in real time, the proof of concept was established and, given a little extra time, it could be completed.
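For reference, the data flow between the three components boils down to a couple of values passed along the chain. The sketch below is purely illustrative, with placeholder names and numbers rather than our actual code.

    // Illustrative pipeline sketch -- names and values are placeholders, not the real app code.
    public class PipelineSketch {
        public static void main(String[] args) {
            double cadenceBpm = 165.0;                 // from the step detector (steps per minute)
            String songPath = "selected_song.wav";     // from the song selection algorithm
            double songBpm = 150.0;                    // tempo stored with the selected song

            // The time-modification (warping) step stretches or compresses the song so its
            // tempo matches the runner's cadence; its output is the audio to be played.
            double warpFactor = cadenceBpm / songBpm;  // > 1 speeds the song up, < 1 slows it down
            System.out.printf("Warp %s by a factor of %.2f before playback%n", songPath, warpFactor);
        }
    }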

Team Status Report for 5/2

Very briefly, there are only three items left on the agenda: integrating the warping code with the app (which we will attempt in the next two days), writing the final report, and creating the demo video. From our discussion with Professor Sullivan, we know that it is already an accomplishment to have the individual components working and the project integrated 2/3 of the way. In a perfect world, the app and the warping algorithm would share a language and be easy to mix together. Our current ability to have the inputs/outputs of each piece flow into one another is enough to show that, in that situation, we would have been able to add this final piece. The other two items have due dates in the next week, and we are working on them now!

Mayur’s Status Report for 5/2

Honestly, there isn’t much to say now. Akash sent me the code that bounds song choices to -10/+15 BPM in the song selection algorithm, and I integrated the new version into the app. Integrating the time-warping algorithm with the app code remains difficult. All that’s left otherwise is the final report and the demo video. Overall, capstone has been very design heavy. I gave my feedback to the course instructors via the FCEs, and I honestly had a positive experience with the course.
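For anyone curious what that bounding step amounts to, it is essentially a tempo-window filter over the candidate songs. The sketch below is a hypothetical stand-in with placeholder names, not Akash's actual code.

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical sketch of the BPM-bounding step in song selection; names are placeholders.
    public class SongFilterSketch {

        static class Song {
            final String title;
            final double bpm;
            Song(String title, double bpm) { this.title = title; this.bpm = bpm; }
        }

        // Keep only songs whose tempo lies within [cadence + lowerOffset, cadence + upperOffset],
        // e.g. the -10/+15 BPM window described above.
        static List<Song> boundByBpm(List<Song> candidates, double cadenceBpm,
                                     double lowerOffset, double upperOffset) {
            return candidates.stream()
                    .filter(s -> s.bpm >= cadenceBpm + lowerOffset
                              && s.bpm <= cadenceBpm + upperOffset)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Song> library = Arrays.asList(
                    new Song("Song A", 150), new Song("Song B", 168), new Song("Song C", 190));
            // A runner cadence of 160 steps/min with the -10/+15 BPM window keeps A and B.
            for (Song s : boundByBpm(library, 160, -10, +15)) {
                System.out.println(s.title + " (" + s.bpm + " BPM)");
            }
        }
    }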

Aarushi’s Status Report for 4/26

  • Adjusted the DTCWT PVOC’s matrix dimensions for each level of the transformation. The goal was for the DTCWT PVOC to time-stretch and shrink signals by any factor; the current implementation ONLY allows time stretching by 1/x, where x is a whole number. Spoke to Jeffrey Livingston about his implementation.
  • Adapted the DTCWT PVOC functionality so it can be called from the function that performs signal pre-processing for the app.
  • Ran numerous experiments on the STFT PVOC and DTCWT PVOC to compare their performance; the STFT version is by far faster and more accurate. The experiments varied the music choices and the speed of the tempo change (a harness along the lines of the sketch after this list).
  • Ran numerous experiments on the STFT PVOC alone with varied tempo-change speeds to compare against our system requirements for music tempo range and computational speed. Not completely sure yet how to interpret the results.
  • Completed the final presentation slide deck.
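For the curious, the timing side of those experiments needs nothing more elaborate than the harness sketched below. The real STFT and DTCWT phase vocoders live in our MATLAB/C++ code, so a naive linear-interpolation stretcher stands in here purely so the sketch runs; the numbers it prints are not our results.

    // Hypothetical timing harness; a naive linear-interpolation stretcher (NOT a phase vocoder)
    // stands in for the real STFT/DTCWT implementations just to make this runnable.
    public class WarpTimingSketch {

        // Stand-in time-stretcher: resamples by linear interpolation.
        static double[] naiveStretch(double[] x, double factor) {
            int outLen = (int) Math.round(x.length * factor);
            double[] y = new double[outLen];
            for (int i = 0; i < outLen; i++) {
                double pos = Math.min(i / factor, x.length - 1);
                int i0 = (int) pos;
                int i1 = Math.min(i0 + 1, x.length - 1);
                double frac = pos - i0;
                y[i] = (1 - frac) * x[i0] + frac * x[i1];
            }
            return y;
        }

        public static void main(String[] args) {
            int sampleRate = 44_100;
            double[] signal = new double[sampleRate * 30];            // 30 s test tone
            for (int i = 0; i < signal.length; i++) {
                signal[i] = Math.sin(2 * Math.PI * 440 * i / sampleRate);
            }
            // Sweep a few stretch factors (slower and faster tempi) and time each run.
            for (double factor : new double[] {0.85, 1.0, 1.15}) {
                long start = System.nanoTime();
                double[] out = naiveStretch(signal, factor);
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("factor %.2f: %d -> %d samples in %d ms%n",
                        factor, signal.length, out.length, ms);
            }
        }
    }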

Team Status Report for 4/25

We have two weeks left before the end of the semester. At this point, most of our project is wrapping up, although we still have things to aim for. Each component of our project works individually, and 2/3 of the pieces are integrated. We are able to demonstrate that the inputs/outputs of each component match our expectations and would flow as desired if integrated properly.

The final presentation is on Monday and will be given by Akash. Accordingly, our group spent time this week running tests on our components, working on the slide deck, and generating graphs to include in the PowerPoint. We also finalized our decision to move to the STFT phase vocoder, since the wavelet transform experienced greater loss and took longer to process music.

Before “demo day”, we have the following goals:

  • Finish testing, and possibly re-test components post-integration
  • Attempt to integrate the final two portions
  • Implement the -15/+10 BPM requirement for Song Selection
  • Create the demo video
  • Write the final report

Akash’s Status Report for 4/25

This week I worked on testing our app and collecting some running data. I ran into some issues with Android Studio, but with Mayur’s help I was able to load the app onto my phone (the testing device). However, even after that, my phone was not able to collect data, due to an issue we still have not figured out. I then switched over to an older Android phone (a Galaxy S6), and it worked fine. We got good data that we can use to show the time-scale warp.
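For context, collecting running data on the phone comes down to registering a listener on Android's step detector sensor. The sketch below shows the general shape of that; the class and field names are placeholders rather than our actual app code.

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Rough sketch of step-data collection on Android; names are placeholders.
    public class StepLoggerSketch implements SensorEventListener {
        private long stepCount = 0;

        // Call from an Activity or Service with a valid Context to start listening.
        public void start(Context context) {
            SensorManager manager =
                    (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
            Sensor stepDetector = manager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR);
            if (stepDetector == null) {
                return;  // some devices do not expose a hardware step detector
            }
            manager.registerListener(this, stepDetector, SensorManager.SENSOR_DELAY_NORMAL);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // TYPE_STEP_DETECTOR fires one event per detected step.
            stepCount++;
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }
    }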

I also started working on the slides for our final presentation, which will be given this week. The presentation mostly focuses on our implemented system and our metric validation.

Mayur’s Status Report for 4/25

Very short update for this week. We are paradoxically both wrapping up and trying to maximize the amount we finish. I continued attempting the integration without much success; notably, my Android phone was not accepting USB connections, which made it difficult to physically try out the app. The issue was eventually solved by removing the battery from my phone and putting it back in. Otherwise, I worked with Akash to get the app working on his phone. There were a few problems that we solved, which he will explain in his report. Finally, the presentation is next week, so I am working on the slides. I am still hopeful about getting the integration done, but it is pretty difficult.

Team Status Report for 4/18

This week we focused on integrating our independent components.

The MATLAB code was ported to C++, and documentation for how to integrate the C++ files/functions was created. Upon integration, we ran into issues with (1) typing input variables passed from Java to C++, and (2) translating some MATLAB functions into Java functionality, namely audioread, which would have to be handled by the Android OS. However, passing a signal from Java to C++ creates a challenge in typing and storing the variables.

To work around this, we are considering writing a C++ function by hand to “audioread” WAV files. That implementation, however, would then require our Java functionality that initiates music modification every 60 seconds to be rewritten and integrated in C++. In parallel, we are experimenting with performing “audioread” in Java through the Android OS and storing, typing, and transforming the signal matrices as necessary.
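As a rough illustration of the Java route, a hand-rolled “audioread” only needs to strip the WAV header and rescale the samples. The sketch below assumes a canonical 16-bit PCM mono WAV with a standard 44-byte header, which is far narrower than MATLAB's audioread; the names are placeholders and this is not our final implementation.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Minimal "audioread" sketch: assumes a canonical 16-bit PCM mono WAV with a 44-byte header.
    // Far narrower than MATLAB's audioread; meant only as a proof of concept.
    public class WavReadSketch {

        public static double[] readPcm16(String path) throws IOException {
            byte[] file = Files.readAllBytes(Paths.get(path));
            int dataStart = 44;                          // skip the RIFF/fmt/data headers
            int sampleCount = (file.length - dataStart) / 2;
            double[] samples = new double[sampleCount];
            for (int i = 0; i < sampleCount; i++) {
                // WAV stores 16-bit samples little-endian; normalize to [-1, 1).
                int lo = file[dataStart + 2 * i] & 0xFF;
                int hi = file[dataStart + 2 * i + 1];    // keep the sign bit
                samples[i] = ((hi << 8) | lo) / 32768.0;
            }
            return samples;
        }

        public static void main(String[] args) throws IOException {
            double[] signal = readPcm16(args[0]);
            System.out.println("Read " + signal.length + " samples from " + args[0]);
        }
    }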

Details about the high-level approach for JNI usage can be found in Mayur’s status report, and details about the time-warping integration and audioread function can be found in Aarushi’s.

Mayur’s Status Report for 4/18

While I believed that the song selection algorithm had been fully implemented in code, that was not the case. My code was building, but I had not actually attempted to test it on my phone (which was a mistake). As it turns out, I needed to adjust memory settings within the Gradle files of the app so that the allotted memory could hold more songs at runtime. Afterwards, I started to work on integrating the time-warping code. This is extremely complicated for several reasons. First, the code requires passing arguments via the JNI, so the code needs to be adjusted so that data is properly serialized/typecast when it is sent between portions of the code. Second, the audioread function from MATLAB needs to be rewritten in C/C++. Finally, several more files need to be “included” with the project, which requires working with CMake and its associated files.

At the moment, I have about 20 tabs open on my computer trying to figure out how to make this work. For now, I believe the best approach is to send a string with the file name to the C++ code, warp the audio within C++, write a new file from C++, and then return the location of the new file to the Java code to be played. The number one performance tip for using the JNI, according to the official docs, is to minimize the amount of data sent across it: the JNI needs to marshal/unmarshal all data, which takes a long time. For this reason, the approach described above would most likely be the fastest. Next week I am hoping that we will have finished completely integrating the app and possibly begun testing certain parts. The presentation is the week after!
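To make the filename-passing idea concrete, the Java side of that bridge would look roughly like the sketch below. The class, method, and library names are placeholders; the matching C++ entry point would be exposed through the usual JNIEXPORT function and built through CMake.

    // Hypothetical Java-side JNI binding for the filename-passing approach described above.
    // Only strings and a double cross the JNI boundary, so marshalling stays cheap.
    public class TimeWarpBridgeSketch {

        static {
            // Name of the native library built by CMake; a placeholder, not our real library name.
            System.loadLibrary("timewarp");
        }

        /**
         * Implemented in C++ behind the JNI. The native side reads the input file,
         * runs the time-warping code with the given stretch factor, writes a new file,
         * and returns the path of that file for the Java player to open.
         */
        public static native String warpToFile(String inputWavPath, double stretchFactor);
    }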