Akash’s Status Report for 2/29

This week I worked on the slides for our design review presentation and on the design review document. While going through the document and presentation, we fleshed out a lot of the details of how our project is going to work. Since we already found that the step detection from the phone is good enough, we modified part of our project to include a song selection algorithm, which is the new part I will be working on. So in the next few weeks, I will be working on that and making sure it works properly.

The point of this algorithm is to find the song in the user's playlist that best matches the running pace. Our metrics require that the song be within a certain BPM range of the running pace, so the goal of the algorithm is to find the closest match within that range. That way, when we apply the time-scale audio modification algorithm to the song, it will not sound too different from the original and will still be enjoyable to the user. At the end of the day, if the music does not sound good, the user is less likely to use the app.
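A minimal sketch of this selection logic, assuming each song's BPM is known ahead of time (the names below are placeholders, and the 30 BPM threshold is drawn from our current design discussions, not a final value):

```python
# Minimal sketch of BPM-based song selection (function and field names
# are placeholders, not our final design).

def select_song(songs, pace_spm, max_bpm_gap=30):
    """Pick the song whose natural BPM is closest to the runner's pace.

    songs:       list of (title, bpm) tuples
    pace_spm:    current pace in steps per minute
    max_bpm_gap: largest BPM difference we are willing to warp across
    """
    # Keep only songs close enough that time-scaling won't distort them badly.
    candidates = [(title, bpm) for title, bpm in songs
                  if abs(bpm - pace_spm) <= max_bpm_gap]
    if not candidates:
        return None  # nothing in range; the caller decides what to do
    # Return the candidate with the smallest BPM gap.
    return min(candidates, key=lambda song: abs(song[1] - pace_spm))

playlist = [("Song A", 148), ("Song B", 165), ("Song C", 180)]
print(select_song(playlist, pace_spm=170))  # -> ('Song B', 165)
```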

Aarushi’s Status Report for 2/22

While last week involved preliminary testing and information gathering for the design proposal's requirements, this week brought drastic and unexpected changes.

  1. Software Decisions – Last week we decided that the best integration method between the wavelet transform and the mobile app would be to implement the wavelet transform in Python and integrate it into Android Studio via Jython. However, this method would have required a different library on the Java side that would not interface with the phone's step counter data. While researching and discussing this issue, Professor Sullivan suggested using C++ for the wavelet transform. Since I will be working on the wavelet transform, I took this decision particularly personally: I have little experience with C++, and even after initially playing with the language to familiarize myself with it, I was still uncomfortable. Despite my distaste for the language, it was important to note that C++ offered DWT & IDWT (discrete wavelet transform and its inverse) implementations AND that these were well documented: http://wavelet2d.sourceforge.net/#toc-Chapter-2. In fact, the implementations and example use cases I found provided more customization and flexibility with the input/output signals than Python's libraries and examples. As a result, I decided to bite the bullet and favor C++ for its easier integration with the mobile app, its flexibility with signal processing, and its sufficient, clear examples and documentation of wavelet transform use cases.
  2. Device Decisions – NO watch, ONLY phone. Based on the step counter data we measured, the watch's step counter was the least accurate of the devices we tested, despite being of a recent generation.
  3. Scoping Step Detection – As of last week, we had decided our target audience would be runners. However, after performing our own 'runner's' test, we realized that the discrepancy in our target BPM range arose because our searches were targeting runners while we were actually referencing joggers. Additionally, our measured pace/desired BPM of 150-180 actually matched up well with many songs I already use for fast 'jogging'. Thus, we adjusted our target BPM/pace accordingly.
  4. Music Choice – During our proposal presentation, we received feedback to narrow the scope of inputs – i.e., the scope of songs that could be warped and run to. With our new target pace, we will allow only songs of 150-180 BPM. Additionally, when choosing a song from a defined playlist, we will apply a scoring algorithm. This scoring algorithm will give a song a score depending on how many times it has been played and how close the song's natural BPM is to the jogger's current pace. The algorithm will choose the song with the best score. This ensures one song is not constantly on repeat and that a song of decent BPM is played. The relative weights of both factors will be adjusted based on the outcomes of our algorithm.
  5. Wavelet Transform vs Phase Vocoder – I researched metrics and tradeoffs between the two, and they validated our expectations. Additionally, a plan to accomplish the wavelet transform has been made: code and test on a sine wave without changing inputs, do the same on simple music, account for tempo variations, account for pitch, and measure artifacts throughout this process (a rough sketch of one such artifact check appears below). I have additionally researched resources on campus in case I need guidance in applying wavelets to our specific use cases (e.g., Profs. Aswin, Stern, Paul Heckbert, and Jelena Kovačević).
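As a rough illustration of the artifact-measurement step, a reconstruction check like the following could be used. This is shown in Python for brevity (the real transform will be implemented in C++), and the signal parameters are illustrative:

```python
import numpy as np

def snr_db(original, processed):
    """Signal-to-noise ratio of the processed signal vs. the original, in dB.
    Higher means fewer artifacts were introduced by the transform chain."""
    noise_power = np.sum((original - processed) ** 2)
    if noise_power == 0:
        return float("inf")  # bit-exact roundtrip
    return 10 * np.log10(np.sum(original ** 2) / noise_power)

# Step 1 of the plan: a plain sine wave pushed through the pipeline
# without changing any inputs should come back essentially unchanged.
t = np.arange(0, 1, 1 / 1000)       # 1 second at 1 kHz (illustrative rate)
x = np.sin(2 * np.pi * 440 * t)     # 440 Hz test tone

y = x.copy()  # placeholder for the DWT -> (no modification) -> IDWT chain

print(f"SNR: {snr_db(x, y)} dB")    # inf for the identity placeholder
```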

Akash’s Status Report for 2/22

This week I worked on finding out how to get step data from the Samsung Galaxy S9. We decided to work with the S9 since it gave us the best data overall in our testing.

I found a few papers and websites that explain how to get the step detector working in Android Studio, but it is unclear whether they apply to Android in general or are Samsung-specific. I would like to keep researching but also try the suggestions from those websites.

My goal for the next week is to get something basic working from the papers and websites and see how accurate it is.

Team Status Update for 2/22

We made a couple of changes to our system design in accordance with the feedback we received from the proposal presentation. First, we have consolidated the project onto the phone. We no longer plan on developing the app for the watch, since the watch's step detection error is more than twice what our design goals allow. Additionally, we will be adding a song-picking algorithm to our design. This will allow us to reduce the number of artifacts that come from warping a song too much. Furthermore, we narrowed the range of tempos we will warp songs between. Originally, we were going to warp songs between tempos of 90 and 150 BPM. Based on our own experience during our treadmill tests, we have changed the range to 150-180 BPM, which we believe is an accurate tempo range for a long-distance jogger. On that topic, the last change we have made is to target long-distance joggers as opposed to every type of runner. This will allow us to fine-tune our project and narrow the scope.

During our proposal presentation, we received feedback to narrow the scope of inputs – i.e., the scope of songs that could be warped and run to. With our new target pace, we will allow only songs of 150-180 BPM. Additionally, when choosing a song from a defined playlist, we will apply a scoring algorithm. This scoring algorithm will give a song a score depending on how many times it has been played and how close the song's natural BPM is to the jogger's current pace. The algorithm will choose the song with the best score. This ensures one song is not constantly on repeat and that a song of decent BPM is played. The relative weights of both factors will be adjusted based on the outcomes of our algorithm.
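A minimal sketch of what this scoring could look like, assuming each song's BPM and play count are available (all names and weight values here are placeholders to be tuned, not our final design):

```python
# Illustrative sketch of the song-scoring idea. The weights are placeholders
# to be tuned once we see how the algorithm behaves in practice.

def song_score(bpm, play_count, pace_spm, w_bpm=1.0, w_plays=0.5):
    """Lower is better: penalize distance from the jogger's pace and
    penalize songs that have already been played many times."""
    return w_bpm * abs(bpm - pace_spm) + w_plays * play_count

def pick_song(songs, pace_spm):
    """songs: list of dicts with 'title', 'bpm', and 'plays' keys."""
    return min(songs, key=lambda s: song_score(s["bpm"], s["plays"], pace_spm))

playlist = [
    {"title": "Song A", "bpm": 170, "plays": 9},
    {"title": "Song B", "bpm": 162, "plays": 0},
]
# Song A: 1.0*|170-168| + 0.5*9 = 6.5;  Song B: 1.0*|162-168| + 0.5*0 = 6.0
# Song B wins despite the larger BPM gap because it is played less often.
print(pick_song(playlist, pace_spm=168))
```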

The risks to our project are relatively unchanged. If the wavelet transform does not work, we will be using a phase vocoder, which is known to work accurately. Nevertheless, we are hoping to get the former working with the aid of professors. If there are too many artifacts left over from warping songs by up to 30 BPM, we may choose to switch the songs rather than warp them that much. This will be implemented within our algorithm.

Mayur’s Status Update for 2/22

This week, I finished my exploration into using Python with Android Studio. I discovered several different methods, but each came with a host of problems that made it impractical to use. Most of the problems were that the methods either didn't have a way to access the Android Step Detector and Step Counter sensors, or didn't provide a way to make callable functions in Python that could interface with the app built in Java. We will be including a section in the design presentation and report with more details.

There was no specific reason we wanted to use Python other than familiarity, so we have decided to write the discrete wavelet transform in C/C++, which is natively supported by Android Studio (via the NDK). We also had the option of using Java, but we feel that its efficiency for audio processing will not be as strong as C's. I did a bit of searching for libraries that already implement the wavelet transform and found a couple.

Finally, I worked on updating the design presentation slides for our presentation next week. Specifically, the team discussed the data we had gathered to draw conclusions on how we wanted to adjust our design goals and platform specifications.

I would say that the project is currently on track. We have taken the feedback from the proposal and used it to refine our project appropriately. Next week, the deliverables I want are the design doc (obviously) and an app that runs a little C code as a proof of concept.

Mayur’s Status Update for 2/15

My contributions this week fall into two major sections.

    1. Our whole group decided to gather data in order to determine the viability of using the phone and smartwatch built-in step counter functions. The three of us met up at the Tepper gym and measured the accuracy of two different Android phones and an Android watch. See the team update for more information. My contribution in this part was running on the treadmill as a second person so that we could have a second set of data. Based on the information we gathered, we have started to debate the necessity of pushing our application onto the watch. Its accuracy is lower than our current design requirements allow, so our only options are to acknowledge that we will not be hitting our requirements, or to force a person to carry a phone with them. However, if we pick the latter option, then the watch is effectively pointless.
    2. As we outlined in our Proposal Presentation, I will be in charge of creating the UI and functionality of our smartphone application. For this week and up until Tuesday night, I am aiming to have created a very basic application that can run some Python code. We need this for two reasons. The first is to verify that our code base can be written in Python, which has rich libraries for audio and signal processing that we would like to utilize; as far as we are aware, most code bases are built using only a single language. The second is that having a basic app will give us the chance to explore additional parts of our project in the next week, such as the most logical way to build out the UI, how music is imported, how long it takes to process music on the phone, and how to get step detector information from the Android API. So far, I have been looking over coding examples and following a basic tutorial to build my first app. Familiarizing myself with the host of different files and Android jargon has been the most challenging part of the week. The most exciting part has been figuring out how to test the app directly on my phone. I would say that I am mostly on track to write the app by the end of next week. Aside from the app, I'm also hoping to begin work on the design presentation slides & writeup next week.

Akash’s Status Update for 2/15

This week I worked on testing the step detection on the devices we may use for our app. We worked with two Android phones (Galaxy S9 and Galaxy S7) as well as a Samsung Galaxy Watch. Since I am injured, I had my other two partners run on the treadmill with the phones and watch in their hands while also counting their steps manually. After each run, we recorded all the data the phones and watch detected as well as the manual count. We ran in 30-second and 1-minute intervals at a range of speeds from 5.5 mph to 10 mph, as this is the range of typical runners.

We added a small margin of error to our step counts to account for extra steps taken while the phones were resting on the arm bars, etc. We then calculated the average error per device, as well as the pace in footsteps per minute, which we can use to find the BPM of songs for warping them.
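For reference, the per-interval calculations are straightforward; here is a small sketch with made-up numbers (our actual measurements are in the spreadsheet linked below):

```python
# Sketch of the per-interval calculations. The numbers here are made up;
# our actual measurements are in the spreadsheet linked below.

def percent_error(detected, actual):
    """Absolute step-count error as a percentage of the manual count."""
    return abs(detected - actual) / actual * 100

def pace_spm(steps, interval_sec):
    """Pace in footsteps per minute for a counted interval."""
    return steps * 60 / interval_sec

# Example: a 30-second interval where a phone detected 82 steps
# against a manual count of 85.
print(f"error: {percent_error(82, 85):.1f}%")       # ~3.5%
print(f"pace:  {pace_spm(85, 30):.0f} steps/min")   # 170 steps/min
```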

All the data and calculations can be found in this spreadsheet: https://docs.google.com/spreadsheets/d/1J2ysAOXA1FXJSTiTSfZmVzg7IQvT8NH68mO__dcMQ8I/edit?usp=sharing

As of right now, I feel like we are on track with our goals. Since we know the data we got from the step detection is pretty good, we can move forward with our selected devices.

In the next week, I look forward to figuring out how to get the footstep data off the phone to use for processing.

Group’s Status Update for 2/15

Step Detection Verification was our main focus for this week’s design review process.

This week it was important to test our step detection methods, as step detection is the foundation of our goal – matching a runner's pace, where pace means steps per minute rather than distance over time.

We used class time to research how accelerometer data is measured and calculated, how it differs across devices, and how we can verify the devices against each other. This research also included finding published accuracy metrics for the accelerometers we are considering. Additionally, we designed the following test and verification process with two users, Aarushi and Mayur (data sheet on our Google Drive):

I ran on a treadmill to (1) verify accelerometer data and (2) measure my tolerance for the gap between starting a run and the music adjusting its tempo to my pace. For jogs of 20-40 minutes (3-5 miles) at more or less the same pace, my tolerance for unadjusted music was 3 minutes. For runs of 10 minutes (1-1.5 miles) at more or less the same pace, my tolerance for unadjusted music was 1.5 minutes.

When verifying accelerometer data, we compared two Android phones of different generations and a smartwatch. The test was controlled by manually counting steps while running and by using all devices on the same run. The measurements were taken over 30-second and 1-minute intervals at speeds from 5.5 mph to 10 mph in 0.5 mph increments. Additionally, I completed three 'long' distance runs of 3 and 5 minutes for step verification, and longer runs for the tolerance of the gap between starting a run and the music adjusting tempo to pace. (A tragic event, because I prefer intervals to distance.) We attempted to include an iPhone for comparable metrics, but the only one we had access to was an iPhone 7 Plus, which updates its step count only every 10 minutes. Thus, it was impossible to use it to measure the number of steps in a defined time interval.

We figured this was a technology that could jeopardize our project if the data we got from the phones and watch weren't good enough. Our contingency plan was to either use a Bluetooth pedometer or write our own step detection algorithm; however, we found that the data from the newer Android phone lay within an average 4% error of the actual step count, which makes us confident in using it.

The biggest change we are considering is not using the watch, since its error rate was roughly 10-15% on average, which is higher than we would like. We are thinking of still making the app for the watch but using all the step data from the phone.

Software Decisions with Wavelet Transforms

During class time, we researched the best methods for phone & watch applications and settled on Java. Python would be used for the wavelet transform, given our familiarity with it and its ease of use. Integration between the two via Jython is possible.

Aarushi’s Status Update for 2/15


Step Detection

This week it was important to test our step detection methods, as step detection is the foundation of our goal – matching a runner's pace, where pace means steps per minute rather than distance over time.

I ran on a treadmill to (1) verify accelerometer data and (2) measure my tolerance for the gap between starting a run and the music adjusting its tempo to my pace. For jogs of 20-40 minutes (3-5 miles) at more or less the same pace, my tolerance for unadjusted music was 3 minutes. For runs of 10 minutes (1-1.5 miles) at more or less the same pace, my tolerance for unadjusted music was 1.5 minutes.

When verifying accelerometer data, we compared two Android phones of different generations and a smartwatch. The test was controlled by manually counting steps while running and by using all devices on the same run. The measurements were taken over 30-second and 1-minute intervals at speeds from 5.5 mph to 10 mph in 0.5 mph increments. Additionally, I completed three 'long' distance runs of 3 and 5 minutes for step verification, and longer runs for the tolerance of the gap between starting a run and the music adjusting tempo to pace. (A tragic event, because I prefer intervals to distance.) We attempted to include an iPhone for comparable metrics, but the only one we had access to was an iPhone 7 Plus, which updates its step count only every 10 minutes. Thus, it was impossible to use it to measure the number of steps in a defined time interval.


Wavelet Transform

I am working on the wavelet transform model based on this paper covering musical analysis and audio compression methods: https://www.hindawi.com/journals/jece/2008/346767/#experimental-procedures-and-results. This approach was decided on after evaluating numerous methods that are also discussed in the paper. The paper provides research and insight into how a transformation can be deemed successful, showing that the approach is effective in decreasing error, seen as quantization artifacts or signal-to-mask ratio (SMR). The music transformation was performed with the Discrete Wavelet Packet Transform (DWPT) for its increased accuracy and lower computational complexity, and I will follow suit for these two benefits.
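To get a first feel for the DWPT, a small pywavelets sketch might look like the following (the db4 wavelet and decomposition depth are arbitrary choices for exploration, not the paper's exact filter bank):

```python
import numpy as np
import pywt  # PyWavelets

# Toy signal: a 7 Hz sine over [-1, 1) at 0.01 s steps.
t = np.arange(-1, 1, 0.01)
x = np.sin(2 * np.pi * 7 * t)

# Wavelet packet decomposition. The db4 wavelet and depth of 3 are
# arbitrary choices for exploration, not the paper's filter bank.
wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=3)

# Leaf nodes at level 3 partition the signal's frequency content.
leaves = wp.get_level(3, order="freq")
print([node.path for node in leaves])

# An untouched decomposition should reconstruct the signal almost exactly.
x_hat = wp.reconstruct(update=False)[: len(x)]
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```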

I will be implementing this in Python for easy integration into Java via Jython. Therefore, I have been playing around with Python's open-source wavelet transform library, pywavelets. I have set up my environment, removing and installing all the necessary libraries at their correct versions for this testing. I have started testing the library's wavelet transform functions on basic signals like [1,2,3,4]; on originalSignal = sin(2 * np.pi * 7 * originalTime), where originalTime is a linspace of time from -1 to 1 broken into 'discrete' increments of 0.01; and on images, since I have worked with wavelet transforms on images before. This experimentation will continue into Saturday night; however, this update is being submitted before results on audio signals have been tested.
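For concreteness, the kind of quick sanity check I have been running looks like this (a pywavelets sketch; the Haar and db2 wavelets are just convenient choices for testing):

```python
import numpy as np
import pywt

# Single-level DWT of the toy signal [1, 2, 3, 4] with the Haar wavelet.
cA, cD = pywt.dwt([1, 2, 3, 4], "haar")
print(cA)  # approximation coefficients, ~[2.121, 4.950]
print(cD)  # detail coefficients, ~[-0.707, -0.707]

# The inverse transform recovers the original signal.
print(pywt.idwt(cA, cD, "haar"))  # [1. 2. 3. 4.]

# The same roundtrip on the discretized sine wave described above.
originalTime = np.arange(-1, 1, 0.01)
originalSignal = np.sin(2 * np.pi * 7 * originalTime)
cA, cD = pywt.dwt(originalSignal, "db2")
reconstructed = pywt.idwt(cA, cD, "db2")
print(np.allclose(originalSignal, reconstructed[: len(originalSignal)]))  # True
```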