Akash’s Status Report for 3/14

Before spring break we were working on the design review document, and through our presentation we got some new ideas and metrics that we could potentially include in our project. So in the week before break, I was figuring out whether and how we could implement those ideas. At the same time, I continued my research on the Android step detection functions and worked on the details of the song selection algorithm. One important decision we came to as a group, which should be noted, is that we would rather play the same song multiple times than play a song that doesn’t warp well.

Over the break, we found out that classes would be moving online and we would likely not be meeting each other in person for the rest of the semester. This will make the project a little more difficult to work on, but because we split the work evenly, it should still be doable to finish.

Mayur’s Status Report for 3/14

These last two weeks have been pretty wild. As I stated in my last post, my goal for the week was to finish the design report on Monday. It ended up taking both Monday and Tuesday, as we had a sick member on our team. We got a PDF back for feedback, with a couple of points highlighted. I think it will be beneficial to discuss it with our mentor and TA(s), since the scan did not turn out that well. One thing I noted is that we increased the 60 seconds between music changes to 90 seconds, as we believe that long-distance runners will prefer less warping. Furthermore, from testing, we know that a minimum of 60 seconds is needed for accurate pace measurements, with more time preferred.
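
To illustrate why the 60-second minimum matters, here is a minimal sketch of how a sliding-window pace (cadence) estimate might look; the class and all names in it are hypothetical placeholders, not our actual implementation. Short windows contain few steps, so a single missed or spurious step event swings the estimate noticeably; a 60-90 second window smooths that out.

```java
import java.util.ArrayDeque;

/** Hypothetical sliding-window cadence estimator (not our final code). */
public class CadenceEstimator {
    private final ArrayDeque<Long> stepTimesMs = new ArrayDeque<>();
    private final long windowMs; // e.g. 60_000 to 90_000 ms, per our testing

    public CadenceEstimator(long windowMs) {
        this.windowMs = windowMs;
    }

    /** Call once per step-detector event, with the event time in ms. */
    public void onStep(long nowMs) {
        stepTimesMs.addLast(nowMs);
        // Drop steps that have fallen outside the window.
        while (!stepTimesMs.isEmpty() && nowMs - stepTimesMs.peekFirst() > windowMs) {
            stepTimesMs.removeFirst();
        }
    }

    /** Estimated steps per minute; noisy until the window has filled. */
    public double stepsPerMinute(long nowMs) {
        if (stepTimesMs.size() < 2) return 0.0;
        long elapsedMs = nowMs - stepTimesMs.peekFirst();
        if (elapsedMs <= 0) return 0.0;
        return (stepTimesMs.size() - 1) * 60_000.0 / elapsedMs;
    }
}
```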

Our group is currently discussing how to proceed with our project in light of COVID-19. Our group members will no longer be on campus, so we will need to be more careful with how we coordinate pieces of the project. The virus has introduced a couple of risks to our project. For one, we will need to reevaluate our team member responsibilities. The previous division of tasks assumed that we could have face-to-face meetings, but that is no longer possible. Another pain point will be the integration phase, as we cannot physically meet. Finally, testing will have to be reevaluated, since each member will need individual testing equipment and a complete copy of the project repository.

This week, we will be hashing out all of these ideas. Individually, I plan on creating the new UI for the main page of the app.

Aarushi’s Status Report for 2/29

This past week, I focused heavily on the Design Presentation – ensuring a few slides, in particular, were clean, consistent, and logical. These slides were complex and contained a lot of information that was difficult to fit aesthetically (i.e. slides 2, 3, 7). Additionally, I was the speaker for the design presentation. Since our presentation was on Wednesday, which allowed for more practice time, I spent considerable time rehearsing.

Additionally, after working with the group to delegate portions of the design report, I have been working through my portion of the document. This will continue past Saturday night.

Other than the tasks assigned by the class, I took this week to really understand the benefits and drawbacks of the ‘wavelet transform based’ phase vocoder relative to the other time-scale modification methods we could use. This involved reading (very slowly, haha) numerous research papers and their findings on the tradeoffs between the various methods. I now understand the high-level process behind each of these methods, what distinguishes them from each other, and how exactly each of the more advanced/more accurate techniques builds on the closest simpler/less accurate method. (These methods were explained in the design presentation and will be in the design report.)

The next step, following the completion of the design report, will be to implement a base portion/’method’ of the described phase vocoder.

Team Status Report for 2/29

Our team spent most of our group time this week working on the design presentation and report. We wanted to incorporate the feedback we received from both the course staff and our peers after we finished the former. Some good comments we got from Professor Sullivan and the TAs were that we needed to consider some edge cases in how we will be switching songs and to look into the wavelets more. Accordingly, we have decided to allow more time in our Gantt chart for completing the wavelet transform algorithm.

We got some good feedback from our peers as well. In particular, we would like to add a power consumption metric to measure the battery usage of our app, and the feedback made us consider how other apps play music off of phones.

The biggest obstacle for our project over the next couple of weeks is balancing the midterms that come right before vacation. Furthermore, our members may not have access to computers during Spring Break, which puts us out of commission for another week. Our plan is to work around this by using some of the slack time in our Gantt chart to extend the timeline for pieces of the project.

Mayur’s Status Report for 2/29

I spent most of my time for capstone this week working on the design presentation (finishing touches) and the design report. Our team wanted to ensure that we incorporated the feedback on the proposal that we received from both the mentors and our peers. We discuss this more in depth in our team status report. For the design report, we split the paper into portions to work on individually. I am tasked with completing a piece of the system description, part of the design specification, and part of the tradeoffs section.

In terms of deliverables, my top goal is to finish the design report with the team. Finalizing each portion of the project is important so that we can begin actual development.

Akash’s Status Report for 2/29

This week I worked on the slides for our design review presentation and on the design review document. While going through the document and presentation, we fleshed out a lot of how our project is going to work. Since we already found that the phone’s step detection is good enough, we modified part of our project to include a song selection algorithm, which is the new part I will be working on. So in the next few weeks, I will be working on that and making sure it works properly.

The point of this algorithm is to find the song in the user’s playlist that best matches the running pace. Our metrics require that the song be within a certain BPM range of the running pace, so the goal of the algorithm is to find the song that best matches that pace. This way, when we apply the time-scale audio modification algorithm to the song, it will not sound too different from the original and will still be enjoyable to the user. At the end of the day, if the music does not sound good, the user is less likely to use the app.
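
As a starting point, here is a minimal sketch of what I have in mind; the class, field names, BPM window, and weights are all placeholders to be tuned, not final design decisions.

```java
import java.util.List;

/** Hypothetical song record; field names are illustrative only. */
class Song {
    String title;
    double bpm;      // the song's natural tempo
    int playCount;   // times played recently, to discourage repeats
}

public class SongSelector {
    // Placeholder values: maximum warp we allow and relative weights.
    private static final double MAX_BPM_DIFF = 15.0;
    private static final double W_CLOSENESS = 1.0;
    private static final double W_REPEAT = 0.5;

    /** Returns the best-scoring song within range of the pace, or null. */
    public static Song pick(List<Song> playlist, double paceBpm) {
        Song best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Song s : playlist) {
            double diff = Math.abs(s.bpm - paceBpm);
            if (diff > MAX_BPM_DIFF) continue; // would warp too much
            // Higher score = closer tempo and fewer recent plays.
            double score = -(W_CLOSENESS * diff) - (W_REPEAT * s.playCount);
            if (score > bestScore) {
                bestScore = score;
                best = s;
            }
        }
        // If nothing fits, the caller can repeat the current song,
        // per our group decision that repeats beat badly warped songs.
        return best;
    }
}
```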

Aarushi’s Status Report for 2/22

While last week involved preliminary testing and information gathering for the requirements in the design proposal, this week brought drastic and unexpected changes:

  1. Software Decisions – Last week we decided that the best integration method between the wavelet transform and the mobile app would be to implement the wavelet transform in Python and integrate it into Android Studio via Jython. However, this method would have required a different library on the Java side that would not interface with the phone’s step counter data. While researching and discussing this issue, Professor Sullivan suggested using C++ for the wavelet transform. Since I will be working on the wavelet transform, I took this decision particularly personally. I have little experience with C++. In fact, even upon initially playing with C++ to familiarize myself with the language, I was still uncomfortable. Despite my distaste for the language, it was important to note that C++ offered DWT & IDWT (discrete wavelet transform and its inverse) methods AND these were well documented: http://wavelet2d.sourceforge.net/#toc-Chapter-2. In fact, the implementations and example use cases I found provided more customization and flexibility with the input/output signals than Python’s libraries and examples did. As a result, I decided to bite the bullet and favor C++ for its easier integration with the mobile app, its flexibility with signal processing, and its sufficient, clear examples/documentation of wavelet transform use cases.
  2. Device Decisions – NO watch, ONLY phone. Based on the step counter data we measured, the watch’s step counter was the least accurate, despite being of a recent generation.
  3. Scoping Step Detection – As of last week, we had decided our target audience would be runners. However, after performing our own ‘runner’s’ test, we realized that the discrepancy in our target BPM range arose because our searches targeted runners while we were actually referencing joggers. Additionally, our measured pace/desired BPM of 150-180 actually matched up well with many songs I use to fast-‘jog’ to. Thus, we adjusted our target BPM/pace accordingly.
  4. Music Choice – During our proposal presentation, we received feedback to narrow the scope of inputs – AKA the scope of songs that could be used to run to under warped conditions. With our new target pace, we will allow only songs of 150-180 BPM. Additionally, when choosing a song from a defined playlist, we will apply a scoring algorithm. This scoring algorithm gives a song a score depending on how many times it has been played and how close the song’s natural BPM is to the jogger’s current pace. The algorithm chooses the song with the best score. This ensures that one song is not constantly on repeat and that a song of decent BPM is played. Both factors will be weighted and adjusted relative to each other based on the outcomes of our algorithm.
  5. Wavelet Transform vs. Phase Vocoder – I researched the metrics and tradeoffs between the two, and they were validated as expected. Additionally, a plan to accomplish the wavelet transform has been made: code and test on a sine wave without changing inputs, do the same on simple music, account for tempo variations, account for pitch, and measure artifacts throughout the process. I have also identified additional resources on campus in case I need guidance in applying the wavelets to our specific use cases (e.g. Prof. Aswin, Stern, Paul Heckbert, Jelena Kovačević).

Akash’s Status Report for 2/22

This week I worked on finding out how to get step data from the Samsung Galaxy S9. We decided to work with the S9 since it gave us the best data overall in our testing.

I found a few papers and websites that explain how to get the step detector working in Android Studio, but it is unclear whether they apply to Android in general or are Samsung-specific. I would like to keep doing research but also try the suggestions from the websites.
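
For reference, here is a minimal sketch of the generic (non-Samsung-specific) approach using the standard Android sensor API, which I plan to try first; it assumes the device exposes a TYPE_STEP_DETECTOR sensor and that any required activity-recognition permission has been granted.

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class StepActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;
    private int stepCount = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        // TYPE_STEP_DETECTOR fires one event per detected step;
        // TYPE_STEP_COUNTER instead reports a running total since boot.
        Sensor stepDetector = sensorManager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR);
        if (stepDetector != null) {
            sensorManager.registerListener(this, stepDetector,
                    SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_STEP_DETECTOR) {
            stepCount++; // one event per step; event.values[0] is always 1.0
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```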

My goal for the next week is to get something basic working from the papers and websites and see how accurate it is.

Team Status Update for 2/22

We made a couple of changes to our system design in accordance with the feedback we received on the proposal presentation. First, we have consolidated the project onto the phone. We no longer plan on developing the app for the watch, since the watch’s step detector is roughly twice as inaccurate as our design goals allow. Additionally, we will be adding a song-picking algorithm to our design. This will allow us to reduce the number of artifacts from warping a song too much. Furthermore, we narrowed the range of tempos we will be warping songs between. Originally, we were going to warp songs between tempos of 90-150 BPM. Based on our own experience during our treadmill tests, we have changed the range to 150-180 BPM. We believe this is an accurate tempo range for a long-distance jogger. On that topic, the last change we have made is to target long-distance joggers as opposed to every type of runner. This will allow us to fine-tune our project and narrow its scope.

During our proposal presentation, we received feedback to narrow the scope of inputs – AKA the scope of songs that could be used to run to under warped conditions. With our new target pace, we will allow only songs of 150-180 BPM. Additionally, when choosing a song from a defined playlist, we will apply a scoring algorithm. This scoring algorithm gives a song a score depending on how many times it has been played and how close the song’s natural BPM is to the jogger’s current pace. The algorithm chooses the song with the best score. This ensures that one song is not constantly on repeat and that a song of decent BPM is played. Both factors will be weighted and adjusted relative to each other based on the outcomes of our algorithm.
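
As a purely illustrative example of how such a score could combine the two factors (the weights here are hypothetical, not our tuned values): with a weight of 1.0 on tempo distance and 0.5 on play count, a 165 BPM song already played twice would score -(1.0 × |165 - 170|) - (0.5 × 2) = -6 against a 170 BPM pace, while an unplayed 160 BPM song would score -(1.0 × 10) - 0 = -10, so the closer song still wins despite its repeats. The actual weights will come out of testing, as noted above.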

The risks to our project are relatively unchanged. If the wavelet transform does not work, we will use a phase vocoder, which is known to work accurately. Nevertheless, we are hoping to get the former working with the aid of professors. If too many artifacts are left over from warping songs by up to 30 BPM, we may choose to switch songs rather than warp them that much. This will be implemented within our algorithm.

Mayur’s Status Update for 2/22

This week, I finished my exploration into using Python with Android Studio. I discovered several different methods, but each came with a host of problems that made it impractical to use. Most of the methods either didn’t have a way to access the Android Step Detector and Step Counter sensors, or didn’t provide a way to write functions in Python that the Java app could call. We will be including a section with more details in the design presentation and report.

There was no specific reason we wanted to use Python other than familiarity, so we have decided to write the discrete wavelet transform in C/C++, which is natively supported by Android Studio. We also had the option of using Java, but we feel that its efficiency for audio processing will not be as strong as C’s. I did a bit of searching for libraries that already implement the wavelet transform and found a couple.
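
To make the Java-to-C/C++ handoff concrete, here is a minimal sketch of the Java-side JNI declaration such a setup typically uses; the library name, class, and method signature are placeholders, not our actual interface.

```java
/** Hypothetical JNI bridge to the native audio-warping code. */
public class WaveletBridge {
    static {
        // Loads libwavelet.so, built from our C/C++ sources via the NDK.
        System.loadLibrary("wavelet");
    }

    /**
     * Placeholder native method: takes PCM samples and a stretch factor,
     * returns the time-scaled samples. The body lives in C/C++.
     */
    public static native double[] timeStretch(double[] samples, double factor);
}
```

The matching C function would follow the standard Java_<package>_WaveletBridge_timeStretch naming convention, which is what makes the native code callable from the app.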

Finally, I worked on updating the design presentation slides for our presentation next week. Specifically, the team discussed the data we had gathered to draw conclusions about how we wanted to adjust our design goals and platform specifications.

I would say that the project is currently on track. We have taken the feedback from the proposal and used it to refine our project appropriately. Next week, the deliverables I want are the design doc (obviously) and an app that uses a little C code as a proof-of-concept.