Mayur’s Status Report for 3/14

These last two weeks have been pretty wild. As I stated in my last post, my goal for the week was to finish up the design report on Monday. It ended up taking both Monday and Tuesday, since we had a sick member of our team. We received a PDF of feedback with a couple of points highlighted. I think it will be beneficial to discuss it with our mentor and TA(s), since the scan did not turn out that well. One thing I noted is that we increased the music-changing interval from 60 seconds to 90 seconds, as we believe long-distance runners will prefer less warping. Furthermore, from testing, we know that a minimum of 60 seconds is needed for an accurate pace measurement, with more time preferred.

Our group is currently discussing how to proceed with our project in light of COVID-19. Our group members will no longer be on campus, so we will need to be more careful with how we coordinate pieces of the project. The virus has introduced a couple of risks to our project. For one, we will need to reevaluate our team member responsibilities: the previous division of tasks assumed that we could have face-to-face meetings, which is no longer possible. Another pain point will be the integration phase, since we cannot physically meet. Finally, testing will have to be reevaluated, since each member will need individual testing equipment and a complete copy of the project repository.

This week, we will be hashing out all of these ideas. Individually, I plan on creating the new UI for the main page of the app.

Team Status Report for 2/29

Our team spent most of our group time this week working on the design presentation and report. We wanted to incorporate the feedback we received from both the course staff and our peers after we finished the former. Some good comments we got from Professor Sullivan and the TAs were that we needed to consider some edge cases in how we will switch songs and to look into the wavelets more. Accordingly, we have decided to allow more time in our Gantt chart for completing the wavelet transform algorithm.

We got some good feedback from our peers as well. In particular, it prompted us to add a power consumption metric to measure the battery usage of our app and made us consider how other apps play music off of phones.

The biggest obstacle for our project over the next couple of weeks is balancing the midterms that come right before vacation. Furthermore, our members may not have access to computers during Spring Break, which puts us out of commission for another week. Our plan to work around this is to use some of the slack time from our Gantt chart to extend the timeline for pieces of the project.

Mayur’s Status Report for 2/29

I spent most of my capstone time this week putting finishing touches on the design presentation and working on the design proposal. Our team wanted to ensure that we incorporated the feedback on the proposal that we received from both the mentors and our peers; we discuss this more in depth in our team status report. For the design proposal, we split the paper into portions to work on individually. I am tasked with completing a piece of the system description, part of the design specification, and part of the tradeoffs section.

In terms of deliverables, my top goal is to finish the design with the team. Finalizing each portion of the project is important so that we can begin actual implementation.

Team Status Update for 2/22

We made a couple of changes to our system design in accordance with the feedback we received from the proposal presentation. First, we have consolidated the project onto the phone. We no longer plan on developing the app on the watch, since the watch's step detector is roughly twice as inaccurate as our design goals allow. Additionally, we will be adding a song-picking algorithm to our design. This will allow us to reduce the number of artifacts from warping a song too much. Furthermore, we decreased the range of tempos we will be warping our songs between. Originally, we were going to warp songs between tempos of 90-150 BPM. Based on our own experience during our treadmill tests, we have changed the range to 150-180 BPM, which we believe is a realistic cadence range for a long-distance jogger. On that topic, the last change we have made is to target long-distance joggers as opposed to every type of runner. This will allow us to fine-tune our project and narrow its scope.

During our proposal presentation, we received feedback to narrow the scope of inputs, i.e., the set of songs that can be selected and warped for a run. With our new target pace, we will only allow songs of 150-180 BPM. Additionally, when choosing a song from the defined library, we will apply a scoring algorithm. A song's score will depend on how many times it has been played and on how close the song's natural BPM is to the jogger's current pace, and the algorithm will choose the song with the best score. This will ensure that one song is not constantly on repeat and that a song with a reasonable BPM is played. The two factors will be weighted, and the weights adjusted as we evaluate how the algorithm performs.
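To make the idea concrete, here is a rough sketch of what that scoring could look like. The class, field names, and weights below are placeholders I am using for illustration, not our finalized design; in particular, treating a lower score as better and the specific weight values are assumptions we would tune later.

```java
import java.util.Comparator;
import java.util.List;

// Placeholder song record for the sketch: natural tempo and play count.
class Song {
    String title;
    double naturalBpm;   // tempo of the unwarped track
    int playCount;       // how many times this song has already been played

    Song(String title, double naturalBpm, int playCount) {
        this.title = title;
        this.naturalBpm = naturalBpm;
        this.playCount = playCount;
    }
}

class SongPicker {
    // Relative weights for the two factors; these would be adjusted
    // as we evaluate the algorithm's output.
    private static final double BPM_WEIGHT = 1.0;
    private static final double REPEAT_WEIGHT = 5.0;

    // Lower score is better: penalize distance from the runner's cadence
    // and penalize songs that have been played many times already.
    static double score(Song song, double runnerBpm) {
        double bpmDistance = Math.abs(song.naturalBpm - runnerBpm);
        return BPM_WEIGHT * bpmDistance + REPEAT_WEIGHT * song.playCount;
    }

    // Pick the song with the lowest (best) score for the current pace.
    static Song pick(List<Song> library, double runnerBpm) {
        return library.stream()
                .min(Comparator.comparingDouble((Song s) -> score(s, runnerBpm)))
                .orElse(null);
    }
}
```

The nice property of a single combined score is that both goals (variety and minimal warping) trade off against each other automatically, so tuning comes down to adjusting two constants.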

The risks to our project are relatively unchanged. If the wavelet transform does not work, we will be using a phase vocoder, which is known to work accurately. Nevertheless, we are hoping to get the former working with the aid of professors. If there are too many artifacts left over from warping songs by up to 30 BPM, we may choose to switch the songs rather than warp them that much. This will be implemented within our algorithm.

Mayur’s Status Update for 2/22

This week, I finished my exploration into using Python with Android Studio. I discovered several different methods, but each came with a host of problems that made them impractical to use. The main issues were that they either did not provide a way to access the Android Step Detector and Step Counter sensors, or did not provide a way to write functions in Python that could be called from the app's Java code. We will be including a section in the design presentation and report with more details.

There was no specific reason we wanted to use Python other than familiarity, so we have decided to write the discrete wavelet transform in C/C++, which Android Studio supports natively through the NDK. We also had the option of using Java, but we feel that its efficiency for audio processing will not be as strong as C's. I did a bit of searching for libraries that already implement the wavelet transform and found a couple.
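As a rough illustration of how the C/C++ code would plug into the app, the sketch below shows the Java side of a JNI bridge. The library name, class name, and method signature are placeholders rather than anything we have built yet; the C/C++ side would be compiled into a shared library by the NDK build.

```java
// Hypothetical Java-side bridge to a native C/C++ wavelet implementation.
public class WaveletBridge {
    static {
        // Loads libwavelet.so, built from our C/C++ sources via the NDK
        // (the library name here is a placeholder).
        System.loadLibrary("wavelet");
    }

    // Implemented in C/C++ as Java_<package>_WaveletBridge_waveletTransform(...).
    // Takes raw audio samples and returns the transform coefficients.
    public native double[] waveletTransform(double[] samples);
}
```

The proof-of-concept app I mention below would essentially just exercise a call like this to confirm the Java-to-C path works before we write any real signal processing.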

Finally, I worked on the design presentation slides to update them for our presentation next week. Specifically, the team discussed the data we had gathered and drew conclusions about how we wanted to adjust our design goals and platform specifications.

I would say that the project is currently on track. We have taken the feedback from the proposal and used it to refine our project appropriately. Next week, the deliverables I want are the design doc (obviously) and an app that calls a little C code as a proof of concept.

Mayur’s Status Update for 2/15

My contributions this week can be summarized into two major sections.

    1. Our whole group decided to gather data in order to determine the viability of using the phone and smartwatch built-in step counter functions. The three of us met up at the Tepper gym and measured the accuracy of two different Android phones and an Android watch; see the team update for more information. My contribution here was running on the treadmill as the second person so that we could have a second set of data. Based on the information we gathered, we have started to debate whether we need to push our application onto the watch at all. Its accuracy is lower than our current design requirements allow, so our options are either to accept that we will not hit our requirements or to require the runner to carry a phone as well. If we pick the latter, the watch becomes redundant.
    2. As we outlined in our Proposal Presentation, I will be in charge of creating the UI and functionality of our smartphone application. For this week and up until Tuesday night, I am aiming to have created a very basic application that can run some Python code. We need this for two reasons. The first is to verify that our code base can be written in Python; Python has rich libraries for audio and signal processing, which we would like to use, but as far as we are aware, most code bases are built using only a single language. The second is that having a basic app will give us the chance to explore additional parts of our project in the next week, such as the most logical way to build out the UI, how music is imported, how long it takes to process music on the phone, and how to get step detector information from the Android API (a rough sketch of that last piece is included after this list). So far, I have been looking over code examples and following a basic tutorial to build my first app. Familiarizing myself with the host of different files and Android jargon has been the most challenging part of the week; the most exciting part has been figuring out how to test the app directly on my phone. I would say that I am mostly on track to write the app by the end of next week. Aside from the app, I'm also hoping to begin work on the design presentation slides & writeup next week.
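Since reading the step detector is one of the things I still need to explore, here is a minimal sketch of what I expect that to look like, assuming the standard Android SensorManager API. The class and field names are placeholders; only SensorManager, Sensor.TYPE_STEP_DETECTOR, and the listener interface are the real Android pieces.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Hypothetical listener that counts steps from the hardware step detector.
public class StepListener implements SensorEventListener {
    private final SensorManager sensorManager;
    private int stepCount = 0;

    public StepListener(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor stepDetector = sensorManager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR);
        // Normal delay should be plenty for estimating pace over 60-90 seconds.
        // Note: newer Android versions may require the ACTIVITY_RECOGNITION permission.
        sensorManager.registerListener(this, stepDetector, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // The step detector fires one event (value 1.0) per detected step.
        if (event.sensor.getType() == Sensor.TYPE_STEP_DETECTOR) {
            stepCount++;
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this proof of concept.
    }

    public int getStepCount() {
        return stepCount;
    }
}
```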