Akash’s Status Report for 4/25

This week I worked on testing our app and collecting some running data. I ran into some issues with Android Studio, but with Mayur's help I was able to install the app onto my phone (the testing device). Even after that, however, my phone was not able to collect data due to an issue we still have not figured out. I then switched to an older Android phone (a Galaxy S6) and it worked fine. We collected good data that we can use to demonstrate the time-scale warp.

I also started working on the slides for our final presentation, which will be given this week. The presentation is mostly focused on our implemented system and our metric validation.

Akash’s Status Report for 4/18

This past week I worked on converting the Python implementation of the song selection algorithm to Java so we can easily integrate this feature into the app. I had a little difficulty with this, as I haven't coded in Java in a long time, but the Python skeleton I wrote earlier helped guide me. Now that it is all in Java, Mayur is going to work on integrating it into the app. The next feature we could add would be user song preferences, so that scores could be modified based on how much the user likes specific songs.

Akash’s Status Report for 4/11

This week I have been working on converting the song selection algorithm code to Java so we can integrate it into the app. During our demo, Professor Sullivan and Jens gave us some ideas about testing and other features to try with the song selection algorithm. The one I think would be most interesting to work on is having users enter ranks or preferences for their songs, which would act as a score booster when songs are being selected.
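As a quick sketch of that preference idea, assuming the user rank is a 0-5 rating added as a bonus on top of the BPM-match score (the rating scale and the weight are both made-up placeholders, not our actual design):

```python
def score(song_bpm, cadence_spm, preference=0, pref_weight=2.0):
    """Score a song for selection.

    Base score: closeness of the song's BPM to the running cadence
    (higher is better). preference is an assumed 0-5 user rating;
    pref_weight is an assumed tuning constant for the booster.
    """
    return -abs(song_bpm - cadence_spm) + pref_weight * preference

# A well-liked song can beat a slightly closer BPM match:
print(score(150, 158, preference=5))  # -8 + 10 = 2.0
print(score(165, 158, preference=0))  # -7 +  0 = -7.0
```

The weight would need tuning so that preference breaks ties without overriding the BPM-range requirement entirely.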

Akash’s Status Report for 4/4

This week I worked on the song selection algorithm and got something basic working in Python. I created a list of song objects, and the algorithm iterates through them as specified in the Design Document, scores each song, and picks the best one to play. The next step is to get this working with actual music and to write it so it is compatible with Android Studio.
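The basic Python version might look roughly like the sketch below. The concrete scoring rules live in the Design Document; the closest-BPM score here is just an illustrative stand-in, and the song names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    bpm: float

def select_song(songs, cadence_spm):
    """Pick the song whose BPM best matches the running cadence.

    Stand-in scoring: score = -|song BPM - cadence|, so the song
    with the smallest BPM distance wins. The real algorithm scores
    songs as specified in the Design Document.
    """
    return max(songs, key=lambda s: -abs(s.bpm - cadence_spm))

playlist = [Song("A", 120.0), Song("B", 150.0), Song("C", 165.0)]
best = select_song(playlist, cadence_spm=158.0)
print(best.title)  # → C (165 BPM is only 7 away from 158)
```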

Akash’s Status Report for 3/28

This week I worked on starting the song selection algorithm and collecting some step detection data for future use. My goal with the song selection algorithm is to get it working without music first, to make sure the math and the selection of songs work properly. I am using Python for now, but the final version may be in Java.

We are collecting step data so that, if we are ultimately unable to integrate everything into one app, we still have data to use, even if not in real time. My goal for next week is to finish the basic song selection algorithm.

Akash’s Status Report for 3/21

This week our team worked together to figure out how to finish the project while remote. Since our project is already split into three parts and is almost entirely software, we feel we do not have to change our course of action very much.

In the next week, I will work with Mayur to get step detection tested and working on my phone, to make sure we can collect the data and send it to the time-warp algorithm, since we might not be able to integrate all the parts into one app. If we can still collect a graph of the data, we can use it for the audio modification, just not in real time.

Team Status Report for 3/21

The recent and sudden outbreak of the COVID-19 (coronavirus) pandemic has forced several adjustments to be made throughout the 18-500 capstone course. Following official Carnegie Mellon University guidelines, there are no more in-person lectures. Additionally, all labs, including those in the ECE wings, are closed. Finally, all students were advised to practice “social distancing” and possibly stay at their homes for the remainder of the semester. In-person meetings are no longer feasible, and so all teamwork must be coordinated strictly online.

Our project falls strictly within the signals and software areas of ECE, so the majority of it is code. This turned out to be advantageous, as the overall goal of our project remains mostly the same, and our division of labor remains mostly unaffected as well. In the design report, we described the three main pieces of our project and their assignments. Aarushi will still be in charge of writing the time-scale audio modification algorithm. She had previously set up meetings with professors to aid her, but those meetings were cancelled with the outbreak of the virus; she will attempt to set up new ones while experimenting with a couple of libraries herself. Akash will handle the song selection algorithm, and Mayur will create the mobile app framework, step detection, and user interface.

It would be ideal to have an end-to-end mobile application for joggers by the end of the semester. However, we recognize that there may be difficulties integrating the individual components of our project. Testing the entire program as a whole will also be difficult, and possibly fruitless, in the new situation: we chose the Samsung S9 as our base test device because it had sufficient step-count accuracy, but Akash is now the only member with access to this device, and he was not one of the initial test subjects for our running data. Our discussion with our professor, Professor Sullivan, and our TA, Jens, helped generate a reasonable plan to adjust. We will each write our individual parts, test them independently, and create deliverables that convey their functionality. These deliverables are a new addition to our project, added to account for the case where we cannot integrate the individual components. Thus, while we will still aim to integrate the components, integration is now a challenge and a stretch goal.

Each component expects certain inputs, for which we will provide recorded rather than real-time data. To test the components as if they were integrated, the data each one consumes will be actual output from the others. This is how we will create the deliverables that display functionality. First, Akash will run with his Samsung S9 and gather running data. Concurrently, Mayur will write the Android code that acquires and interprets this running data as needed by our other algorithms. Mayur will also write the functionality that lets a user build a playlist, and foolproof it so that only songs of the right file type and BPM range are accepted. The formatted running data and playlist will be fed into Akash's song choice algorithm, which will generate the sequence of songs the runner's data suggests they should listen to. That sequence will be sent to the time-scale audio modification algorithm, which outputs the warped music the runner would theoretically hear in real time. We will test the song selection and warping algorithms on a computer instead of a smartphone.

Overall, our project’s division of labor and goals remain mostly the same. In the event of unanticipated complications resulting from strictly online communication, we can demonstrate, document, and test each component to show its completeness.

Team Status Report for 3/14

In the week before break, we finished our design review presentation and document. We fleshed out all the remaining last-minute items concerning how the whole system works together and how each component is implemented.

Over break, we were hit with some big news because of the coronavirus. Since the future is still a little hazy, we are going to do our best with the situation we are given. We will continue to work on our individual components and find ways to integrate and test as needed. We split the work fairly evenly, and right now we can each work on our own parts, as nothing depends on anyone else's work until closer to the end. However, we will likely need extra time for integration, since we will not be able to do it in person, which makes it slightly trickier.

Akash’s Status Report for 3/14

Before spring break, we were working on the design review document, and through our presentation we got some new ideas and metrics that we could potentially include in our project. So in the week before break, I was trying to come up with ways we could implement those ideas, if at all. At the same time, I continued my research on the Android step detection functions and worked on the details of the song selection algorithm. One important decision we came to as a group, which should be noted, is that we would rather play the same song multiple times than play a song that doesn't warp well.

Over the break, we found out that classes would be moving online and we would likely not be meeting each other for the rest of the semester. It will make it a little more difficult to work on the project, but because we split it evenly, it should still be doable to finish.

Akash’s Status Report for 2/29

This week I worked on the slides for our design review presentation and on the design review document itself. While going through the document and presentation, we fleshed out a lot about how our project is going to work. Since we already found that the phone's step detection is good enough, we modified part of our project to include a song selection algorithm, which is the new part I will be working on. So in the next few weeks, I will work on that and make sure it works properly.

The point of this algorithm is to find the song in the user's playlist that best matches the running pace. Our metrics require that the song be within a certain BPM range of the running pace, so the goal of the algorithm is to find the closest match. That way, when we apply the time-scale audio modification algorithm to the song, it will not sound too different from the original and will still be enjoyable to the user. At the end of the day, if the music does not sound good, the user is less likely to use the app.
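One way to express that BPM-range requirement is as a bound on the warp ratio the time-scale modification would have to apply. The 10% tolerance below is only an assumed placeholder, not our actual metric:

```python
def warp_ratio(cadence_spm, song_bpm):
    """How much the song must be time-scaled to match the runner's
    cadence (in steps per minute); 1.0 means no warping needed."""
    return cadence_spm / song_bpm

def is_playable(cadence_spm, song_bpm, tolerance=0.10):
    """Accept a song only if the required warp stays inside a
    tolerance band around 1.0 (10% is an assumed placeholder)."""
    return abs(warp_ratio(cadence_spm, song_bpm) - 1.0) <= tolerance

print(is_playable(160, 150))  # 160/150 ≈ 1.067 → True
print(is_playable(160, 120))  # 160/120 ≈ 1.333 → False
```

Songs that fail this check would only need mild stretching to match the pace, which is exactly the "does not sound too different from the original" goal above.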