The recent, sudden outbreak of the COVID-19 (coronavirus) pandemic has forced several adjustments throughout the 18-500 capstone course. Following official Carnegie Mellon University guidelines, in-person lectures have been discontinued, and all labs, including those in the ECE wings, are closed. All students have been advised to practice “social distancing” and possibly remain at home for the rest of the semester. In-person meetings are no longer feasible, so all teamwork must be coordinated strictly online.
Our project falls strictly within the signals and software areas of ECE, so the majority of our work is code. This turned out to be advantageous: the overall goal of our project remains mostly the same, and consequently our division of labor remains largely unaffected. In the design report, we described the three main pieces of our project and their assignments. Aarushi will still be in charge of writing the time-scale audio modification algorithm. She had previously set up meetings with professors to aid her, but those meetings were cancelled with the outbreak of the virus; she will attempt to schedule new ones while experimenting with a few audio libraries herself.  Akash will handle the song selection algorithm, and Mayur will create the mobile app framework, step detection, and user interface.
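As a rough illustration of what that library experimentation might look like, the minimal Python sketch below assumes librosa as the prototyping library (one candidate, not a confirmed choice); the function name, file name, and numbers are placeholders rather than project decisions.

```python
# Hypothetical prototype: time-scale modification with librosa.
# librosa is an assumed candidate library; names and values are placeholders.
import librosa
import soundfile as sf

def stretch_to_cadence(path: str, song_bpm: float, cadence_bpm: float) -> None:
    """Stretch a song so its tempo matches the runner's cadence,
    without changing its pitch."""
    y, sr = librosa.load(path, sr=None)   # keep the file's native sample rate
    rate = cadence_bpm / song_bpm         # rate > 1 speeds up, < 1 slows down
    y_warped = librosa.effects.time_stretch(y, rate=rate)
    sf.write("warped_" + path, y_warped, sr)

# Example: warp a 120 BPM song toward a 126 steps-per-minute cadence.
stretch_to_cadence("song.wav", song_bpm=120.0, cadence_bpm=126.0)
```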
It would be ideal to have an end-to-end mobile application for joggers by the end of the semester. However, we recognize that integrating the individual components of our project may be difficult, and testing the entire program as a whole may be difficult and possibly fruitless under the new circumstances. We chose the Samsung S9 as our base test device because its step count accuracy was sufficient, but Akash is now the only member with access to this device, and he was not one of the initial test subjects for our running data. A discussion with our professor, Professor Sullivan, and our TA, Jens, helped us form a reasonable plan to adjust: we will each write our individual parts, test them independently, and create deliverables that convey their functionality. These deliverables are a new addition to the project, intended to account for the case in which we cannot integrate the individual components. We will still aim to integrate them, but integration is now a challenge and a stretch goal.
Each component expects certain inputs, for which we will provide recorded data rather than real-time data. To test the components as if they were integrated, the data we feed each one will be actual output from the others; this is how we will create the deliverables that demonstrate functionality. First, Akash will run with his Samsung S9 and gather running data. Concurrently, Mayur will write the Android code that acquires and interprets this running data as needed by our other algorithms. Mayur will also implement the functionality that lets a user build a playlist, with input validation so that only songs of the correct file type and BPM range are accepted. The formatted running data and playlist will be fed into Akash’s song choice algorithm, which will generate the sequence of songs the runner’s data suggests they should listen to. This sequence will then be sent to the time-scale audio modification algorithm, which will output the warped music the runner would theoretically hear in real time. We will test the song selection and warping algorithms on a computer rather than on a smartphone.
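A minimal sketch of this offline, recorded-data test flow appears below. The accepted file types, the BPM range, the data structures, and the nearest-BPM selection rule are all assumptions for illustration, not the project’s actual constants or algorithm.

```python
# Hypothetical offline harness illustrating the recorded-data test flow.
# Constants, names, and the selection rule are illustrative assumptions.
from dataclasses import dataclass
from typing import List

ALLOWED_EXTENSIONS = (".wav", ".mp3")   # assumed accepted file types
BPM_RANGE = (100.0, 180.0)              # assumed acceptable song tempo range

@dataclass
class Song:
    path: str
    bpm: float

def validate_playlist(songs: List[Song]) -> List[Song]:
    """Mimic the app-side check: keep only songs with an allowed
    file type and a BPM inside the accepted range."""
    lo, hi = BPM_RANGE
    return [s for s in songs
            if s.path.lower().endswith(ALLOWED_EXTENSIONS) and lo <= s.bpm <= hi]

def select_songs(cadences: List[float], playlist: List[Song]) -> List[Song]:
    """Toy stand-in for the song choice algorithm: for each recorded
    cadence sample (steps per minute), pick the song with the closest BPM."""
    return [min(playlist, key=lambda s: abs(s.bpm - c)) for c in cadences]

# Recorded cadences, e.g. parsed from the Samsung S9 step-count log.
recorded_cadences = [118.0, 126.5, 131.0]
playlist = validate_playlist([Song("a.wav", 120.0), Song("b.mp3", 132.0),
                              Song("c.flac", 128.0)])   # c.flac is rejected
sequence = select_songs(recorded_cadences, playlist)
print([s.path for s in sequence])   # this sequence feeds the warping stage
```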
Overall, our project’s division of labor and goals remain mostly the same. In the event of unanticipated complications arising from strictly online communication, we can demonstrate, document, and test each component to show its completeness.