Team Status Report for 3/28

Status Report

Risk Management Update (most remains the same)

We have built risk management into our project in a couple of ways. First, in terms of the schedule, as we mentioned before, we added slack time to the schedule to account for any issues we run into as we develop the project. This allows us to work out those issues without running out of time at the end of the semester.

From the design side, we have backups for potential problems that may come up while working on the project. We have four specific cases for the metrics we laid out earlier. The main risk factor is using the Dual-Tree Complex Wavelet Transform (DTCWT) phase vocoder. If it does not work, we will fall back on the STFT-based phase vocoder, which we know works well for music; we may attempt to implement our own, or use a library from GitHub. We have done our research and are putting most of our time into this aspect of the project, since it is both the feature that differentiates us from similar apps and the primary risk factor.

The second biggest risk factor is the accuracy of the smartphone's step detection. Our testing showed that the phone meets our accuracy requirements, so we hope this will not be an issue; if implementation and testing reveal that the accuracy is worse than we thought, we will order a pedometer and collect the data from it instead.

The only change in our risk management is to account for lost time and the lost ability to work together physically, which may hinder integration and full-system testing. We chose the Samsung S9 as our base test device because it had sufficient step count accuracy, but in our new situation Akash is the only member with access to this device, and he was not an initial test subject for our running data. Our discussion with our professor, Professor Sullivan, and TA, Jens, helped generate a reasonable backup plan: we will each write our individual parts, test them independently, and create deliverables that convey their functionality. That deliverable is a new addition to our project, added to account for the case where we are not able to integrate the individual components. Thus, while we will still aim to integrate the components, integration is now a challenge and stretch goal.

The smaller risk factors involve the timing of the application: widening the timing windows for our refresh rate, and minimizing the time the app takes to start when first opened.

In terms of budget, we have not run into any issues and do not anticipate any, since we already have most of what we need.

Mayur’s Status Report for 3/28

My goal for last week was to finish the basic step counting functionality of the app so that Akash could run… and I finished! I downloaded the app to my phone before sending it to Akash. After the user presses a “play” button, the app begins counting steps. Instead of the step counter sensor, I am currently using the step detector. From our discussion with Professor Sullivan, we decided to gather data every second so that we could test the limits of the warping algorithm. As described in our research phase, the step counter takes several extra seconds to gather its measurements in order to filter out false positives (which takes extra processing time). When I tried it in the app, I found that I wasn't getting any measurements for the first 6-9 seconds after starting up. Since the step counter took a variable number of seconds to produce data that we wanted at one-second granularity, it was not usable in this case. If we switch back to a granularity of 60 or 90 seconds, we will switch back to it then.
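The app itself is Android code, but the per-second bucketing we want from the step detector can be sketched language-agnostically. Below is a minimal Python sketch with hypothetical names, assuming each step-detector event arrives as a single millisecond timestamp (one event per detected step):

```python
from collections import Counter

def steps_per_second(event_times_ms):
    """Bucket raw step-detector event timestamps (ms) into per-second counts.

    Each step-detector event represents one detected step, so the per-second
    step count is just the number of events falling in each 1 s window,
    measured from the first event.
    """
    if not event_times_ms:
        return []
    start = event_times_ms[0]
    buckets = Counter((t - start) // 1000 for t in event_times_ms)
    # Fill in zero-step seconds so the output is a dense per-second series
    n_seconds = max(buckets) + 1
    return [buckets.get(s, 0) for s in range(n_seconds)]
```

For example, events at 0, 400, 800, 1200, and 2500 ms yield `[3, 1, 1]`. The step counter sensor, by contrast, only exposes a filtered cumulative total, which is why it cannot give us this granularity.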

Initially, I displayed the step counts on the screen in an alert dialog. This did not work out for two reasons: first, I was unsure whether there is a limit on the character count of a dialog box; second, I could not find a way to copy the dialog's text into an email program. As a solution, I implemented code that brings up a pre-filled email containing the step counts.

My goal for next week is to play music off the phone. As a stretch goal, I would want to call some function in C/C++ just as a proof-of-concept. Admittedly, the UI looks hideous at the moment. This is something else I can fix.

Pictures:

[Screenshot: the UI]

[Screenshot: after the user presses the pause button, the app automatically pulls up an email with a pre-filled body]

Akash’s Status Report for 3/28

This week I worked on starting the song selection algorithm and gathering step detection data that we can use going forward. My goal with the song selection algorithm is to get it working without music, to make sure the actual math and selection of songs work properly. I am using Python for now, but the final version might be in Java.
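The selection algorithm is still in progress, so the following is only a hypothetical sketch of the kind of matching rule involved, not our actual implementation: pick the playlist song whose BPM needs the least time-scale warping to match the runner's cadence, subject to a maximum warp ratio (the function name, playlist shape, and 10% limit are all illustrative assumptions):

```python
def select_song(cadence_spm, playlist, max_warp=0.1):
    """Pick the song whose BPM needs the least tempo warping to match
    the runner's cadence (steps per minute).

    playlist: list of (title, bpm) pairs.  Songs whose required warp
    ratio exceeds max_warp (e.g. 10%) are skipped entirely; returns
    None if no song qualifies.
    """
    best = None
    best_warp = max_warp
    for title, bpm in playlist:
        warp = abs(cadence_spm - bpm) / bpm  # fractional tempo change needed
        if warp <= best_warp:
            best, best_warp = (title, bpm), warp
    return best
```

For a cadence of 170 steps/min, a 168 BPM song (≈1.2% warp) would beat a 160 BPM song (≈6.3% warp), and a 200 BPM song (15% warp) would be rejected outright.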

We are collecting step data so that, if we end up unable to integrate everything into one app, we still have data to use, even if not in real time. My goal for next week is to finish the basic song selection algorithm.

Aarushi’s Status Report for 3/28

In working on the time-scale audio modification component of this project, Prof Sullivan suggested I start in Matlab and write a program that can read an audio file, break the signal into its wavelet components, and reconstruct the signal from those components. I played around with this. With wavelet decomposition at levels 2 and 3, the reconstructed sound was faster, thinner, and of considerably higher pitch. Wavelet decomposition at level 1, however, produced a reconstructed signal that sounded the same as the original. This is true for both the 'db2' and 'db1' wavelet types; we will be using 'db2', as our design report details why we chose a biorthogonal implementation. I completed this task this week.
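The Matlab code is not reproduced here, but the level-1 analysis/synthesis idea can be sketched in a few lines of NumPy using the simplest wavelet, 'db1' (Haar), whose filters are short enough to write by hand; 'db2' works the same way with longer filters. Function names here are illustrative, not our actual code:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_level1(x):
    """One-level 'db1' (Haar) decomposition into approximation and
    detail coefficients.  Assumes len(x) is even."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / SQRT2
    detail = (x[0::2] - x[1::2]) / SQRT2
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert haar_level1 exactly (perfect reconstruction at level 1)."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / SQRT2
    out[1::2] = (approx - detail) / SQRT2
    return out
```

Reconstructing from a single level recovers the signal exactly, which matches what I heard: the level-1 round trip sounds identical to the original, while resynthesizing from deeper levels alone distorts the signal.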

In addition, I worked on an STFT-based phase vocoder in Matlab. I was able to successfully test time-scale audio modification with this method on simple, single-layer musical signals, with no interfering audio artifacts after modification. Having this done means that (1) I can use this phase vocoder implementation as an example of how to integrate the DTCWT in place of the STFT, and (2) the backup plan for how we would modify the signals is complete in Matlab.
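My implementation is in Matlab and not reproduced here; what follows is only a NumPy sketch of the standard STFT phase-vocoder time-stretch (magnitude interpolation plus phase propagation), with illustrative parameter values, using the convention that rate > 1 shortens the signal while preserving pitch:

```python
import numpy as np

def phase_vocoder_stretch(x, rate, n_fft=1024, hop=256):
    """Time-stretch x by `rate` (rate > 1 shortens, rate < 1 lengthens)
    while preserving pitch, via a basic STFT phase vocoder."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    # Analysis STFT: one rFFT per windowed frame
    stft = np.array([np.fft.rfft(win * x[i * hop : i * hop + n_fft])
                     for i in range(n_frames)])
    # Expected phase advance per hop for each frequency bin
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    # Read the analysis frames at fractional positions spaced by `rate`
    positions = np.arange(0, n_frames - 1, rate)
    phase = np.angle(stft[0])
    out = np.zeros(len(positions) * hop + n_fft)
    norm = np.zeros_like(out)
    for k, pos in enumerate(positions):
        i = int(pos)
        frac = pos - i
        # Interpolate magnitude between neighbouring analysis frames
        mag = (1 - frac) * np.abs(stft[i]) + frac * np.abs(stft[i + 1])
        frame = np.fft.irfft(mag * np.exp(1j * phase), n=n_fft)
        out[k * hop : k * hop + n_fft] += win * frame
        norm[k * hop : k * hop + n_fft] += win ** 2
        # Phase propagation: wrap the deviation from the expected advance
        dphi = np.angle(stft[i + 1]) - np.angle(stft[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase += omega + dphi
    # Normalize the overlap-add by the accumulated window energy
    return out / np.maximum(norm, 1e-8)
```

Swapping the DTCWT in place of the STFT would keep this overall analyze-modify-resynthesize structure, which is why the STFT version doubles as both an integration template and our backup plan.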

Next steps are to test these on more complex signals, and implement the phase vocoder with DTCWT.

Aarushi’s Status Report for 3/21

This week involved further group discussions addressing how our project would move forward in our remote environments. Discussions with Prof Sullivan and Jens provided insights into how we could break our project down into milestones, ranging from those that are safely achievable, to target goals, to stretch goals. Working through this conceptualization as a team was helpful and made us more comfortable with the current situation. These milestones are detailed in our Statement of Work, another document we worked on this week.

We also decided on individual tasks for the next week that would help prove that our planned milestones are feasible. While Akash and Mayur are working to collect step count data, my week’s task is to create a skeleton for the time warping algorithm (create a design), and be able to import and visualize mp3 files into this skeleton.

To be clear, our group has decided that our weeks will start/end on Wednesdays to allow enough time and flexibility for us to get our individual tasks done, and for communication/coordination.

p.s. I apologize for this report being one day late. Between fracturing my hand in the middle of last week and the new COVID-19 'stay at home' orders in New Jersey, I had to spend the weekend moving from Pittsburgh back home at the last minute. I would really appreciate some flexibility here.

Mayur’s Status Report for 3/21

As I described in last week's report, the outbreak of COVID-19 has forced all classes online. Our group wrote a Statement of Work that describes how we plan to divide the project (see this week's team status report). Essentially, we will continue with our initial division of labor, with small modifications to compensate for the difficulty of integrating our individual portions. During the next week, I will be creating the framework that allows Android to record step counts: an app that will let Akash record his pace at 90-second intervals during a 30-minute run. Currently, my plan is to emit the recorded data as a list and send it in an email to a hard-coded address, though this is subject to change. Aarushi will use the data we gather to test her time-warping implementation.

In regards to implementation this week, I spent time attempting to work with the Native C/C++ App template in Android Studio (previously, I was working with an Empty Activity). As it turns out, setting up the new app had many more problems than I initially anticipated. Right from the start, I received errors when Android Studio tried to build the project; specifically, errors because I had not accepted certain licenses. I attempted to follow the solutions in this link https://stackoverflow.com/questions/54273412/failed-to-install-the-following-android-sdk-packages-as-some-licences-have-not-b to solve the problem, but along the way I got an error telling me that the JAVA_HOME environment variable had not been set. I was confused at first, since I assumed that Android Studio would just set it to the JDK (actually a JRE) that comes embedded with the program; apparently not. To solve the issue, I downloaded the latest version of Java (jdk-14) and set my environment variables manually. Surprisingly, Android Studio does not support versions of Java above 8, so I had to uninstall it. Then I found out that there are GUI issues when trying to accept the licenses from the default Windows shell, so I downloaded Git Bash. Eventually, this was how I fixed the issues I had:

  1. Download Git Bash
  2. Navigate to ~/AppData/Local/Android/Sdk/tools/bin
  3. Run $ export JAVA_HOME="C:\Program Files\Android\Android Studio\jre"
  4. Run $ ./sdkmanager.bat --licenses
  5. Reply yes to every prompt
  6. Reload Android Studio and attempt to re-build the project
  7. Accept every prompt to resolve issues related to getting/using a C/C++ compiler

Akash’s Status Report for 3/21

This week our team worked together to figure out how to move forward with finishing the project remotely. Since our project is already split into three parts and is almost entirely software, we feel we do not have to change our course of action very much.

In the next week, I will work with Mayur to get step detection tested and working on my phone, to make sure we can collect the data and send it to the time-warp algorithm, since we might not be able to integrate all the parts into one app. If we can still collect a graph of the data, we can use it for the audio modification, just not in real time.

Team Status Report for 3/21

The recent and sudden outbreak of the COVID-19 (coronavirus) pandemic has forced several adjustments to be made throughout the 18-500 capstone course. Following official Carnegie Mellon University guidelines, there are no more in-person lectures. Additionally, all labs, including those in the ECE wings, are closed. Finally, all students were advised to practice “social distancing” and possibly stay at their homes for the remainder of the semester. In-person meetings are no longer feasible, and so all teamwork must be coordinated strictly online.

Our project strictly covers the signals and software areas of ECE, so the majority of our project is code. This turned out to be advantageous: the overall goal of our project remains mostly the same, and as a consequence our division of labor remains mostly unaffected as well. In the design report, we described the three main pieces of our project and their assignments. Aarushi will still be in charge of writing the time-scale audio modification algorithm. Previously, she had set up meetings with professors to aid her, but those meetings were cancelled with the outbreak of the virus; she will attempt to set up new meetings while experimenting with a couple of libraries herself. Akash will handle the song selection algorithm, and Mayur will create the mobile app framework, step detection, and user interface.

It would be ideal to have an end-to-end mobile application for joggers by the end of the semester. However, we recognize that there may be difficulties with integrating the individual components of our project, and testing the entire program as a whole will be difficult, and possibly fruitless, in the new situation. We chose the Samsung S9 as our base test device because it had sufficient step count accuracy, but Akash is now the only member with access to this device, and he was not an initial test subject for our running data. Our discussion with our professor, Professor Sullivan, and TA, Jens, helped generate a reasonable plan to adjust: we will each write our individual parts, test them independently, and create deliverables that convey their functionality. That deliverable is a new addition to our project, added to account for the case where we are not able to integrate the individual components. Thus, while we will aim to integrate the components, integration is a challenge and stretch goal.

Each component expects certain inputs, for which we will provide recorded data rather than real-time data. To test the components as if they were integrated, the data each one receives will be actual output from the others; this is how we will create the deliverables that demonstrate functionality. First, Akash will run with his Samsung S9 to gather running data. Concurrently, Mayur will write the Android code that acquires and interprets this running data as needed by our other algorithms. Mayur will also write the functionality that lets a user build a playlist, and foolproof it so that only songs of the right file type and BPM range are accepted. The formatted running data and playlist will be fed into Akash's song choice algorithm, which generates the sequence of songs the runner's data suggests they should listen to. That sequence will be sent to the time-scale audio modification algorithm, which outputs the warped music the runner would theoretically hear in real time. We will test the song selection and warping algorithms on a computer instead of a smartphone.

Overall, our project’s division of labor and goals remain mostly the same. In the event of unanticipated complications as a result of strictly online communications, we can demonstrate, document, and test each component to show their completeness.

Aarushi’s Status Report 3/14

This report consists of an update over the last two weeks — the latter week was Spring Break.

In the week before Spring Break, I got badly sick with the flu. As a result, I spent the first half of the week resting and put all my 'work time' and energy into the design report. This included completing my parts, which covered the architecture, conclusion, future work, and all the audio modification sections, as well as formatting and cohesive revisions since the design proposal presentation. Through this work, I realized we were missing a key design requirement: the audio file types our product would support. After researching which file types are best for signal processing, music, and popular audio uses, I completed that section of the design report as well. Considering my illness, I spent half of the week on this document.

The following week was Spring Break, and our group did not have plans to work. I was supposed to be traveling on a CMU-sponsored brigade; since that was cancelled, I decided to be slightly productive and scheduled meetings with the two professors, previously identified, who would be able to advise on the phase vocoder I plan to implement. Once classes were officially moved online, our project was paused while we waited to hear how to proceed remotely. Once expectations are solidified, our team members will plan how to complete what is needed. For now, we have already identified three individual parts of the project that can be worked on independently. How these components will be integrated and tested is TBD, based on forthcoming expectations and group conversations as classes resume this week.

Team Status Report for 3/14

In the week before break, we finished up our design review presentation and document. We fleshed out all the last minute remaining items in terms of how the whole system worked together and the implementation of all the components.

Over break, we got hit with big news due to the coronavirus. Since the future is still a little hazy, we are going to do our best with the situation we are given. We will continue to work on our individual components and find a way to integrate and test as needed. We split the work fairly evenly, and right now we can each work on our own part, since nothing depends on another's until closer to the end. However, we will likely need extra time for integration, as we will not be able to do it in person, which makes it slightly trickier.