Mayur’s Status Update for 2/15

My contributions this week can be summarized into two major sections.

    1. Our whole group decided to gather data to determine the viability of using the built-in step counter functions on a phone and smartwatch. The three of us met up at the Tepper gym and measured the accuracy of two different Android phones and an Android watch; see the team update for more information. My contribution here was running on the treadmill as a second person so that we could collect a second set of data. Based on the information we gathered, we have started to debate whether pushing our application onto the watch is necessary. The watch's accuracy is lower than our current design requirements allow, so our only options are either to accept that we will not hit our requirements or to force the user to carry a phone. If we pick the latter option, though, the watch becomes essentially pointless.
    2. As we outlined in our Proposal Presentation, I will be in charge of creating the UI and functionality of our smartphone application. For this week, and up until Tuesday night, I am aiming to have a very basic application that can run some Python code. We need this for two reasons. The first is to verify that our code base can be written in Python: Python has rich libraries for audio and signal processing that we would like to use, but as far as we are aware, most mobile code bases are built in a single language. The second is that having a basic app will let us explore other parts of the project next week, such as the most logical way to build out the UI, how music is imported, how long it takes to process music on the phone, and how to get step detector information from the Android API. So far, I have been looking over coding examples and following a basic tutorial to build my first app. Familiarizing myself with the host of different files and Android jargon has been the most challenging part of the week; the most exciting part has been figuring out how to test the app directly on my phone. I would say I am mostly on track to write the app by the end of next week. Aside from the app, I'm also hoping to begin work on the design presentation slides & writeup next week.

Akash’s Status Update for 2/15

This week I worked on testing step detection on the devices we may use for our app. We worked with two Android phones (Galaxy S9 and Galaxy S7) and a Samsung Galaxy Watch. Since I am injured, I had my two partners run on the treadmill with the phones and watch in their hands while we manually counted their steps as a reference. After each run, we recorded all the data that the phones and watch detected as well as the manual count. We ran in 30-second and 1-minute intervals across speeds from 5.5 mph to 10 mph, which is the typical range for runners.

We adjusted the raw counts slightly to account for extra steps taken while the phones were resting on the arm bars, etc. We calculated the average error per device and also calculated the pace in footsteps per minute, which we can use to find the target BPM for warping songs.
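As a concrete reference, a minimal sketch of these two calculations (the counts here are made up for illustration; the real numbers are in the linked spreadsheet below):

```python
def percent_error(detected, actual):
    """Absolute step-count error as a percentage of the manual count."""
    return abs(detected - actual) / actual * 100

# One hypothetical 30-second interval: manual count vs. each device's count.
manual_steps = 82
detected = {"Galaxy S9": 80, "Galaxy S7": 78, "Galaxy Watch": 72}

for device, steps in detected.items():
    print(f"{device}: {percent_error(steps, manual_steps):.1f}% error")

# Pace in footsteps per minute, which maps directly to a target song BPM.
interval_seconds = 30
pace_spm = manual_steps / interval_seconds * 60
print(f"pace: {pace_spm:.0f} steps/min")
```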

All the data and calculations can be found in this spreadsheet: https://docs.google.com/spreadsheets/d/1J2ysAOXA1FXJSTiTSfZmVzg7IQvT8NH68mO__dcMQ8I/edit?usp=sharing

As of right now, I feel we are on track with our goals. Since the step-detection data we gathered looks good, we can move forward with our selected devices.

In the next week, I look forward to figuring out how to get the footstep data off the phone to use for processing.

Group’s Status Update for 2/15

Step Detection Verification was our main focus for this week’s design review process.

Testing our step detection methods was important this week because step detection is the foundation of our core goal: matching a runner's pace. Here, pace means steps per minute rather than distance over time.

We used class time to research how accelerometer data is measured and calculated, how it differs across devices, and how we can verify the devices against each other. This research also included finding published accuracy figures for the accelerometers we are considering. We then designed the following test and verification process with two users, Aarushi and Mayur (data sheet on our Google Drive):

Aarushi ran on a treadmill to (1) verify accelerometer data, and (2) measure her tolerance for the gap between starting a run and the music adjusting its tempo to her pace. For jogs of 20-40 minutes (3-5 miles) at a roughly constant pace, her tolerance for unadjusted music was 3 minutes. For runs of 10 minutes (1-1.5 miles) at a roughly constant pace, it was 1.5 minutes.

When verifying accelerometer data, we compared two Android phones of different generations and a smartwatch. The test was controlled by manually counting steps while running and by using all devices on the same run. Measurements were taken over 30-second and 1-minute intervals at speeds from 5.5 mph to 10 mph in 0.5 mph increments. Additionally, Aarushi completed three 'long' runs of 3 and 5 minutes for step verification, and longer runs for the tempo-adjustment tolerance test. We also tried an iPhone for comparable metrics, but the iPhone 7 Plus we had access to only updates its step count every 10 minutes, making it impossible to measure steps within a defined time interval.

We had flagged this as a technology that could jeopardize our project if the data we got from the phones and watch weren't good enough. Our contingency plan was to either use a Bluetooth pedometer or write our own step detection algorithm; however, we found that the newer Android phone's counts lay within an average 4% error of the actual step count, which makes us confident in using it.

The biggest change we are considering is not using the watch, since its average error rate was roughly 10-15%, a little higher than we would like. We are thinking of still making the app for the watch, but taking all the data from the phone.

Software Decisions with Wavelet Transforms

During class time, we researched the best-supported language for phone & watch applications: Java. Python would be used for the wavelet transforms, given our familiarity with it and its ease of use. Integration via Jython is possible; a rough sketch of the intended split is below.
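As a rough, hypothetical sketch of how the Java/Python split could work (the module and function names are ours, not from any framework; the Java side would embed this module through Jython's PythonInterpreter):

```python
# timewarp.py -- hypothetical pure-Python module that the Java/Android layer
# would load via Jython (org.python.util.PythonInterpreter on the Java side).
# Note: Jython only runs pure-Python code, so DSP relying on C-extension
# libraries (e.g., NumPy) would need another integration path.

def target_bpm(steps_per_minute):
    """Clamp a runner's cadence to our playable 90-153 BPM range."""
    return max(90, min(153, steps_per_minute))

def stretch_ratio(song_bpm, steps_per_minute):
    """Ratio by which to time-warp a song so its beat matches the cadence."""
    return target_bpm(steps_per_minute) / float(song_bpm)
```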

Aarushi’s Status Update for 2/15

 

Step Detection

This week was important for testing our step detection methods, since step detection is the foundation of what we want to achieve: matching a runner's pace. Here, pace means steps per minute rather than distance over time.

I ran on a treadmill to (1) verify accelerometer data, and (2) measure my tolerance for the gap between starting a run and the music adjusting its tempo to my pace. For jogs of 20-40 minutes (3-5 miles) at a roughly constant pace, my tolerance for unadjusted music was 3 minutes. For runs of 10 minutes (1-1.5 miles) at a roughly constant pace, it was 1.5 minutes.

When verifying accelerometer data, we compared two Android phones of different generations and a smartwatch. The test was controlled by manually counting steps while running and by using all devices on the same run. Measurements were taken over 30-second and 1-minute intervals at speeds from 5.5 mph to 10 mph in 0.5 mph increments. Additionally, I completed three 'long' runs of 3 and 5 minutes for step verification, and longer runs for the tempo-adjustment tolerance test. (A tragic event, because I prefer intervals to distance.) We also attempted an iPhone for comparable metrics, but the iPhone 7 Plus we had access to only updates every 10 minutes, so it was impossible to use for measuring steps in a defined time interval.

 

Wavelet Transform

I am working on the wavelet transform model, based on this paper on musical analysis and audio compression methods: https://www.hindawi.com/journals/jece/2008/346767/#experimental-procedures-and-results. We decided on this approach after evaluating the numerous other methods also discussed in the same paper. The paper provides research and insights on testing when a transformation can be deemed successful, and shows that the approach is effective in decreasing error, measured as quantization artifacts or signal-to-mask ratio (SMR). The music transformation was performed with the Discrete Wavelet Packet Transform (DWPT) for its increased accuracy and lower computational complexity, and I will follow suit for those two benefits.

I will be implementing this in Python for easy integration into Java via Jython. I have therefore been playing around with Python's open-source wavelet transform library, PyWavelets (pywt). I set up my environment, removing and installing the necessary libraries at their correct versions for this testing. I have started testing the library's wavelet transform functions on basic signals like [1, 2, 3, 4]; on originalSignal = sin(2 * np.pi * 7 * originalTime), where originalTime is a linspace from -1 to 1 broken into 'discrete' increments of 0.01; and on images, since I have worked with wavelet transforms on images before (a minimal version of this experimentation is sketched below). The experimentation will continue into Saturday night; however, this update is being submitted before results with audio signals are tested.
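A minimal version of the signals I have been testing with (the wavelet choice and decomposition level here are arbitrary, just for exploration):

```python
# Exploring PyWavelets on the basic test signals mentioned above.
import numpy as np
import pywt

# Single-level DWT of a trivial signal: approximation and detail coefficients.
cA, cD = pywt.dwt([1, 2, 3, 4], "haar")
print("approx:", cA, "detail:", cD)

# The 7 Hz sine over t in [-1, 1), sampled every 0.01 s (200 samples).
originalTime = np.arange(-1, 1, 0.01)
originalSignal = np.sin(2 * np.pi * 7 * originalTime)

# Wavelet packet decomposition (the DWPT from the paper), three levels deep.
wp = pywt.WaveletPacket(data=originalSignal, wavelet="db2", mode="symmetric")
print([node.path for node in wp.get_level(3)])

# Sanity check: the inverse transform should reconstruct the signal.
recon = wp.reconstruct(update=False)
err = np.max(np.abs(recon[: len(originalSignal)] - originalSignal))
print("max reconstruction error:", err)
```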

 

 

Project Proposal – “18 is basically 20”

General support to our idea: here

Metrics that define success:

  • Pitch similarity percentage until annoyance
      • Absolute vs. relative vs. perfect pitch. We care about absolute pitch (AP).
      • "Like most human traits, AP is not an all-or-none ability, but rather, exists along a continuum [10, 17, 20, 21]. Self-identified AP possessors score well above chance (which would be 1 out of 12, or 8.3%) on AP tests, typically scoring between 50 and 100% correct [19], and even musicians not claiming AP score up to 40% [18]." Here
      • AP possessors incorrectly identify tones that are 6% apart here (the best possible case to meet, AKA the hardest) – upper bound of accuracy
      • "The response accuracy of melody comparison is shown separately in Figure 2 for the AP group and the non-AP group. The chance level is 50%. In the C major (non-transposed) context, in which the two melodies could be compared at the same pitch level, both the AP and the non-AP groups gave the highest level of performance; in contrast, in the E– and F# context, in which the comparison melody was transposed to a different pitch level from the standard melody, both groups performed markedly worse. Notably, the AP group performed more poorly than the non-AP group." Here (should our percentage-of-annoyance error depend on the pitch of the song?)
          • Non-AP scores 40-60
          • AP scores 80-100
          • Avg = 70?
      • 1-5 people per 10,000 have AP here => don't worry about AP
  • Pitch will remain the same within 25 cents (¼ semitone) of marginal error (see the cents calculation sketched after this list)
      • Will we have a relative pitch problem, since songs don't stay at one tone throughout?
  • Percentage of difference between pace and tempo until annoyance (pulsing)
      • Helpful pace/tempo matching Here
      • Helpful pace/tempo matching Here
      • Runner's-side BPM measure here
      • 120-140 BPM is normal
      • Over 150 BPM is probably too fast & will be distracting – we will make sure to stay under 150 BPM.
      • Range should be 90-153 BPM (inclusive of walking)
  • Room for error in pace detection (stride deviation)
  • How long between a change in pace and a change in tempo of the currently playing song?
  • Standard change in pace over a run seems to be around ±5%
  • How to connect a sensor on a shoe to the phone to share data
      • New pedometers integrate with phones using Bluetooth and different apps
  • Which sensors to use / how to use sensors
      • The best sensor would be a pedometer that attaches to your shoe and connects to your phone via Bluetooth
      • We can use this to track how accurate the step count is versus a phone/smartwatch
  • Real-time feedback – how often?
      • Instead of time, let's use # of footsteps, since this gives a better relation to BPM than time does
      • Remeasure & calibrate every 20 steps? (conjecture based on my running experience; we should test amongst ourselves & friends / random gym people for the design proposal (mention this in the project proposal))
  • Calibration?
  • Song choice algorithm
  • Spotify was forced to not make the song choice truly random
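For the 25-cent pitch tolerance above, the check is a simple log-ratio computation; a minimal sketch (the frequencies are illustrative, not measured):

```python
import math

def cents(f1, f2):
    """Pitch difference between two frequencies in cents (100 cents = 1 semitone)."""
    return 1200 * math.log2(f2 / f1)

# Illustrative check: does a warped tone drift from A4 (440 Hz) by more than
# our 25-cent (quarter-semitone) tolerance?
original, warped = 440.0, 446.0
deviation = abs(cents(original, warped))
print(f"{deviation:.1f} cents -> {'OK' if deviation <= 25 else 'audible pitch error'}")
```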

 

 

Shopping List:

  • Bluetooth wireless headphone(s)
  • Sensors
  • Extra smartwatch?
  • Android phone (ask Sullivan for 18-551 extras)

 

* scratch ML algo for tempo detection

* this app's end goal is to be linked to Spotify → metadata of songs

* long-distance runners would use this to help maintain pace

Week 1 – Abstract & Preliminary Design Thoughts

Pace Detection:

  • Challenges with step count detection (excerpt from the literature): "Therefore, various walking detection and step counting methods have been developed based on one or both of the following two physical phenomena: the moment when the same heel of a pedestrian strikes the ground once during each gait cycle results in a sudden rise in accelerations [17,18,19,20,21,22]; the cyclic nature of human walking results in cyclic signals measured by motion sensors [13,23,24,25,26]. However, existing methods rely on the usage of dedicated foot-mounted sensors [27] or constrained smartphones [28], which essentially imposes severe limitations on applying these methods in practice. In addition, the methods based on smartphones suffer from limited accuracy, especially when the smartphone is held in an unconstrained manner [29], namely that the smartphone placement is not only arbitrary but also alterable. Therefore, precisely identifying walking motion and counting the resultant steps are still challenging."

  • However, we don't care about an accurate absolute step count. We just need the time between steps.

  • How do we convert steps/second to beats/measure?

  • Possibilities (excerpts from the same paper; a toy peak-detection sketch follows this list):

      • "The frequency domain approaches focus on the frequency content of successive windows of measurements based on short-term Fourier transform (STFT) [30], FFT [31], and continuous/discrete wavelet transforms (CWT/DWT) [30,32,33,34], and can generally achieve high accuracy, but suffer from either resolution issues [34] or computational overheads [35]. In [31], steps are identified by extracting frequency domain features in acceleration data through FFT, and an accuracy of 87.52% was achieved. Additionally, FFT was employed in [36] to smooth acceleration data and then peak detection was used to count steps."
      • "The feature clustering approaches employ machine learning algorithms, e.g., Hidden Markov models (HMMs) [37,38,39], KMeans clustering [40,41], etc., in order to classify activities based on both time domain and frequency domain features extracted from sensory data [14,42], but neither a single feature nor a single learning technique has yet been shown to perform the best [42]."
      • "A fair and extensive comparison has been made among various techniques in a practical environment in [29], and shows that the best performing algorithms for walking detection are thresholding based on the standard deviation and signal energy, STFT and autocorrelation, while the overall best step counting algorithms are windowed peak detection, HMM and CWT."
      • "In this paper, we adopt the gyroscope that is becoming more and more popular in COTS smartphones and the efficient FFT method to implement a novel and practical method for simultaneous walking detection and step counting. Due to the advantages of the gyroscope and frequency domain approach, the proposed method relieves the restriction of most existing studies that assume the usage of smartphones in a constrained manner."

  • Android has 'motion sensors' documentation for accelerometer info

  • Android has a built-in step counter & step detector!!! Use this LOL
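As referenced above, here is a toy sketch of windowed peak detection on synthetic accelerometer data (the synthetic signal and all thresholds are our own illustrative choices, not values from any paper; real data would come from Android's sensor API):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 50                                  # sample rate (Hz), typical for phones
t = np.arange(0, 10, 1 / fs)

# Synthetic accelerometer magnitude: ~2.5 steps/sec cadence plus noise.
cadence_hz = 2.5
signal = 9.8 + 3.0 * np.maximum(0, np.sin(2 * np.pi * cadence_hz * t)) ** 4
signal += np.random.normal(0, 0.3, signal.shape)

# One peak per footstrike: require a minimum height and minimum spacing
# (no faster than ~4 steps/sec) to reject noise.
peaks, _ = find_peaks(signal, height=11.0, distance=fs // 4)

duration_min = t[-1] / 60
print(f"{len(peaks)} steps -> cadence ~{len(peaks) / duration_min:.0f} steps/min")
```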

 

 

Nontechnical Goals:

  • Real-time !!??!!
  • Play songs of current running pace
  • Play song timewarped at current running pace
  • Play song with added beats at current running pace
  • *** if running pace changes during song, real-time changes to music playing?
  • *** how else can we play with music with ML?

———————————————————————————————–

Technical Goals:

——————————————————————

  • Interface
  1. Smartwatch app & mobile app
  2. If this requires two sets of source code, ONLY smartwatch app
  3. If this isn’t possible, ONLY mobile app
  • *** to what EXTENT should we develop UI / user board / various screens? We could just have a button that starts the process. OR we could develop an app that looks like a real product that may exist in app store V1.0
  • *** smartwatch hardware may not match mobile hardware for computational abilities
  • Pace Detection ???

——————————————————————

  • Song choice
  1. Playlist exists, software chooses song of correct/nearest tempo for natural music 
    1. ML algo to determine a song’s tempo
    2. Database that holds pairs of song & its tempo
  2. Playlist exists, software plays songs in order with warping
  3. Suggested songs play based on pace & profile music data – ML

——————————————————————

  • Time-warp songs
  1. Wavelet Transform – previous groups mentioned this was more 'accurate'/'advanced' & has a better, logarithmic runtime
  2. Phase Vocoder – two previous groups actually used this method. It changes speed without changing pitch: STFT, modulation in the frequency domain, then inverse STFT (a quick off-the-shelf time-stretch sketch follows this list)
  3. TDHS is also an option. Song Hunter decided not to use it because it is not suited for polyphonic material (what does that mean?) – research this further and reprioritize the time-warping methods.
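To get a feel for option 2, a minimal sketch using librosa's STFT-based phase-vocoder time stretch (librosa, the tempo values, and the file names are illustrative assumptions; our planned implementation is the wavelet approach):

```python
import librosa
import soundfile as sf

# Load any local test file (librosa resamples to 22050 Hz mono by default).
y, sr = librosa.load("song.mp3")

song_bpm = 120                 # assume the song's tempo is known (e.g., metadata)
runner_spm = 150               # runner cadence in steps per minute
rate = runner_spm / song_bpm   # >1 speeds the song up, <1 slows it down

# librosa's time_stretch is a phase vocoder: tempo changes, pitch does not.
y_warped = librosa.effects.time_stretch(y, rate=rate)
sf.write("song_warped.wav", y_warped, sr)
```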

——————————————————————

  • Add beats at running pace 
  1. Heart beat / some other rhythm
  2. Just tempo beats (see the click-track sketch below)
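A minimal sketch of option 2: overlaying a synthetic click track at the runner's cadence (the buffer and all parameters are illustrative stand-ins, not part of our design yet):

```python
import numpy as np

sr = 22050                          # sample rate of the decoded song
song = np.zeros(sr * 10)            # stand-in for 10 s of decoded mono audio
steps_per_min = 150                 # runner's cadence

# A short decaying click, repeated once per footstep interval.
click = 0.5 * np.exp(-np.arange(int(0.02 * sr)) / (0.004 * sr))
interval = int(sr * 60 / steps_per_min)

mixed = song.copy()
for start in range(0, len(mixed) - len(click), interval):
    mixed[start:start + len(click)] += click
```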

——————————————————————

  1. Simple impulses at a pitch within the chord progression? At first glance, is this a complete mystery, or do we have an idea of how to begin?
  2. If not at a certain pitch, insert impulses at the running pace
  • Integration
    • Interface
    • Retrieve pace from phone
    • Send pace from phone data to app
    • App performs its job – find music & play it

——————————————————————

  1. MVP
    1. Mobile app with basic “Start” and options buttons
    2. Default list of music
    3. Pace detection 
    4. Time-warping 
    5. Song changes based on pace
  2. 2nd
    1. smartwatch
    2. Interface with profile / User authentication
    3. ML of song choice – specifically, tempo detection
  3. 3rd
    1. ML of song choice – song suggestions

 

Apps like this exist: https://www.iphoneness.com/iphone-apps/music-pace-running-app/

  • These apps match songs, chosen from their own library or from your existing playlist, to your pace automatically or manually

None of these apps alter songs themselves to match pace

Week 0 – Brainstorming & Preliminary Research

Drones That Provide Visual & Sensory Info

18551- Biometric Quadcopter Security: 

https://drive.google.com/open?id=1n7-88uXZdLNgPvCgoUYl53FvMd9fNqEN

https://drive.google.com/open?id=1S4pFCIYDsNxFYt50TVuXdDUyDG3hnyYr

  • Goal: Drone follows people it does not recognize
    • Simple face detection if person is facing drone
    • Complexity comes from gait identification and drone navigation
      • Gait identification is relatively new field
      • There have been multiple approaches in feature engineering, with examples being background subtraction, optical flow, and motion energy.
      • Can try ML algorithms which have similar accuracy but come at the price of expensive computation
    • Hardware included drone and tablet
  • Achieved 87% accuracy for gait identification using a simple LDA + KNN classifier
    • Measured different accuracies for people walking at different angles in relation to the drone camera
  • The group did not get around to implementing gait recognition onto the drone – only the basic face recognition. However, they were able to show that the gait classification algorithm worked independently
    • They ran into problems with Autopylot code and integration

If we do this project:

  • Signals: Face mapping algorithm (PCA, LDA, NN), gait recognition algorithm (OpenCV algorithm or use ML since actual drones have succeeded by using it)
  • Software: Automatic drone navigation and communication between camera and database
  • Hardware/EE: Maybe Arduino & Raspberry Pi
  • Make a cheap version of https://www.theverge.com/2018/4/5/17195622/skydio-r1-review-autonomous-drone-ai-autopilot-dji-competitor basically
  • A lot of research would need to go into finding which drone would work best, but I think we need to find drones with flight controllers

18500- Snitch:

https://drive.google.com/open?id=13Rmza26JYVkNdvHaNwJTcUo68GZF2UFh

  • We should revisit this paper to review a list of problems/solutions this group faced
  • Goal: Create a “snitch” to avoid players and obstacles while flying around an area
    • The laptop, after computing where the players as well as the quadcopter were located, would send messages over Wi-Fi to the Raspberry Pi 3 onboard the quadcopter in the form of coordinate locations of players of interest and of the quadcopter itself. These coordinates would be coordinates in the 3D space of the pitch
    • Components: Raspberry Pi 3 would receive hall effect sensor data (from within IMU’s magnetic sensors) from the snitch ball, height information from an ultrasonic rangefinder (via an Arduino Nano), and gyroscopic information from a 9-Axis IMU
  • Can read this more in-depth to understand how to maybe work around network problems and how to work around issues associated with arduino/raspberry pi communication
  • Pivoted upon realizing their project was unsafe
    • Dangerous to have people grab things from rotating blades and hang too much weight from a drone
  • Faced issue with the drone not correctly reporting the angle it was at
  • Abandoned ROS for ROS-like framework in Python
  • Need relatively fast decisions in order to avoid obstacles
  • Pivoted to making an assisted flight room mapper that could somewhat avoid obstacles
    • New problem: Moving objects and moving the drone broke the map that was created by their mapping software (HECTOR SLAM)

If we go with this project:

  • Goal: Create 3d mapping tool from drone images. Allow person to move drone around to image and scan an area. We could sell this as a disaster relief drone that could identify missing people and help map out an area that rescuers would need to go into for safety
  • Signals: Facial recognition, 2d → 3d conversion algorithm
  • Software: Visualizing the 3d map. Maybe allow people to interact with map, maybe not. Also navigation for drone (flight mapping?) and handling communication between drone and remote
  • Hardware: Maybe wiring up an arduino/camera
  • Potential challenges: See the group above. Seriously. They had like a billion
  • There is already a whole bunch of software out there that tries to do this. I found several companies that are selling drones specifically for this purpose and I found a subreddit dedicated to the idea

 

Identification through Voice Authentication (for smart home?)

18551- Spoken Language Identifier: 

https://drive.google.com/open?id=1MyV1K0w8DoISeQXnmvOtuOsV3R_6l7wS

18500- YOLO:

http://course.ece.cmu.edu/~ece500/projects/s19-teamda/

  • Goal: One shot speech identification and speaker recognition via web app
    • Simple voice recognition with extra complexities
    • Complexity comes from speech id and speech recognition
      • This should be in real time – a recording of an authorized voice should not be able to pass
      • Should work within 10 seconds of recording
    • Web app and signal processing with ML
  • The group did not get around to implementing this specifically with smart home/iot devices, but it did work with a “logon service”
  • Goal: Create device/detector that matches voices to unlocking the ability to control a smart home/hub device
    • Simple voice recognition
    • Complexity comes from speech id and speech recognition
      • This should be in real time – a recording of an authorized voice should not be able to pass

If we do this project:

  • Signals: speech ID and recognition with ML
  • Software: Authenticator service that if voice is authorized then commands are issued
  • Hardware/EE: Maybe Arduino & Raspberry Pi or some sort of controller?
  • Probably need to buy a smart home device or hub to use for testing

 

Run With It

18551- DJ Run: 

https://drive.google.com/open?id=1j7x7gFJguh4NDrayci-jIjX37-cb6uup

https://drive.google.com/open?id=1ACXJrRQjt2ibIdfNTTfwn2BPBIxn6bMA

18500- Song Hunter:

https://drive.google.com/open?id=1GKYRdOH90qN87q7cdNlU2zZHJN3Mt5LI

https://drive.google.com/open?id=11myryvI3wP7eiJqsaEPzYekTQjYbV7Gb

** they manually assigned BPM labels for each song rather than detecting them (why?); this was a change from proposal to final project

  • mobile app (android or smart watch)
  • Detect runner’s pace (retrieve from phone’s accelerometer)
  • Choose songs based on pace (database with song tempo range stored) OR (ML algo that can detect pace of song)
    • Detect song pace – media lab MIT algorithm OR statistical beat detection (less computation, less accuracy)
  • Use existing songs & timewarp (signal processing up/downsampling)
    • Phase vocoder changes speed without changing pitch. Uses STFT, modulation in the frequency domain, then inverse STFT – both groups used this
    • Wavelet Transform is better because it is more advanced & has logarithmic precision ** we will do this
    • TDHS is also an option. Song Hunter decided not to use it because it is not suited for polyphonic material (what does that mean?)
  • Use existing songs & make an automated remix (add in beats/impulses at the desired pace, of the same intensity/pitch as the song playing)
  • Music player with special features? OR fitness tracker with special features?
  • Quality of music subjective measure

Apps like this exist: https://www.iphoneness.com/iphone-apps/music-pace-running-app/

  • These apps match songs, chosen from their own library or from your existing playlist, to your pace automatically or manually
  • None of these apps alter songs themselves to match pace

 

Walkie Talkies for Location Tracking

18551- Marco:

https://drive.google.com/open?id=14VWhMspw-yBSIJAEpdgZAIuEKngLfx5-

  • New device or using smartphone or smartphone extension device
  • Bluetooth or other way of keeping track of the 2 devices 
  • Notification when going out of range
  • Use groundwaves to avoid obstructions – this just means certain frequency range
  • Or use satellite/cell tower – but how would that exist in a ‘mountain’
  • Mesh networking: smartphone that is using the app is the router creating the network.

Apps that exist: https://www.technologyreview.com/s/533081/communication-app-works-without-a-cellular-network/ (additional source: https://www.geckoandfly.com/22562/chat-without-internet-connection-mesh-network/)

  • They use mesh networks
  • No internet connection, just cell to cell with bluetooth 
  • Text and call capabilities
  • Radius capabilities
    • 200 feet radius distance for this to work
    • 330 feet for person to person, greater than 330 feet requires a middleman within 330 feet radius.
  • Considered important for disaster situations
  • Briar is open-source

** this would also be a secure way of communicating. If messages are stored on the phone OR messages are encoded/decoded.

→ this tells us that communication is possible despite whatever obstructions.

→ 330 feet ≈ 0.06 mile – that's about a 1-minute walking distance… feasibility?

→ but we also don’t need to detect the data from the signal, just the strength of the signal

“Using bluetooth for localization is a very well known research field (ref.). The short answer is: you can’t. Signal strength isn’t a good indicator of distance between two connected bluetooth devices, because it is too much subject to environment condition (is there a person between the devices? How is the owner holding his/her device? Is there a wall? Are there any RF reflecting surfaces?). Using bluetooth you can at best obtain a distance resolution of few meters, but you can’t calculate the direction, not even roughly.

You may obtain better results by using multiple bluetooth devices and triangulating the various signal strength, but even in this case it’s hard to be more accurate than few meters in your estimates.” – https://stackoverflow.com/questions/3624945/how-to-measure-distance-between-two-iphone-devices-using-bluetooth (has more sources)

→ have a home base device at campsite or “meeting point”

→ use sound

→ metal devices are biggest obstruction. In mountain we would be ok. What are other use cases? Do we care if this is only targeted towards hikers/campers?

 

NOT SUPER CONFIDENT ABOUT THIS. We could probably make something work, but it requires LOTS of scholarly research before we have any idea of what the design implementation will look like at a high level.

Introduction and Project Summary

We are group E4! Our names are Aarushi, Akash, and Mayur, and the project we will be building is called “Run With It.” In essence, we are hoping to create a phone application that speeds up music when you run quickly and slows it down when you run slowly. Our plan is to target the app towards long-distance runners who run at around our pace (by pace, we mean steps per minute). The app will use a Wavelet Transform to implement the time-warping of music, and will work on most Android devices. We may explore the possibility of porting our code to a smartwatch, but this is currently under debate. We will finalize the decision by the design presentation.

 

Proposal Presentation Slides: Proposal Presentation (2)