Aarushi’s Status Report for 3/21

This week involved further group discussions addressing how our project will move forward in our remote environments. Discussions with Prof Sullivan and Jens provided insight into how we could break the project down into safely achievable milestones, target goals, and stretch goals. Working through this conceptualization with the team was helpful; it made us more comfortable with the current situation. These milestones were detailed in our Statement of Work, another document we worked on this week.

We also decided on individual tasks for the next week that would help prove that our planned milestones are feasible. While Akash and Mayur are working to collect step count data, my task for the week is to create a skeleton (a design) for the time-warping algorithm and to be able to import and visualize mp3 files within it.
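As a first pass at that skeleton, importing and plotting an mp3 in Python might look like the sketch below (my own illustration, assuming librosa and matplotlib are installed and ffmpeg is available for mp3 decoding; the filename is a placeholder):

```python
# Minimal sketch: import an mp3 and visualize its waveform.
import librosa
import matplotlib.pyplot as plt
import numpy as np

signal, sample_rate = librosa.load("song.mp3", sr=None, mono=True)  # keep native rate

t = np.arange(len(signal)) / sample_rate
plt.figure(figsize=(10, 3))
plt.plot(t, signal)
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.title("song.mp3 waveform")
plt.tight_layout()
plt.show()
```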

To be clear, our group has decided that our weeks will start/end on Wednesdays to allow enough time and flexibility for us to get our individual tasks done, and for communication/coordination.

P.S. I apologize for this report being one day late. Because I fractured my hand in the middle of last week, and because of the new COVID-19 ‘stay at home’ orders in New Jersey, I had to move from Pittsburgh back home at the last minute this weekend. I would really appreciate some flexibility here.

Aarushi’s Status Report for 3/14

This report consists of an update over the last two weeks — the latter week was Spring Break.

In the week before Spring Break, I got badly sick with the flu. As a result, I spent the first half of the week resting and allocating all of my ‘work time’/energy to the design report. This included completing my parts: the architecture, conclusion, future work, and all the audio modification sections. It also included formatting, plus cohesive revisions since the design proposal presentation. Through this work, I realized that we were missing a key design requirement: the audio file types that our product would support. After researching which file types are best for signal processing, music, and popular audio uses, I completed this section in the design report as well. Considering my illness, this document consumed the half of the week I was able to work.

The following week was Spring Break. Our group did not have plans to work this week. I was supposed to be traveling on a CMU-sponsored brigade; since this was cancelled, I decided to be slightly productive. I scheduled meetings with two professors, previously identified, who would be able to advise on the phase vocoder I plan to implement. Once classes were officially moved online, our project was paused while we waited to hear how to proceed remotely. Once expectations are solidified, our team members will plan how to complete what is needed. For now, we have already identified three parts of the project that can be worked on independently. How these components will be integrated and tested is TBD based on forthcoming expectations and group conversations as classes resume this week.

Aarushi’s Status Report for 2/29

This past week, I focused heavily on the Design Presentation – ensuring a few slides, in particular, were clean, consistent, and logical. These slides were complex and contained a lot of information that was difficult to fit aesthetically (i.e., slides 2, 3, and 7). Additionally, I was the speaker for the design presentation, so I spent considerable time rehearsing; since our presentation was on Wednesday, I had extra practice time.

Additionally, after working with the group to delegate portions of the design report, I have been working through my portion of the document. This will continue past Saturday night.

Other than meeting tasks assigned by the class, I took this week to really understand the benefits and drawbacks of the ‘wavelet transform based’ phase vocoder in relation to other time-scale modification methods we could use. This involved reading (very slowly, haha) numerous research papers and their findings on tradeoffs between the various methods. I now understand the high-level processes behind each of these methods, what distinguishes them from each other, and how exactly each of the more advanced/more accurate techniques builds off of the closest simpler/less accurate method considered. (These methods were explained in the design presentation & will be in the design report).

Next steps, following the completion of the design report, will be to implement a base portion/‘method’ of the described phase vocoder.
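As a starting point for that base method, a time-stretch built on librosa's reference phase vocoder could serve as a baseline to compare my own implementation against (a minimal sketch, not the final design; the filename, bpm values, and STFT parameters are placeholders):

```python
# Baseline sketch: phase-vocoder time stretch (STFT -> phase modification -> ISTFT).
import librosa
import soundfile as sf

y, sr = librosa.load("song.mp3", sr=None)

stretch = 170 / 150          # e.g., warp a 150 bpm song toward a 170 steps/min pace
hop = 512

D = librosa.stft(y, n_fft=2048, hop_length=hop)             # analysis STFT
D_warped = librosa.phase_vocoder(D, rate=stretch, hop_length=hop)
y_warped = librosa.istft(D_warped, hop_length=hop)          # resynthesis

sf.write("song_warped.wav", y_warped, sr)                   # ~13% faster, same pitch
```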

Aarushi’s Status Report for 2/22

While last week involved preliminary testing and information gathering for requirements for the design proposal, this week proved to offer drastic and unexpected changes.

  1. Software Decisions – Last week we decided that the best integration method between the wavelet transform and the mobile app would be the wavelet transform implemented in Python, integrated into Android Studio via Jython. However, this method would have required a different library on the Java side that would not interface with the phone’s step counter data. While researching and discussing this issue, Professor Sullivan suggested using C++ for the wavelet transform. Since I will be working on the wavelet transform, I took this decision particularly personally: I have little experience with C++, and even after initially playing with the language to familiarize myself with it, I was still uncomfortable. Despite my distaste for the language, it was important to note that C++ offered well-documented DWT & IDWT (discrete wavelet transform and its inverse) methods: http://wavelet2d.sourceforge.net/#toc-Chapter-2. In fact, the implementations and example use cases I found provided more customization and flexibility with the input/output signals than Python’s libraries and examples. As a result, I decided to bite the bullet and favor C++ for its easier integration with the mobile app, its flexibility with signal processing, and its sufficient, clear examples/documentation of wavelet transform use cases.
  2. Device Decisions – NO watch, ONLY phone. Based on the step counter data we measured and acquired, the watch’s step counter was the least accurate despite being of a recent generation.
  3. Scoping Step Detection – As of last week, we had decided our target audience would be runners. However, after performing our own ‘runner’s’ test, we realized that the discrepancy in our target bpm range arose because our searches targeted runners while we were actually referencing joggers. Additionally, our measured pace/desired bpm of 150-180 bpm actually matched up well with many songs I use for fast ‘jogs’. Thus, we adjusted our target bpm/pace accordingly to match this pace.
  4. Music Choice – During our proposal presentation, we received feedback to narrow the scope of inputs – AKA the scope of songs that could be used to run to with warped conditions. With our new target pace, we will allow only songs of 150-180 bpm. Additionally, when choosing a song from a defined playlist, we will apply a scoring algorithm. This algorithm will score a song based on how many times it has been played and on how close the song’s natural bpm is to the jogger’s current pace, then choose the song with the best score (a rough sketch of this idea follows this list). This ensures that one song is not constantly on repeat and that a song of decent bpm is played. Both factors will be weighed and adjusted relative to each other based on the outcome of our algorithm.
  5. Wavelet Transform vs Phase Vocoder metrics and tradeoffs were researched and validated as expected. Additionally, a plan to accomplish the wavelet transform has been made: code and test on a sine wave without changing inputs, do the same on simple music, account for tempo variations, account for pitch, & measure artifacts throughout this process. I have also identified additional resources on campus in case I need guidance in applying the wavelets for our specific use cases (i.e. Prof Aswin, Stern, Paul Heckbert, Jelena Kovačević).
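On item 4, a toy sketch of what such a scoring algorithm could look like (my own illustration; the weights and the playlist schema are made-up starting points, not design decisions):

```python
# Hypothetical song-scoring sketch: penalize bpm mismatch and overplay.
def score_song(song_bpm, play_count, pace_bpm, w_bpm=1.0, w_repeat=0.1):
    """Higher score = better candidate song."""
    bpm_penalty = abs(song_bpm - pace_bpm) / pace_bpm  # relative bpm mismatch
    repeat_penalty = play_count                        # discourages constant repeats
    return -(w_bpm * bpm_penalty + w_repeat * repeat_penalty)

def choose_song(playlist, pace_bpm):
    """playlist: list of dicts with 'name', 'bpm', 'plays' keys (assumed schema)."""
    return max(playlist, key=lambda s: score_song(s["bpm"], s["plays"], pace_bpm))

songs = [
    {"name": "A", "bpm": 152, "plays": 4},
    {"name": "B", "bpm": 168, "plays": 0},
    {"name": "C", "bpm": 175, "plays": 1},
]
print(choose_song(songs, pace_bpm=170)["name"])  # -> "B"
```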

Group’s Status Update for 2/15

Step Detection Verification was our main focus for this week’s design review process.

This week it was important to test our step detection methods, as step detection is the basis of our goal – matching a runner’s pace, where pace is steps/minute rather than speed (distance/time).

We used class time to research how accelerometer data is measured, how it differs between devices, how it is calculated, and how we can verify devices against each other. This research also included finding published accuracy figures for the accelerometers we are considering using. We then designed the following test and verification process with two users, Aarushi and Mayur (data sheet on our Google Drive):

I ran on a treadmill to (1) verify accelerometer data, and (2) measure my tolerance for the gap between starting a run and the music adjusting tempo to pace. For jogs of 20-40 minutes (3-5 miles) at more or less the same pace, my tolerance for unadjusted music was 3 minutes. For runs of 10 minutes (1-1.5 miles) at more or less the same pace, my tolerance for unadjusted music was 1.5 minutes.

When verifying accelerometer data, we compared two Android phones of different generations and a smartwatch. This test was controlled by manually counting steps while running and by using all devices on the same run. Measurements were done for 30-second and 1-minute intervals at speeds from 5.5 mph to 10 mph in increments of 0.5 mph. Additionally, I completed three ‘long’ distance runs of 3 and 5 minutes for step verification, and longer runs for tolerance of the gap between starting a run and the music adjusting tempo to pace. (A tragic event, because I prefer intervals to distance.) An iPhone was attempted for comparable metrics, but the iPhone 7 Plus we had access to only updates its step count every 10 minutes, so it was impossible to use it to measure the number of steps in a defined time interval.

We figured this technology could jeopardize our project if the data we got from the phones and watch weren’t good enough. Our contingency plan was to either use a Bluetooth pedometer or write our own step detection algorithm; however, we found that the data from the newer Android phone lay within an average 4% error of the actual step count, which makes us confident in using it.

The biggest change we are considering is not using the watch, since its error rate averaged roughly 10-15%, which is a little higher than we’d like. We are thinking of still making the app for the watch, but using all the data from the phone.

Software Decisions with Wavelet Transforms

During class time, we researched the best methods for phone & watch applications – Java. Python would be used for the wavelet transforms, given our familiarity with it and its ease of use. Integration via Jython is possible.

Aarushi’s Status Update for 2/15

Step Detection

The step detection testing described in the Group’s Status Update above was my individual contribution this week: the treadmill runs to (1) verify accelerometer data across the two Android phones and the smartwatch, and (2) measure my tolerance for the gap between starting a run and the music adjusting tempo to pace.


Wavelet Transform

Working on the wavelet transform model based on this paper on musical analysis and audio compression methods: https://www.hindawi.com/journals/jece/2008/346767/#experimental-procedures-and-results. This choice was made after evaluating numerous methods that are also discussed in the paper. The paper provides research and insights on how a transformation can be deemed successful; the authors showed their approach is effective in decreasing error, measured as quantization artifacts or signal-to-mask ratio (SMR). Their music transformation was performed with the Discrete Wavelet Packet Transform (DWPT) for its increased accuracy and lower computational complexity. I will follow suit for these two beneficial distinctions.

I will be implementing this in Python for easy integration into Java via Jython. Therefore, I have been playing around with Python’s open-source wavelet transform library – pywavelets. I have set up my environment, deleting/installing all necessary libraries and their correct versions for this testing. I have started testing the library’s wavelet transform functions on basic signals like [1,2,3,4]; on originalSignal = sin(2 * np.pi * 7 * originalTime), where originalTime is a linspace of time from -1 to 1 broken up into ‘discrete’ increments of 0.01; and on images, since I have worked with wavelet transforms on images before. This experimentation will continue into Saturday night; however, this update will be submitted before results with audio signals are tested.
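For reference, the round-trip sanity check I am running on those signals looks roughly like this (the wavelet choice ‘db2’ is an arbitrary starting point, not a design decision):

```python
# Round-trip DWT/IDWT sanity checks with pywavelets on the simple signals above.
import numpy as np
import pywt

# Trivial signal
cA, cD = pywt.dwt([1, 2, 3, 4], "db2")
print(pywt.idwt(cA, cD, "db2"))                    # recovers ~[1, 2, 3, 4]

# 7 Hz sine on t in [-1, 1) with 0.01 increments
t = np.arange(-1, 1, 0.01)
signal = np.sin(2 * np.pi * 7 * t)
coeffs = pywt.wavedec(signal, "db2", level=4)      # multilevel DWT
reconstructed = pywt.waverec(coeffs, "db2")
print(np.max(np.abs(reconstructed[:len(signal)] - signal)))  # ~0: perfect reconstruction
```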


Project Proposal – “18 is basically 20”

General support for our idea: here

Metrics that define success:

  • Pitch similarity percentage until annoyance
      • Absolute vs relative vs perfect pitch. We care about absolute pitch.
      • “Like most human traits, AP is not an all-or-none ability, but rather, exists along a continuum [10, 17, 20, 21]. Self-identified AP possessors score well above chance (which would be 1 out of 12, or 8.3%) on AP tests, typically scoring between 50 and 100% correct [19], and even musicians not claiming AP score up to 40% [18].” Here
      • AP possessors incorrectly identify tones that are 6% apart here (the hardest case for us to meet) – upper bound of accuracy
      • “The response accuracy of melody comparison is shown separately in Figure 2 for the AP group and the non‐AP group. The chance level is 50%. In the C major (non-transposed) context, in which the two melodies could be compared at the same pitch level, both the AP and the non‐AP groups gave the highest level of performance; in contrast, in the E– and F# context, in which the comparison melody was transposed to a different pitch level from the standard melody, both groups performed markedly worse. Notably, the AP group performed more poorly than the non‐AP group.” Here (should our percentage of annoyance error depend on the pitch of the song?)
        • Non-AP scores 40-60
        • AP scores 80-100
        • Avg = 70?
      • 1-5 people/10,000 have AP here => don’t worry about AP
  • Pitch will remain the same within 25 cents (¼ semitone) marginal error (25 cents corresponds to a frequency ratio of 2^(25/1200) ≈ 1.0145, i.e., roughly a 1.45% frequency deviation)
      • Will we have a relative pitch problem, as songs don’t stay at one tone throughout?
  • Percentage of difference between pace and tempo until annoyance (pulsing)
      • Helpful pace/tempo matching Here
      • Helpful pace/tempo matching Here
      • Runner’s side bpm measure here
      • 120-140 bpm is normal
      • Over 150 bpm is probably too fast & will be distracting – we will ensure we stay under 150 bpm.
      • Range should be (inclusive of walking) 90-153 bpm
  • Room for error in pace detection (stride deviation)
  • How long between a change in pace and a change in tempo of the currently playing song?
  • Standard change in pace over a run seems to be around ±5%
  • How to connect a sensor on the shoe to the phone to share data
      • New pedometers integrate with phones using Bluetooth and different apps
  • Which sensors to use / how to use sensors
      • The best sensor would be a pedometer attached to the shoe that connects to the phone over Bluetooth
      • We can use this to track how accurate the step count is versus a phone/smartwatch
  • Real-time feedback – how often?
      • Instead of time, let’s use # of footsteps, since this gives a better relation to bpm than time does
      • Remeasure & calibrate every 20 steps? (conjecture based on my running experience; we should test amongst ourselves & friends / random gym people for the design proposal – mention this in the project proposal; a rough sketch of this windowed estimate follows this list)
  • Calibration?
  • Song choice algorithm
  • Spotify was forced to not make its song choice truly random
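A rough sketch of that ‘recalibrate every 20 steps’ idea (my own illustration, not a committed design): estimate cadence in steps/minute from the timestamps of the last 20 steps; steps/minute then maps directly onto musical bpm if we match one beat per step.

```python
# Hypothetical windowed cadence estimate: steps/minute from the last N step timestamps.
from collections import deque

class CadenceEstimator:
    def __init__(self, window_steps=20):
        self.timestamps = deque(maxlen=window_steps)   # step times in seconds

    def on_step(self, t):
        """Call once per detected step; returns steps/minute, or None while warming up."""
        self.timestamps.append(t)
        if len(self.timestamps) < 2:
            return None
        elapsed = self.timestamps[-1] - self.timestamps[0]
        intervals = len(self.timestamps) - 1
        return 60.0 * intervals / elapsed

est = CadenceEstimator()
for i in range(25):
    bpm = est.on_step(i * 0.4)    # one step every 0.4 s = 150 steps/min
print(round(bpm))                  # -> 150
```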

Shopping List:

  • Bluetooth wireless headphone(s)
  • Sensors
  • Extra smartwatch?
  • Android phone (ask Sullivan for 18-551 extras)

* scratch ML algo for tempo detection

* this app’s end goal is to be linked to Spotify → metadata of songs

* long distance runners use this to help maintain pace

Week 1 – Abstract & Preliminary Design Thoughts

Pace Detection:

    • Challenges with step count detection, quoted from a step-counting paper: “Therefore, various walking detection and step counting methods have been developed based on one or both of the following two physical phenomena: the moment when the same heel of a pedestrian strikes the ground once during each gait cycle results in a sudden rise in accelerations [17,18,19,20,21,22]; the cyclic nature of human walking results in cyclic signals measured by motion sensors [13,23,24,25,26]. However, existing methods rely on the usage of dedicated foot-mounted sensors [27] or constrained smartphones [28], which essentially imposes severe limitations on applying these methods in practice. In addition, the methods based on smartphones suffer from limited accuracy, especially when the smartphone is held in an unconstrained manner [29], namely that the smartphone placement is not only arbitrary but also alterable. Therefore, precisely identifying walking motion and counting the resultant steps are still challenging.”

  • However, we don’t care about an accurate step count. We just need the time between successive steps.
  • How to convert steps/second to beats/measure?
  • Possibilities (from the same paper):
        • The frequency domain approaches focus on the frequency content of successive windows of measurements based on short-term Fourier transform (STFT) [30], FFT [31], and continuous/discrete wavelet transforms (CWT/DWT) [30,32,33,34], and can generally achieve high accuracy, but suffer from either resolution issues [34] or computational overheads [35]. In [31], steps are identified by extracting frequency domain features in acceleration data through FFT, and an accuracy of 87.52% was achieved. Additionally, FFT was employed in [36] to smooth acceleration data, and then peak detection was used to count steps.
        • The feature clustering approaches employ machine learning algorithms, e.g., Hidden Markov models (HMMs) [37,38,39], K-Means clustering [40,41], etc., in order to classify activities based on both time domain and frequency domain features extracted from sensory data [14,42], but neither a single feature nor a single learning technique has yet been shown to perform the best [42].
        • A fair and extensive comparison has been made among various techniques in a practical environment in [29], and shows that the best performing algorithms for walking detection are thresholding based on the standard deviation and signal energy, STFT and autocorrelation, while the overall best step counting algorithms are windowed peak detection, HMM and CWT.
        • In this paper, we adopt the gyroscope that is becoming more and more popular in COTS smartphones and the efficient FFT method to implement a novel and practical method for simultaneous walking detection and step counting. Due to the advantages of the gyroscope and frequency domain approach, the proposed method relieves the restriction of most existing studies that assume the usage of smartphones in a constrained manner.
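To make the frequency-domain idea concrete, a toy sketch (my own, on synthetic data – not the paper’s method) of estimating step frequency from the FFT peak of accelerometer magnitude:

```python
# Toy sketch: step frequency = dominant FFT peak of accelerometer magnitude.
# Synthetic 2.5 Hz "steps" sampled at 50 Hz stand in for real sensor data.
import numpy as np

fs = 50.0                                    # sample rate (Hz), typical for phone IMUs
t = np.arange(0, 10, 1 / fs)
accel_mag = 1.0 + 0.5 * np.sin(2 * np.pi * 2.5 * t) + 0.1 * np.random.randn(len(t))

spectrum = np.abs(np.fft.rfft(accel_mag - accel_mag.mean()))
freqs = np.fft.rfftfreq(len(accel_mag), d=1 / fs)

band = (freqs >= 1.0) & (freqs <= 4.0)       # plausible walking/running band
step_hz = freqs[band][np.argmax(spectrum[band])]
print(step_hz * 60)                          # steps per minute, ~150 here
```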

  • Android has ‘motion sensors’ documentation for accelerometer info
  • Android has a built-in step counter & step detector!!! Use this LOL

Nontechnical Goals:

  • Real-time !!??!!
  • Play songs of current running pace
  • Play song timewarped at current running pace
  • Play song with added beats at current running pace
  • *** if running pace changes during song, real-time changes to music playing?
  • *** how else can we play with music with ML?

———————————————————————————————–

Technical Goals:

——————————————————————

  • Interface
  1. Smartwatch app & mobile app
  2. If this requires two sets of source code, ONLY smartwatch app
  3. If this isn’t possible, ONLY mobile app
  • *** to what EXTENT should we develop UI / user board / various screens? We could just have a button that starts the process. OR we could develop an app that looks like a real product that may exist in app store V1.0
  • *** smartwatch hardware may not match mobile hardware for computational abilities
  • Pace Detection ???

——————————————————————

  • Song choice
  1. Playlist exists, software chooses song of correct/nearest tempo for natural music 
    1. ML algo to determine a song’s tempo
    2. Database that holds pairs of song & its tempo
  2. Playlist exists, software plays songs in order with warping
  3. Suggested songs play based on pace & profile music data – ML

——————————————————————

  • Time-warp songs
  1. Wavelet Transform – previous groups mentioned this was more ‘accurate’ / ‘advanced’ & better logarithmic runtime
  2. Phase Vocoder – 2 previous groups actually used this method. changes speed without changing pitch. Uses STFT, modulation in frequency domain, then inv. STFT
  3. TDHS is also an option. Song Hunter decided not to use this bc it’s not suited for polyphonic material (what does that mean?) – research this further and reprioritize methods to timewarp.

——————————————————————

  • Add beats at running pace 
  1. Heart beat / some other rhythm
  2. Just tempo beats

——————————————————————

  1. Simple impulses at pitch within a chord progression? At first glance, is this a complete mystery, or do we have an idea of how to begin?
  2. If not of a certain pitch, insert impulses at running pace (a rough sketch of inserting tempo clicks follows this list)
  • Integration
    • Interface
    • Retrieve pace from phone
    • Send pace from phone data to app
    • App performs its job – find music & play it
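For the ‘just tempo beats’ option above, a minimal sketch of overlaying clicks at a target pace onto an audio buffer (my own illustration; the click pitch, length, and decay are arbitrary choices):

```python
# Minimal sketch: overlay short decaying clicks at a target bpm onto an audio signal.
import numpy as np

def add_clicks(signal, sr, bpm, click_hz=1000.0, click_len=0.03, gain=0.3):
    out = signal.copy()
    period = int(sr * 60.0 / bpm)              # samples between successive clicks
    n = int(sr * click_len)
    tc = np.arange(n) / sr
    click = gain * np.sin(2 * np.pi * click_hz * tc) * np.exp(-tc / 0.01)
    for start in range(0, len(out) - n, period):
        out[start:start + n] += click          # mix a click at each beat
    return out

sr = 44100
music = np.zeros(sr * 5)                       # placeholder for a loaded song
with_clicks = add_clicks(music, sr, bpm=170)   # clicks at 170 bpm = 170 steps/min
```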

——————————————————————

  1. MVP
    1. Mobile app with basic “Start” and options buttons
    2. Default list of music
    3. Pace detection 
    4. Time-warping 
    5. Song changes based on pace
  2. 2nd
    1. smartwatch
    2. Interface with profile / User authentication
    3. ML of song choice – specifically, tempo detection
  3. 3rd
    1. ML of song choice – song suggestions


Apps like this exist: https://www.iphoneness.com/iphone-apps/music-pace-running-app/

  • These apps match songs, chosen from their own library or from your existing playlist, to your pace, automatically or manually

None of these apps alter songs themselves to match pace

Week 0 – Brainstorming & Preliminary Research

Drones That Provide Visual & Sensory Info

18551- Biometric Quadcopter Security: 

https://drive.google.com/open?id=1n7-88uXZdLNgPvCgoUYl53FvMd9fNqEN

https://drive.google.com/open?id=1S4pFCIYDsNxFYt50TVuXdDUyDG3hnyYr

  • Goal: Drone follows people it does not recognize
    • Simple face detection if person is facing drone
    • Complexity comes from gait identification and drone navigation
      • Gait identification is relatively new field
      • Have been multiple approaches in feature engineering with examples being background subtraction, optical flow, and motion energy.
      • Can try ML algorithms which have similar accuracy but come at the price of expensive computation
    • Hardware included drone and tablet
  • Achieved 87% accuracy for gait identification using a simple LDA + KNN classifier
    • Measured different accuracies for people walking at different angles in relation to the drone camera
  • The group did not get around to implementing gait recognition onto the drone- only the basic face recognition. However, they were able to show that the gait classification algorithm worked independently
    • They ran into problems with Autopylot code and integration

If we do this project:

  • Signals: Face mapping algorithm (PCA, LDA, NN), gait recognition algorithm (OpenCV algorithm or use ML since actual drones have succeeded by using it)
  • Software: Automatic drone navigation and communication between camera and database
  • Hardware/EE: Maybe Arduino & Raspberry Pi
  • Make a cheap version of https://www.theverge.com/2018/4/5/17195622/skydio-r1-review-autonomous-drone-ai-autopilot-dji-competitor basically
  • A lot of research would need to go into finding which drone would work best, but I think we need to find drones with flight controllers

18500- Snitch:

https://drive.google.com/open?id=13Rmza26JYVkNdvHaNwJTcUo68GZF2UFh

  • We should revisit this paper to review a list of problems/solutions this group faced
  • Goal: Create a “snitch” to avoid players and obstacles while flying around an area
    • The laptop, after computing where the players as well as the quadcopter were located, would send messages over Wi-Fi to the Raspberry Pi 3 onboard the quadcopter in the form of coordinate locations of players of interest and of the quadcopter itself. These coordinates would be coordinates in the 3D space of the pitch
    • Components: Raspberry Pi 3 would receive hall effect sensor data (from within IMU’s magnetic sensors) from the snitch ball, height information from an ultrasonic rangefinder (via an Arduino Nano), and gyroscopic information from a 9-Axis IMU
  • Can read this more in-depth to understand how to maybe work around network problems and how to work around issues associated with arduino/raspberry pi communication
  • Pivoted upon realizing their project was unsafe
    • Dangerous to have people grab things from rotating blades and hang too much weight from a drone
  • Faced issue with the drone not correctly reporting the angle it was at
  • Abandoned ROS for ROS-like framework in Python
  • Need relatively fast decisions in order to avoid obstacles
  • Pivoted to making an assisted flight room mapper that could somewhat avoid obstacles
    • New problem: Moving objects and moving the drone broke the map that was created by their mapping software (HECTOR SLAM)

If we go with this project:

  • Goal: Create 3d mapping tool from drone images. Allow person to move drone around to image and scan an area. We could sell this as a disaster relief drone that could identify missing people and help map out an area that rescuers would need to go into for safety
  • Signals: Facial recognition, 2d → 3d conversion algorithm
  • Software: Visualizing the 3d map. Maybe allow people to interact with map, maybe not. Also navigation for drone (flight mapping?) and handling communication between drone and remote
  • Hardware: Maybe wiring up an arduino/camera
  • Potential challenges: See the group above. Seriously. They had like a billion
  • There is already a whole bunch of software out there that tries to do this. I found several companies that are selling drones specifically for this purpose and I found a subreddit dedicated to the idea


Identification through Voice Authentication (for smart home?)

18551- Spoken Language Identifier: 

https://drive.google.com/open?id=1MyV1K0w8DoISeQXnmvOtuOsV3R_6l7wS

18500- YOLO:

http://course.ece.cmu.edu/~ece500/projects/s19-teamda/

  • Goal: One shot speech identification and speaker recognition via web app
    • Simple voice recognition with extra complexities
    • Complexity comes from speech id and speech recognition
      • This should be in real time – a recording of an authorized voice should not be able to pass
      • Should work within 10 seconds of recording
    • Web app and signal processing with ML
  • The group did not get around to implementing this specifically with smart home/iot devices, but it did work with a “logon service”
  • Goal: Create device/detector that matches voices to unlocking the ability to control a smart home/hub device
    • Simple voice recognition
    • Complexity comes from speech id and speech recognition
      • This should be in real time – a recording of an authorized voice should not be able to pass

If we do this project:

  • Signals: speech ID and recognition with ML
  • Software: Authenticator service that if voice is authorized then commands are issued
  • Hardware/EE: Maybe Arduino & Raspberry Pi or some sort of controller?
  • Probably need to buy a smart home device or hub to use for testing


Run With It

18551- DJ Run: 

https://drive.google.com/open?id=1j7x7gFJguh4NDrayci-jIjX37-cb6uup

https://drive.google.com/open?id=1ACXJrRQjt2ibIdfNTTfwn2BPBIxn6bMA

18500- Song Hunter:

https://drive.google.com/open?id=1GKYRdOH90qN87q7cdNlU2zZHJN3Mt5LI

https://drive.google.com/open?id=11myryvI3wP7eiJqsaEPzYekTQjYbV7Gb

** They manually assigned BPM labels to each song rather than detecting them (why?); this was a change from proposal to final project

  • mobile app (android or smart watch)
  • Detect runner’s pace (retrieve from phone’s accelerometer)
  • Choose songs based on pace (database with song tempo range stored) OR (ML algo that can detect pace of song)
    • Detect song pace – media lab MIT algorithm OR statistical beat detection (less computation, less accuracy)
  • Use existing songs & timewarp (signal processing up/downsampling)
    • Phase vocoder changes speed without changing pitch. Uses STFT, modulation in frequency domain, then inv. STFT  – both groups used this
    • Wavelet Transform is better bc advanced & has logarithmic precision  ** we will do this
    • TDHS is also an option. Song Hunter decided not to use this bc it’s not suited for polyphonic material (what does that mean?)
  • Use existing songs & make an automated remix (add in beats/impulses at the desired pace, of the same intensity/pitch as the song playing)
  • Music player with special features? OR fitness tracker with special features?
  • Quality of music subjective measure

Apps like this exist: https://www.iphoneness.com/iphone-apps/music-pace-running-app/

  • These apps match songs, chosen from their own library or from your existing playlist, to your pace, automatically or manually
  • None of these apps alter songs themselves to match pace


Walkie Talkies for Location Tracking

18551- Marco:

https://drive.google.com/open?id=14VWhMspw-yBSIJAEpdgZAIuEKngLfx5-


  • New device or using smartphone or smartphone extension device
  • Bluetooth or other way of keeping track of the 2 devices 
  • Notification when going out of range
  • Use groundwaves to avoid obstructions – this just means certain frequency range
  • Or use satellite/cell tower – but how would that exist in a ‘mountain’
  • Mesh networking: smartphone that is using the app is the router creating the network.

Apps that exist: https://www.technologyreview.com/s/533081/communication-app-works-without-a-cellular-network/ (additional source: https://www.geckoandfly.com/22562/chat-without-internet-connection-mesh-network/)

  • They use mesh networks
  • No internet connection, just cell to cell with bluetooth 
  • Text and call capabilities
  • Radius capabilities
    • 200 feet radius distance for this to work
    • 330 feet for person to person, greater than 330 feet requires a middleman within 330 feet radius.
  • Considered important for disaster situations
  • Briar is open-source

** this would also be a secure way of communicating. If messages are stored on the phone OR messages are encoded/decoded.

→ this tells us that communication is possible despite whatever obstructions.

→ 330 feet ≈ 0.06 mile = that’s about a 1-minute walking distance… feasibility?

→ but we also don’t need to detect the data from the signal, just the strength of the signal

“Using bluetooth for localization is a very well known research field (ref.). The short answer is: you can’t. Signal strength isn’t a good indicator of distance between two connected bluetooth devices, because it is too much subject to environment condition (is there a person between the devices? How is the owner holding his/her device? Is there a wall? Are there any RF reflecting surfaces?). Using bluetooth you can at best obtain a distance resolution of few meters, but you can’t calculate the direction, not even roughly.

You may obtain better results by using multiple bluetooth devices and triangulating the various signal strength, but even in this case it’s hard to be more accurate than few meters in your estimates.” – https://stackoverflow.com/questions/3624945/how-to-measure-distance-between-two-iphone-devices-using-bluetooth (has more sources)
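For reference, the usual rough model behind “a distance resolution of few meters at best” is log-distance path loss; a sketch of converting RSSI to a distance estimate (the 1 m reference power and path-loss exponent are environment-dependent guesses):

```python
# Log-distance path-loss sketch: RSSI (dBm) -> rough distance estimate (m).
def rssi_to_distance(rssi_dbm, measured_power_dbm=-59.0, path_loss_n=2.5):
    # measured_power_dbm: assumed RSSI at 1 m; path_loss_n: assumed environment exponent
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10.0 * path_loss_n))

print(round(rssi_to_distance(-75.0), 1))   # ~4.4 m under these assumed constants
```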

→ have a home base device at campsite or “meeting point”

→ use sound

→ metal devices are biggest obstruction. In mountain we would be ok. What are other use cases? Do we care if this is only targeted towards hikers/campers?


NOT SUPER CONFIDENT ABOUT THIS. we could probs make something work. But requires LOTS of scholarly research before we have any idea of what the design implementation will look like @ high level.