Status Report #2: 10/5 (Spencer)

  • Because I am presenting at this week's design review, I focused on the design presentation slides and on preparing the presentation. 
  • I spent my time thinking about the overall narrative for the design presentation as well as making block diagrams for various components.
  • I also did a preliminary investigation into which NLP system and potential filter designs we would want to use, as well as some optimizations to try.

Status Report #1: 9/28 (Group Report)

  • Talked to Prof. Vyas about how to reframe the problem, since the problem space we specified is much larger than we can handle in a semester. He was concerned that even the goals we had already set were quite difficult to achieve and suggested we reframe the problem as attacking either the wake words or select query phrases.
  • How do we reduce latency between when a voice is detected and audio playback? It seems to take slightly longer than the 8 ms observed in our timing code. We are looking into how latency and buffer sizes affect it: http://digitalsoundandmusic.com/5-2-3-latency-and-buffers/. We tried lowering the sample rate to fill the audio buffer more quickly, but it did not seem to make a difference (see the buffer-size sketch at the end of this report).
  • Risk management: Professor Vyas suggested we consider jamming one or two specific commands instead of the wake word as a backup. This might be a good alternative if the latency is too high for the current version of the problem we are targeting, because we would not need to generate the jamming input until the user speaks after saying the wake word, which gives us more time.
  • Updated schedule: breaking the project into three phases to reflect the reframed scope.
    • First phase: Determining jamming inputs (research phase)
      • Defining sample voice inputs and generating voice recordings 
      • Reducing latency after detection of audio 
      • Setting up various black box systems 
      • Testing sample inputs on Siri/Google Home/Alexa
    • Second phase: Wake word detection
      • Building model for wake word detection 
      • Training model to recognize wake word 
      • Generating noise after wake word detected 
      • Detecting when user has stopped speaking
    • Third phase: Timing optimization / generalization of attack
      • Setting up timing infrastructure for testing attack 
      • Investigating a model to predict the time delay between the wake phrase and the query 
      • Building model for wake phrase length prediction 
      • Training/Testing model for wake phrase length prediction 
      • Integration 
      • Performance Tuning
      • Obfuscation from User
  • Next week: we need to find better metrics for how often our voice-activated systems correctly interpret queries without attempted interference.
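
For reference, below is a minimal sketch of the kind of low-latency audio I/O setup we have been looking at for the buffer/latency question above. It assumes PyAudio in callback mode; the sample rate, buffer size, and echo-back callback are illustrative placeholders, not our actual configuration.

    import time
    import pyaudio

    # Illustrative parameters; our real capture settings may differ.
    RATE = 16000             # samples per second
    FRAMES_PER_BUFFER = 128  # smaller buffers mean lower per-buffer latency

    # Per-buffer delay is roughly FRAMES_PER_BUFFER / RATE seconds
    # (128 / 16000 = 8 ms here). With a fixed buffer size, lowering RATE
    # alone makes each buffer take *longer* to fill, which may be why the
    # sample-rate change made no visible difference.
    print("approx per-buffer delay: %.1f ms" % (1000.0 * FRAMES_PER_BUFFER / RATE))

    def callback(in_data, frame_count, time_info, status):
        # Echo the captured buffer straight back out; a real version would
        # swap in the jamming waveform once the wake word is detected.
        return (in_data, pyaudio.paContinue)

    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                    input=True, output=True,
                    frames_per_buffer=FRAMES_PER_BUFFER,
                    stream_callback=callback)

    stream.start_stream()
    while stream.is_active():   # runs until interrupted
        time.sleep(0.1)
    stream.stop_stream()
    stream.close()
    p.terminate()

Callback mode lets PortAudio hand us each buffer as soon as it fills, instead of us polling with blocking reads, so the buffer size (rather than the sample rate alone) is probably the more direct lever on latency.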

Status Report #1: 9/28 (Spencer)

  • Carried out experiments on how to “jam” the wake word on Siri, since we did not have the Google Home/Alexa devices yet. Tests with live human voices were successful; however, playing a recording of the jamming voice in a loop only succeeded about 50% of the time. (Done with Cyrus)
  • Spoke to Prof. Stern about challenges associated with black box systems / discussed current research in speech processing. (Done with Eugene)
  • Latency testing for the program Eugene wrote using Python and PyAudio gave good results: it is very fast to detect input and play back a predefined output, even without a neural net in the middle. This establishes that what we are doing is possible; a rough timing sketch is included at the end of this report. (Done with Cyrus)
  • Next week: 
    • Conduct experiments with the Alexa (just arrived) and the Google Home (if it arrives soon). Observe whether there are differences in how they are activated.
    • Discuss with the professors how to refine our solution and handle open problems.
    • Work on design presentation, since I am presenting it.
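
Below is a rough sketch of the kind of timing harness used for the PyAudio latency test mentioned above. The RMS threshold, chunk size, and the canned 440 Hz tone standing in for the jamming audio are assumptions for illustration; the real script may differ.

    import time
    import numpy as np
    import pyaudio

    RATE = 16000
    CHUNK = 256
    THRESHOLD = 500           # RMS level treated as "voice detected" (hypothetical)

    p = pyaudio.PyAudio()
    in_stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                       input=True, frames_per_buffer=CHUNK)
    out_stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                        output=True, frames_per_buffer=CHUNK)

    # Predefined output: a short 440 Hz tone standing in for the jamming audio.
    t = np.arange(int(0.25 * RATE)) / RATE
    tone = (0.3 * 32767 * np.sin(2 * np.pi * 440.0 * t)).astype(np.int16)

    while True:
        data = in_stream.read(CHUNK, exception_on_overflow=False)
        read_done = time.perf_counter()
        samples = np.frombuffer(data, dtype=np.int16)
        rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
        if rms > THRESHOLD:
            # Time from having the audio in hand to handing the canned output
            # to the output stream; buffer fill and device latency come on top.
            handoff_ms = 1000.0 * (time.perf_counter() - read_done)
            out_stream.write(tone.tobytes())
            print("detect -> playback handoff: %.2f ms" % handoff_ms)
            print("input buffer fill: %.1f ms, reported output latency: %.1f ms"
                  % (1000.0 * CHUNK / RATE, 1000.0 * out_stream.get_output_latency()))
            break

    in_stream.close()
    out_stream.close()
    p.terminate()

The measured handoff time only covers detection and queuing the output; the input buffer fill time (CHUNK / RATE) and the device's reported output latency add to the true end-to-end delay, which is likely why playback takes slightly longer than the number our timing code reports.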