Team Status Report 2/18

Summary 

Overall, we made significant progress in designing our subsystems. We made a few design decisions that simplified our process and figured out what we can feasibly implement with the existing APIs. We also spent significant time at the Media Lab, which let us work hands-on with the lights and control them effectively. Working on the design presentation together gave us much more clarity on the inputs and outputs of each subsystem and on how communication flows between them.

Risks 

We were able to mitigate the risks from the previous week, i.e., we can now control the lights directly through a Python API rather than relying on QLC+ for light communication. However, we are still somewhat worried about the latency of the program, since it runs in real time and we want to hit our target metric of <100 ms of delay between the audio input and the light being triggered. Other latency risks arise in the signal processing engine: we are still unsure which audio coefficients we will extract, and how we will use the current state together with the Spotify parameters of the song to eliminate possibilities. This could be computationally intensive, since we first need to generate all possible combinations, check which of them fall in the set of combinations allowed for the genre, remove the rest, and then randomize among the remaining options (see the sketch below).
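The following is a minimal sketch of that generate-filter-randomize idea. All names here (COLORS, PATTERNS, ALLOWED_BY_GENRE, pick_lighting_option) are hypothetical placeholders for illustration, not part of our codebase yet:

```python
import itertools
import random

# Hypothetical option space for a single lighting decision.
COLORS = ["red", "green", "blue", "amber"]
PATTERNS = ["strobe", "fade", "chase", "hold"]

# Per-genre whitelist of (color, pattern) pairs, e.g. chosen from Spotify metadata.
ALLOWED_BY_GENRE = {
    "edm": {(c, p) for c, p in itertools.product(COLORS, ["strobe", "chase"])},
    "jazz": {(c, p) for c, p in itertools.product(["amber", "blue"], ["fade", "hold"])},
}

def pick_lighting_option(genre: str) -> tuple[str, str]:
    """Generate every combination, keep only those allowed for the genre,
    then pick one at random."""
    all_combos = set(itertools.product(COLORS, PATTERNS))
    allowed = all_combos & ALLOWED_BY_GENRE.get(genre, all_combos)
    return random.choice(sorted(allowed))
```

Even done naively like this, the filtering is cheap for a small option space; the real cost will come from how many audio coefficients we feed into the decision per frame.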

Changes 

  • We successfully ran our DmxPy controller, which lets us connect to the lights through a Python interface; this is also very useful for reducing the latency of our program. We experimented with other APIs, such as OpenDMX and UDMX controllers, but ran into compilation issues with them. We spent a lot of time trying to debug those issues, but switching APIs and starting afresh was the better move. 
  • While fleshing out details for the design presentation, we were able to pin down the parameters passed between subsystems and what each subsystem would include. 
    • For instance, we want a larger show class with smaller systems that feed into and out of it. We have a light set class that is the hardware-software interface for controlling the lights. The light set class feeds into the expressive lighting engine, which consists of the lighting logic and the execution queue (which maintains the order of requests). A rough sketch of this structure, using the DmxPy interface, follows this list.
    • Another system that feeds into this is the Genre Detection Classifier. This was initially a big chunk of the work for this project, but we scaled it down for two reasons:
      • The genre detection in general had very poor accuracy
      • We decided to use the Shazam API to recognize the song, then look the song up on Spotify and extract its features from there instead (sketched after this list). For microphone input, this design might vary a little depending on Shazam's song recognition accuracy, and on whether Spotify has the song at all. 
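The sketch below shows roughly how the light set class and execution queue could sit on top of DmxPy. It assumes DmxPy's setChannel()/render() interface as shown in its README; the class name, serial port path, and channel numbers are placeholders, not settled decisions:

```python
from collections import deque
from DmxPy import DmxPy

class LightSet:
    """Thin wrapper around the DMX controller: the hardware-software interface."""
    def __init__(self, port="/dev/ttyUSB0"):
        self.dmx = DmxPy(port)    # assumed DmxPy constructor takes a serial port
        self.queue = deque()      # execution queue: pending lighting requests, in order

    def enqueue(self, channel_values):
        """channel_values: dict of DMX channel -> intensity (0-255)."""
        self.queue.append(channel_values)

    def step(self):
        """Pop the oldest request and push it to the lights."""
        if not self.queue:
            return
        for channel, value in self.queue.popleft().items():
            self.dmx.setChannel(channel, value)
        self.dmx.render()

# Example: queue a solid red frame (channel mapping is fixture-specific).
lights = LightSet()
lights.enqueue({1: 255, 2: 0, 3: 0})
lights.step()
```

In the real expressive lighting engine, the lighting logic would be the only producer pushing into this queue, which keeps the DMX writes serialized and easy to time.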
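For the recognize-then-fetch path, here is a sketch of how the Spotify side might look. The Spotify calls use spotipy's search and audio_features endpoints (credentials read from environment variables), while recognize_song() is a hypothetical stand-in for whatever Shazam client we end up using:

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

def recognize_song(audio_path: str) -> str:
    """Placeholder for Shazam recognition; returns 'track artist' query text."""
    raise NotImplementedError

def get_track_features(audio_path: str) -> dict:
    # SpotifyClientCredentials() reads SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET.
    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
    query = recognize_song(audio_path)
    hit = sp.search(q=query, type="track", limit=1)["tracks"]["items"][0]
    # Tempo, energy, valence, etc. would feed the expressive lighting engine.
    return sp.audio_features([hit["id"]])[0]
```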

Updated Schedule 

Our schedule has been updated to reflect these changes.

Media

https://drive.google.com/file/d/1mZLzeK7y6zr4Z3o2eDmXsnbw8cZjb6rl/view?usp=share_link 

Simple fade from red to green
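A fade like the one in the clip could be driven with a loop along these lines, again assuming DmxPy's setChannel()/render() interface; channels 1 and 2 stand in for the fixture's red and green channels:

```python
import time
from DmxPy import DmxPy

dmx = DmxPy("/dev/ttyUSB0")
for step in range(256):
    dmx.setChannel(1, 255 - step)   # red fades out
    dmx.setChannel(2, step)         # green fades in
    dmx.render()
    time.sleep(0.01)                # roughly a 2.5 s fade
```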
