Summary
Overall, the team put in extra hours this week to prepare for the interim demo. We made significant progress on the signal processing, lighting logic, and lighting execution subsystems, and we got a working demo. Now that we have a basic sense of how everything will fit together, we plan to dive deeper into integration. We are also beginning testing to make sure that all of our subsystems are robust, which involves creating test files and potentially recruiting volunteers to test our system.
Risks
One risk we ran into is that it may be difficult to do all of the signal processing in real time while still getting a good sense of how the song is changing and how it will change next. To mitigate this risk, we deprioritized real-time signal processing and are focusing on generating quality light shows for pre-recorded audio files. We will still attempt real-time processing after integration.
Changes
The main change is that we are going to focus on pre-processing the audio and producing light shows ahead of time, then playing the sound along with the generated light show. This works well for cases like a dance performance: the dancers would upload an audio file for their routine, the system would generate a light show, and the light show would play back as they performed (a rough sketch of this pipeline follows below).
The reason for this change is that it may be difficult to assess the song's dynamics accurately in real time, and we want to have a working product by the end of the semester. If integration goes well ahead of schedule, we can add the complexity of real-time signal processing on top.
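To make the offline approach concrete, here is a minimal sketch of the pipeline, assuming a Python prototype built on librosa. The cue format, the generate_cues/play_show names, and the send_to_lights stub are illustrative placeholders for our lighting logic and lighting execution subsystems, not our final implementation.

```python
# Minimal sketch: pre-process a recorded file into timed light cues,
# then fire the cues while the audio plays. Assumes librosa is installed.
import time

import librosa
import numpy as np

def generate_cues(audio_path):
    """Pre-process an audio file into a list of (time_sec, brightness) cues."""
    y, sr = librosa.load(audio_path)
    # Beat times form the skeleton of the light show.
    _tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    # RMS energy drives brightness, so louder passages read brighter.
    rms = librosa.feature.rms(y=y)[0]
    rms_times = librosa.times_like(rms, sr=sr)
    cues = []
    for t in beat_times:
        idx = int(np.argmin(np.abs(rms_times - t)))  # nearest energy sample
        cues.append((float(t), float(rms[idx] / rms.max())))
    return cues

def send_to_lights(brightness):
    """Placeholder for the lighting execution subsystem."""
    print(f"lights -> {brightness:.2f}")

def play_show(cues):
    """Fire each cue at its timestamp; audio playback would run alongside."""
    t0 = time.monotonic()
    for cue_time, brightness in cues:
        delay = cue_time - (time.monotonic() - t0)
        if delay > 0:
            time.sleep(delay)
        send_to_lights(brightness)
```

The key design point is that all of the expensive analysis happens in generate_cues before the performance starts, so play_show only has to keep time, which is what makes the dance-performance use case feasible even before real-time processing works.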