Since last week, we have gained familiarity with audio processing on the web, and we know how we’re going to implement all of the in-browser operations. The biggest remaining challenge is streaming the audio data to and from the server. The most significant risk is that we will not be able to do this in real-time, in which case the recording musicians will not be able to hear each other. As we said last week, real-time monitoring is something we would love to have, but if we can’t figure it out, there are fallback options (e.g. uploading the audio after the whole track is recorded, or asynchronously but in larger chunks than would be workable for real-time). We need to decide on this early, as it will affect many other facets of our project going forward.
We haven’t made any big changes to our implementation plan, but we can now be more specific about how things are going to get done. For example, the click track can be created using an audio buffer, similar to the white noise synthesizer in Jackson’s status report. This way, the clicks can be played to musicians the entire time they are recording. Better yet, no server storage is needed: a small buffer containing a single click can be looped at a given tempo (e.g. once every quarter note).
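To make this concrete, here is a minimal sketch of that idea using the Web Audio API. It assumes a standard AudioContext; the function name makeClickTrack, the noise-burst click shape, and the default tempo are our own placeholders rather than settled design decisions.

```typescript
// Sketch: a looped one-beat buffer as a click track (no server storage needed).
function makeClickTrack(ctx: AudioContext, bpm = 120): AudioBufferSourceNode {
  const secondsPerBeat = 60 / bpm;
  // The buffer holds exactly one beat of audio, so looping it clicks on every beat.
  const buffer = ctx.createBuffer(
    1,
    Math.round(ctx.sampleRate * secondsPerBeat),
    ctx.sampleRate
  );
  const data = buffer.getChannelData(0);
  // Fill the first ~5 ms with a decaying noise burst (like the white noise
  // synthesizer from Jackson's report); the rest of the beat stays silent.
  const clickSamples = Math.round(ctx.sampleRate * 0.005);
  for (let i = 0; i < clickSamples; i++) {
    data[i] = (Math.random() * 2 - 1) * (1 - i / clickSamples);
  }
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true; // repeats once per quarter note at the given tempo
  source.connect(ctx.destination);
  return source;
}

// Usage: start the click when recording begins, stop it when recording ends.
// const ctx = new AudioContext();
// const click = makeClickTrack(ctx, 100);
// click.start();
```

Because the buffer holds exactly one beat, looping it produces a steady click at the chosen tempo with no scheduling logic and nothing stored on the server.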
Since there aren’t any big changes to report, our initial schedule is still valid. That said, it is possible we will get the in-browser recording and click track working much sooner than anticipated, in which case we’ll have far more time for the challenges mentioned above (uploading and downloading audio data chunks in real-time). Research into websockets is still needed; they remain a critical part of our desired implementation.
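As a starting point for that research, the sketch below shows one way chunked uploads might look in the browser: MediaRecorder hands us encoded audio in small timeslices, and each chunk is pushed over a WebSocket as it arrives. The endpoint URL, the 100 ms chunk size, and the function name are hypothetical, and whether this is fast enough for musicians to hear each other is exactly the open question.

```typescript
// Sketch: stream microphone audio to the server in small websocket chunks.
async function streamMicToServer(url: string): Promise<MediaRecorder> {
  const socket = new WebSocket(url);
  socket.binaryType = "arraybuffer";

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);

  recorder.ondataavailable = async (event) => {
    // Drop chunks until the socket is open; a real implementation would
    // probably queue them instead.
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(await event.data.arrayBuffer()); // one encoded audio chunk
    }
  };

  // A 100 ms timeslice keeps chunks small enough to test near-real-time delivery.
  recorder.start(100);
  return recorder;
}

// Usage (endpoint is a placeholder):
// const recorder = await streamMicToServer("wss://example.test/audio");
// ... later: recorder.stop();
```

Measuring the end-to-end latency of something like this will tell us whether real-time monitoring is feasible or whether we fall back to the larger asynchronous chunks described above.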