Author: jhsu2

Jason’s Status Report 8

This week, I finished calibrating all of our motion-controlled effects. I then worked with Jeffrey to integrate our frontend with real-time effect updates driven by gyroscope values. Next week, I will work with Jeffrey to further refine our frontend, and with Michael to implement instrument selection using controller values.
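A rough sketch of the kind of gyroscope-to-effect mapping involved, assuming a panner already wired into the effects chain (the event name, message fields, and units here are illustrative, not our exact code):

const panner = new Tone.Panner3D(0, 0, 0);  // effect node shared with the synth chain

socket.on('gyro', ({ yaw, pitch }) => {     // assumed fields, in radians
  // Project the controller's yaw onto a unit circle around the listener.
  panner.positionX.value = Math.sin(yaw);
  panner.positionZ.value = -Math.cos(yaw);
  panner.positionY.value = pitch / (Math.PI / 2);  // tilt raises or lowers the source
});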

Jason’s Status Report 7

This week, I focused on improving our existing Tone.js functionality. I successfully incorporated samples into playback, so Caprice can now produce a much greater variety of sounds (piano, strings, etc., as long as a sample pack is provided). We also began to brainstorm ways to control certain effects with the motion controller; a very promising one so far is the 3D panner, which simulates sound output in 3D space (i.e., the source can move left/right, move closer/farther away, change orientation, etc.). This effect is very interactive and couples well with the gyroscopic data we get from the VR controller.
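A minimal sketch of the sample-based setup, using the recent Tone.js Sampler API (the note map and file paths are placeholders for an actual sample pack):

const panner = new Tone.Panner3D(0, 0, 0);
const piano = new Tone.Sampler({
  urls: { C4: 'C4.mp3', E4: 'E4.mp3', G4: 'G4.mp3' },  // placeholder sample pack
  baseUrl: '/static/samples/piano/',
});
piano.chain(panner, Tone.Destination);

piano.triggerAttackRelease('D4', '8n');  // D4 is pitch-shifted from the nearest sample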

Jason’s Status Report 6

This week, I worked with Jeffrey to integrate the smartphone app with the Flask backend. After integrating the submodules, we can now play sounds on the JavaScript frontend, with notes controlled by the smartphone. However, the latency was initially high and jittery, averaging around 150 ms but varying widely. We discovered that the latency originates in the socket connection between the smartphone and the Flask server, and we tried several different socketing approaches to resolve the issue.

The best solution we’ve found so far is an interval-based socketing mechanism, sending 30 updates/sec (roughly 33 ms between updates). This reduced the latency significantly, but jitter is still an issue, and we are still unsure where the root of the problem lies (it might be network load from other applications and people in my apartment).
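The idea, sketched below assuming a socket.io client (the callback and event names are illustrative): instead of emitting on every sensor event, we keep only the latest reading and send it on a fixed 30 Hz timer.

let latest = null;
controller.onSensorData((reading) => { latest = reading; });  // hypothetical callback

setInterval(() => {
  if (latest !== null) {
    socket.emit('controller_update', latest);  // event name is illustrative
    latest = null;  // don't re-send a stale reading
  }
}, 1000 / 30);  // 30 updates/sec, ~33 ms apart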

Besides that, I added functionality in Tone.js to activate and deactivate effects and filters. Next week, we will work on integrating octave shifts and pitch shifts using the touchpad.
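One way to toggle an effect without rebuilding the chain is through the wet signal that Tone.js effects expose, as in this sketch:

const reverb = new Tone.Freeverb();  // any Tone.js effect exposes a `wet` signal

function setEffectActive(effect, active) {
  effect.wet.value = active ? 1 : 0;  // 0 = fully dry (bypassed), 1 = fully wet
}

setEffectActive(reverb, true);   // activate
setEffectActive(reverb, false);  // deactivate, chain stays intact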

Jason’s Status Report 5

This week, I worked on integrating the JavaScript Tone.js submodule with the Flask server. Data is transmitted by sending a JSON string through a socket; the JSON contains note information, note-separation triggers, effect toggle controls and parameters, and touchpad swipe data.

The Python backend can now communicate with Tone.js to play notes, set up effects, and configure effect settings in real time.
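A sketch of the client-side handler for these messages (the field names, event name, and handleSwipe helper are illustrative; the real schema lives in our Flask/Tone.js interface):

socket.on('update', (msg) => {
  if (msg.note) synth.triggerAttack(msg.note);  // e.g. 'C4'
  if (msg.separate) synth.triggerRelease();     // note separation: re-articulate
  if (msg.effects) {
    for (const [name, params] of Object.entries(msg.effects)) {
      effectNodes[name].set(params);            // e.g. { wet: 0.5 }
    }
  }
  if (msg.swipe) handleSwipe(msg.swipe);        // touchpad swipe data
});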

Next week, I will work with Jeffrey to integrate the smartphone app with Flask, hopefully with time to conduct our first end-to-end latency tests.

Jason’s Status Report 4

This past week, I continued working on the Tone.js audio module and developed an interface between Tone.js and Flask, allowing the team to develop in parallel. Audio functionality now includes polyphony and configurable effects chains, giving the user control over both the sequence of effects and the parameters for each effect. The switch to online classes did not affect my roadmap for this project in any significant way.
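A minimal sketch of the two pieces, assuming the recent Tone.js API (the particular effects here are placeholders; the real chain comes from the Flask side):

const synth = new Tone.PolySynth(Tone.Synth);  // polyphonic voice allocation

function buildChain(effects) {
  synth.disconnect();                         // tear down the old routing
  synth.chain(...effects, Tone.Destination);  // user-specified effect order
}

buildChain([new Tone.FeedbackDelay('8n', 0.3), new Tone.Freeverb()]);
synth.triggerAttackRelease(['C4', 'E4', 'G4'], '4n');  // a chord, polyphonically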

Jason’s Status Report 2

This week, I worked on playing sound through Python and implementing a wavetable. This proved to be a tedious and unreliable way to generate sound: smooth transitions between notes are not guaranteed, and it introduced noticeable latency compared to last week’s GarageBand output. We decided to pivot away from generating our own audio, as it introduces too much complexity into our project. We have started looking at alternatives for audio generation and hope to settle on a good one next week. Using an established audio library frees up a lot of development time for the features we think matter more to this project (user experience, polyphony, motion classification).
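For context, the core wavetable idea we prototyped, shown here in JavaScript for consistency with our later code (our prototype was in Python): one cycle of a waveform is precomputed, then read back with a per-sample step proportional to the target frequency.

const TABLE_SIZE = 2048;
const table = new Float32Array(TABLE_SIZE);
for (let i = 0; i < TABLE_SIZE; i++) {
  table[i] = Math.sin((2 * Math.PI * i) / TABLE_SIZE);  // one cycle of a sine
}

// Jumps in `phase` when switching notes are what caused the audible
// discontinuities mentioned above.
function sampleAt(phase) {
  return table[Math.floor(phase) % TABLE_SIZE];
}
function phaseStep(freq, sampleRate = 44100) {
  return (freq * TABLE_SIZE) / sampleRate;  // table indices to advance per sample
}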

Jason’s Status Report 1

This week, I spent time understanding MIDI: its message types, its generation, and its parsing. I wrote some sample code in Python using the mido library to generate MIDI messages, mapping some of my keyboard keys to different notes. I then piped the MIDI output to GarageBand to observe the latency and smoothness of MIDI generation, and was pleasantly surprised to find the latency basically unnoticeable. Next week, I will work on implementing our own audio generator, possibly with a wavetable, and try to connect it to the MIDI generation.
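The message layout itself is language-agnostic; sketched in JavaScript here (the actual exploration used Python’s mido), a note-on is three bytes: status (0x90 | channel), note number, velocity.

function noteOn(channel, note, velocity) {
  return Uint8Array.of(0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f);
}
function noteOff(channel, note) {
  return Uint8Array.of(0x80 | (channel & 0x0f), note & 0x7f, 0);
}

noteOn(0, 60, 100);  // middle C (note 60) on channel 1, velocity 100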

Jason’s Status Report 3

This week, I spent most of my time generating live audio using Tone.js and getting familiar with the package’s functionality. It has all the features we need, including pitch bends and effects chains. I also spent a lot of time cleaning up the project codebase and dependencies, which had become cluttered during the past few frantic weeks of prototyping several different features in different languages. After cleaning up the code, we created a new GitHub repo to isolate the environments and packages we decided to stick with.
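As a small example of the pitch-bend support, every Tone.js instrument exposes a detune signal (in cents) that can be ramped smoothly; a sketch:

const synth = new Tone.Synth().toDestination();
synth.triggerAttack('C4');
synth.detune.rampTo(200, 0.5);        // bend up a whole tone (200 cents) over 0.5 s
synth.detune.rampTo(0, 0.5, '+0.5');  // then glide back, starting 0.5 s later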

Team Status Report 3

This week, we began writing code that outputs sound in real time, using MIDI signals and the Tone.js library for audio generation. In parallel, we began work on motion classification with the GearVR controller.

Before we started work on motion classification, we extended the functionality of the GearVR analysis tool we built last week to include data from all of the sensors (gyroscope, touchpad, and touchpad clicks). This helped us design some classification methods and calibrate our sensors.
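To illustrate the kind of classification the tool helps us calibrate, here is a hypothetical sketch that labels a touchpad gesture from its endpoints (the threshold and field names are placeholders, not our tuned values):

function classifySwipe(samples) {
  const dx = samples[samples.length - 1].x - samples[0].x;
  const dy = samples[samples.length - 1].y - samples[0].y;
  if (Math.hypot(dx, dy) < 50) return 'tap';  // too short to count as a swipe
  return Math.abs(dx) > Math.abs(dy)
    ? (dx > 0 ? 'swipe_right' : 'swipe_left')
    : (dy > 0 ? 'swipe_down' : 'swipe_up');
}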