Jason’s Status Reports

Jason’s Status Report 8

This week, I finished calibrating all of our motion-controlled effects. Then I worked with Jeffrey to integrate our frontend with real-time effect updates driven by gyroscope values. Next week, I will work further with Jeff to refine our frontend and with Michael to implement instrument selection using controller values.

Jason’s Status Report 7

This week, I focused on improving existing functionality in Tone.js. I successfully incorporated samples into playback, so Caprice can now produce a much greater variety of sounds (piano, strings, etc., as long as a sample pack is provided). We also began to brainstorm ways to control certain effects with the motion controller; the most promising so far is the 3D panner, which simulates sound output in a 3D setting (moving left/right, closer/farther, changing orientation, etc.). This effect is very interactive and couples well with the gyroscopic data we can get from the VR controller.
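
A minimal sketch of how the sampler and 3D panner might be wired together, assuming the Tone.js v14 API; the sample file names, base URL, and the `gyro` field names are placeholders rather than our actual assets:

```js
import * as Tone from "tone";

// Route a sampler through a 3D panner so controller motion can move the
// apparent sound source around the listener.
const panner = new Tone.Panner3D({ positionX: 0, positionY: 0, positionZ: -1 }).toDestination();

const sampler = new Tone.Sampler({
  urls: { C4: "C4.mp3", E4: "E4.mp3", G4: "G4.mp3" }, // placeholder sample pack
  baseUrl: "/samples/piano/",
  onload: () => sampler.triggerAttackRelease("C4", "4n"), // sanity check
}).connect(panner);

// Called whenever new gyroscope data arrives from the VR controller.
// The yaw/pitch mapping and scaling here are arbitrary illustrations.
function onGyroUpdate(gyro) {
  panner.setPosition(Math.sin(gyro.yaw), 0, -Math.cos(gyro.yaw) - gyro.pitch);
}
```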

Jason’s Status Report 6

This week, I worked with Jeffrey to integrate the smartphone app with the Flask backend. After integrating the submodules, we can now play sounds on the JavaScript frontend, with notes controlled by the smartphone. However, the latency was initially high and jittery, averaging around 150 ms but very inconsistent. We discovered that the latency stems from the socket connection between the smartphone and the Flask server, and we tried several different socketing approaches to resolve the issue.

The best solution we’ve found so far is an interval-based socketing mechanism, sending 30 updates per second (roughly 33 ms between updates). This reduced the latency significantly, but jitter is still an issue, and we are still unsure where the root of the problem lies (possibly network load from other applications and people in my apartment).
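
Roughly, the interval-based mechanism looks like the sketch below, assuming a socket.io connection from the smartphone client to the Flask server; the event name, payload shape, and server address are illustrative, not our actual protocol:

```js
// Connect to the Flask server (placeholder address).
const socket = io("http://flask-server.local:5000");

// The most recent controller state, updated by sensor event handlers.
let latestState = { notes: [], gyro: null };

// Instead of emitting on every sensor event (which floods the socket),
// send the latest state at a fixed 30 Hz (~33 ms) interval.
setInterval(() => {
  socket.emit("controller_update", JSON.stringify(latestState));
}, 33);
```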

Besides that, I also added functionality in Tone.js to activate/deactivate effects and filters. Next week, we will work on integrating octave shifts and pitch shifts using the touchpad.
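
For the effects, Tone.js exposes a `wet` signal on each effect node, which gives a simple activate/deactivate mechanism without rebuilding the audio graph; a hedged sketch, assuming a `synth` node defined elsewhere:

```js
const reverb = new Tone.Reverb({ decay: 2 });
const distortion = new Tone.Distortion(0.4);
synth.chain(distortion, reverb, Tone.Destination); // `synth` created elsewhere

// Effects expose a `wet` signal: 0 = bypassed, 1 = fully on.
function setEffectActive(effect, active) {
  effect.wet.value = active ? 1 : 0;
}

setEffectActive(distortion, false); // bypass distortion, keep reverb
```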

Jason’s Status Report 5

This week, I worked on integrating the JavaScript Tone.js submodule with the Flask server. Data is transmitted by sending a JSON string through a socket; the JSON contains note information, when to activate note separation, effects toggle controls/parameters, and touchpad swipe data.

The Python backend can now communicate with Tone.js, play notes, set up effects, and configure effects settings in real time.
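
As an illustration, the client-side handler might look like the following; the field names in the example message are hypothetical rather than our finalized schema, and `synth`/`reverb` are Tone.js nodes created elsewhere:

```js
socket.on("message", (raw) => {
  const msg = JSON.parse(raw);
  // msg might look like:
  // {
  //   "notes": ["C4", "E4"],           // notes to trigger
  //   "noteSeparation": true,           // whether to re-articulate held notes
  //   "effects": { "reverb": { "on": true, "decay": 3 } },
  //   "touchpad": { "swipe": "right" }  // octave/pitch-shift gestures
  // }
  if (msg.notes) synth.triggerAttack(msg.notes);
  if (msg.effects?.reverb) reverb.wet.value = msg.effects.reverb.on ? 1 : 0;
});
```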

Next week, I will work with Jeffrey to integrate the smartphone app with Flask, hopefully with time to conduct our first end-to-end latency tests.

Team Status Report

Updated Gantt Chart

Control Process

Selecting Notes

The desired note can be selected from the smartphone controller by holding a finger down on the note grid. The eight notes on the grid are solfege representations of whichever key and tonality (major/natural minor) is selected from the smartphone menu. For example, selecting the key of C major would produce a grid with the following notes: C, D, E, F, G, A, B, C.
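
A sketch of how such a grid could be derived, using Tone.js’s Frequency utility to transpose the tonic by scale intervals; the function and table names are our own illustration:

```js
// Scale intervals in semitones above the tonic, including the octave.
const SCALE_INTERVALS = {
  major: [0, 2, 4, 5, 7, 9, 11, 12],
  naturalMinor: [0, 2, 3, 5, 7, 8, 10, 12],
};

// Build the eight-note grid for a given tonic and tonality.
function buildNoteGrid(tonic, tonality) {
  return SCALE_INTERVALS[tonality].map((semitones) =>
    Tone.Frequency(tonic).transpose(semitones).toNote()
  );
}

buildNoteGrid("C4", "major"); // ["C4","D4","E4","F4","G4","A4","B4","C5"]
```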

Octave Shift

To shift the range of notes in the smartphone note grid up or down an octave, swipe the thumb right or left on the touchpad of the VR controller (held in the right hand). Swiping right once shifts the grid up one octave; swiping left shifts it down one octave.

Chromatic Shift

To select a note that is not in the selected key and tonality, the user can use the chromatic shift function: hold the right thumb on the top or bottom of the VR controller touchpad (without clicking down). Holding up denotes a half-step shift up, and holding down denotes a half-step shift down. For example, playing an E-flat in the key of C major would involve selecting the “E” note with the left hand and holding the thumb down on the right-hand touchpad. The same note can also be achieved by selecting “D” and holding the thumb up on the touchpad.
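
Both the octave and chromatic shifts reduce to a semitone offset on the grid note, as in this sketch; the function name is our own, and Tone.js spells the result with a sharp, so E-flat appears as D#:

```js
// `octaveShift` is incremented/decremented per touchpad swipe;
// `chromatic` is +1, -1, or 0 while the thumb is held up/down.
function resolvePitch(gridNote, octaveShift, chromatic) {
  const semitones = 12 * octaveShift + chromatic;
  return Tone.Frequency(gridNote).transpose(semitones).toNote();
}

resolvePitch("E4", 0, -1); // "D#4" (E-flat), the example from the text
resolvePitch("D4", 0, +1); // "D#4" again, via the "D" grid note
```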

Triggering a Note

To trigger the start of a selected note, pull and hold down the trigger on the VR controller. The selected note plays for as long as the trigger is held, and any additional notes toggled on the smartphone grid begin sounding as they are selected. If no notes are selected while the trigger is pulled, no sound is output.
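
A sketch of this behavior, assuming `synth` is a Tone.PolySynth and `heldNotes` mirrors the smartphone grid state; the handler names are our own:

```js
let triggerHeld = false;

function onTriggerDown() {
  triggerHeld = true;
  if (heldNotes.length > 0) synth.triggerAttack(heldNotes);
}

// Notes toggled on the grid while the trigger is already held join in.
function onNoteAdded(note) {
  if (triggerHeld) synth.triggerAttack(note);
}

function onTriggerUp() {
  triggerHeld = false;
  synth.triggerRelease(heldNotes); // silence everything started above
}
```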

Note Separation

If the user wishes to divide a selected note into smaller time divisions, there are two options:

  1. Release and re-pull the trigger repeatedly
  2. Use motion to denote time divisions

The system recognizes a change in controller direction as a time division. For example, to subdivide held note(s) into two subdivisions, one would initialize the note with a trigger press and initiate the subdivision with a change in controller motion. The same outcome can also be accomplished by repeatedly pulling the trigger.
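
Detecting the direction change might look like the sketch below, reusing the `synth`, `heldNotes`, and `triggerHeld` names from the trigger sketch above; the choice of gyroscope axis and the release/re-attack re-articulation are assumptions to be tuned during calibration:

```js
let lastDirection = 0;

// Treat a sign flip in one gyroscope axis as a time-division boundary.
function onGyroSample(angularVelocityX) {
  const direction = Math.sign(angularVelocityX);
  if (direction !== 0 && lastDirection !== 0 && direction !== lastDirection && triggerHeld) {
    synth.triggerRelease(heldNotes); // end the current subdivision
    synth.triggerAttack(heldNotes);  // and immediately start the next
  }
  if (direction !== 0) lastDirection = direction;
}
```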


Polyphony is achieved simply by holding down multiple notes on the smartphone grid while the trigger is pressed.
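
On the audio side, Tone.js’s PolySynth makes this straightforward, since its trigger methods accept arrays of notes; a minimal sketch:

```js
// Every held grid note can be passed to a single trigger call.
const synth = new Tone.PolySynth(Tone.Synth).toDestination();
synth.triggerAttackRelease(["C4", "E4", "G4"], "2n"); // a C major triad
```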

Toggling Effects

Four different effects can be toggled by clicking on any of the four cardinal directions on the VR controller touchpad.
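
A sketch of decoding the click into an effect slot, assuming touchpad coordinates normalized to [-1, 1] and four Tone.js effect nodes (`reverb`, `distortion`, `delay`, `chorus`) created elsewhere; the slot assignments are illustrative:

```js
const EFFECT_SLOTS = { up: reverb, down: distortion, left: delay, right: chorus };

function onTouchpadClick(x, y) {
  // Pick whichever cardinal axis dominates the click position.
  const dir = Math.abs(x) > Math.abs(y) ? (x > 0 ? "right" : "left")
                                        : (y > 0 ? "up" : "down");
  const effect = EFFECT_SLOTS[dir];
  effect.wet.value = effect.wet.value > 0 ? 0 : 1; // toggle on/off
}
```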

Updated Risk Management

In response to the current virtual-classroom situation and our progress on the project, our risks have changed somewhat.

For the phone grip, it no longer seems feasible to build it in the manner we had originally intended, with a combination of laser cutters and makerspace materials. Instead, we have decided on a simpler approach for attaching the phone to the user’s left hand: a velcro strap that goes around the user’s hand, with the other end of the velcro attached to the back of the phone.

Another area of risk we found is the limitations of the JavaScript package we are currently using to generate sound. While the library has many features, such as accurate pitch selection and instrument selection, some features we wanted appear to be missing. One of these is the ability to pitch bend. A workaround we have brainstormed is to use a Python library that does support pitch bending. We could run it in parallel on our Flask server alongside the frontend JavaScript to achieve the features we want from both libraries.

Jason’s Status Report 4

This past week, I kept working on the Tone.js audio module and developed an interface between Tone.js and Flask, allowing the team to develop simultaneously. Audio functionality now includes polyphony and the ability to configure effects chains, giving the user control over the sequence of effects as well as the parameters for each effect. The switch to online classes did not affect my roadmap for this project in any significant way.
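
Configurable effects chains map naturally onto Tone.js’s `chain()` helper; a sketch, with an illustrative config format of our own invention:

```js
// Build an ordered effects chain from a config list and wire it in sequence.
function buildChain(synth, chainConfig) {
  const nodes = chainConfig.map(({ type, options }) => new Tone[type](options));
  synth.chain(...nodes, Tone.Destination);
  return nodes; // keep references so parameters can be updated later
}

buildChain(synth, [
  { type: "Distortion", options: 0.3 },
  { type: "Reverb", options: { decay: 4 } },
]);
```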

Jason’s Status Report 2

This week, I worked on playing sound through Python and trying to implement a wavetable. This proved to be a tedious and unreliable method of generating sound: smooth transitions between notes are not guaranteed, and it introduced noticeable latency compared to last week’s GarageBand output. We decided to pivot away from generating our own audio, as it introduces too much complexity into our project. We started looking at alternatives for audio generation, hopefully settling on one next week. By using an established audio library, we can free up a lot of time to develop the features we think are more important to this project (user experience, polyphony, motion classification).

Jason’s Status Report 1

This week, I spent time understanding MIDI: its message types, its generation, and its parsing. I wrote some sample code in Python using the mido library to generate MIDI messages, mapping some of my keyboard keys to different notes. I then piped the MIDI output to GarageBand to observe the latency and smoothness of MIDI generation, and was pleasantly surprised that the latency was basically unnoticeable. Next week, I will work on implementing our own audio generator, possibly with a wavetable, and try to connect it to the MIDI generation.

Jason’s Status Report 3

This week, I spent most of my time generating live audio using Tone.js and getting familiar with the package’s functionality. It has all the features we need, including pitch bends and effects chains. I also spent a lot of time cleaning up the project codebase and dependencies, which had become cluttered during our frantic past weeks of figuring out how to implement several different features in different languages. After cleaning up the code, we created a new GitHub repo to isolate the environments and packages we decided to stick with.