
Jason’s Status Report 6

This week, I worked with Jeffrey to integrate the smartphone app with the Flask backend. After integrating the submodules, we can now play sounds on the JavaScript frontend, with notes controlled by the smartphone. However, latency was initially high and jittery, averaging around 150 ms but very inconsistent. We traced the source of the latency to the socket connection between the smartphone and the Flask server and tried several different socketing methods to resolve the issue.

The best solution we’ve found so far is an interval-based socketing mechanism that sends 30 updates per second, or roughly 33 ms between updates. This reduced the latency significantly, but the jitter is still an issue, and we are still unsure where the root of the problem lies (it might be network load from other applications/people in my apartment).
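
As a rough sketch of that interval-based approach (the server address, event name, and note-state shape here are placeholders rather than our exact code):

```javascript
// Sketch of interval-based socket updates from the phone client.
// The server URL, event name, and note-state shape are placeholders.
import { io } from "socket.io-client";

const socket = io("http://192.168.1.10:5000"); // Flask-SocketIO server (example address)

let latestNotes = [];            // most recent button state from the UI
const UPDATE_RATE_HZ = 30;       // 30 updates/sec => ~33 ms between updates

// Instead of emitting on every touch event, batch the latest state
// and send it on a fixed interval to keep socket traffic predictable.
setInterval(() => {
  socket.emit("note_update", { notes: latestNotes, t: Date.now() });
}, 1000 / UPDATE_RATE_HZ);

// UI handlers only update local state; the interval does the sending.
export function onNoteGridChange(notes) {
  latestNotes = notes;
}
```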

Besides that, I also added functionality in Tone.js to activate/deactivate effects and filters. Next week, we will work on integrating octave shifts and pitch shifts using the touchpad.
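
For the effect/filter toggling, a minimal sketch of the idea in Tone.js follows; the particular synth, effects, and wiring are illustrative, not our exact setup.

```javascript
// Sketch: toggling a Tone.js effect by adjusting its wet signal.
// The synth and effect choices are examples, not our configuration.
import * as Tone from "tone";

const synth = new Tone.PolySynth(Tone.Synth);
const filter = new Tone.Filter(800, "lowpass");
const reverb = new Tone.Reverb({ decay: 2 });

// Route synth -> filter -> reverb -> speakers.
synth.chain(filter, reverb, Tone.Destination);

// "Deactivating" the reverb just sets its wet mix to 0, so the signal
// passes through untouched; setting it back to 1 re-activates it.
function setReverbActive(active) {
  reverb.wet.value = active ? 1 : 0;
}

// Example: a socket message from the Flask server flips the effect.
// socket.on("toggle_effect", (msg) => setReverbActive(msg.on));
```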

Jeffrey’s Status Report 6

This week, I completed the integration of the phone with the current backend workflow for handling sound generation. With 8 buttons, each button sends a command to the Flask server over sockets. Along with sending notes, I also brainstormed ways of allowing the user to change octaves. We determined that the best approach could be to reuse the right-hand swipe motions that are already working, send that data to the Flask server, and then forward it to the phone. The keys would then shift up or down based on the direction of the swipe. For example, if the user wanted to go half an octave up, the bottom 4 notes would shift up an octave while the top 4 would stay the same, acting as a sliding window that moves the notes up and down in half-octave steps (see the sketch below).

We also discovered there was quite a bit of latency when sending sockets from the phone to the laptop. Over many trials we found the average to be around 100 ms, and we are currently working toward solutions to lower this latency. By contrast, the latency between the remote and the Flask server averaged around 40 ms.
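
A rough sketch of the sliding-window idea (the MIDI numbers, the C-major base grid, and the function name are illustrative):

```javascript
// Sketch of the half-octave "sliding window" for the 8 note buttons.
// MIDI numbers and the C-major base grid are illustrative values.
const BASE_GRID = [60, 62, 64, 65, 67, 69, 71, 72]; // C4..C5 in C major

// halfOctaveShift counts right swipes (+1) minus left swipes (-1).
function gridForShift(halfOctaveShift) {
  const fullOctaves = Math.trunc(halfOctaveShift / 2); // whole-octave part
  const halfStep = halfOctaveShift % 2;                // leftover half octave
  return BASE_GRID.map((note, i) => {
    let shifted = note + 12 * fullOctaves;
    // On a half-octave shift up, the bottom 4 buttons jump up an octave
    // while the top 4 stay put (and vice versa going down).
    if (halfStep === 1 && i < 4) shifted += 12;
    if (halfStep === -1 && i >= 4) shifted -= 12;
    return shifted;
  });
}

// gridForShift(0) -> [60, 62, 64, 65, 67, 69, 71, 72]
// gridForShift(1) -> [72, 74, 76, 77, 67, 69, 71, 72]  (bottom 4 up an octave)
```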

Michael’s Status Report 6

This week I continued work on integrating my VR controller code with the sound output code and have been continuing to tune the gesture detection. With regard to integration, I am now able to trigger sounds using just the VR controller; as of now, the sounds are hardcoded and will be more usable once integrated with the smartphone. I found that the latency from controller to audio was not as low as we wanted, but after trials with Jeffrey’s and Jason’s setups, we discovered that the latency was due to my laptop itself. They measured the latency on their systems to be around 40 ms, which was within our requirements.

Jason’s Status Report 5

This week, I worked on integrating the JavaScript Tone.js submodule with the Flask server. Data is transmitted by sending a JSON string through a socket; the JSON contains note information, when to activate note separation, effects toggle controls/parameters, and touchpad swipe data.
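
For illustration, one such message might look roughly like the following; the field names are examples rather than our finalized schema.

```javascript
// Example of the kind of JSON payload sent from Flask to the Tone.js
// frontend over the socket. Field names are illustrative.
const exampleMessage = {
  notes: ["C4", "E4", "G4"],      // notes currently held on the phone grid
  noteSeparation: false,          // whether to re-trigger as a new subdivision
  effects: {
    reverb: { on: true, wet: 0.4 },
    distortion: { on: false }
  },
  swipe: { direction: "right", magnitude: 1 }  // touchpad swipe data
};

// The string actually sent through the socket:
const payload = JSON.stringify(exampleMessage);
```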

The Python backend can now communicate with Tone.js, play notes, set up effects, and configure effects settings in real time.

Next week, I will work with Jeffrey to integrate the smartphone app with Flask, hopefully with time to conduct our first end-to-end latency tests.

Jeffrey’s Status Report 5

For this week, I focused solely on the phone side of our system. We had already shown that it was possible to send signals from the phone to the Flask server through sockets. I built on that by creating a UI for the 8 buttons the left-hand fingers would press. Each of these buttons corresponds to a command from 1-8 that is received by the Flask server. I also implemented a UI for changing the key that the current user is playing in; this state is kept in sync with what the server knows. Along with the UI, I also experimented with the accelerometer on the iPhone using the Expo accelerometer package. The API is easily accessible and can be customized, for example by update speed. I plan to apply the peak detection algorithm Michael worked on to detect bends in the left hand, with the goal of matching these bends to pitch bends like those generated in GarageBand.
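
A small sketch of the Expo accelerometer usage (the update interval and handler are placeholders):

```javascript
// Sketch of reading the iPhone accelerometer with expo-sensors.
// The update interval and what we do with the samples are placeholders.
import { Accelerometer } from "expo-sensors";

Accelerometer.setUpdateInterval(33); // ~30 samples/sec (example rate)

const subscription = Accelerometer.addListener(({ x, y, z }) => {
  // Each sample could be appended to a buffer and fed to the
  // peak-detection algorithm to spot left-hand "bend" gestures.
  handleAccelSample(x, y, z);
});

// Later, when the screen unmounts:
// subscription.remove();

function handleAccelSample(x, y, z) {
  // placeholder: forward to peak detection / send to the Flask server
}
```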

Team Status Report

Updated Gantt Chart

https://docs.google.com/spreadsheets/d/1f5ktulxfieisyMuqV76F8fRJTVERRSck8v7_xr6ND80/edit?usp=sharing

Control Process

Selecting Notes

The desired note can be selected from the smartphone controller by holding a finger down on the note grid. The eight notes represented on the grid are solfege representations of whichever key and tonality (major/natural minor) is selected from the smartphone menu. For example, selecting the key of C Major would produce a grid with the following notes: C, D, E, F, G, A, B, C.
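
As a sketch of how the grid could be derived from the selected key and tonality (the interval tables are standard music theory; the function itself is illustrative):

```javascript
// Sketch: building the 8-note solfege grid from a selected key/tonality.
// Interval patterns are standard; the function and naming are illustrative.
const MAJOR = [0, 2, 4, 5, 7, 9, 11, 12];        // do re mi fa sol la ti do
const NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10, 12];

// rootMidi: MIDI number of "do", e.g. 60 for C4.
function buildNoteGrid(rootMidi, tonality) {
  const intervals = tonality === "major" ? MAJOR : NATURAL_MINOR;
  return intervals.map((semitones) => rootMidi + semitones);
}

// buildNoteGrid(60, "major") -> [60, 62, 64, 65, 67, 69, 71, 72]
//                               (C, D, E, F, G, A, B, C as in the example)
// A chromatic shift from the touchpad would simply add or subtract 1
// from the triggered note before it is played.
```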

Octave Shift

To shift the range of notes in the smartphone note grid up or down an octave, swipe the thumb right or left on the touchpad of the VR controller (in the right hand). Swiping right once would denote a single octave shift up, and swiping left would denote an octave shift down. 

Chromatic Shift

To select a note that may not be in the selected key and tonality, the user can use the chromatic shift function. This is done by holding the right thumb on the top or bottom of the VR controller touchpad (without clicking down). Holding up would denote a half-step shift up, and holding down would denote a half-step shift down. For example, playing an E-flat in the key of C Major would involve selecting the “E” note in the left hand and holding the thumb down on the right-hand touchpad. The same note can also be achieved by selecting “D” and holding the thumb up on the touchpad.

Triggering a Note

To trigger the start of a selected note, pull and hold down the trigger on the VR controller. The selected note will be played for as long as the trigger is held down, and any additional notes toggled on the left hand will be triggered as they are selected. If no notes are selected while the trigger is pulled, no sound will be output.

Note Separation

If the user wishes to divide a selected note into smaller time divisions, there are two options:

  1. Release and toggle the trigger repeatedly
  2. Use motion to denote time divisions

The system recognizes a change in controller direction as a time division. For example, to subdivide held note(s) into two subdivisions, one would initialize the note with a trigger press and initiate the subdivision with a change in controller motion. The same outcome can be accomplished by simply pulling the trigger repeatedly.

Polyphony

Polyphony can be achieved simply by holding down multiple notes on the smartphone grid while the trigger is pressed.

Toggling Effects

Four different effects can be toggled by clicking on any of the four cardinal directions on the VR controller touchpad.

Updated Risk Management

In response to the current virtual classroom situation and our progress on the project, our risks have changed a bit.

For the phone grip, it no longer seems feasible to build it in the manner/design we had originally intended. We had planned to build the grip with a combination of laser cutters and makerspace materials. Instead, we have decided on a simpler approach for attaching the phone to the user’s left hand: a velcro strap that goes around the user’s hand, with the other end of the velcro attached to the back of the phone.

Another area of risk we found is the limitations of the current JavaScript package we are using to generate sound. While the library has many features, such as accurate pitch selection and instrument selection, some features we wanted appear to be missing. One of these is the ability to pitch bend. A workaround we have brainstormed is to use a Python library that does support pitch bending, running in parallel on our Flask server with the front-end JavaScript so we can get the features we want from both libraries.

Michael’s Status Report 5

This week I successfully implemented a working algorithm for peak detection. I first implemented only the dispersion-based algorithm, but it was too sensitive and would detect peaks even when I held the controller still. After tuning the parameters a bit and adding some more logic on top of the algorithm, I was able to detect peaks fairly consistently with realistic controller motion. I still have work to do on further tuning and testing in this regard.
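
For reference, a bare-bones sketch of the dispersion-based (smoothed z-score) idea is shown below, written in JavaScript for readability even though our implementation lives on the Python side; the lag, threshold, and influence values are placeholders, not the tuned parameters.

```javascript
// Sketch of dispersion-based (smoothed z-score) peak detection.
// lag/threshold/influence are placeholder values, not our tuned ones.
function detectPeaks(series, lag = 10, threshold = 3, influence = 0.5) {
  const signals = new Array(series.length).fill(0);
  const filtered = series.slice(0, lag); // smoothed history

  for (let i = lag; i < series.length; i++) {
    const window = filtered.slice(i - lag, i);
    const mean = window.reduce((a, b) => a + b, 0) / lag;
    const std = Math.sqrt(
      window.reduce((a, b) => a + (b - mean) ** 2, 0) / lag
    );

    if (Math.abs(series[i] - mean) > threshold * std) {
      signals[i] = series[i] > mean ? 1 : -1; // peak up or down
      // Damp the influence of the peak on the running statistics.
      filtered.push(influence * series[i] + (1 - influence) * filtered[i - 1]);
    } else {
      signals[i] = 0;
      filtered.push(series[i]);
    }
  }
  return signals;
}
```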

I also started integrating the gesture detection code with Jason’s work with MIDI output. This is in progress as of now.

Jason’s Status Report 4

This last week, I kept working on the Tone.js audio module and developed an interface between Tone.js and Flask, allowing the team to develop simultaneously. Audio functionality now includes polyphony and the ability to configure effects chains, giving the user control over the sequence of effects as well as the parameters for each effect. The switch to online classes did not affect my roadmap for this project in any significant way.
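
A rough sketch of what a configurable effects chain could look like on the Tone.js side (the available effects, default parameters, and config format are illustrative assumptions):

```javascript
// Sketch: building an effects chain from a user-supplied configuration.
// Effect names and parameter values are illustrative examples.
import * as Tone from "tone";

const EFFECT_FACTORIES = {
  distortion: (p) => new Tone.Distortion(p.amount ?? 0.4),
  delay: (p) => new Tone.FeedbackDelay(p.time ?? "8n", p.feedback ?? 0.3),
  reverb: (p) => new Tone.Reverb({ decay: p.decay ?? 2 }),
};

// config: an ordered list such as
// [{ type: "distortion", amount: 0.6 }, { type: "reverb", decay: 3 }]
function buildChain(synth, config) {
  const effects = config.map((c) => EFFECT_FACTORIES[c.type](c));
  synth.chain(...effects, Tone.Destination);
  return effects; // kept around so parameters can be tweaked later
}
```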

Michael’s Status Report 4

In light of the recent events surrounding COVID-19, we have spent this week setting up our project to work from three remote settings. The only setback from social distancing is that we no longer have access to the campus labs or facilities, which we usually used to work together in person. We had also planned on fabricating a custom grip for our smartphone by 3D-printing the parts in the facilities that are now closed. Instead, we are using a more makeshift approach involving velcro straps to secure the smartphone to the user’s hand. However, most of our project is software-based and we all have the required parts to get the platform working on our own setups, so no further refocusing was required for the rest of our project.

During this week and the following one, I plan to explore the different algorithms for peak detection found in the paper from my last post. So far, I have adapted the logging tool used for my data collection so that its output can feed the different peak detection algorithms on the Python side of things. Jeffrey is going to be concurrently implementing the dispersion-based algorithm for peak detection alongside me.

Jeffrey’s Status Report 3

This week, I initially focused on restructuring the code in our project. For our front-end, all the packages we had installed had been added manually by downloading JavaScript files for each library and moving them into our project folder. We knew this wasn’t scalable if we wanted to install more packages, so I set up NPM within our project so that packages can be installed with a package.json file and a command-line instruction. This way, we don’t have to manage package versions by hand and can get the project set up easily on any new computer. From this initial setup, I was able to install Socket.IO and Tone.js, the package we decided to use for sending MIDI instructions.

Additionally, I experimented with buttons and sending specific inputs from the React Native phone application to the computer. I also looked into how to give the user feedback when pressing specific buttons. Currently, there are several packages that let me send a vibrate command to the phone with a single function call.
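
As a hedged example of that kind of feedback, using React Native’s built-in Vibration module (the duration and helper function are arbitrary):

```javascript
// Sketch: simple haptic feedback on a button press using the
// built-in React Native Vibration module. Duration is arbitrary.
import { Vibration } from "react-native";

// Placeholder for the socket emit that sends the button index to Flask.
function sendNoteToServer(noteIndex) { /* socket.emit(...) */ }

function onNoteButtonPress(noteIndex) {
  Vibration.vibrate(50); // short buzz to confirm the press
  sendNoteToServer(noteIndex);
}
```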

Besides this, I spent the rest of my time preparing for the presentation and making slides. Specifically, making the block diagram for the project gave us all a better understanding of how each component would work and the functionality of our system.