Fleshed out the frontend and cleaned up the backend to handle more menus in a more scalable, general way. Added two other menus (key selection and filter selection); for key selection, there are 8 keys and 8 modes the user can pick from in edit mode. I also debugged some of the problems we were having with edit/play mode interactions. Users can now swipe up/down/left/right in edit mode to pick what they want to edit. We looked into some of the latency problems and addressed a few; there is still a base latency built into the sockets, but we have reduced it to about 15 ms in each direction of sending data. We also realized that the frequency at which the controller transmits data may influence the overall latency, but that is something we can't control.
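As a rough illustration of the more general menu handling, each edit-mode menu can be described by the same small record, so adding a menu (like the new key and filter menus) is just another entry in a table. The names and option lists below are placeholders rather than our exact values:

```ts
// Sketch only: every edit-mode menu shares the same shape.
interface EditMenu {
  socketEvent: string;  // socket event the backend uses for this menu
  options: string[];    // values the user can cycle through
}

// Swipe direction in edit mode picks which menu to edit (placeholder lists).
const EDIT_MENUS: Record<'up' | 'down' | 'left' | 'right', EditMenu> = {
  up:    { socketEvent: 'instrument', options: ['piano', 'guitar', 'synth'] },
  down:  { socketEvent: 'effect',     options: ['reverb', 'delay', 'chorus'] },
  left:  { socketEvent: 'key',        options: ['C', 'G', 'D', 'A', 'E', 'B', 'F', 'F#'] },
  right: { socketEvent: 'filter',     options: ['low-pass', 'high-pass', 'band-pass'] },
};

// A swipe in edit mode just looks up the menu for that direction.
function menuForSwipe(direction: 'up' | 'down' | 'left' | 'right'): EditMenu {
  return EDIT_MENUS[direction];
}
```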
Created a working phone application with the existing backend. Worked out exactly how we would send data from the backend to the frontend to keep everything in sync. Decided on the 4 menus we would allow the user to edit when they are in edit mode. Was able to develop several menus, including instrument changing and parameter changing for certain effects. To maintain the same state on the frontend and backend for these editing menus, we decided to have the changes occur in the backend; any update from the controller then has to be reflected in the frontend. To do this, a socket was set up for each menu, and any change (e.g., the currently selected instrument) was reflected by editing the CSS classes/ids accordingly.
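A rough sketch of how one of these menus stays in sync (the event name, URL, and element ids here are placeholders): the backend owns the state, and the frontend only toggles CSS classes when an update comes in.

```ts
import { io } from 'socket.io-client';

const socket = io('http://localhost:5000'); // Flask/Socket.IO backend (placeholder URL)

// The backend is the source of truth: whenever the selected instrument changes
// there, it emits the new selection and the frontend just updates its classes.
socket.on('instrument', (data: { selected: string }) => {
  document
    .querySelectorAll('.instrument-option')
    .forEach((el) => el.classList.remove('selected'));
  document.getElementById(data.selected)?.classList.add('selected');
});
```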
This week, I helped to finish combining our separate work into several functions on the Python server. Got touch/press detection to work on the touchpad so that the user can move the pitch up or down by a half step by pressing up or down on the touchpad. We decided to use presses over swipes since they were more responsive. I also got octave changing to work: the user touches left or right on the touchpad to go up/down half an octave. We also revisited the latency problem, found that it was still an issue, and isolated it to the network the phone is connected to. I also looked into ways to improve the UI/UX of the application on both the mobile and laptop sides.
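The direction logic itself is simple; here is a rough sketch of the idea (our real version lives on the Python server, and the coordinate range and dead zone below are assumptions, shown in TypeScript for consistency with the other sketches):

```ts
type Direction = 'up' | 'down' | 'left' | 'right' | 'none';

// Classify a touchpad press by which side of center it lands on.
// `max` is the assumed coordinate range of the pad, and `deadZone` ignores
// presses too close to the middle to be a clear direction.
function classifyPress(x: number, y: number, max = 320, deadZone = 40): Direction {
  const dx = x - max / 2;
  const dy = y - max / 2;
  if (Math.max(Math.abs(dx), Math.abs(dy)) < deadZone) return 'none';
  // Whichever axis dominates decides the direction.
  if (Math.abs(dx) > Math.abs(dy)) return dx > 0 ? 'right' : 'left';
  return dy > 0 ? 'down' : 'up';
}

// 'up'/'down' then maps to a half-step pitch change and 'left'/'right' to a half-octave shift.
```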
This week, I completed the integration of the phone with the current backend workflow for handling sound generation. There are 8 buttons, and each button sends a certain command to the Flask server using sockets. Along with sending notes, I also brainstormed ways of allowing the user to change octaves. We determined the best way to do this could be to use the right-hand swipe motions that are already working, send that data to the Flask server, and then pass it on to the phone. The keys would then change by shifting up or down based on the direction of the swipe. For example, if the user wanted to go half an octave up, the bottom 4 notes would shift up an octave while the top 4 would stay the same. This acts as a sliding window for the notes to move up and down by half octaves, as sketched below. We also discovered there was quite a bit of latency in the socket messages sent from the phone to the laptop. After many trials, we found the average to be around 100 ms and are currently working on ways to lower it. We also found the latency between the remote and the Flask server to be around 40 ms on average.
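A rough sketch of the sliding-window idea, using MIDI note numbers for the 8 buttons (the exact note layout is a placeholder, not our final mapping):

```ts
// Notes for the 8 buttons, lowest to highest, as MIDI numbers.
// Shifting "half an octave" moves one half of the window by a full octave.
function halfOctaveShift(notes: number[], direction: 'up' | 'down'): number[] {
  const shifted = [...notes];
  if (direction === 'up') {
    for (let i = 0; i < 4; i++) shifted[i] += 12;   // bottom 4 jump up an octave
  } else {
    for (let i = 4; i < 8; i++) shifted[i] -= 12;   // top 4 drop down an octave
  }
  // Re-sort so button 1 is always the lowest note and button 8 the highest.
  return shifted.sort((a, b) => a - b);
}

// Example: a C major scale from C4 to C5, shifted half an octave up.
halfOctaveShift([60, 62, 64, 65, 67, 69, 71, 72], 'up');
// -> [67, 69, 71, 72, 72, 74, 76, 77]
```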
For this week, I focused solely on the phone side of our system. We had proved that it was possible to send signals from the phone to the Flask server through sockets. Now, I built on that by creating a UI with the 8 buttons the left-hand fingers would press. Each of these buttons corresponds to a command from 1-8 that is received by the Flask server. I also implemented a UI for changing the key the user is currently playing in; this state is kept in sync with what the server knows. Along with the UI, I also experimented with the accelerometer on the iPhone using the Expo accelerometer package. The API is easily accessible and can be customized with settings such as the update interval. I plan to apply the peak-detection algorithm Michael worked on to detect bends of the left hand. The goal would be to match these bends to pitch bends generated from GarageBand.
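A rough sketch of reading accelerometer samples with Expo's expo-sensors package; the 50 ms interval is just an example value, and `detectBend` is a placeholder for the peak-detection step:

```ts
import { Accelerometer } from 'expo-sensors';

// Placeholder for the peak-detection algorithm Michael has been working on.
function detectBend(x: number, y: number, z: number): void {
  // ...
}

Accelerometer.setUpdateInterval(50); // ms between samples (example value)

const subscription = Accelerometer.addListener(({ x, y, z }) => {
  detectBend(x, y, z); // feed each sample into peak detection
});

// subscription.remove() when the screen unmounts.
```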
This week, I initially focused on restructuring the code in our project. For our frontend, all the packages we had installed were added manually by downloading each library's JavaScript files and moving them into our project folder. We knew this wasn't scalable if we wanted to install more packages, so I set up npm within our project so that packages can be installed with a package.json file and a single command-line instruction. This way, we don't have to manage package versions by hand and can get the project set up on any new computer instantly. From this initial setup, I was able to install Socket.IO and Tone.js, the package we decided to use for sending MIDI instructions.
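With npm in place, using these packages is just an import; a rough sketch of the two together (the event name, server URL, and synth setup are placeholders, not our final code):

```ts
import { io } from 'socket.io-client';
import * as Tone from 'tone';

const socket = io('http://localhost:5000');      // Flask/Socket.IO server (placeholder)
const synth = new Tone.Synth().toDestination();  // simple synth routed to the speakers

// When the server forwards a note command, play it briefly.
socket.on('note', (midi: number) => {
  synth.triggerAttackRelease(Tone.Frequency(midi, 'midi').toFrequency(), '8n');
});
```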
Additionally, I experimented with buttons and with sending specific inputs from the React Native phone application to the computer. I also looked into how to give the user feedback when pressing specific buttons; currently, there are several packages that provide a function for sending a vibrate command to the phone.
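A rough sketch of the feedback idea using React Native's built-in Vibration module (one of the options considered; the event name, command value, and server address are placeholders):

```ts
import React from 'react';
import { TouchableOpacity, Text, Vibration } from 'react-native';
import { io } from 'socket.io-client';

const socket = io('http://192.168.1.10:5000'); // laptop's Flask server (placeholder address)

// One of the 8 note buttons: sends its command to the laptop and vibrates
// briefly so the user feels that the press registered.
export function NoteButton({ command }: { command: number }) {
  const onPress = () => {
    socket.emit('note', command);
    Vibration.vibrate(); // short default vibration as feedback
  };
  return (
    <TouchableOpacity onPress={onPress}>
      <Text>{command}</Text>
    </TouchableOpacity>
  );
}
```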
Besides this, I spent the rest of my time preparing for the presentation and making slides. In particular, making the block diagram for the project gave us all a better understanding of how each component works and what functionality our system provides.
For this week, I focused on solving the problem of not being able to establish a stable connection to the remote from my phone. I tried sending the notification commands in different sequences and with different timings, but was unsuccessful. I also tried to automatically reconnect to the remote after disconnecting by adding a listener on disconnect. This was also not a viable option because the gap between disconnect and reconnect was about 400 ms. The disconnect would happen anywhere from 10 to 15 seconds after connecting, so connecting to the remote from our phone no longer seemed like a possible solution.
Because of this, we had the idea to use the laptop as the place for central processing again and to use Web Bluetooth, since that was the only approach that had worked. To use this data from a front-end application, we needed a central place where one program could interpret both phone input from the user and sensor data from the Gear VR. To solve this problem, we decided to restructure the architecture to have a Flask + Socket.IO server receive all the data. To connect the remote to this server, data is still streamed to a front-end web application; however, once this data is received, it is instantly socketed over to Flask using Socket.IO. For the phone, I installed Socket.IO in the React Native app and was able to use the phone as a client and send messages to the server. One caveat is that the phone and the laptop must be connected to the same network.
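A rough sketch of the relay: the browser page that owns the Web Bluetooth connection forwards each controller notification straight to the Flask/Socket.IO server. The UUIDs, event name, and URL below are placeholders, not the controller's real values:

```ts
import { io } from 'socket.io-client';

const socket = io('http://localhost:5000'); // Flask/Socket.IO server (placeholder)

// Placeholder UUIDs for the Gear VR controller's service/characteristic.
const CONTROLLER_SERVICE = '0000fff0-0000-1000-8000-00805f9b34fb';
const CONTROLLER_NOTIFY = '0000fff1-0000-1000-8000-00805f9b34fb';

async function streamControllerData(): Promise<void> {
  // Web Bluetooth: the user picks the controller from the browser's chooser.
  const device = await navigator.bluetooth.requestDevice({
    acceptAllDevices: true,
    optionalServices: [CONTROLLER_SERVICE],
  });
  const server = await device.gatt!.connect();
  const service = await server.getPrimaryService(CONTROLLER_SERVICE);
  const characteristic = await service.getCharacteristic(CONTROLLER_NOTIFY);

  // Every sensor notification is socketed over to Flask as raw bytes.
  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    const value = (event.target as BluetoothRemoteGATTCharacteristic).value!;
    socket.emit('controller_data', Array.from(new Uint8Array(value.buffer)));
  });
  await characteristic.startNotifications();
}
```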
For this past week, I attempted to understand how we would connect to the Gear VR remote controller. We had several options to consider, each with specific tradeoffs. The controller could either be connected to the laptop, with sensor data streamed there, or stream its data directly to the phone and eliminate the middleman (the laptop). From our initial research, we found someone who was able to successfully connect the controller to a laptop. However, this was through the Web Bluetooth API, which at the time seemed restricting since we would have to build our application on the web. For this reason, Michael decided to tackle the task of trying to connect the VR remote to a Python program so that we would have access to this data from within Python.
I focused on a different approach: figuring out how to connect the remote to the phone directly, without a laptop involved. In this case, the remote would act as the Peripheral and the phone would act as the Central instead of the laptop. We considered this approach because it would be a much more user-friendly experience to need only 2 components to use our application instead of 3.
I started by looking into ways to develop for the phone and found React Native to be the most compatible and easiest way to develop for either iPhone or Android. I then looked into the best platforms for developing with React Native and found Expo.io. However, after trying to install the Bluetooth 4 packages, I realized they were not compatible with Expo, so I instead decided to build a React Native application from scratch. After setting up the project, I tried two different Bluetooth packages: react-native-ble-manager and react-native-ble-plx. With react-native-ble-plx, I was able to successfully connect to the Gear VR but was unable to get notifications working. I then tried the other package and was able to successfully connect and receive sensor data, but was unable to establish a stable connection.
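For reference, a rough sketch of the react-native-ble-plx flow I was testing (the device name and UUIDs below are placeholders):

```ts
import { BleManager } from 'react-native-ble-plx';

const manager = new BleManager();

// Scan, connect, and try to subscribe to the controller's sensor notifications.
manager.startDeviceScan(null, null, (error, device) => {
  if (error || !device || device.name !== 'Gear VR Controller') return; // placeholder name
  manager.stopDeviceScan();

  device
    .connect()
    .then((d) => d.discoverAllServicesAndCharacteristics())
    .then((d) =>
      d.monitorCharacteristicForService(
        'SERVICE-UUID-PLACEHOLDER',
        'CHARACTERISTIC-UUID-PLACEHOLDER',
        (err, characteristic) => {
          if (characteristic?.value) {
            // value arrives as base64-encoded sensor data from the remote
            console.log(characteristic.value);
          }
        },
      ),
    )
    .catch((err) => console.warn(err));
});
```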