Team Status Report 3/18

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

I think our most significant risk is in how we will integrate our different components and subsystems. We currently have separate branches in our repo for CV, Interface, and Audio, and we plan to integrate them together in the master branch. However, when CV and Audio are integrated, we will probably have to add some threading so that CV can run continuously while MIDI timing is being taken with the pressure sensor, which means we will need a timing mechanism.
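One way to structure the threading described above is to run the CV pipeline in a background thread that publishes its latest result under a lock, leaving the main thread free to poll the pressure sensor. This is only a sketch of the pattern; the class and method names here are illustrative placeholders, not our actual module APIs.

```python
import threading
import time

class CVWorker:
    """Runs a stand-in for the CV pipeline on a background thread so the
    main thread can keep taking MIDI timing from the pressure sensor."""

    def __init__(self):
        self._lock = threading.Lock()
        self._latest = None          # most recent CV result
        self._running = False
        self._thread = None

    def _loop(self):
        frame = 0
        while self._running:
            frame += 1               # placeholder for a real CV inference step
            with self._lock:
                self._latest = frame
            time.sleep(0.01)         # simulate per-frame processing time

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._running = False
        self._thread.join()

    def latest(self):
        """Called from the main (sensor-polling) thread."""
        with self._lock:
            return self._latest
```

The main loop would call `latest()` whenever a pressure-sensor event fires, pairing the note with the most recent CV result instead of blocking on the CV computation.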

To mitigate this, we will take a few steps. First, we will adjust our schedule so that each CV milestone is merged with the interface as it is completed. This requires us to set up a bare-bones interface fairly quickly, so some of the audio details will be pushed later. In the interface, we will set up a software timer that runs and is checked as necessary, with a minimum BPM of 30, so every CV milestone will be merged with a timer already in place. Software interrupts triggered by a note being played via the pressure sensors will check the timer and send the MIDI message.
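The check-on-demand timer described above could look something like this: rather than the timer pushing events, a pressure-sensor interrupt asks it how many beats have elapsed. This is a sketch of the approach (the `BeatTimer` name and injectable clock are assumptions for illustration, not our final interface code), with the minimum BPM of 30 clamped at construction.

```python
import time

MIN_BPM = 30  # slowest tempo we support, per the design above

class BeatTimer:
    """Software timer checked on demand: a note event asks 'which beat
    are we on?' rather than the timer firing callbacks on its own."""

    def __init__(self, bpm, clock=time.monotonic):
        self.bpm = max(bpm, MIN_BPM)           # enforce the 30 BPM floor
        self.beat_duration = 60.0 / self.bpm   # seconds per beat
        self._clock = clock                    # injectable for testing
        self._start = clock()

    def elapsed_beats(self, now=None):
        """Beats (possibly fractional) since the timer started."""
        if now is None:
            now = self._clock()
        return (now - self._start) / self.beat_duration
```

When a pressure-sensor interrupt fires, the handler would call `elapsed_beats()` and attach that timestamp to the outgoing MIDI message.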

Our contingency plan is to send a MIDI message on each beat. This is not ideal, since a note may start on a half beat, but given that this project focuses on note accuracy, volume, and the interface, we feel this is an acceptable fallback.
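In code, the contingency amounts to snapping each note onset to the nearest whole beat. A minimal sketch (the function name is illustrative):

```python
def quantize_to_beat(onset_time, beat_duration):
    """Snap a note onset (seconds) to the nearest whole beat.

    This is the fallback behavior: one MIDI message per beat, which
    loses half-beat placement but keeps note identity and volume.
    """
    beat_index = round(onset_time / beat_duration)
    return beat_index * beat_duration

# At 120 BPM a beat lasts 0.5 s, so an onset at 1.2 s snaps to beat 2 (1.0 s).
```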


Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Given the feedback on our design report, we have decided to change our implementation of the audio processing flow. Rather than spending time trying to characterize the frequency spectrum of each note ourselves, we will use a Python piano package with MIDI support to render our sounds. We are currently exploring a few different options for these packages. This will reduce our workload considerably by building on work already available to us rather than starting from scratch and trying to synthesize rich sounds. Although we received feedback suggesting we use a sound card since we are under budget, 1) we want to play the sounds through the phone for portability, as we expect our user to be using their laptop for the actual score writing, and 2) we want to minimize cost to the user, as outlined in our use case.
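Whichever Python package we settle on, the messages we hand it will be standard MIDI channel voice messages, which are just three bytes each. As a sketch of what our code will ultimately be producing (the helper names here are illustrative, not from any specific package):

```python
def note_on(note, velocity, channel=0):
    """Build a raw MIDI note-on message: status byte 0x90 OR'd with the
    channel, then the note number (middle C = 60) and velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """Build a raw MIDI note-off message (status byte 0x80)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])
```

A piano package handles the hard part, turning these messages into actual piano timbre, which is exactly the work we no longer need to do from scratch.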
