Nora’s Status Report for 4/29

This past week I presented our final set of slides for the Final Presentation. In preparation, last Sunday I worked on creating the PowerPoint and practicing for the oral presentation. Overall, the presentation went smoothly. We received one question about how we handle pieces where more than five notes occur at a time, to which I answered that the notes are iterated through sequentially in the scheduling algorithm and any notes beyond the first five are not played. If we have more time after the system is built, we may implement a more complex method that analyzes the chords and schedules accordingly.

During the week, I also worked on adding error and exception handling to the code so that the program does not crash when it encounters unexpected problems. This adds a layer of robustness for demo day in case errors appear that we did not catch during testing. Going into next week, we plan on conducting more testing of the entire system so that we catch those bugs before our demo.
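As a rough sketch of the kind of guard I added (the structure is illustrative; the real code wraps our scheduler and serial-handling loops):

```python
import traceback

def run_forever(step):
    """Keep the main loop alive even if an unexpected exception slips through."""
    while True:
        try:
            step()                    # one iteration of the scheduler / serial loop
        except Exception:
            traceback.print_exc()     # log the error and keep running instead of crashing
```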

I also looked into having the program run on startup. I have tried editing the rc.local file as well as placing the program in the init.d folder on the Raspberry Pi. I have not been able to get this working yet, but I will continue trying so that on demo day we can simply plug in the Pi and have setup be seamless. However, if it ultimately does not work, it will not impact the functionality of our overall system.

Next week, I will work with Aden and Rahul to complete the assembly process of our final accompanyBot and then carry out end-to-end testing so that we do not encounter hiccups during demo day. We will also be working together to complete our final poster, report, and video.

Nora’s Status Report for 4/22

Since the last status report, I have finalized the serial communication integration with Rahul. One issue that popped up when we were testing the integration was that the Arduino MKRZero we were using sometimes had difficulties connecting to the computer. We had to reset the board often and faced trouble with the bootloader. Thus we decided to switch to an Arduino Uno since it connected more reliably. Since the Uno only had one hardware Serial port, I had to look into a software serial library to do the same forwarding that we had on the MKRZero. This proved to be successful. The tradeoff here was that the Uno is slightly larger than the MKRZero, so we will need to design the casing around this constraint. However, we thought this was a necessary tradeoff since the user needs to be able to consistently connect to the hardware without having to touch the electronics in any way.

This week I have also hammered out additional features that enhance the user-friendliness and robustness of our product. Specifically, I added support for octave checking so that the scheduler takes into account not just the pitch of the note but also the octave it lands in. This makes the music more accurate to the original sheet music. Due to our one octave constraint, the accompanyBot must be placed over a fixed octave throughout the duration of a song. Thus during the scheduling algorithm, I added code to find the octave that is most frequently played. We return this octave number to the local application so that we can notify the user to move the accompanyBot over that specific octave. Then, during the piece, only notes falling in that range will be played. I also added code to dynamically calculate the max tempo of a song based on the song’s smallest note value, its time signature, and the physical limitation of our solenoids (which can only play four notes a second). If a user tries to update the tempo or upload a song that has a higher tempo than this maximum value, then the scheduler will cap the playing at this max tempo and also send the max tempo back to the application so that the user can be notified.
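A minimal sketch of the two calculations, assuming music21 Note objects and a tempo expressed in quarter-note beats per minute (the time-signature adjustment and our real variable names are omitted):

```python
from collections import Counter

def most_common_octave(notes):
    # Pick the octave that appears most often so the user can center the accompanyBot there.
    return Counter(n.pitch.octave for n in notes).most_common(1)[0][0]

def max_tempo(smallest_quarter_length, notes_per_second=4):
    # A note lasting smallest_quarter_length quarter notes takes
    # smallest_quarter_length * 60 / tempo seconds; requiring at least
    # 1 / notes_per_second seconds per note gives the tempo cap below.
    return 60 * notes_per_second * smallest_quarter_length

# e.g. a piece whose smallest note is a 16th (quarterLength 0.25) caps out at 60 BPM
```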

During the week, I also helped Aden with the 3D printing. I offered some suggestions for design changes after the initial prototype finished printing. Next week I will continue to help with the design and construction of the physical system.

Currently I am on schedule according to the Gantt chart. In the coming week, I will work on adding code that catches errors while the program is running, since we have discovered that connecting and disconnecting the serial connection can leave bytes on the UART that cannot be decoded properly, which crashes the scheduler program.

Nora’s Status Report for 4/8

This week, I continued serial integration with the local application. We were able to demonstrate end-to-end communication at the interim demo which was a promising milestone.

Early in the week, I collaborated with Rahul to confirm how we would define the byte commands sent between the Raspberry Pi and local application. These were the commands we decided on:

“S\n” → Start
“P\n” → Pause
“C[measure_number]\n” → Current measure number
“T[tempo_number]\n” → Current tempo
“F[file_name]\n” → File incoming

On the Raspberry Pi side, start and pause commands received from the serial port update the state of the scheduler (namely, whether or not the piece is paused and, if it is started, what time and measure number it starts at). A received current-measure command updates the measure being played, while current-measure packets are sent back to the computer after every measure change to let the GUI know to advance the displayed measure as well.
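As an illustration of how these commands are dispatched on the Pi side (the scheduler method names here are placeholders, not our actual functions):

```python
def handle_command(line, scheduler):
    """Dispatch one newline-terminated command received over the serial port."""
    cmd, arg = line[0], line[1:].strip()
    if cmd == "S":
        scheduler.start()
    elif cmd == "P":
        scheduler.pause()
    elif cmd == "C":
        scheduler.seek_to_measure(int(arg))   # jump to the requested measure
    elif cmd == "T":
        scheduler.set_tempo(int(arg))
    elif cmd == "F":
        scheduler.load_file(arg)              # prepare to open the named file
```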

Once this specification was in place, I tested the roundtrip latency for serial communication. Using the serial port on my computer, I timed how long it took to send a test command to the Raspberry Pi and receive a response back. This resulted in around a 3 ms delay, which was well below the requirements we had for starting and stopping the playing. Using a similar method, we tested sending an entire XML file over USB but found that the latency for one page was already reaching 13 seconds. The time it takes to send longer files would scale linearly, so we decided that serial communication was too slow from a user's perspective. Thus we decided to use scp over ssh to copy files onto the Raspberry Pi.
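The roundtrip timing was done along these lines, using pyserial on the computer side (the port name and baud rate below are placeholders):

```python
import time
import serial  # pyserial

ser = serial.Serial("COM3", 9600, timeout=1)   # placeholder port/baud

start = time.perf_counter()
ser.write(b"S\n")            # send a test command to the Pi
response = ser.readline()    # wait for the acknowledgement coming back
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"roundtrip: {elapsed_ms:.1f} ms, response: {response!r}")
```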

In addition to running these latency tests to validate the system, I have been running tests as I have developed the scheduling algorithm. In terms of scheduling accuracy, I have been generating custom sheet music that targets specific features, converting it into XML files, and running them through the scheduling algorithm to inspect the outputs. I ran two chromatic scales with sharps and flats through the system to make sure that all the pins were accurately mapped and that all the possible pitches were activated correctly. I also have one piece that is a repeated quarter note at 60 bpm and another at 120 bpm that I have used to test the tempo accuracy. Using a metronome app, I verified that the played tempo matched the target tempo. Another piece attempted to play a six-note chord to ensure that at most five pins were high at one time.

Currently I am on schedule since our updated Gantt chart has me finalizing the serial communication up until Carnival.

Next week, I plan to extend the serial communication so that files are opened based on the file name sent through the serial port. I will also need to add support for changing the tempo, which should be relatively straightforward since I just need to update the variable that keeps track of the measure duration in milliseconds.

Nora’s Status Report for 4/1

This week, I worked with Rahul on integrating the Raspberry Pi with the computer’s local application. Since the computer will be sending and receiving measure numbers as points of reference for where in the piece to start playing and what part of the piece to display on the screen, I had to restructure the code so that the measure numbers were the keys in the dictionary that holds all the notes. This would make it easier for the RPi to look up which notes to play at specific times. With Rahul’s guidance, I re-implemented the scheduler to be compatible with this data structure change. Thanks to Rahul’s suggestion, I also changed the code so that the RPi loops continuously while checking the system time instead of sleeping unnecessarily as was done in the previous version. I was able to alter the code and get it back to a working implementation.
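A simplified sketch of the restructured data and the busy-wait loop (the names and exact layout are illustrative, not the actual implementation):

```python
import time

# notes_by_measure: {measure_number: [(offset_within_measure_ms, bitmask), ...]}

def play_from(notes_by_measure, start_measure, measure_ms, set_pins):
    start_time = time.time()
    for measure in sorted(m for m in notes_by_measure if m >= start_measure):
        for offset_ms, bitmask in notes_by_measure[measure]:
            target = start_time + ((measure - start_measure) * measure_ms + offset_ms) / 1000
            while time.time() < target:
                pass                 # spin on the system clock instead of sleeping
            set_pins(bitmask)        # drive the GPIO outputs for this timestamp
```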

Another area I worked on was introducing serial communication to the Raspberry Pi. As mentioned by Rahul, we did not have the required USB-to-UART converter that would allow us to interface with the RPi's TX and RX pins. To avoid the extra cost and schedule delay of purchasing an adapter, we decided to use Arduino microcontrollers that have built-in serial ports. In tandem with Rahul, I experimented with the Arduino MKRZero, which has two serial interfaces: one between the Arduino and the computer and another on the on-board TX and RX pins. By having the Arduino read data from the USB side and write it out on TX, and read from RX and write back to the USB side, I was able to set up the system to send and receive strings between my computer and the RPi. While one risk of this implementation is added latency, the fact that the serial communication is buffered should allow the Arduino to pass the information through at roughly the same baud rate. Tomorrow I will write a more comprehensive test using the system time to find the round-trip latency between sending bytes from the computer and receiving a response from the RPi.

At the moment I am behind schedule since we have not completed the integration. However, we did anticipate that this part of the project would take more time and had incorporated slack time for it accordingly. Thus this is on schedule when considering that slack time.

Next week I will focus on further developing the serial communication. I will be meeting with Rahul to agree upon the command bytes that will be sent for various situations that are required by the application.

Nora’s Status Report for 3/25

This week I worked on adding code to support chord playing on the RPi. Since chords are parsed differently from individual notes, I had to figure out how to handle them separately. Another case I had to consider was flats and sharps: I added a mapping that sends enharmonically equivalent sharps and flats to the same GPIO pin, since the written name of a note may change due to key signatures and accidentals even though the physical key stays the same. I also worked on making the overall process of scheduling a piece, from XML file to scheduled notes, smoother.
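The enharmonic mapping looks roughly like the dictionary below (the pin numbers are placeholders, and music21 spells flats with a minus sign):

```python
# Enharmonic spellings that land on the same physical key share one GPIO pin.
NOTE_TO_PIN = {
    "C": 2, "B#": 2,
    "C#": 3, "D-": 3,
    "D": 4,
    "D#": 5, "E-": 5,
    "E": 6, "F-": 6,
    # ...continues through the rest of the octave
}
```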

I am behind schedule according to the previous Gantt chart, but after discussion with Rahul we decided that he will help with scheduling and integration once his tasks for the GUI are completed.

Next week I hope to have the RPi successfully take in inputs from the application. I will be working alongside Rahul on the communication between the local application and the RPi to handle uploading the XML files, starting, stopping, and changing the tempo. This will put us in a good spot for the interim demo during the following week.

Nora’s Status Report for 3/18

This week, I worked on integrating the RPi GPIO outputs with the physical circuitry by attaching them to the gates of each MOSFET. A video of the solenoids playing the chromatic scale that we showed in a previous status report can be found here.

One thing we have not mentioned so far is that, in the rough piano layout seen on the paper, there is an extra solenoid between the E and F positions. This solenoid is placed where there would be a black key between the white keys. Another will be placed between the B and C positions. This part of the design allows us to move the accompanyBot up and down the keyboard: since different starting keys of the octave require different orderings of black and white keys, we simply remap the solenoids to the correct keys. This lets us play contiguous notes for melodies that do not start on a C. The downside of this design is that, at any given time, two of the solenoids will be unused. However, the solenoids must be arranged like this if we do not want the user to have to reposition the solenoids themselves each time they adjust the starting position.

To make the pin mapping more modularized, I have written functions that change the dictionary that maps the notes to pins based on the starting note passed into the function. This will allow the user to place the accompanyBot at different spots on the keyboard.
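A minimal sketch of that remapping (ignoring the two spare solenoids described above; the pin list and function name are illustrative):

```python
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def build_pin_mapping(start_note, pins):
    """Rebuild the note-to-pin dictionary so the leftmost solenoid sits on start_note.

    pins is the list of GPIO numbers in physical left-to-right order.
    """
    start = CHROMATIC.index(start_note)
    ordered = CHROMATIC[start:] + CHROMATIC[:start]
    return dict(zip(ordered, pins))
```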

I also added additional XML files that can help in testing the tempo and the case where the piece tries to play more than five keys at a time.

So far, I am on schedule according to the Gantt chart, although tomorrow I will need to add support for chords, since they are not supported in the current implementation; I had not realized that they are represented by a different data structure in the music21 library.

In the next week, I will work on making the process for playing a piece more streamlined since at the moment, I must manually change the files and rerun the code every time I want to switch the piece. I will also work with Rahul on accepting the user inputs to start and stop.

Nora’s Status Report for 3/11

I spent a majority of the week working on the design report with my team members. As a result, I was not able to conduct thorough testing of the scheduling. I did implement additional code to handle rests in the music as well as setting the GPIO pins low when a note is finished playing. This will handle cases where the same note is played twice or more in a row.

Since I was not able to test the timing accuracy of the scheduling, I am slightly behind schedule. However, I plan to catch up by allocating time tomorrow to fully test out the new code and verify the timing accuracy of the current code using a metronome.

Next week, I plan on modularizing the GPIO pin mapping and working with Aden to integrate with the actuators. I will need to check if the timing still holds once the hardware is involved, since one bottleneck we encountered earlier on was that the time it took to activate the solenoids limited our max frequency.

Nora’s Status Report for 2/25

This week, I completed the first pass of the note scheduling on the Raspberry Pi using the music21 library to parse the notes, and the changes are now pushed to our new GitHub repository. The parse function is able to successfully convert XML files into Stream objects, which hold hierarchical information on the Parts, Measures, and Notes (also music21 object types). Based on this information, I was able to extract values for the tempo and beatType, which I used to calculate the number of milliseconds each quarter note is worth. Dimensional analysis of the calculation is shown below:

(60 s/min ÷ tempo [quarter notes/min]) × (1000 ms/s) = 60,000 / tempo ms per quarter note

From there, we can scale it by our smallest note value to get the time delay that we increment between each “timestamp” (a slice in time at which we determine which notes are active, and therefore which keys are pressed or not). For simplicity, the current code assumes the smallest note value we can play is a 16th note; this can easily be turned into a variable later on.

The function that does the main chunk of work is called schedule, which iterates through the notes stored in the Stream, sets the bits in a bitmask based on the note being played (using mappings expressed through two changeable dictionaries), and then updates the notesToPlay dictionary that matches the bitmask to different timestamps.
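In rough outline, the pass looks like this (a sketch at 16th-note resolution that collapses the two mapping dictionaries into one; chords and rests are not handled here, and the names are illustrative):

```python
def schedule(stream, note_to_bit, timestep_ms):
    """Build a dictionary mapping each timestamp (ms) to a bitmask of active notes."""
    notes_to_play = {}
    for n in stream.flatten().notes:
        start_slice = int(n.offset * 4)                      # offsets are in quarter notes; 4 slices per quarter
        end_slice = int((n.offset + n.quarterLength) * 4)
        for s in range(start_slice, end_slice):
            t = s * timestep_ms
            notes_to_play[t] = notes_to_play.get(t, 0) | (1 << note_to_bit[n.pitch.name])
    return notes_to_play
```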

As mentioned in the design review presentation, I utilized the pigpio library for controlling the GPIO pins simultaneously by setting and clearing a bank of bits. To visually see the results of the scheduling, I attached the GPIO outputs to LEDs arranged in a keyboard-like fashion. Pictured below is the setup, as well as the measured BPM from playing a simple quarter note C scale at 60 bpm. The code seemed to work pretty well!

Additionally, here is a video of it playing the following chromatic scale. It appears to capture the different note values pretty well.

The task I was scheduled to complete for this week was to “convert XML to scheduled GPIO inputs using music21.” As discussed above and with some help from Aden, I have met this goal satisfactorily, although I will need to work on ironing out a few extra details such as playing successive notes and accounting for rests. Therefore overall I am on schedule.

For the next week, I plan to continue improving the scheduling algorithm and conduct more thorough testing beyond using the built-in GuitarTuna metronome. However, I do anticipate spending a decent amount of time dedicated to writing up the Design Review Report with my teammates, so I may not be able to spend as much time on these tasks.

Nora’s Status Report for 2/18

This week I received the Raspberry Pi 4 from ECE Receiving. After picking up the RPi, I worked on setting it up so that we could control and program it. Since we didn’t have a wired keyboard, I installed the VNC Viewer application to remotely access the RPi over the network using its IP address. This allowed us to open up the RPi’s desktop and send inputs to it. After seeing the full capabilities of the RPi, we considered migrating all of the code, including the UI and OMR Python application, onto the RPi to reduce latency when starting and stopping. However, given our current toolchain, we require a Windows device to run Audiveris, so we decided the microseconds to milliseconds we would save by integrating everything onto one device were not worth the added effort of setting up a new environment and OS on the RPi.

On Saturday, I worked with Aden on testing the exploratory solenoids and transistors that we bought. I wrote a simple Python file using a basic GPIO library to set pins high so that we could test the power switching. The output voltage from the RPi was above the necessary threshold voltage of the MOSFET, so the power switching was quite seamless.
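The test script was essentially a pin toggle along these lines (assuming the RPi.GPIO library and a placeholder BCM pin number):

```python
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)          # placeholder pin wired to the MOSFET gate

for _ in range(5):
    GPIO.output(17, GPIO.HIGH)    # switch the solenoid on
    time.sleep(0.5)
    GPIO.output(17, GPIO.LOW)     # and back off
    time.sleep(0.5)

GPIO.cleanup()
```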

One challenge we encountered when testing the solenoids was that our initial code, which used the sleep function from the time library, could only get the solenoids to depress and retract about four times a second (video linked here), below the six times a second target frequency in our use-case requirements. One possibility for the bottleneck is that sleep cannot reliably produce such short delays, since its resolution depends on the OS scheduler. I will be working on installing and using the pigpio library so that we can have microsecond-level delays instead of relying on the limited sleep function. However, if the bottleneck on the timing ends up being the hardware itself, then we will need to rescope that requirement and change the code that limits the max tempo/smallest note value to account for this frequency cap.

One big change we addressed in our team status report was the switch to the music21 library for parsing rather than a custom parsing process. This resulted in me being behind schedule, but the new Gantt chart developed for our Design Review Presentation accounts for this change.

Next week I will be looking more into using music21 to extract the notes from the data and converting them into GPIO high/low signals. I was able to convert the test XML file that Rahul generated into a Stream object, as shown in the image below. However, as you can see, there are several nested streams before the list of notes is accessible, so I will need to work on un-nesting the object if I want to be able to iterate through the notes correctly.
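One likely approach is to use music21’s own traversal helpers, which walk the nested Parts and Measures for us (the file name below is a placeholder):

```python
from music21 import converter

score = converter.parse("test.xml")   # placeholder file name

# recurse() descends through the nested Part/Measure streams so the notes are reachable directly
for n in score.recurse().notes:
    print(n.offset, n.quarterLength, n)
```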

As for the actual process for scheduling, I will try to explain the vision here. We can have a count that keeps track of the current “time” unit that we are at in the piece, where an increment of 1 is 1 beat. This corresponds nicely to the offsets from music21 (i.e. the values in the {curly braces}). We can also calculate the duration of each beat (in milliseconds) by calculating 60,000/tempo. So we will iterate through the notes and at a given time, if a note is being played, we’ll set the GPIO pin associated with it to high and we will set the rest of the pins to low (which is easily accomplished with batch setting from the pigpio library). Thus we will also need a mapping function that connects the note pitch to a specific GPIO pin.
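Putting that together, the playback loop might look something like the sketch below, using pigpio’s bank set/clear calls for the batch pin updates (the structure and names are illustrative):

```python
import time
import pigpio

pi = pigpio.pi()   # requires the pigpio daemon to be running

def play(notes_to_play, pin_mask):
    """notes_to_play maps a timestamp in ms (from the start of the piece) to a bitmask of GPIO pins."""
    start = time.time()
    for t_ms in sorted(notes_to_play):
        while (time.time() - start) * 1000 < t_ms:
            pass                                 # wait until this timestamp
        pi.clear_bank_1(pin_mask)                # drive all mapped pins low
        pi.set_bank_1(notes_to_play[t_ms])       # then raise only the currently active notes
```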

Overall, the classes I have taken that helped me this week are 18-220 and 18-349. Knowledge of transistors and inductors, as well as experience with the lab power supplies, gained from 18-220 helped me when working on the circuitry. Embedded systems skills from 18-349 were very helpful when reading the documentation and datasheets for the microcontroller and circuit components.

Nora’s Status Report for 2/11

At the start of this past week I helped polish the Proposal Presentation slides in preparation for the week’s presentations. In particular, I created the Physical Implementation Model mockup as well as the block diagram. I also helped Aden prepare for the oral presentation.

Since my main responsibility for the project is handling the microcontroller component, I have been brainstorming ways to organize the information from the XML so that the microcontroller can translate each individual note into scheduled signals that can be sent to the transistors to switch our solenoids on or off. Currently, I am envisioning a dictionary-like structure whose keys are certain “time” points (where the length of a time unit is determined by the tempo from the parser) and whose values are lists of tasks to execute at the corresponding time (i.e. turn a specific solenoid on or off). When the microcontroller is playing a song, it will increment a counter for the time and retrieve the corresponding tasks at that time.

Since the XML file contains a lot of extraneous text, I will be working on providing Rahul with specifications for what should be kept in the file that is sent to the microcontroller. At a minimum, I will need the tempo value (or text description) of the piece and a sequence of notes with their pitches and duration.

We are planning on using a 30V 5A DC power supply that I have on hand to power our solenoids. Based on some preliminary power calculations using data from last semester’s Talking Piano group (since we don’t have access to the physical components to test yet), we determined that we would need a max power of (12V)(1.5A)(5 solenoids) = 90W with 5 solenoids turned on at one time. If the solenoids we end up buying draw comparable current, we may have to reduce the number of active solenoids to three at a time so that we stay below the 5A limit of our power supply. To meet the 60W average power requirement, I looked into the possibility of using a PWM servo driver to switch the transistors, which would lower the average power delivered to the solenoids. However, since PWM inputs to solenoids are normally used for position control, we would have to make sure a PWM-driven solenoid can still press a key down fully.

I requested a Raspberry Pi 4 from the 18-500 parts inventory, so I am hoping to set it up and familiarize myself with the environment in the coming week.

While I have not written any code yet, I am ready to start implementing the data structures and initial algorithm for the scheduler. Thus I believe I am on schedule and should have a first-pass implementation completed by the next status report.