Nora’s Status Report for 2/25

This week, I completed the first pass of the note scheduling on the Raspberry Pi using the music21 library to parse the notes, and the changes are now pushed to our new GitHub repository. The parse function successfully converts XML files into Stream objects, which hold hierarchical information on the Parts, Measures, and Notes (also music21 object types). Based on this information, I was able to extract values for the tempo and beatTypes, which I used to calculate the number of milliseconds each quarter note is worth. Dimensional analysis of the calculation is shown below:

From there, we can scale it by our smallest note value to get the time delay that we will increment between each “timestamp” (a slice in time at which we determine which notes are active, i.e., which keys are pressed or not). For simplicity, the current code assumes the smallest note value we can play is a 16th note; this can easily be turned into a variable later on.
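
As a concrete sketch of this timing math (variable names are illustrative, not taken from our actual code), the delay between timestamps follows directly from the tempo and the smallest supported note value:

```python
def ms_per_timestamp(tempo_bpm: float, smallest_note: int = 16) -> float:
    """Return the millisecond delay between scheduling timestamps.

    Tempo is in quarter notes per minute, so one quarter note lasts
    60,000 / tempo milliseconds; a 16th note is 1/4 of a quarter note.
    """
    ms_per_quarter = 60_000 / tempo_bpm      # one beat at the given tempo
    quarters_per_slice = 4 / smallest_note   # e.g. a 16th note = 0.25 beats
    return ms_per_quarter * quarters_per_slice

# At 60 BPM a quarter note is 1000 ms, so a 16th-note slice is 250 ms.
```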

The function that does the main chunk of work is called schedule, which iterates through the notes stored in the Stream, sets the bits in a bitmask based on the note being played (using mappings expressed through two changeable dictionaries), and then updates the notesToPlay dictionary that matches the bitmask to different timestamps.
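
A minimal sketch of what this scheduling step might look like, assuming a simplified note representation (the real code iterates over music21 objects; the tuples and the single mapping dictionary here are stand-ins):

```python
# Hypothetical pitch-to-bit mapping; stands in for the two changeable
# dictionaries mentioned above.
PITCH_TO_BIT = {"C4": 0, "D4": 1, "E4": 2}

def schedule(notes):
    """Build a {timestamp: bitmask} dict from (pitch, start, length) tuples.

    Each tuple gives the note's pitch, its starting timestamp slice, and
    how many slices it lasts; overlapping notes OR their bits together.
    """
    notes_to_play = {}
    for pitch, start, length in notes:
        bit = 1 << PITCH_TO_BIT[pitch]
        for t in range(start, start + length):
            notes_to_play[t] = notes_to_play.get(t, 0) | bit
    return notes_to_play

# A C4 quarter note (4 sixteenth-note slices) followed by a D4 quarter note:
mask_by_time = schedule([("C4", 0, 4), ("D4", 4, 4)])
```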

As mentioned in the design review presentation, I utilized the pigpio library for controlling the GPIO pins simultaneously by setting and clearing a bank of bits. To visually see the results of the scheduling, I attached the GPIO outputs to LEDs arranged in a keyboard-like fashion. Pictured below is the setup, as well as the measured BPM from playing a simple quarter note C scale at 60 bpm. The code seemed to work pretty well!
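
The bank-setting idea can be sketched without hardware: between two consecutive timestamps, only the bits that change need to be written. This is a pure-Python illustration; on the Pi the resulting masks would feed pigpio's set_bank_1/clear_bank_1 calls.

```python
def bank_update(prev_mask: int, next_mask: int):
    """Return (bits_to_set, bits_to_clear) to move between two bitmasks."""
    to_set = next_mask & ~prev_mask    # notes that just became active
    to_clear = prev_mask & ~next_mask  # notes that just ended
    return to_set, to_clear

# On hardware these masks would drive pigpio's bank calls, e.g.:
#   pi.set_bank_1(to_set); pi.clear_bank_1(to_clear)
set_bits, clear_bits = bank_update(0b0101, 0b0110)
```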

Additionally, here is a video of it playing the following chromatic scale. It appears to capture the different note values pretty well.

The task I was scheduled to complete for this week was to “convert XML to scheduled GPIO inputs using music21.” As discussed above, and with some help from Aden, I have met this goal satisfactorily, although I will need to iron out a few extra details such as playing successive notes and accounting for rests. Therefore, overall I am on schedule.

For the next week, I plan to continue improving the scheduling algorithm and conduct more thorough testing beyond using the built-in GuitarTuna metronome. However, I do anticipate spending a decent amount of time dedicated to writing up the Design Review Report with my teammates, so I may not be able to spend as much time on these tasks.

Aden’s Status Report for 2/25

This past week I ordered the remaining parts, which included 14 solenoids and 6 transistors. This puts us at around half of our budget, since solenoids that produce enough force for our purpose are expensive. Moreover, I ordered more parts than we needed to mitigate the risk of parts breaking when putting the project together. Hopefully they will arrive soon so I can assemble all the circuitry required for depressing the piano keys. Meanwhile, I have been helping Nora with the microcontroller portion of the project. We have been working on a proof of concept where we take a converted sheet of music from the OMR, parse it into a Python object using music21, and schedule the notes on the correct GPIO pins that will be connected to our solenoids. Instead of solenoids, we have been using LEDs arranged like piano keys to get a visual of which keys would be pressed by a solenoid at a given time. We successfully got the LEDs to light up in the order of a C scale and a chromatic C scale. Ultimately, we will need to refine the scheduling algorithm further to handle things like rests, chords, successive eighth notes, etc., but getting it to do a basic C scale was a big step. Additionally, when moving from LEDs to solenoids, we will have to account for factors that do not come with LEDs, such as the physical limit on how quickly the solenoids can move back and forth. Overall, aside from being unable to get my hands on the remaining parts, it was a productive week in terms of moving toward our final goal.

Not being able to receive our parts this week did not put me behind my current schedule. Although it would have been nice to get everything here this week, on our Gantt chart, I scheduled next week for building the circuitry we will need for the final version of our project. I would still like to keep some momentum going, so while I’m building the circuitry next week, I will also be thinking and drawing out rough sketches of what will be holding everything in place over the piano.

As I have already stated, next week, I hope to put together the circuitry for the final version of the project. Obviously, it may not be exactly what we need at the end of the semester, so I will not be soldering anything yet. Still, I hope to put something together so that we can integrate the microcontroller and solenoids to get an accurate feel for the timing. Additionally, I would like to aim for a rough design of our final chassis. This does not have to be what we end up going with, but just something to think about while I put together the circuitry.

Team Status Report for 2/25

This week, we made it past the Design Review Presentation and have entered a major portion of the development stage. Our team’s GitHub has been set up, and initial contributions have been made with respect to the OMR parsing scripts and Raspberry Pi microcontroller scripts. Additionally, we discussed and moved forward with ordering the remaining 14 solenoids and 6 NMOS transistors to meet our hardware specifications.

Integration of the accompanyBot apparatus with the Raspberry Pi may have a hidden difficulty. Through Nora’s initial GPIO code, we were able to see that LEDs and smaller components respond more quickly than the bulkier inductance-based solenoids. This may affect our ability to reach the requirement of 6 key presses per second. To mitigate the risk of not meeting this requirement, we may need to rescope the threshold metric so that the hardware is able to meet it. No other changes were made to our design and requirements.

Teaming

One strategy that we have employed for working together effectively is maintaining constant communication. Apart from our Slack channel, we have a group chat where we discuss issues and different ideas for our respective aspects of the project. For example, this past week we made sure to get one another’s approval before ordering the remaining parts for our build. We also meet up at least once a week outside of class time to update each other on our individual progress, write our team reports together, and work on any deliverables that we aimed to complete that week. Furthermore, we have made the following adjustments to fill in gaps related to challenges that have come up:

  • Rearrangement of scheduled tasks – Due to the introduction of music21 to aid in our scheduling process, we adjusted the work assignments so that more time could be allocated to familiarizing ourselves with the library. Additionally, Rahul joined Nora in setting up the notes scheduler algorithm.
  • Waiting for parts to arrive – While the remaining solenoids and transistors are being shipped, Aden has been helping Nora with her microcontroller scheduling tasks. This was anticipated and accounted for before ordering parts.

Rahul’s Status Report for 2/25

I delivered my team’s design review presentation this week and, overall, conveyed the design requirements and status of our project well. Analyzing the feedback, I see we need to bring more of the use-case requirements to light. I will work with the team to ensure Nora emphasizes the use-case metrics in our final presentation.

Last week, I talked about the preliminary code skeleton, which I have since expanded upon. The PowerShell script that calls Audiveris now executes in the foreground to signal the completion of the OMR job. Following this, I modified the Python script to unzip the generated MXL file after the PowerShell job completes. For testing purposes, I also added a function that runs the whole pipeline: reading music, unzipping, passing the XML into Python data structures, and then playing the music data through the computer speakers. This was possible after doing more research into the music21 library and understanding the respective formats and syntaxes.
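
Since an .mxl file is just a zip archive containing the MusicXML, the unzipping step needs nothing beyond the standard library. A minimal sketch (file and directory names are hypothetical):

```python
import zipfile

def unzip_mxl(mxl_path: str, out_dir: str):
    """Extract a compressed MusicXML (.mxl) archive and return its member names."""
    with zipfile.ZipFile(mxl_path) as zf:
        zf.extractall(out_dir)
        return zf.namelist()
```

Per the compressed MusicXML format, the archive also contains a META-INF/container.xml that points at the main score file, which is useful for locating the XML to parse.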

I have also gained a better understanding of how to create our note scheduling algorithm. Once music21 has loaded the MusicXML file into a Stream, it separates the notes into Parts; for piano music, these correspond to the bass clef and treble clef. Within each of these Parts I can access an array of Measures, each of which contains an array of notes, rests, or chords (which are themselves essentially arrays of notes). More work will have to be done during integration to set up our own player that lines each note up with the appropriate solenoid, but otherwise, whatever note(s) the application is sitting on in the music21 structure should direct some solenoid to be on via a GPIO signal.

Our team has also migrated code to GitHub. All of my contributions are pushed to my forked copy of the team repo. This allows us to verify modifications by inspecting each other’s commits before merging. Overall, I am on schedule. My task for the week was to “modify XML/MIDI output to integration specs”, and I accomplished this with my preliminary music21 code. Next up, I will diagram the front end layout for the application. I think if time permits (though I probably will dedicate the rest of my time to writing up the design report with my team) I should also research what framework will be best to implement the application in. At the moment, pygame seems like a reasonable choice to meet our design requirements (especially the 150 ms latency time).



Nora’s Status Report for 2/18

This week I received the Raspberry Pi 4 from ECE Receiving. After picking it up, I worked on setting it up so that we could control and program it. Since we didn’t have a wired keyboard, I installed the VNC Viewer application to connect to the RPi remotely over its IP address, which let us open the RPi’s desktop and type inputs to it. After seeing the full capabilities of the RPi, we considered migrating all of the code, including the UI and the OMR Python application, onto the RPi to reduce latency when starting and stopping. However, given our current toolchain, we require a Windows device to run Audiveris, so we decided the microseconds to milliseconds we would save from integrating everything onto one device were not worth the added effort of setting up a new environment and OS on the RPi.

On Saturday, I worked with Aden on testing the exploratory solenoids and transistors that we bought. I wrote a simple Python file using a basic GPIO library to set pins high so that we could test the power switching. The output voltage from the RPi was above the necessary threshold voltage of the MOSFET, so the power switching was quite seamless.

One challenge we encountered when testing the solenoids was that our initial code using the sleep function from the time library could only get the solenoids to depress and retract about four times a second (video linked here), which is below the six times a second target frequency in our use-case requirements. Although sleep accepts fractional seconds, its actual resolution and overhead depend on the OS scheduler, which is one possible source of the bottleneck. I will be working on installing and using the pigpio library so that we can have microsecond delays instead of relying on the limited sleep function. However, if the bottleneck on the timing ends up being due to the hardware itself, then we will need to rescope that requirement and change the code that limits the max tempo/smallest note value to account for this frequency cap.
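
A small experiment (not from our codebase) can quantify this: measure how much longer time.sleep actually takes than requested. On a desktop OS the overshoot for millisecond-scale requests is often substantial, which is why a dedicated timing library is attractive.

```python
import time

def measure_sleep_overshoot(requested_s: float, trials: int = 20) -> float:
    """Return the average extra delay (seconds) beyond the requested sleep."""
    total_extra = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_s)
        total_extra += (time.perf_counter() - start) - requested_s
    return total_extra / trials

extra = measure_sleep_overshoot(0.001)  # overshoot for a 1 ms request
```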

One big change we addressed in our team status report was the switch to the music21 library for parsing rather than a custom parsing process. This resulted in me being behind schedule, but the new Gantt chart developed for our Design Review Presentation accounts for this change.

Next week I will be looking more into using music21 to extract the notes from the data and converting them into GPIO high/low signals. I was able to convert the test XML file that Rahul generated into a Stream object, as shown in the image below. However, as you can see, there are several nested streams before the list of notes is accessible, so I will need to work on un-nesting the object if I want to be able to iterate through the notes correctly.

As for the actual scheduling process, I will try to explain the vision here. We can keep a count that tracks the current “time” unit we are at in the piece, where an increment of 1 is 1 beat. This corresponds nicely to the offsets from music21 (i.e., the values in the {curly braces}). We can also calculate the duration of each beat in milliseconds as 60,000/tempo. We will then iterate through the notes; at a given time, if a note is being played, we set its associated GPIO pin high and set the rest of the pins low (which is easily accomplished with batch setting from the pigpio library). We will also need a mapping function that connects each note pitch to a specific GPIO pin.
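
The mapping function mentioned above could be as simple as assigning consecutive pins to consecutive semitones. The base MIDI number and pin numbers below are placeholders for illustration, not our actual wiring:

```python
BASE_MIDI = 60   # hypothetical lowest playable key (middle C)
FIRST_PIN = 2    # hypothetical first GPIO pin in the bank

def pitch_to_pin(midi_number: int) -> int:
    """Map a MIDI note number to a GPIO pin, one pin per semitone."""
    offset = midi_number - BASE_MIDI
    if offset < 0:
        raise ValueError(f"MIDI note {midi_number} is below the playable range")
    return FIRST_PIN + offset
```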

Overall, the classes I have taken that helped me this week are 18-220 and 18-349. Knowledge of transistors and inductors, as well as experience with the lab power supplies, gained from 18-220 helped me when working on the circuitry. Embedded systems skills from 18-349 were very helpful when looking at documentation and datasheets for the microcontroller and circuit components, respectively.

Rahul’s Status Report for 2/18

Since our OMR solution will run on Windows, this week I put some work into setting up the shell scripts and the Python actions to call them. While our main hub application development won’t start for a few weeks, I still wanted to build a skeleton of functionality for calling the OMR without the default Audiveris GUI that could be modified later on. For this I had to learn some features of the PowerShell (.ps1) scripting language by consulting Stack Overflow. Though the syntax is not as kind as bash, the operations remain the same, and I was able to have Python execute the script via the os module. I recalled some libraries from 15-112 for opening file paths through a GUI and decided to incorporate those into the skeleton code, as this will make our UX better come app design time.
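
As a sketch of what invoking such a script from Python can look like (the script name and arguments are placeholders; the report used the os module, and subprocess is shown here as a common alternative):

```python
import subprocess

def build_ps_command(script_path: str, *args: str):
    """Assemble the argument list for running a .ps1 script on Windows."""
    return ["powershell.exe", "-ExecutionPolicy", "Bypass",
            "-File", script_path, *args]

cmd = build_ps_command("run_omr.ps1", "input.pdf")  # hypothetical names
# On a Windows machine one would then run it in the foreground with:
#   subprocess.run(cmd, check=True)
```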

I also spent time preparing for the design review presentation next week, as I will be delivering the presentation on behalf of my team. In the effort to expand sections of our block diagram, I felt it best to segment our project into three phases: a transcription phase, a scheduling phase, and an execution phase.

As will appear in our presentation:

I hope this will provide our audiences some clarity on the uncertainties regarding the technicalities of our project. By doing this, I uncovered that our note scheduling was defined rather weakly and deserves more planning time. As a group, we knew that converting music scores to MusicXML format was the way to go and that the Raspberry Pi could go from there. After generating the XML with Audiveris and trying to move forward with its output, we realized how much extraneous information there is just in the readable XML. This led me to do some digging on open-source XML “condensing” code, just so that it could be organized into data structures that might be more easily accessible and operable by our (to-be-determined) mode of scheduling. Fortunately, I found that MIT has poured years of experience and expertise into developing music21, a Python module for importing music file formats and converting them to data structures that can be easily traversed or manipulated, while also permitting export to different file types or playing the imported source directly (plus, they have awesome documentation). Considering the Raspberry Pi will be switching the solenoids on and off from a Python script, I can foresee having music21 preprocess the XML being an important intermediate step.

In terms of staying on schedule, I needed to configure the OMR to output XML/MIDI. I consider this accomplished, since MIDI was not strictly necessary (plus, I found there are many resources available for XML-to-MIDI conversion). Since music21 will be able to play back our XML, our sound quality testing will be facilitated as well. Next week, I will work with Nora and Aden on formalizing our scheduling to determine most, if not all, of the necessary transformations of the transcribed XML. Hopefully, I may get to writing a portion of the corresponding code.



Team Status Report for 2/18

This past week, we ordered and received solenoids for testing. We also received a Raspberry Pi 4 from the 18-500 inventory. This allowed us to explore the parts and help gather data for metrics for the upcoming Design Review Presentation. During our meeting with Professor Sullivan, we also received feedback on a set of additional requirements that we would need to include during the Design Review.

Principles of Engineering, Science, and Mathematics

From the 7 ABET principles of STEM, we believe this week we incorporated principles 3, 5, 6, and 7. 

Our rationale for these choices is as follows:

(3) Our work towards the upcoming design review presentation involves effectively engaging and communicating to our audience how feasible our project is turning out to be after having already put together some of the pieces.

(5) Every week, we make sure to meet up outside of class at least once to regroup our work and try to help debug or discuss integration strategies for future development based on current progress/knowledge. This week was focused on how we may go about scheduling our newly arrived solenoids for key pressing. 

(6) To make sure we are meeting the quantitative targets for our design, we must gather and analyze data that motivates our design decisions. This week specifically, we used laboratory power supplies and testing equipment to measure the voltage and current needed to power our solenoids. The data we collected from this experimentation helped us determine which solenoids out of the initial batch we would go with for our final implementation.

(7) Since we are engaging with a lot of new technology on both the hardware and software sides, it was crucial for us to acquire and apply new knowledge when fleshing out our design. For instance, Rahul needed to learn to write powershell scripts, and Nora and Aden figured out thresholding power for the solenoids.

Risk and Risk Mitigation

The main risk for our project is the issue of safely powering multiple solenoids at once. During our testing of the solenoids, we found that the 25 N solenoid required around 0.6 A of current at 10 V, which works out to 6 W to power one solenoid. This was much less power than we initially expected; even in our worst-case scenario with five solenoids powered at once, the total average power would be only about 30 W. However, we were only able to test one solenoid, so having all solenoids drawing current at the same time may cause problems if the additional load results in a total current that does not scale linearly (since our power supply has a max amperage of 5 A). To mitigate this risk, we are willing to decrease our max number of simultaneous solenoids to 3.
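
A quick arithmetic check of these figures, assuming current scales linearly with the number of solenoids:

```python
# Values from the measurements above; linear scaling is an assumption.
VOLTS = 10.0
AMPS_PER_SOLENOID = 0.6
MAX_SIMULTANEOUS = 5
SUPPLY_MAX_AMPS = 5.0

power_each = VOLTS * AMPS_PER_SOLENOID                 # ~6 W per solenoid
power_total = power_each * MAX_SIMULTANEOUS            # ~30 W worst case
current_total = AMPS_PER_SOLENOID * MAX_SIMULTANEOUS   # ~3 A, under the 5 A limit
```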

Of the initial order of solenoids, we noticed that one arrived broken, and thus we could not test it. These mechanical components accelerate rather quickly and thus could be susceptible to damage, or may cause harm in cases of malfunction. They also make a lot of noise, which may interfere with the sound of the actual piano. To address these issues, we may need to add padding or modify our circuit to better average out the impact.

Changes Made to Design

The main change we have made to the design is the choice to incorporate the music21 Python library to aid in parsing the music. We chose to go with this library instead of writing our own parser because the MusicXML file generated by Audiveris is quite bulky and contains a lot of extraneous text, whereas the music21 library has a lot of functionality that can aid in scheduling, which, as we have been warned, is a non-trivial task. While this comes at no extra cost budget-wise, it does require alterations to the schedule to include time for learning how to use the library and incorporating it into the project. Our updated Gantt chart is included in the Design Review Presentation slides.

Aden’s Status Report For 2/18

This week I completed everything I aimed to following proposal presentation week. I wanted to order a handful of solenoids to test before deciding which ones we will use to depress the piano keys. Additionally, I ordered a handful of MOSFETs designed to handle high voltage and current, which is perfect for our project since solenoids require a high voltage and draw about one ampere of current. Getting my hands on those parts as soon as possible was my top priority, and since they arrived within the week, we were able to test the solenoids with a power source, our Raspberry Pi, and the MOSFETs I ordered. All the parts worked exactly as planned, and now that we have tested the different solenoids, we have decided to go with the Adafruit 25 N solenoids.

Image of our single MOSFET and solenoid circuit:

As of this week I am on track according to the first iteration of our Gantt chart. I had hoped to get some parts ordered and tested by the end of the week, and that is exactly what I accomplished. Furthermore, I have helped a little with our design presentation and will be the audience for Rahul tomorrow when he practices for the presentation.

Next week, I hope to order the rest of our solenoids and MOSFETs so I can begin building what will hopefully be the final circuitry for our accompanyBot. Additionally, I would like to help Nora develop the Python code that will turn the MOSFETs on and cause the solenoids to move according to the XML file produced by the optical music recognition parser. Lastly, if I have time, I would like to think about what the final structure holding the circuitry and microcontroller will look like and possibly sketch a rough mockup of it.

Finally, I used skills primarily developed in 18-220 throughout the past week, specifically skills relating to principles 5 and 6 enumerated in our team report. 18-220 helped me develop the analytical and team skills that I utilized when working on the project and putting together a basic circuit to power the solenoids.

Rahul’s Status Report for 2/11

I did further research into alternative OMR technology earlier in the week, as I was having trouble building a custom version of Audiveris on Mac. Since the default output is an MXL file and not XML, I wanted to edit the source code to build it to my needs. I figured out all the modifications necessary to make; however, I ran into a dependency issue. Audiveris requires an older version of an OCR (optical character recognition) library called Tesseract. Since my Mac has an M2 chip, it was practically impossible to get a hold of the older version of this software. This was as far as I could get:

I was able to download the relevant JAR files and made sure to specify the classpath to link them, but it seems that I also need the dynamic library files, which will be impossible to get.

This led me to look at an alternative Python-based OMR solution, Oemer, which essentially runs a pretrained ML model on a PDF of music. The simplicity of usage was great; however, runs take a few minutes to complete, and upon converting the XML back to PDF form, I was very dissatisfied with its accuracy on the Charlie Brown example from the Team Status Report (probably around 50%).

Last week, I mentioned how Audiveris was able to run fairly well on Windows, though it was outputting MXL files, which was annoying as they read like compressed binaries. I eventually discovered that these MXL files are just zipped XML files, and unzipping a few kB per page would hardly be expensive for meeting the parsing time requirements that we set.

Eventually, I will write a bash script to run the OMR, callable by the GUI application from our Proposal. The only thing to keep in mind is that it will have to use Windows commands (yikes). This is a sample of what the commands would look like.

Running the OMR:

This is able to generate the MXL in the default output directory (though there is another parameter that can be used to specify the directory). It also produces a log of what was executed:

If you check the time stamps of the log, you will see this took roughly 11 seconds to parse the single page, which is very reasonable and should not be too cumbersome for our end user.

Previously, I was solely running the OMR from the Audiveris GUI, which, though pretty, would not be ideal for our pipeline app.

Audiveris GUI build:

Next week I will integrate the file generation and unzipping into a preliminary version of the bash script mentioned earlier. I also hope to test the OMR on more music scores to come up with a numeric metric for comparison with our goals. My current progress is good and is on schedule.

Aden’s Status Report For 2/11

This week, I spent a significant amount of my time creating, editing, and preparing our proposal presentation. I presented on Monday and look forward to reviewing the feedback I will receive in the next few days. I felt like I did a decent job, but I know that with time and experience, the nerves will go away, and I will be able to present much more fluidly. My group and I also appreciated the questions from other groups and have taken them into consideration.

Aside from preparing for the presentation, I have also been looking into possible solenoids to use for the actuator part of our accompanyBot. We have selected two 25 N solenoids and one 5 N solenoid to test. After looking through previous projects with ambitions similar to ours, we have determined that solenoids with a higher force output, like 25 N, would likely be better for pressing down the piano keys. Unfortunately, they are more expensive, so we would also like to order a 5 N solenoid to verify that it does not work as effectively before buying all 25 N solenoids.

Furthermore, I have done some preliminary research into how we will manage the power required for our system and what else will likely be necessary for the circuitry component of the project.

To conclude, I am currently on track with the Gantt chart presented in our project proposal. I hope to be more productive in the coming weeks as I aim to talk with our TA and order some parts at the start of next week. Hopefully, our parts ship fast, and I can begin testing and determining which solenoid we should use; however, if that is not the case, I plan on helping Nora develop and implement a plan for the microcontroller component of the project as we wait for parts.