Team Status Report for 12/07/2024

See Shravya’s individual report for more details on UART debugging progress; in summary, while all UART-related code has been written and running for about a week, debugging is still underway and taking much longer than estimated. This bottlenecks any accuracy and latency testing Shravya can conduct with the solenoids playing a song fed in from the parser (the solenoids’ accuracy and latency behave as expected when playing a hardcoded dummy sequence, though). Shravya hopes to have a fully working implementation by Monday night, so there is ample time to show functionality in the poster and video deliverables, and to conduct formal testing after that. She has arranged to meet with a friend who will help her debug on Monday.

Testing related to hardware:

As a recap, we have a fully functional MIDI-parsing script that is 100% accurate at extracting note events. We are also able to control our solenoids with hardcoded sequences. The final handshaking that remains is connecting the parsing script to the firmware so the solenoids can actuate based on any given song. Once the UART bugs are resolved, we will feed in manually coded MIDI files that encompass different tempos and patterns of notes. We will observe the solenoid output and track the pattern of notes played to calculate accuracy, which we expect to be 100%.

  • During this phase of the testing, we will also audio record the output with a metronome playing in the background. We will manually set the timestamps of each metronome beat and solenoid press, and use those to calculate latency. 
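
As a rough illustration of that latency calculation, here is a minimal sketch; the timestamp lists, nearest-beat pairing, and variable names are assumptions rather than our finalized procedure:

```python
# Hypothetical latency calculation from manually logged timestamps (in seconds).
# Assumes each solenoid press is intended to land on its nearest metronome beat.
beat_times = [0.00, 0.50, 1.00, 1.50, 2.00]    # metronome beats at 120 BPM
press_times = [0.03, 0.55, 1.02, 1.54, 2.06]   # observed solenoid actuations

latencies = []
for press in press_times:
    nearest_beat = min(beat_times, key=lambda beat: abs(press - beat))
    latencies.append(press - nearest_beat)

mean_latency = sum(latencies) / len(latencies)
worst_latency = max(latencies, key=abs)
print(f"mean latency: {mean_latency * 1000:.1f} ms, worst: {worst_latency * 1000:.1f} ms")
```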

Some tests that have been completed are overall power consumption and ensuring functionality of individual circuit components:

  • Using a multimeter, we measured the current draw of the solenoids under the three possible actuation scenarios. As expected, maximum power consumption occurs when all solenoids in a chord are actuated simultaneously, but even then we stay just under our expected power limit of 9 watts.
  • To ensure the functionality of our individual circuit components, we conducted several small-scale tests. Using a function generator, we applied a low-frequency signal to control the MOSFET and verified that it reliably switched the solenoid on and off without any issues. For the flyback diode, we used an oscilloscope to measure voltage spikes at the MOSFET drain when the solenoid was deactivated. This allowed us to confirm that the diode sufficiently suppressed back EMF and protected the circuit. Finally, we monitored the temperature of the MOSFET and solenoid over multiple switching cycles to ensure neither component overheated during operation.

Shravya’s MIDI parsing code has been verified to correctly parse any MIDI file, whether generated by Fiona’s UI or by external means, and it handles the edge cases (rests and chords) that previously caused trouble.

Testing related to software:

Since software integration took longer than expected, we are still behind on formal software testing. Fiona is continuing to debug the software and plans to start formal testing on Sunday (see more in her report). For a reminder of our formal testing plans, see: https://course.ece.cmu.edu/~ece500/projects/f24-teamc5/2024/11/16/team-status-report-for-11-16-2024/. We are concerned that we may have to narrow these plans somewhat, specifically by not testing on multiple faces, because many people are busy with finals, but we will do our best to get a complete picture of the system’s functionality. One change we know for certain is that our ground truth for eye-tracking accuracy will be based on which button the user is directed to look at rather than on camera playback, for simplicity and to reduce testing error.

Last week, Peter did some preliminary testing on the accuracy of the UI and eye-tracking software integration in preparation for our final presentation, and the results were promising. Fiona will continue that testing this week, and hopefully will have results before Tuesday in order to include them in the poster.

Fiona’s Status Report for 12/07/2024

Book-Keeping

I made a list of the final elements we have to wrap up in the last two weeks of the semester, and met with Shravya and Peter to assign tasks.

New Software Functionalities

I adjusted the software so that the user can open an existing composition without opening an external window. To do so, I implemented logic where the program opens the next available file and loops back to the first once it reaches the end of the songs in the folder. This approach has the drawback that the user may have to cycle through many compositions to reach the one they want, but it is accessible and simple, two of the most important foundations of our UI.
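
A minimal sketch of that file-cycling logic might look like the following (the folder name and function name are illustrative assumptions, not the exact implementation):

```python
import os

COMPOSITIONS_DIR = "compositions"  # assumed folder holding the user's MIDI files

def next_composition(current_file):
    """Return the next MIDI file in the folder, wrapping back to the first at the end."""
    files = sorted(f for f in os.listdir(COMPOSITIONS_DIR) if f.endswith(".mid"))
    if not files:
        return None
    if current_file not in files:
        return files[0]
    return files[(files.index(current_file) + 1) % len(files)]
```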

Another quick fix, at Peter’s suggestion, was to give the buttons different colors so users can differentiate between them and memorize the commands more easily. Peter also suggested a delay between calibration and the start of eye-tracking so the user can orient themselves to the UI, so I added a short delay there as well.

I also created a separate configuration file in which users can easily adjust the eye-tracking sampling delay, gain, and number of confirmation iterations, so they can tune those settings to best suit them.
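
For illustration, that file could be as simple as a small settings module along these lines; the names and default values here are assumptions, not the actual file:

```python
# eye_tracking_settings.py -- hypothetical user-tunable eye-tracking parameters
SAMPLE_DELAY_MS = 150        # delay between consecutive gaze samples
CONFIRM_ITERATIONS = 5       # consecutive samples on a button needed to confirm
GAIN = 1.0                   # scaling applied to raw gaze coordinates
POST_CONFIRM_DELAY_MS = 300  # extra pause after a command is confirmed
```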

I had previously identified a bug in which only the first page of sheet music would appear on the secondary UI, even if the cursor was on another page. I therefore added two more buttons to the screen so the user can move back and forth between the pages of the sheet music. I verified that this works with sample compositions longer than one page from this repository [1].

I also made it so that the rest and stack buttons would be highlighted when they were selected but before they were performed (so the user is aware of them).

Finally, since I have had so much trouble highlighting the cursor location on the sheet music, mainly because the drawn position of any given note varies with sharps/flats, preceding notes, and so on, I decided to show the current cursor position on the main UI as a number (e.g., “Cursor at note 1.”). This was not our original plan, but I believe it is still a viable way to ensure the user knows where they are in the piece.

Debugging

I fixed a small bug where the C sharp note was highlighted when the high-C note was confirmed and another small bug in which the number of notes did not reset to 0 when a new file was opened.

Then, I did some stress testing to confirm that the logic used to build rests and chords of up to three notes was completely sound. While testing the eye-tracking, I had been running into situations in which the logic appeared to break, but I could not figure out why. To test this functionality more easily, I used a different Python script that runs the command responses on key press rather than eye press (a minimal sketch of such a harness appears after the list below). Here are the edge-case bugs I identified and fixed:

  • When a rest is placed after a single eighth note, the eighth note is extended to a quarter note. I am fairly confident this is actually a bug in the open-source sheet-music generator we are using [1]: when I played the composition in GarageBand, the eighth notes played as expected even though the sheet-music generator was interpreting them as quarter notes. I further verified this with another online sheet-music generator, which also produced the expected value: https://melobytes.com/en/app/midi2sheet. Since I have already been struggling to modify this open-source program to highlight the cursor location, and this is a very specific edge case, I decided it would be a better use of my time to focus on our code rather than fix that bug within the program. I have left a note in the README identifying the bug for users.
  • Removing chords did not work successfully. I fixed this bug and verified the fix did not break single-note removal. I also verified that chords can be removed from anywhere in the piece, not just the end.
  • There were some logic errors with chords meant to have rests before them, which I fixed. I also tightened up the three-note chord logic to avoid bugs, although I had not yet identified any there.
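
As referenced above, here is a minimal sketch of such a key-press harness, assuming a hypothetical handle_command() entry point into the application backend:

```python
import tkinter as tk

# Hypothetical mapping from keys to the commands normally issued by eye-tracking.
KEY_TO_COMMAND = {"q": "quarter_note", "e": "eighth_note", "r": "rest", "s": "stack"}

def handle_command(command):
    # Stand-in for the real backend entry point; here we just log the command.
    print(f"command: {command}")

root = tk.Tk()
root.title("Key-press test harness")

def on_key(event):
    command = KEY_TO_COMMAND.get(event.char)
    if command is not None:
        handle_command(command)

root.bind("<Key>", on_key)
root.mainloop()
```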

Next Week

After double-checking the eye coordinates, in which there have been some discrepancies, I plan to start performing the latency, accessibility, and accuracy tests on the software. As a group, we will work on the poster and video next week, so my primary goal is to finish software testing as soon as possible for those deliverables, but I will also work on other elements of them as needed. Additionally, I will collaborate with Shravya once her UART code is working to integrate our two systems, but I have already set up some code for that, so I do not anticipate it taking too long. For the final report, I will work on the Use-Case & Design Requirements, Design Trade Studies (UI), System Implementation (UI), Testing (Software), and Related Works sections.

References

[1] BYVoid. (2013, May 9) MidiToSheetMusic. GitHub. https://github.com/BYVoid/MidiToSheetMusic

Team Status Report for 11/30/24

It has been two weeks since the last status report. In the first week, we completed the interim demo. Then, we started working on integrating our three subsystems. The eye-tracking to application pipeline is now finished and the application to hardware integration is very close to finished. 

There are still some tasks to complete for full integration: the eye tracking needs to be made more stable, and the CAD model for the solenoid case needs to be completed and 3D-printed. We also suspect there are some issues with the STM32CubeIDE settings when we attempt to integrate UART to dynamically control the solenoids based on parsed MIDI data.

Our biggest risk right now is not finishing testing. Our testing plans are extensive (see last week’s team report), so they will not be trivial to carry out. Since we have a shorter time frame to complete them than expected, we might have to simplify our testing plans, but that would not be optimal.

Fiona’s Status Report for 11/30/2024

Last Week

Interim Demo Prep

On Sunday, I worked on preparing my subsystem for the interim demo on Monday. That required some bug fixes:

  • Ensured the cursor would not go below 0 after removing a note at the 0-th index in the composition.
  • Fixed an issue that caused additional notes added to a chord to be zero-length notes. However, the fix requires all notes in a chord to be the same length, which I will want to address later.
  • Fixed an issue affecting notes written directly after a chord, in which they were stacked onto the chord even if the user didn’t request it and also could not be removed properly.

I also added some new functionality to the application in preparation for the demo.

  • Constrained the user to chords of two notes and gave them an error message if they attempt to add more.
  • Allowed users to insert rests at locations other than the end of the composition.

Then, before the second interim demo, I fixed a small bug in which the cursor location did not reset when opening a new file.

This Week

Integrating with Eye-Tracking

I downloaded Peter’s code and its dependencies [1][2][3][4] to ensure it could run on my computer. I also had to downgrade my Python version to 3.11.5 because we had been working in different versions of Python. Fortunately, I did not immediately notice any bugs in my code after the downgrade.

In order to integrate the two, I had to adjust the eye-tracking program so that it did not display the screen capture of the user’s face and so that the coordinates would be received on demand rather than continuously. Also, I had to remove the code’s dependence on the wxPython GUI library, because it was interfering with my code’s use of the Tkinter GUI library [5][6].

The first step of integration was to draw a “mouse” on the screen indicating where the computer thinks the user is looking [7][8].

Then, I made the “mouse” actually functional such that the commands are controlled by the eyes instead of the key presses. In order to do this and make the eye-tracking reliable, I made the commands on the screen much larger. This required me to remove the message box on the screen, but I added it back as a pop-up that exists for ten seconds before deleting itself.

Additionally, I had to change the (backend) strategy for placing the buttons on the screen so that I could identify the coordinates of each command for the eye-tracking. To identify the command coordinates and (hopefully) ensure the calculations hold up for screens of different sizes, I had the program compute the positions from the screen’s width and height at runtime.
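
A minimal sketch of that kind of calculation, assuming a single row of equally sized command buttons (the layout and names are illustrative, not our actual arrangement):

```python
import tkinter as tk

root = tk.Tk()
screen_w = root.winfo_screenwidth()
screen_h = root.winfo_screenheight()

# Hypothetical layout: one row of commands across the top quarter of the screen.
COMMANDS = ["C", "D", "E", "F", "G", "A", "B", "rest"]
button_w = screen_w // len(COMMANDS)
button_h = screen_h // 4

# Bounding box (x0, y0, x1, y1) for each command, used to match gaze coordinates.
command_bounds = {
    name: (i * button_w, 0, (i + 1) * button_w, button_h)
    for i, name in enumerate(COMMANDS)
}
```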

I am still tinkering with the number of iterations and the time between each iteration to see what is optimal for accurate and efficient eye tracking. Currently, five iterations with 150ms in between each seems to be relatively functional. It might be worthwhile to figure out a way to allow the user to set the delay themselves. Also, I currently have it implemented such that there is a longer delay (300ms) after a command is confirmed, because I noticed that it would take a while for me to register that the command had been confirmed and to look away.
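
Roughly, the confirmation logic amounts to requiring several consecutive gaze samples on the same button. Here is a simplified blocking sketch, assuming a get_gaze_point() function from the eye-tracking code and a command_bounds dictionary like the one sketched above (both names are placeholders, not the actual implementation):

```python
import time

CONFIRM_ITERATIONS = 5  # consecutive matching samples required
SAMPLE_DELAY_S = 0.150  # delay between gaze samples

def command_at(x, y, command_bounds):
    """Return the command whose bounding box contains (x, y), or None."""
    for name, (x0, y0, x1, y1) in command_bounds.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def wait_for_command(get_gaze_point, command_bounds):
    """Block until the same command is hit CONFIRM_ITERATIONS times in a row."""
    candidate, streak = None, 0
    while streak < CONFIRM_ITERATIONS:
        x, y = get_gaze_point()
        hit = command_at(x, y, command_bounds)
        if hit is not None and hit == candidate:
            streak += 1
        else:
            candidate, streak = hit, (1 if hit is not None else 0)
        time.sleep(SAMPLE_DELAY_S)
    return candidate
```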

Bug Fixes

I fixed a bug in which the note commands were highlighted when they shouldn’t have been. I also fixed a bug that caused the most recently seen sheet music to load on start up instead of a blank sheet music, even though that composition (MIDI file) wouldn’t actually be open.

I also fixed some edge cases where the program would exit if the user performed unexpected behavior. Instead, the UI informs the user with a (self-deleting) pop-up window with the relevant error message [9][10]. The edge cases I fixed were:

  • The user attempting to write or remove a note to the file when there was no file open.
  • The user attempting to add more than the allowed number of notes to a chord (which is three)
  • The user attempting to remove a note at the 0-index.

I also fixed a bug in which removing a note did not cause the number of notes in the song to decrease (internally), which could lead to various issues with the internal MIDI file generation.

New Functionality

While testing with the eye-tracking, I realized it was confusing that the piano keys would light up while the command was in progress and then also while waiting for a note length (in the case that pitch was chosen first). It was hard to tell if a note was in progress or finished. For that reason, I adjusted the program so that a note command would be highlighted grey when finished and yellow when in progress.

I also made it such that the user could add three notes to a chord (instead of the previous two) before being cut off, which was the goal we set for ourselves earlier in the semester.

I made it so that the eye-tracking calibrates on start-up of the program [11], and then the user can request calibration again with the “c” key [12]. Having a key press involved is not ideal because that is not accessible, however since calibration automatically happens on set-up, hopefully this command will not be necessary most of the time.

Finally, I made the font sizes bigger on the UI for increased readability [13].

Demo

After the integration, bug fixes, and new functionality, here is a video demonstrating the current application while in use, mainly featuring the calibration screen and an error message: https://drive.google.com/file/d/1dMUQ976uqJo_J2QwzubH3wNez9YMPM8Y/view?usp=drive_link

(Note that I use my physical cursor to switch back and forth between the primary and secondary UI in the video. This is not the intended functionality, because the secondary UI is meant to be on a different device, but this was the only way I could video-record both UIs at once).

Integrating with Embedded System

On Tuesday, I met with Shravya to set up the STM32 environment on my computer and verified that I could run the hard-coded commands Shravya made for the interim demo last week with my computer and set-up.

On Saturday (today), we met again because Shravya had written some Python code for the UART communication. I integrated that with my system so that the parsing and UART could happen on the demand of the user, but when we attempted to test, we ran into some problems with the UART that Shravya will continue debugging without me.

After that, I made a short README file for the entire application, since the files from each subsystem were consolidated.

Final Presentation

I made the six slides for the presentation, which included writing about 750 words for the presentation script corresponding to those slides. I also worked with Shravya on two other slides and wrote another 425 words for those slides.

Next Week

Tomorrow, I will likely have to finish working on the presentation. I wrote a lot for the script, so I will likely need to edit that for concision.

There is still some functionality I need to finish for the UI:

  • Writing the cursor on the sheet music. I spent quite a while trying to figure that out this week, but had a lot of trouble with it.
  • Creating an in-application way to open existing files (for accessibility).
  • Adding note sounds to the piano while the user is hovering over it.
  • Highlighting the “stack notes” and “rest” commands when that option is selected.

And some bug fixes:

  • Handling the case in which there is more than one page of sheet music and determining which page the cursor is on.
  • Double-checking that the chord and rest logic is 100% accurate. I seem to be running into edge cases where stacking notes and adding rests do not work, so I will have to do stress tests to figure out exactly why.

However, the primary goal for next week is testing. I am still waiting on some things from both Peter (eye-tracking optimization) and Shravya (debugging of UART firmware) before the formal testing can start, but I can set up the backend for the formal testing in the meantime.

Learning Tools

Most of my learning during this semester was trial and error. I generally learn best by just testing things out and seeing what works and what doesn’t, rather than by doing extensive research first, so that was the approach I took. I started coding with Tkinter pretty early on and I think I’ve learned a lot through that trial and error, even though I did make a lot of mistakes and have to re-write a lot of code.

I think this method of learning worked especially well for me because I have programmed websites before and am aware of the general standards and methods of app design, such as event handlers and GUI libraries. Even though I had not written a UI in Python, I was familiar with the basic idea. Meanwhile, if I had been working on other tasks in the project, like eye tracking or embedded systems, I would have had to do a lot more preliminary research to be successful.

Even though I didn’t have to do a lot of preliminary research, I did spend a lot of time learning from websites online while in the process of programming, as can be seen in the links I leave in each of my reports. Formal documentation of Tkinter and MIDO were helpful for getting a general idea of what I was going to write, but for more specific and tricky bugs, forums like StackOverflow and websites such as GeeksForGeeks were very useful.

References

[1] https://pypi.org/project/mediapipe-silicon/

[2] https://pypi.org/project/wxPython/

[3] https://brew.sh/

[4] https://formulae.brew.sh/formula/wget

[5] https://www.geeksforgeeks.org/getting-screens-height-and-width-using-tkinter-python/

[6] https://stackoverflow.com/questions/33731192/how-can-i-combine-tkinter-and-wxpython-without-freezing-window-python

[7] https://www.tutorialspoint.com/how-to-get-the-tkinter-widget-s-current-x-and-y-coordinates

[8] https://stackoverflow.com/questions/70355318/tkinter-how-to-continuously-update-a-label

[9] https://www.geeksforgeeks.org/python-after-method-in-tkinter/

[10] https://www.tutorialspoint.com/python/tk_place.htm

[11] https://www.tutorialspoint.com/deleting-a-label-in-python-tkinter

[12] https://tkinterexamples.com/events/keyboard/

[13] https://tkdocs.com/shipman/tkinter.pdf

Team Status Report for 11/16/2024

This week, our team prepared for the interim demo that takes place next week. We met with our advisors, Professor Bain and Joshna, to receive advice on what to work on before our demo. Peter and Fiona also met with a grad student to discuss the eye-tracking implementation. 

Currently, our plan for testing is mostly unchanged; however, it relies on the integration of our subsystems, which has not yet happened. This is why one of our biggest risks right now is running into major issues during integration.

To recap, our current plan for testing is as follows.

  • For the software (user interface and eye-tracking software): We will run standardized tests of different series of commands on multiple users. We want to test with different users of the system in order to cover different parameters, like face shape/size, response time, musical knowledge, and familiarity with the UI. We plan for these tests to cover a range of scenarios, like different expected command responses (on the backend) and different distances and/or times between consecutive commands. We also plan to test edge cases in the software, like the user moving out of the camera range or attempting to open a file that doesn’t exist.
    • Each test will be video-recorded and eye-commands recognized by the backend will be printed to a file for comparison, both for accuracy (goal: 75%) and latency (goal: 500ms).
  • For the hardware (STM32 and solenoids): We will give different MIDI files to the firmware and microcontroller. Like with the software testing, we plan to test a variety of parameters, these include different tempos, note patterns, and the use of rests and chords. We also plan to stress test the hardware with longer MIDI files to see if there are compounding errors with tempo or accuracy that cannot be observed when testing with shorter files. 
    • To test the latency (goal: within 10% of BPM) and accuracy (goal: 100%) of the hardware, we will record the output of the hardware’s commands on the piano with a metronome in the background.
    • Power consumption (goal: ≤ 9W) of the hardware system will also be measured during this test.

We have also defined a new test for evaluating accessibility: we plan to verify that we can perform every command available to the user without using the mouse or keyboard, after setting up the software and hardware. An example of an edge case we would test during this stage is ensuring that improper use, like attempting to send a composition to the solenoid system without the hardware being connected to the user’s computer, does not crash the program but is instead handled within the application, allowing the user to correct their use and continue with eye-commands alone.

Fiona’s Status Report for 11/16/2024

This Week

User Interface Testing [1][2][7]

This week, I began by coding a testing program for the user interface. Since the eye-tracking is not ready to be integrated with the application yet, I programmed the backend to respond to commands from the keyboard. Via this code, I was able to test the functionality of the progress bars and the backend that tests consistent command requests.

In order for the user to pick a command with this program, they have to press the corresponding key five times without pressing any other key in between; pressing a different key resets the progress bar.
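
A minimal sketch of that counting-and-reset behavior with a single progress bar (the real program has one bar per command; names here are illustrative):

```python
import tkinter as tk
from tkinter import ttk

REQUIRED_PRESSES = 5

root = tk.Tk()
progress = ttk.Progressbar(root, maximum=REQUIRED_PRESSES, length=200)
progress.pack(padx=20, pady=20)

state = {"key": None, "count": 0}

def on_key(event):
    if event.char == state["key"]:
        state["count"] += 1
    else:
        # Pressing a different key resets progress toward confirmation.
        state["key"], state["count"] = event.char, 1
    progress["value"] = state["count"]
    if state["count"] >= REQUIRED_PRESSES:
        print(f"confirmed command for key: {state['key']}")
        state["key"], state["count"] = None, 0
        progress["value"] = 0

root.bind("<Key>", on_key)
root.mainloop()
```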

It will hopefully be a quick task to integrate the eye-tracking with the application, based on the foundation of this code.

Here is a short video demonstrating some basic functionality of the program, including canceling commands: https://drive.google.com/file/d/16NyUo_lRSzgLYIwVOHCEuoKJ6CYjiKip/view?usp=sharing.

Secondary UI [3][4][5][6][7]

After that, I also made the integration between the primary and secondary UI smoother by writing a secondary window with the sheet music that updates as necessary.

In the interim, I also included a note about where the cursor is in the message box, since I haven’t yet figured out how to implement that on the image of the sheet music.

Pre-Interim Demo

On Wednesday, I demonstrated my current working code to Professor Bain and Joshna, and they had some suggestions on how to improve my implementation:

  • Highlighting the note length/pitch on the UI after it’s been selected.
  • Electronically playing the note while the user is looking at it (for real-time feedback, makes the program more accessible for those without perfect pitch).
  • Setting up a file-selection window within the program for more accessible use.

I implemented the first suggestion in my code [3][4][6] and attempted to implement the second, but ran into some issues with the Python sound libraries. I will continue working on that issue.

I also decided to leave the third idea for later, because that is a more dense block of programming to do, but it will improve the accessibility of the system so it is important.

(With the new updates to the code, the system currently looks slightly different from the video above, but is mostly the same.)

Eye-Tracking

On Thursday, Peter and I also met with Magesh Kannan to discuss the use of eye-tracking in our project. He suggested we use the MediaPipe library to implement eye-tracking, so Peter is working on that. Magesh offered his help if we need to optimize our eye-tracking later, so we will reach out to him if necessary.

Testing

There are no quantitative requirements that involve only my subsystem; the latency requirements, for example, involve both my application and the eye-tracking, so I will have to wait until the subsystems are all integrated to start those tests.

However, I have been doing more qualitative testing as I’ve been writing the program. I’ve tested various sequences of key presses to view the output of the system and these tests have revealed several gaps in my program’s design. For example, I realized after running a larger MIDI file from the internet through my program that I had not created the logic to handle more than one page of sheet music. My testing has also revealed some bugs having to do with rests and chords that I am still working on.

Another thing I have been considering in my testing is accessibility. Although our official testing won’t happen until after integrating with the eye-tracking, I have been attempting to make my application as accessible as possible during design so we don’t reveal any major problems during testing. Right now, the accessibility issue I need to work on next is opening files from within the program, because using an exterior file pop-up necessitates a mouse press.

Next Week

The main task for next week is the interim demo. On Sunday, I will continue working to prepare my subsystem (the application) for the demo, and then on Monday and Wednesday during class, we will present our project together.

The main tasks after that on my end will be continuing to work on integrating my application with Shravya’s and Peter’s work, and also to start working on testing the system, which will be a large undertaking.

There are also some bug fixes and further implementation I need to keep working on, such as the issues with rests and chords in the MIDI file and displaying error messages.

References 

[1] tkinter.tkk – Tk themed widgets. Python documentation. (n.d.). https://docs.python.org/3.13/library/tkinter.ttk.html

[2] Keyboard Events. Tkinter Examples. (n.d.). https://tkinterexamples.com/events/keyboard/

[3] Overview. Pillow Documentation. (n.d. ). https://pillow.readthedocs.io/en/stable/

[4] Python PIL | Image.thumnail() Method. Geeks for Geeks. (2019, July 19). https://www.geeksforgeeks.org/python-pil-image-thumbnail-method/

[5] Tkinter Toplevel. Python Tutorial. https://www.pythontutorial.net/tkinter/tkinter-toplevel/

[6] Python tkinter GUI dynamically changes images. php. (2024, Feb 9). https://www.php.cn/faq/671894.html

[7] Shipman, J.W. (2013, Dec 31). Tkinter 8.5 reference: a GUI for Python. tkdocs. https://tkdocs.com/shipman/tkinter.pdf

Team Status Report for 11/09/2024

This week our team worked on preparing for the interim demo, which is the week after the next. We all have individual tasks to finish, but have also begun work on integrating those tasks. 

Shravya began this week with MIDI-parsing code capable of accurately parsing simpler compositions (cross-verified against Fiona’s MIDI compositions), and she has since identified some edge cases. These include rest-notes (which she has successfully resolved) and overlapping notes (which she is still working on). She worked with Peter to ensure that all components of the solenoid control circuitry are functioning properly.

Fiona worked more on debugging the code to make MIDI files. She also worked on some other miscellaneous tasks (see report).

We are conversing with Marios Savvides and Magesh Kannan, CV and eye-tracking biometrics experts referred by Professor Bain, for guidance on our eye-tracking system.

Right now, our biggest risk as the interim demo approaches is that we discover issues while integrating the eye-tracking, the application and the physical hardware. We are hopefully well-prepared for this, because we have been working to coordinate along the way.

Fiona’s Status Report from 11/09/2024

This Week

MIDI Backend and User Interface [1][2]

I realized that my MIDI code did not cover every edge case. I updated it to make it possible to add chords to the composition and delete them from it. I also added functionality to handle cases in which the sheet music extends beyond one page, and I updated the code for editing pre-existing MIDI files so that they are not overwritten every time they are opened and edited.
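
For reference, a chord in a MIDI track is just several note_on messages sharing a start time, so adding one with Mido could look roughly like the sketch below; the pitches, duration, and file setup are illustrative assumptions, not my actual code:

```python
from mido import Message, MidiFile, MidiTrack

TICKS_PER_BEAT = 480
mid = MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = MidiTrack()
mid.tracks.append(track)

# A chord is several note_on messages with time=0 deltas so they start together;
# the first note_off carries the duration and the rest end at the same instant.
chord = [60, 64, 67]  # C4, E4, G4 (a C-major triad), one beat long
for pitch in chord:
    track.append(Message("note_on", note=pitch, velocity=64, time=0))
for i, pitch in enumerate(chord):
    track.append(Message("note_off", note=pitch, velocity=64,
                         time=TICKS_PER_BEAT if i == 0 else 0))

mid.save("chord_example.mid")
```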

After that, I had to update my final UI to allow for chords (a new button was necessary), see below.

I also identified some errors with the logic having to do with the rest-notes in the composition, but I wasn’t able to identify what the issue was after some debugging, so I will have to continue working on that next week.

Secondary Device

I am currently using Sidecar [3] to make my iPad the secondary device to display the sheet music of the composition during testing, but it is not very accessible because it requires moving the computer mouse to update the image and also requires a recent version of an iPad and a Mac, so I am considering other options. 

Miscellaneous Tasks

There were a few smaller miscellaneous tasks I worked on this week:

I started researching some different eye-tracking options on GitHub, because our previous option hasn’t worked out yet. I’ve identified some options, but haven’t started testing with or integrating with any of them.

I attempted to figure out how to highlight notes in the sheet music (to indicate to the user where their cursor is) and also attempted to fix some alignment issues with the sheet music, but am still working on both of those tasks [4].

Next Week

Next week we will continue preparing for the interim demo, so my main goal will be integrating all of the systems together and continuing to debug so that we can demonstrate a working system.

There is also a lot of remaining functionality to wrap up with the backend code (coordinates to commands and UI updates, mostly), so I will work on that as well. 

References

[1] Overview. Mido – MIDI Objects for Python. (n.d.). https://mido.readthedocs.io/en/stable/index.html

[2] Standard MIDI-File Format Spec. 1.1, updated. McGill. http://www.music.mcgill.ca/~ich/classes/mumt306/StandardMIDIfileformat.html

[3] Apple. (n.d.). Use an iPad as a second display for a mac. Apple Support. https://support.apple.com/en-us/102597 

[4] BYVoid. (2013, May 9) MidiToSheetMusic. GitHub. https://github.com/BYVoid/MidiToSheetMusic

Fiona’s Status Report for 11/02/2024

This Week

Ethics

On Sunday, I met with my group to discuss the ethical considerations of our project, and discussed again with our classmates on Monday for an outside perspective.

Secondary UI (Sheet Music Updates)

I downloaded the code I identified last week as being a candidate for MIDI to sheet music conversion [1][2], and also Mono, the framework the author used [3]. I had to make one simple edit to the makefile in order for the program to run, but otherwise the code was compatible with the most current version of Mono, despite being 11 years old.

From there, it was fairly straightforward to implement the functionality to convert the MIDI file to sheet music on button press. To finish this task, I added code to update the image in the system on each edit by the user. For saving files and running the executable, I used the os module in Python [4].
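
As a rough illustration, the conversion call could be as simple as invoking the converter through the os module, along these lines; the executable name and argument order are assumptions, since the actual call depends on how the MidiToSheetMusic build is configured:

```python
import os

def render_sheet_music(midi_path, output_png):
    # Hypothetical invocation of the Mono-built converter on the saved MIDI file;
    # the executable name and argument order are assumptions.
    exit_code = os.system(f"mono MidiToSheetMusic.exe {midi_path} {output_png}")
    if exit_code != 0:
        print("sheet music conversion failed")
```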

Below is an example of what the sheet music output might look like for a simple sequence of notes.

Seeing the sheet music in front of me made me realize that my program had been saving the MIDI notes in the wrong way. The note pitches appeared to be correct, but the lengths were not, and rests appeared that I had not placed.

MIDI File Updates [9]

Because of this, I had to go back into my code from last week and identify the issue. I examined the Mido messages from one of the example MIDI files in BYVoid’s repository [1] against the sheet music it generated, and discovered I had misunderstood the Mido “time” parameter; I thought it was an absolute position in the piece, but it is actually a delta relative to the previous event (so a note’s time is 0 unless it is preceded by a rest). After fixing that error in my code, both time and pitch appear to be correct.
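
To illustrate the corrected understanding: Mido’s time is a delta in ticks from the previous message, so a rest shows up as a nonzero delta on the following note_on. A minimal sketch (the tick values assume 480 ticks per beat and are only examples):

```python
from mido import Message, MidiFile, MidiTrack

mid = MidiFile(ticks_per_beat=480)
track = MidiTrack()
mid.tracks.append(track)

# Quarter note starting immediately: its note_on has time=0.
track.append(Message("note_on", note=60, velocity=64, time=0))
track.append(Message("note_off", note=60, velocity=64, time=480))

# Quarter note preceded by a quarter rest: the rest appears as a nonzero
# delta on the next note_on, measured from the previous message.
track.append(Message("note_on", note=62, velocity=64, time=480))
track.append(Message("note_off", note=62, velocity=64, time=480))
```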

I also added the functionality to create a new MIDI file from within the UI [4], which will allow the user to create multiple compositions without opening and closing the application. Additionally, I coded a function that allows the user to open an existing MIDI file from the UI, using a Tkinter module, filedialog [5].

Finally, I added the code to move the cursor backwards and forwards in the MIDI file, inserting and deleting notes at those locations. This marks the completion of the MIDI file responses task on the Gantt chart.

Frontend

Next, I started working on finalizing the frontend code for the project. Previously, I had been using a UI for testing responses with buttons, but we will also need a final UI for the eye-tracking functionality, so I started writing that, see below [6][7].

Among other changes, I added progress bars, which is a widget that the Tkinter library offers [8], above each command in order to make it easier to add the functionality to show the user how long they have to look at a command.

Right now the UI is pretty simple; I will ask my group if they think we should incorporate any colors or other design facets into it. I also would like to test the UI on other machines to ensure that the size-factor is not off on different screens.

UI Responses

In the UI, I set up some preliminary functionality for UI responses, like a variable string input to the message box, and each of the progress bars. I did not make other progress on the UI responses.

Next Week

I am still a little behind in the Gantt chart, but making good progress. From the previous tasks, I still need to complete:

  • the integration between the MIDI to sheet music code and our system, such that the current cursor location is marked in the sheet music. This will require me to update the MIDI to sheet music code [1].
  • the identification of the coordinate range of the UI commands, which can be done now that I have a final idea of the UI.
  • the coordinates to commands mapping program, and loading bars. I’ve outlined some basic code already, but I will need to start integrating with the eye-tracking first to make sure I’ve got the right idea of it.
  • error messages and alerts to the user.

In the Gantt chart, my task next week is to integrate the primary and secondary UI, so I will work on that.

Another thing I want to do next week which is not on the Gantt chart is organize my current code for readability and style, and upload it to a GitHub repository, since there are more and longer files now.

References

[1] BYVoid. (2013, May 9) MidiToSheetMusic. GitHub. https://github.com/BYVoid/MidiToSheetMusic

[2] Vaidyanathan, M. Convert MIDI Files to Sheet Music. Midi Sheet Music. (n.d.). http://midisheetmusic.com/

[3] Download. Mono. (2024). https://www.mono-project.com/download/stable/

[4] os – Miscellaneous operating system interfaces. python. (n.d.). https://docs.python.org/3/library/os.html

[5] Tkinter dialogs. Python documentation. (n.d.). https://docs.python.org/3.13/library/dialog.html

[6] Shipman, J.W. (2013, Dec 31). Tkinter 8.5 reference: a GUI for Python. tkdocs. https://tkdocs.com/shipman/tkinter.pdf

[7] Graphical User Interfaces with Tk. python. (n.d.). https://docs.python.org/3.13/library/tk.html

[8] tkinter.tkk – Tk themed widgets. Python documentation. (n.d.). https://docs.python.org/3.13/library/tkinter.ttk.html

[9] Overview. Mido – MIDI Objects for Python. (n.d.). https://mido.readthedocs.io/en/stable/index.html

Fiona’s Status Report for 10/26/2024

This Week

This week, I worked on the assigned reading and write-up for the ethics assignment.

MIDI File Responses

I also started working on the backend for the MIDI file saving and editing. I first tested with the Mido Python library [1], which has functions for MIDI file editing. Using the frontend framework I made with Tkinter a few weeks ago, I was able to write a sequence of note pitches and lengths into a MIDI file by button press. In order to do so, I found a MIDI file spec that details the way the files are formatted and how to designate pitch and tempo [2].
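
As a rough sketch of the kind of Mido usage involved (the tempo, pitch, and duration values below are arbitrary examples, not what the buttons actually write):

```python
from mido import Message, MetaMessage, MidiFile, MidiTrack, bpm2tempo

mid = MidiFile(ticks_per_beat=480)
track = MidiTrack()
mid.tracks.append(track)

# Set the tempo (a meta message), then write one quarter note of middle C.
track.append(MetaMessage("set_tempo", tempo=bpm2tempo(120), time=0))
track.append(Message("note_on", note=60, velocity=64, time=0))
track.append(Message("note_off", note=60, velocity=64, time=480))

mid.save("example.mid")
```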

One issue I am struggling with right now for the MIDI file saving backend is determining the best way to allow the user to edit the MIDI file name and location, while still keeping our UI simple and easy to use. I will continue to work on that problem.

Secondary UI

This week, I also identified some code we may be able to use in order to convert the MIDI file data into sheet music [3]. This code is a simplified version of another project [4], but I might try to simplify it further for our project.

User Interface

I made some minor edits to the user interface framework this week. For the time being, I am continuing to code the UI using the Python library Tkinter, but if I run into trouble later on, I plan to try using Django.

Mapping Coordinates to the UI

The last thing I started working on this week was the code that maps eye-coordinates to command responses on the UI. Previously, I had been testing UI responses with a button press, since we have not finished implementing eye tracking yet, but once we do, I want to have some code finished to connect the eye-tracking to the UI responses.

Next Week

The MIDI file saving is not complete: I still need to add file saving/opening functionality, and the functionality to move the cursor backwards and forwards in the piece.

Next week, I will also begin working on the UI responses (e.g., something to indicate that the user has pressed a button, error messages, etc.), which means I will need to do more work to finalize the UI.

I also plan to begin integrating the MIDI to sheet music conversion code with ours next week and continue working on mapping coordinates to responses.

I am still behind schedule this week, but I have made considerable progress in the areas that I wanted to.

References

[1] Overview. Mido – MIDI Objects for Python. https://mido.readthedocs.io/en/stable/index.html

[2] Standard MIDI-File Format Spec. 1.1, updated. McGill. http://www.music.mcgill.ca/~ich/classes/mumt306/StandardMIDIfileformat.html

[3] BYVoid. (2013, May 9) MidiToSheetMusic. GitHub. https://github.com/BYVoid/MidiToSheetMusic

[4] Vaidyanathan, M. Convert MIDI Files to Sheet Music. Midi Sheet Music. http://midisheetmusic.com/