Peter’s Status Report from 11/30/2024

Week Nov 17-23.

Using OpenCV to capture frames and pass them into a Mediapipe function that tracks the user's irises, I was able to create a basic gaze tracker that estimates where the user is looking on a computer screen based on the location of their irises. In Figure 1, the blue dot (located to the left of the head) shows the user the estimated location of their gaze on the screen. Currently, the gaze tracking has two main configurations. Configuration 1 provides precise gaze tracking but requires head movement to make up for the small movements of the eyes. Configuration 2 requires only eye movement, but the head must be kept impractically still and the estimates are jittery. To improve Configuration 2, the movement of the head needs to be taken into account when calculating the user's estimated gaze.
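
For reference, here is a minimal sketch of this frame-to-landmark pipeline; the screen resolution, the choice of iris landmark index, and the naive iris-to-screen mapping are placeholder assumptions rather than the exact code described above.

    import cv2
    import mediapipe as mp

    # FaceMesh with refine_landmarks=True adds iris landmarks (indices 468-477).
    face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
    SCREEN_W, SCREEN_H = 1920, 1080   # placeholder screen resolution

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            iris = results.multi_face_landmarks[0].landmark[468]   # one iris-center landmark
            # Naive mapping: treat the normalized iris position as a screen fraction.
            # The real tracker also needs calibration and head-pose compensation.
            gaze_x, gaze_y = int(iris.x * SCREEN_W), int(iris.y * SCREEN_H)
            # gaze_x, gaze_y would be handed to the application / drawn as the blue dot.
            h, w = frame.shape[:2]
            cv2.circle(frame, (int(iris.x * w), int(iris.y * h)), 3, (255, 0, 0), -1)
        cv2.imshow("gaze estimate", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()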

 

Figure 1: Mediapipe gaze-tracker

 

Week Nov 24-30.

Tested the accuracy of Fiona's implementation of the UI and eye-tracking integration. To test the accuracy of the current gaze-tracking implementation in Configuration 1 and Configuration 2, we looked at each command twice; if the system did not correctly identify a command, I kept trying to select the intended command until the correct one was selected. Using this method, Configuration 1 had precise control and 100% accuracy. Configuration 2 had 89.6% accuracy in testing, but its gaze estimation was very jittery, making it difficult to feel confident about the cursor's movements, and the user's head has to be kept too still to be practical for widespread use. Ideally, the user should only need to move their eyes for gaze tracking. As a result, the eye tracking will be updated next week to take head movement into consideration, which should make the gaze estimate smoother.

 

Tools for learning new knowledge

Mediapipe and OpenCV are both new to me. To get comfortable with these libraries, I read the online Mediapipe documentation and followed different online video tutorials. Through these tutorials, I was able to discover applications of the Mediapipe library functions that were useful for my implementation.

 

This Week

This week, I hope to complete the eye tracking, taking head movements into consideration to make what is currently being referred to as Configuration 2 a more viable solution.

Shravya’s Status Report for 11/30/2024

This week, I successfully handled the MIDI parsing of chords and rest notes, and I can now say that the MIDI parsing is fully functional. I also completed the core implementation of UART communication, which required updates to both my Python code and my STM32Cube firmware. In Python, I added functionality to parse MIDI files and send formatted commands (e.g., notes, chords, and rests) to the STM32 over UART. On the STM32 side, I integrated UART reception code to parse incoming data and trigger the appropriate solenoid actuation logic. I met with Fiona today (Saturday) to try to get this all working on her laptop, since that is the local machine we will be using in our final demo. One trivial issue that wasted about an hour of our time is that the USB to USB-C converter is flaky and only works when the cord is positioned at a very specific angle. Another hour and a half was spent on true debugging. Later in the night, I bought a new USB to USB-C converter.
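
For context, here is a minimal sketch of the Python side of this UART link using pyserial; the port name, baud rate, and ASCII command format below are illustrative assumptions, not the exact protocol used by the firmware.

    import serial  # pyserial

    # The port name and baud rate are placeholders; they must match the STM32's USART settings.
    ser = serial.Serial("/dev/tty.usbmodem1103", baudrate=115200, timeout=1)

    def send_event(note, velocity, duration_ms):
        """Send one parsed MIDI event as a simple ASCII line for the firmware to parse."""
        command = f"N,{note},{velocity},{duration_ms}\n"   # hypothetical message format
        ser.write(command.encode("ascii"))

    # Example: middle C for half a second.
    send_event(60, 90, 500)
    ser.close()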

I will be presenting at the final presentation, and I spent several hours reciting my script (including in front of parents and friends) and memorising it. We set an early deadline for ourselves to submit the slides (4 pm) so that I'd have plenty of time to practice. In addition, I spent several hours over the span of three days writing the script with Fiona; I contributed about 1000 words, and we had 2400 words when done. I then condensed everything to about 1700 words total, which according to online speech-speed estimators is appropriate for a 12-minute speech. I timed myself at around 11:30; hopefully I keep this up at the real event.

Challenges: While the bulk of the logic appears to be functioning as expected, I encountered some issues with the STM32Cube settings, particularly with UART peripheral initialisation and interrupt handling. These bugs caused intermittent failures in receiving commands accurately. I’ve debugged the main issues but need an additional 2-3 hours to fully resolve the remaining inconsistencies and ensure the system operates smoothly.

Next Steps:

  • Finalize and test the STM32Cube configuration to eliminate remaining bugs.
  • Conduct integration tests with the complete system, ensuring seamless communication between the Python parser and STM32 firmware.
  • Begin preparing for final demo and report by collecting test data and documenting system performance.

Overall, the core logic for UART communication does seem to be complete, and I am confident the system will be ready for testing and fine-tuning soon.

To implement UART communication, I needed to learn about both Python serial communication and STM32CubeIDE’s UART configuration. Key areas of new knowledge include:

  1. STM32CubeIDE Peripheral Configuration:
    • I learned how to enable and configure USART peripherals in the STM32Cube device configuration tool, including setting baud rates, TX/RX pins, and interrupt-based data handling.
    • Learning Strategy: I referred to STM32Cube’s official documentation and watched YouTube tutorials for step-by-step guidance. Debugging required forum posts and STM32-specific threads for common pitfalls.
  2. Python Serial Communication:
    • I learned how to use the pyserial library to send and receive data over UART. This included handling issues such as buffer management and encoding data correctly for the STM32.
    • Learning Strategy: I consulted the official pyserial documentation, followed by informal learning via online coding examples on Stack Overflow.
  3. Debugging UART Communication:
    • I learned to use serial terminal emulators (e.g., Tera Term, minicom, PuTTY) to test data transmission and reception. I also gained experience debugging embedded systems using live logs and peripheral monitoring.
    • Learning Strategy: This was mostly trial and error, supported by online forums and STM32 user groups.

Learning Strategies Used:

  • Online Videos and Tutorials: YouTube tutorials were instrumental for understanding STM32Cube setup and Python UART implementation.
  • Reddit: A thread helped me understand how to handle the absolute time vs. relative time issue in Python Mido (my final solution ended up being different, but the thread was a good start; see the sketch after this list).
  • Documentation and Forums: STM32, pyserial, and Python Mido official documentation provided technical details, while forums (e.g., Stack Overflow and the STM32 Community) helped address specific bugs.
  • Trial and Error: Debugging UART behavior was primarily hands-on, using systematic testing and iterative improvements to isolate and fix issues.
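
To illustrate the absolute vs. relative time issue mentioned above, here is a minimal sketch (not my final solution) of converting Mido's per-message delta times into absolute tick positions; the file name is a placeholder.

    import mido

    mid = mido.MidiFile("composition.mid")   # placeholder file name

    # Mido stores each message's time as a delta (in ticks) relative to the
    # previous message, so a running sum is needed to recover absolute positions.
    for track in mid.tracks:
        absolute_ticks = 0
        for msg in track:
            absolute_ticks += msg.time
            if msg.type == "note_on" and msg.velocity > 0:
                print(f"note {msg.note} starts at tick {absolute_ticks}")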

Team Status Report for 11/30/24

It has been two weeks since the last status report. In the first week, we completed the interim demo. Then, we started working on integrating our three subsystems. The eye-tracking-to-application pipeline is now finished, and the application-to-hardware integration is very close to being finished.

There are still some tasks to complete for full integration: the eye tracking needs to be made more stable, and the CAD model for the solenoid case needs to be completed and 3D-printed. We also suspect there are some issues with the STM32CubeIDE settings when we attempt to integrate UART to dynamically control the solenoids based on parsed MIDI data.

Our biggest risk right now is not finishing testing. Our testing plans are extensive (see last week’s team report), so they will not be trivial to carry out. Since we have a shorter time frame to complete them than expected, we might have to simplify our testing plans, but that would not be optimal.

Fiona’s Status Report for 11/30/2024

Last Week

Interim Demo Prep

On Sunday, I worked on preparing my subsystem for the interim demo on Monday. That required some bug fixes:

  • Ensured the cursor would not go below 0 after removing a note at the 0-th index in the composition.
  • Fixed an issue that caused additional notes added onto a chord to be 0-length notes. However, the fix requires notes of the same chord to be the same length, which I will want to address later.
  • Fixed an issue affecting notes written directly after a chord, in which they were stacked onto the chord even if the user didn’t request it and also could not be removed properly.

I also added some new functionality to the application in preparation for the demo.

  • Constrained the user to chords of two notes and displayed an error message if they attempted to add more.
  • Allowed users to insert rests at locations other than the end of the composition.

Then, before the second interim demo, I fixed a small bug in which the cursor location did not reset when opening a new file.

This Week

Integrating with Eye-Tracking

I downloaded Peter's code and its dependencies [1][2][3][4] to ensure it could run on my computer. I also had to downgrade my Python version to 3.11.5 because we had been working in different versions of Python. Fortunately, I did not notice any immediate bugs after downgrading.

In order to integrate the two, I had to adjust the eye-tracking program so that it did not display the screen capture of the user’s face and so that the coordinates would be received on demand rather than continuously. Also, I had to remove the code’s dependence on the wxPython GUI library, because it was interfering with my code’s use of the Tkinter GUI library [5][6].

The first step of integration was to draw a "mouse" on the screen indicating where the computer thinks the user is looking [7][8].

Then, I made the "mouse" actually functional, such that the commands are controlled by the eyes instead of key presses. In order to do this and make the eye tracking reliable, I made the commands on the screen much larger. This required me to remove the message box from the screen, but I added it back as a pop-up that exists for ten seconds before deleting itself.

Additionally, I had to change the (backend) strategy by which the buttons were placed on the screen so that I could identify the coordinates of each command for the eye tracking. To identify the coordinates of the commands and (hopefully) ensure the calculations hold up for screens of different sizes, I had the program compute button positions from the screen's width and height.
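
A minimal sketch of this kind of screen-relative placement in Tkinter is below; the command name and the fractions are illustrative, not my actual layout.

    import tkinter as tk

    root = tk.Tk()
    screen_w = root.winfo_screenwidth()
    screen_h = root.winfo_screenheight()

    # Place a command button as fractions of the screen so the same code
    # works on displays of different sizes.
    button = tk.Button(root, text="Add Note")     # hypothetical command
    button.place(x=int(0.10 * screen_w), y=int(0.20 * screen_h),
                 width=int(0.25 * screen_w), height=int(0.15 * screen_h))

    def contains(widget, gaze_x, gaze_y):
        """Check whether a gaze coordinate falls inside a widget's bounding box."""
        x, y = widget.winfo_rootx(), widget.winfo_rooty()
        w, h = widget.winfo_width(), widget.winfo_height()
        return x <= gaze_x <= x + w and y <= gaze_y <= y + h

    root.mainloop()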

I am still tinkering with the number of iterations and the time between each iteration to see what is optimal for accurate and efficient eye tracking. Currently, five iterations with 150ms in between each seems to be relatively functional. It might be worthwhile to figure out a way to allow the user to set the delay themselves. Also, I currently have it implemented such that there is a longer delay (300ms) after a command is confirmed, because I noticed that it would take a while for me to register that the command had been confirmed and to look away.
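
For reference, here is a rough sketch of the selection loop described above; the gaze and command handlers are stand-in placeholders for the real backend.

    import tkinter as tk
    import random

    ITERATIONS = 5          # consecutive matching samples required to confirm
    SAMPLE_DELAY_MS = 150   # delay between gaze samples
    CONFIRM_DELAY_MS = 300  # extra pause after a command is confirmed

    def get_gaze_command():
        """Placeholder: return the command the user is looking at, or None."""
        return random.choice(["add_note", None])   # stand-in for the gaze backend

    def execute_command(command):
        print("confirmed:", command)               # stand-in for the application's handler

    def poll_gaze(root, streak=0, last_command=None):
        command = get_gaze_command()
        if command and command == last_command:
            streak += 1
        else:
            streak = 1 if command else 0
        if command and streak >= ITERATIONS:
            execute_command(command)
            root.after(CONFIRM_DELAY_MS, lambda: poll_gaze(root))   # longer pause after confirming
        else:
            root.after(SAMPLE_DELAY_MS, lambda: poll_gaze(root, streak, command))

    root = tk.Tk()
    poll_gaze(root)
    root.mainloop()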

Bug Fixes

I fixed a bug in which the note commands were highlighted when they shouldn't have been. I also fixed a bug that caused the most recently viewed sheet music to load on start-up instead of blank sheet music, even though that composition (MIDI file) wouldn't actually be open.

I also fixed some edge cases where the program would exit if the user performed unexpected behavior. Instead, the UI now informs the user via a self-deleting pop-up window with the relevant error message [9][10] (a sketch of this pattern follows the list below). The edge cases I fixed were:

  • The user attempting to write a note to, or remove a note from, the file when there was no file open.
  • The user attempting to add more than the allowed number of notes to a chord (which is three).
  • The user attempting to remove a note at the 0-index.
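
A minimal sketch of the self-deleting pop-up pattern, assuming Tkinter's Toplevel and after; the timing and wording are placeholders.

    import tkinter as tk

    def show_error_popup(root, message, lifetime_ms=10000):
        """Show an error message in a pop-up that destroys itself after lifetime_ms."""
        popup = tk.Toplevel(root)
        popup.title("Error")
        tk.Label(popup, text=message, padx=20, pady=20).pack()
        popup.after(lifetime_ms, popup.destroy)   # schedule the self-deletion

    # Usage example:
    root = tk.Tk()
    show_error_popup(root, "No file is open, so a note cannot be added.")
    root.mainloop()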

I also fixed a bug in which removing a note did not cause the number of notes in the song to decrease (internally), which could lead to various issues with the internal MIDI file generation.

New Functionality

While testing with the eye-tracking, I realized it was confusing that the piano keys would light up while the command was in progress and then also while waiting for a note length (in the case that pitch was chosen first). It was hard to tell if a note was in progress or finished. For that reason, I adjusted the program so that a note command would be highlighted grey when finished and yellow when in progress.

I also made it such that the user could add three notes to a chord (instead of the previous two) before being cut off, which was the goal we set for ourselves earlier in the semester.

I made it so that the eye tracking calibrates on start-up of the program [11], and the user can request calibration again with the "c" key [12]. Having a key press involved is not ideal because it is not accessible; however, since calibration automatically happens on start-up, hopefully this command will not be necessary most of the time.
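
A minimal sketch of the calibration binding, assuming a Tkinter root window and a hypothetical calibrate() function:

    import tkinter as tk

    def calibrate():
        print("running calibration routine")      # stand-in for the real calibration

    root = tk.Tk()
    calibrate()                                   # calibrate once on start-up
    root.bind("c", lambda event: calibrate())     # re-calibrate when "c" is pressed
    root.mainloop()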

Finally, I made the font sizes bigger on the UI for increased readability [13].

Demo

After the integration, bug fixes, and new functionality, here is a video demonstrating the current application while in use, mainly featuring the calibration screen and an error message: https://drive.google.com/file/d/1dMUQ976uqJo_J2QwzubH3wNez9YMPM8Y/view?usp=drive_link

(Note that I use my physical cursor to switch back and forth between the primary and secondary UI in the video. This is not the intended functionality, because the secondary UI is meant to be on a different device, but this was the only way I could video-record both UIs at once).

Integrating with Embedded System

On Tuesday, I met with Shravya to set up the STM32 environment on my computer and verified that I could run the hard-coded commands Shravya made for the interim demo last week with my computer and set-up.

On Saturday (today), we met again because Shravya had written some Python code for the UART communication. I integrated that with my system so that the parsing and UART could happen on the demand of the user, but when we attempted to test, we ran into some problems with the UART that Shravya will continue debugging without me.

After that, I made a short README file for the entire application, since the files from each subsystem were consolidated.

Final Presentation

I made the six slides for the presentation, which included writing about 750 words for the presentation script corresponding to those slides. I also worked with Shravya on two other slides and wrote another 425 words for those slides.

Next Week

Tomorrow, I will likely have to finish working on the presentation. I wrote a lot for the script, so I will need to edit it for concision.

There is still some functionality I need to finish for the UI:

  • Drawing the cursor on the sheet music. I spent quite a while trying to figure that out this week but had a lot of trouble with it.
  • Creating an in-application way to open existing files (for accessibility).
  • Adding note sounds to the piano while the user is hovering over it.
  • Highlighting the "stack notes" and "rest" commands when that option is selected.

And some bug fixes:

  • Handle the case in which there is more than one page of sheet music and determine which page the cursor is on.
  • Double-check that the chord and rest logic is 100% accurate. I seem to be running into some edge cases where stacking notes and adding rests do not work, so I will have to do stress tests to figure out exactly why that is happening.

However, the primary goal for next week is testing. I am still waiting on some things from both Peter (eye-tracking optimization) and Shravya (debugging of the UART firmware) before the formal testing can start, but I can set up the backend for the formal testing in the meantime.

Learning Tools

Most of my learning during this semester was trial and error. I generally learn best by testing things out and seeing what works and what doesn't, rather than by doing extensive research first, so that was the approach I took. I started coding with Tkinter pretty early on, and I think I've learned a lot through that trial and error, even though I did make a lot of mistakes and had to rewrite a lot of code.

I think this method of learning worked especially well for me because I have programmed websites before and am aware of the general standards and methods of app design, such as event handlers and GUI libraries. Even though I had not written a UI in Python, I was familiar with the basic idea. Meanwhile, if I had been working on other tasks in the project, like eye tracking or embedded systems, I would have had to do a lot more preliminary research to be successful.

Even though I didn't have to do a lot of preliminary research, I did spend a lot of time learning from websites while programming, as can be seen in the links I leave in each of my reports. The formal documentation for Tkinter and Mido was helpful for getting a general idea of what I was going to write, but for more specific and tricky bugs, forums like Stack Overflow and websites such as GeeksForGeeks were very useful.

References

[1] https://pypi.org/project/mediapipe-silicon/

[2] https://pypi.org/project/wxPython/

[3] https://brew.sh/

[4] https://formulae.brew.sh/formula/wget

[5] https://www.geeksforgeeks.org/getting-screens-height-and-width-using-tkinter-python/

[6] https://stackoverflow.com/questions/33731192/how-can-i-combine-tkinter-and-wxpython-without-freezing-window-python

[7] https://www.tutorialspoint.com/how-to-get-the-tkinter-widget-s-current-x-and-y-coordinates

[8] https://stackoverflow.com/questions/70355318/tkinter-how-to-continuously-update-a-label

[9] https://www.geeksforgeeks.org/python-after-method-in-tkinter/

[10] https://www.tutorialspoint.com/python/tk_place.htm

[11] https://www.tutorialspoint.com/deleting-a-label-in-python-tkinter

[12] https://tkinterexamples.com/events/keyboard/

[13] https://tkdocs.com/shipman/tkinter.pdf

Peter’s Status Report from 11/16/24

This week Fiona and I met with Professor Savvides’s staff member, Magesh, to discuss how we would develop the eye-tracking using computer vision. Magesh gave us the following implementation plan.

  • Start with an OpenCV video feed
  • Send frames to the Mediapipe Python library
  • Mediapipe returns landmarks
  • From the landmarks, select the points that correspond to the eye region
  • Determine whether the user is looking up, down, left, or right
  • Draw a point on the video feed to show where the software thinks the user is looking so there is live feedback.

Drawing a point on the video feed will serve to verify that the software is properly tracking the user’s iris and correctly mapping its gaze to the screen.

So far, I have succeeded in getting the OpenCV video feed to appear. I am currently bug-fixing to get a face mesh to appear on the video feed, using Mediapipe, to verify that the software is tracking the irises properly. I am using Google's Face Landmark Detection Guide for Python (see Resources) to help me implement this software. Once I am able to verify this, I will move on to using a face landmarker to interpret the gaze of the user's irises on the screen and return coordinates to draw a point where the expected gaze is on the screen.
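
For reference, a minimal sketch of drawing the face mesh on the OpenCV feed with Mediapipe's drawing utilities; this follows the generic pattern from the Mediapipe examples rather than my in-progress code.

    import cv2
    import mediapipe as mp

    mp_face_mesh = mp.solutions.face_mesh
    mp_drawing = mp.solutions.drawing_utils

    cap = cv2.VideoCapture(0)
    with mp_face_mesh.FaceMesh(refine_landmarks=True) as face_mesh:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                for face_landmarks in results.multi_face_landmarks:
                    # Overlay the tessellated mesh to confirm the face (and irises)
                    # are being tracked on every frame.
                    mp_drawing.draw_landmarks(
                        frame, face_landmarks,
                        connections=mp_face_mesh.FACEMESH_TESSELATION)
            cv2.imshow("face mesh", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()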

 

Resources

 Google AI for Developers. (2024, November 4). Face Landmark Detection Guide for Python. Google. https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker/python

Team Status Report for 11/16/2024

This week, our team prepared for the interim demo that takes place next week. We met with our advisors, Professor Bain and Joshna, to receive advice on what to work on before our demo. Peter and Fiona also met with a grad student to discuss the eye-tracking implementation. 

Currently, our plan for testing is mostly unchanged; however, it relies on the integration of our subsystems, which has not yet happened. This is why one of our biggest risks currently is running into major issues during integration.

To recap, our current plan for testing is as follows.

  • For the software (user interface and eye-tracking software): We will run standardized tests of different series of commands on multiple users. We want to test with different users of the system in order to test different parameters, like face shape/size, response time, musical knowledge, and familiarity with the UI. We plan for these tests to cover a range of different scenarios, like different expected command responses (on the backend) and different distances and/or times between consecutive commands. We also plan to test edge cases in the software, like the user moving out of the camera range, or a user attempting to open a file that doesn't exist.
    • Each test will be video-recorded and eye-commands recognized by the backend will be printed to a file for comparison, both for accuracy (goal: 75%) and latency (goal: 500ms).
  • For the hardware (STM32 and solenoids): We will give different MIDI files to the firmware and microcontroller. As with the software testing, we plan to test a variety of parameters; these include different tempos, note patterns, and the use of rests and chords. We also plan to stress test the hardware with longer MIDI files to see if there are compounding errors with tempo or accuracy that cannot be observed when testing with shorter files.
    • To test the latency (goal: within 10% of BPM) and accuracy (goal: 100%) of the hardware, we will record the output of the hardware's commands on the piano with a metronome in the background.
    • Power consumption (goal: ≤ 9W) of the hardware system will also be measured during this test.

We have also defined a new test for evaluating accessibility: we plan to verify that we can perform every command we make available to the user without having to use the mouse or the keyboard, after setting up the software and hardware. An example of an edge case we would be testing during this stage is ensuring that improper use, like attempting to send a composition to the solenoid system without the hardware being connected to the user's computer, does not crash the program, but is instead handled within the application, allowing the user to correct their use and continue using just eye commands.

Fiona’s Status Report for 11/16/2024

This Week

User Interface Testing [1][2][7]

This week, I began by coding a testing program for the user interface. Since the eye-tracking is not ready to be integrated with the application yet, I programmed the backend to respond to commands from the keyboard. Via this code, I was able to test the functionality of the progress bars and the backend logic that checks for consistent command requests.

In order for the user to pick a command with this program, they have to press the corresponding key five times without pressing any other key in between; pressing another key resets the progress bar. A minimal sketch of this logic appears below.
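
Here is a minimal sketch of that key-counting logic with Tkinter key events; the threshold is the same five presses, but the handler and progress-bar updates are simplified placeholders.

    import tkinter as tk

    REQUIRED_PRESSES = 5
    counts = {}       # key -> consecutive press count
    last_key = None

    def on_key(event):
        global last_key
        key = event.char
        if key != last_key:
            counts.clear()                        # any other key resets the progress
        counts[key] = counts.get(key, 0) + 1
        last_key = key
        if counts[key] >= REQUIRED_PRESSES:
            print("command selected:", key)       # stand-in for the backend handler
            counts.clear()
            last_key = None

    root = tk.Tk()
    root.bind("<Key>", on_key)
    root.mainloop()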

It will hopefully be a quick task to integrate the eye-tracking with the application, based on the foundation of this code.

Here is a short video demonstrating some basic functionality of the program, including canceling commands: https://drive.google.com/file/d/16NyUo_lRSzgLYIwVOHCEuoKJ6CYjiKip/view?usp=sharing.

Secondary UI [3][4][5][6][7]

After that, I also made the integration between the primary and secondary UI smoother by writing a secondary window with the sheet music that updates as necessary.

In the interim, I also included a note about where the cursor is in the message box, since I haven’t yet figured out how to implement that on the image of the sheet music.

Pre-Interim Demo

On Wednesday, I demonstrated my current working code to Professor Bain and Joshna, and they had some suggestions on how to improve my implementation:

  • Highlighting the note length/pitch on the UI after it’s been selected.
  • Electronically playing the note while the user is looking at it (for real-time feedback; this makes the program more accessible for those without perfect pitch).
  • Setting up a file-getting window within the program for more accessible use.

I implemented the first functionality in my code [3][4][6] and attempted to implement the second, but I ran into some issues with the Python sound libraries. I will continue working on that issue.

I also decided to leave the third idea for later, because it is a denser block of programming to do, but it will improve the accessibility of the system, so it is important.

(With the new updates to the code, the system currently looks slightly different from the video above, but is mostly the same.)

Eye-Tracking

On Thursday, Peter and I also met with Magesh Kannan to discuss the use of eye-tracking in our project. He suggested we use the MediaPipe library to implement eye-tracking, so Peter is working on that. Magesh offered his help if we need to optimize our eye-tracking later, so we will reach out to him if necessary.

Testing

There are no quantitative requirements that involve only my subsystem; the latency requirements, for example, involve both my application and the eye-tracking, so I will have to wait until the subsystems are all integrated to start those tests.

However, I have been doing more qualitative testing as I’ve been writing the program. I’ve tested various sequences of key presses to view the output of the system and these tests have revealed several gaps in my program’s design. For example, I realized after running a larger MIDI file from the internet through my program that I had not created the logic to handle more than one page of sheet music. My testing has also revealed some bugs having to do with rests and chords that I am still working on.

Another thing I have been considering in my testing is accessibility. Although our official testing won’t happen until after integrating with the eye-tracking, I have been attempting to make my application as accessible as possible during design so we don’t reveal any major problems during testing. Right now, the accessibility issue I need to work on next is opening files from within the program, because using an exterior file pop-up necessitates a mouse press.

Next Week

The main task for next week is the interim demo. On Sunday, I will continue working to prepare my subsystem (the application) for the demo, and then on Monday and Wednesday during class, we will present our project together.

The main tasks after that on my end will be continuing to work on integrating my application with Shravya’s and Peter’s work, and also to start working on testing the system, which will be a large undertaking.

There are also some bug fixes and further implementation I need to keep working on, such as the issues with rests and chords in the MIDI file and displaying error messages.

References 

[1] tkinter.ttk – Tk themed widgets. Python documentation. (n.d.). https://docs.python.org/3.13/library/tkinter.ttk.html

[2] Keyboard Events. Tkinter Examples. (n.d.). https://tkinterexamples.com/events/keyboard/

[3] Overview. Pillow Documentation. (n.d. ). https://pillow.readthedocs.io/en/stable/

[4] Python PIL | Image.thumbnail() Method. Geeks for Geeks. (2019, July 19). https://www.geeksforgeeks.org/python-pil-image-thumbnail-method/

[5] Tkinter Toplevel. Python Tutorial. https://www.pythontutorial.net/tkinter/tkinter-toplevel/

[6] Python tkinter GUI dynamically changes images. php. (2024, Feb 9). https://www.php.cn/faq/671894.html

[7] Shipman, J.W. (2013, Dec 31). Tkinter 8.5 reference: a GUI for Python. tkdocs. https://tkdocs.com/shipman/tkinter.pdf

Shravya's Status Report for 11/16/2024

My newest update is that I am now able to use GPIO commands to drive 13 solenoids! Here is a video demonstrating a dummy sequence.

https://drive.google.com/file/d/15j3L2sPWW3enOPX8U6tXd_phpCFsdEaZ/view?usp=sharing 

And here is the full firmware script I used to do so: https://docs.google.com/document/d/1EplQO1Y9GYS4SYbjHKwGDTp9yScVESmYp59CTWjzwfk/edit?usp=sharing

  1. Hardware Challenges:
    • I spent many, many hours this week getting the solenoid circuitry to work consistently and reliably. Last Friday, it was mostly functional (except for an issue where the solenoids kept actuating for a while after being disconnected from power). The next time we tried to use our circuit, it was suddenly not working (even though we didn't change anything on our breadboard). This is because some wires got bent while the breadboard was in my bag. Also, I noticed that the tip of the positive terminal wire of one of the solenoids I am using is frayed and thus doesn't adhere to the breadboard holes very well; it can shift slightly if I don't make the routine effort to ensure it is plugged in completely.
    • The DC power supply I usually use in HHA101 was being used by someone else, so I tried to use an analog DC power supply to input 3.3 V into my MOSFET gate. Nothing happened at all, and I spent a lot of time debugging the circuit component by component with an oscilloscope before I thought to test whether this power supply was reliably outputting anything at all. Using the oscilloscope, I found the power supply was simply broken (its output fluctuated from the scale of microvolts to 1 volt). Ultimately, I was able to get back the bench supply I usually use and know to be functional.
    • I tried to use a function generator to apply a square-wave gate input, but I saw that all the BNC connectors required for this were broken.
    • I realised that even when I am not inputting 3.3 V into the gate, it still has to be connected to something (ideally ground through a 10 kΩ pull-down resistor) to avoid a floating voltage at the gate; this eliminates unreliable operation. Since I was not able to find those resistors today, my workaround was to keep the 3.3 V gate input on while using the 12 V power supply to dictate whether the circuit was on.
    • I moved everything onto the other half of our breadboard in case we had caused some damage underneath the breadboard.
    • Earlier this week, when we realised that our circuit wasn't working, we tried some troubleshooting. I used a different MOSFET than the one I ordered (one of my partners kept some components after 18100). I realise now this isn't suitable: the ZVN3306A doesn't fully turn on at 3.3 V (it seems to operate in triode mode) and is not built to drive the large currents that are critical for actuating solenoids. The ZVN3306A overheated, and there were delays before I realised I should switch back to the IRLZ44NPBF. The IRLZ44NPBF is a logic-level MOSFET that allows for full saturation operation at a Vgs of 3.3 V, whereas the ZVN3306A can require over 4 V.
  2. Software Challenges:
    • The Python MIDI parsing code still needs some debugging to handle chords properly, which has taken longer than expected. I am waiting for Fiona to compose a new MIDI file and send it to me.

Goals for Next Week

  1. Hardware Improvements:
    • Add pull-down resistors from gate to ground to stabilize the gate input permanently.
    • Measure the solenoid current during operation and make sure it is as expected.
  2. Finish MIDI Parsing:
    • Fix any remaining issues with the Python code, especially with the use of chords.
    • Make sure the output is formatted correctly so the STM32 can process it. I am thinking of changing my MIDI-parsed output from a string format to a 2D array to better handle transmitting chord data to the microcontroller (a string would only show every event serially, but for chords we need to convey that some notes play in parallel); see the sketch after this list.
  3. Work on Firmware:
    • Test the STM32 firmware I wrote to handle UART communication and actuate solenoids based on received MIDI data.
    • Arrange solenoids on a piano/keyboard and verify that their force when being driven by GPIO signals is adequate to push keys.
  4. Prepare for Integration:
    • After the demo, I must make sure that Fiona’s MIDI-file generation and my MIDI-parsing work together seamlessly and handle all edge cases.
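
To make the string-vs-2D-array idea above concrete, here is a minimal sketch of grouping parsed events so that simultaneous notes (chords) stay together; the event format is a simplified assumption, not the final interface.

    # Each parsed event: (start_time_ticks, note_number, duration_ticks) -- simplified.
    parsed_events = [
        (0,   60, 480),   # C4
        (0,   64, 480),   # E4, same start time -> part of the same chord
        (480, 67, 480),   # G4, played after the chord
    ]

    def group_into_chords(events):
        """Return a 2D structure: one inner list per time step, so chord notes stay parallel."""
        grouped = {}
        for start, note, duration in events:
            grouped.setdefault(start, []).append((note, duration))
        return [grouped[t] for t in sorted(grouped)]

    print(group_into_chords(parsed_events))
    # [[(60, 480), (64, 480)], [(67, 480)]]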

Peter’s Status Report from 11/9/2024

This Week

This week was spent testing our solenoid circuit with the newly arrived parts. The circuit worked mostly as intended: the solenoid extends when the NMOS's gate is driven to 3.3 V and retracts when the gate is not powered. However, there is an unexpected delay, usually a few seconds, between the NMOS's gate being powered off and the solenoid retracting. When the power is turned off at the power brick, there is no delay. Further testing will be done to determine what is causing this unexpected behavior.

Additionally, Professor Bain helped connect me with Professor Savvides and one of his staff members to help develop the eye tracking using computer vision for our project.

 

Next Week

Next week, Fiona and I will meet with Professor Savvides's staff member, Magesh, who will help us understand how to develop an eye-tracking system with computer vision. Additionally, I will do further testing with Shravya to find out why the solenoid remains extended, as if powered, after the NMOS's gate is unpowered.

Team Status Report for 11/09/2024

This week our team worked on preparing for the interim demo, which is the week after next. We all have individual tasks to finish, but we have also begun work on integrating those tasks.

Shravya began this week with MIDI-parsing code that was capable of accurately parsing simpler compositions (by integrating and cross-verifying with Fiona’s MIDI compositions), and since then has identified some edge cases. These include handling rest-notes (which she has successfully been able to resolve) as well as overlapping notes (which she is still working on). She worked with Peter to ensure that all components of the solenoid control circuitry are functioning properly. 

Fiona worked more on debugging the code to make MIDI files. She also worked on some other miscellaneous tasks (see report).

We are conversing with Marios Savvides and Magesh Kannan, CV and eye-tracking biometrics experts referred by Professor Bain, for guidance on our eye-tracking system.

Right now, our biggest risk as the interim demo approaches is that we discover issues while integrating the eye-tracking, the application and the physical hardware. We are hopefully well-prepared for this, because we have been working to coordinate along the way.