Team Status Report for 12/07/2024

See Shravya’s individual report for more details on UART debugging progress. In summary, while all UART-related code has been written and running for about a week now, debugging is still underway and taking much longer than estimated. This bottlenecks any accuracy and latency testing Shravya can conduct with the solenoids playing a song fed in from the parser (solenoid accuracy and latency behave as expected when playing a hardcoded dummy sequence, though). Shravya hopes to have a fully working implementation by Monday night (so there is ample time to demonstrate functionality in the poster and video deliverables) and to conduct formal testing after that. She has arranged to meet with a friend who will help her debug on Monday.
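For reference, the host side of this handshake is conceptually simple. Below is a minimal sketch, assuming pyserial on the laptop and a fixed-size frame per note event; the port name, baud rate, and framing are placeholders, not our final protocol:

```python
# Minimal host-side sketch of streaming parsed note events to the STM32 over UART.
# Assumptions (placeholders, not our final design): pyserial, a 115200-baud link
# on /dev/ttyUSB0, and a 4-byte frame per event.
import struct
import serial  # pyserial

# Hypothetical parser output: (MIDI note, on/off flag, delay before event in ms)
events = [(60, 1, 0), (60, 0, 500), (64, 1, 0), (64, 0, 500)]

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    for note, on, delay_ms in events:
        # note (1 byte), on/off (1 byte), delay (2 bytes, little-endian)
        port.write(struct.pack("<BBH", note, on, delay_ms))
```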

Testing related to hardware:

As a recap, we have a fully functional MIDI-parsing script that is 100% accurate at extracting note events. We are also able to control our solenoids with hardcoded sequences. The final handshaking that remains is connecting the parsing script to the firmware so the solenoids can actuate based on any given song. Once the UART bugs are resolved, we will feed in manually coded MIDI files that encompass different tempos and patterns of notes. We will observe the solenoid output and keep track of the pattern of notes played to calculate accuracy, which we expect to be 100%.

  • During this phase of the testing, we will also audio-record the output with a metronome playing in the background. We will manually mark the timestamps of each metronome beat and solenoid press, and use those to calculate latency, as in the sketch below.
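A minimal sketch of that latency calculation; the timestamps here are hypothetical, and our real data will come from the recordings:

```python
# Latency from manually marked timestamps (seconds); values here are hypothetical.
# Each solenoid press is paired with its intended metronome beat.
beat_times = [0.00, 0.50, 1.00, 1.50]
press_times = [0.04, 0.55, 1.03, 1.56]

latencies = [press - beat for beat, press in zip(beat_times, press_times)]
avg_latency_ms = 1000 * sum(latencies) / len(latencies)
print(f"average latency: {avg_latency_ms:.0f} ms")  # 45 ms for these numbers
```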

Some tests that have been completed are overall power consumption and ensuring functionality of individual circuit components:

  • Using a multimeter, we measured the current draw of the solenoids under the three possible actuation scenarios. As expected, maximum power consumption occurs when all solenoids in a chord are actuated simultaneously, but even then we stay just under our 9 W power limit (see the sketch after this list).
  • To ensure the functionality of our individual circuit components, we conducted several small-scale tests. Using a function generator, we applied a low-frequency signal to control the MOSFET and verified that it reliably switched the solenoid on and off. For the flyback diode, we used an oscilloscope to measure voltage spikes at the MOSFET drain when the solenoid was deactivated, confirming that the diode sufficiently suppressed back-EMF and protected the circuit. Finally, we monitored the temperature of the MOSFET and solenoid over multiple switching cycles to ensure neither component overheated during operation.
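The worst-case power check itself is simple arithmetic (P = V × I per solenoid, summed over a chord); the numbers below are purely illustrative, not our measured values:

```python
# Worst-case power check (illustrative numbers only, not our measured values):
# all solenoids in a chord actuated at once from a single supply rail.
supply_voltage = 5.0         # volts, assumed solenoid rail
current_per_solenoid = 0.55  # amps, hypothetical multimeter reading
solenoids_in_chord = 3

worst_case_power = supply_voltage * current_per_solenoid * solenoids_in_chord
print(f"worst case: {worst_case_power:.2f} W (budget: 9 W)")  # 8.25 W here
```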

Shravya’s MIDI-parsing code has been verified to correctly parse any MIDI file, whether generated by Fiona’s UI or by external means, and it handles all edge cases (rests and chords) that caused trouble previously.

Testing related to software:

Since software integration took longer than expected, we are still behind on formal software testing. Fiona is continuing to debug the software and plans to start formal testing on Sunday (see more in her report). For a reminder of our formal testing plans, see: https://course.ece.cmu.edu/~ece500/projects/f24-teamc5/2024/11/16/team-status-report-for-11-16-2024/. We are worried that we might have to narrow these testing plans, specifically by not testing on multiple faces, because many people are busy with finals, but we will do our best to form a complete picture of the system’s functionality. One change we know for certain we will make: our ground truth for eye-tracking accuracy will be based not on camera playback but on which button the user is directed to look at, for simplicity and to reduce testing error.

Last week, Peter did some preliminary testing on the accuracy of the UI and eye-tracking software integration in preparation for our final presentation, and the results were promising. Fiona will continue that testing this week and hopes to have results before Tuesday so they can be included in the poster.

Team Status Report for 11/30/24

It has been two weeks since the last status report. In the first week, we completed the interim demo. Then, we started working on integrating our three subsystems. The eye-tracking to application pipeline is now finished and the application to hardware integration is very close to finished. 

There are still some tasks to complete for full integration: the eye tracking needs to be made more stable, and the CAD model for the solenoid case needs to be completed and 3D-printed. We also suspect there are some issues with STM32CubeIDE settings when we attempt to integrate UART to dynamically control the solenoids based on parsed MIDI data.
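One way we may isolate the problem is a host-side echo test, sketched below with pyserial (port and baud are placeholders). If the STM32 runs a trivial echo loop and this test fails, the fault is likely in the link configuration rather than in the parsing firmware:

```python
# Host-side UART echo test (port and baud are placeholders). If the STM32 runs a
# trivial "echo every byte back" loop and this fails, the problem is likely in
# the link configuration (baud, parity, stop bits), not the parsing firmware.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    port.write(b"\xaa\x55")
    echoed = port.read(2)
    print("echo ok" if echoed == b"\xaa\x55" else f"mismatch: {echoed!r}")
```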

Our biggest risk right now is not finishing testing. Our testing plans are extensive (see last week’s team report), so they will not be trivial to carry out. Since we have a shorter time frame to complete them than expected, we might have to simplify our testing plans, but that would not be optimal.

Team Status Report for 11/16/2024

This week, our team prepared for the interim demo that takes place next week. We met with our advisors, Professor Bain and Joshna, to receive advice on what to work on before our demo. Peter and Fiona also met with a grad student to discuss the eye-tracking implementation. 

Currently, our plan for testing is mostly unchanged; however, it is reliant on the integration of our subsystems, which has not yet happened. This is why one of our biggest risks currently is running into major issues during integration.

To recap, our current plan for testing is as follows.

  • For the software (user interface and eye-tracking software): We will run standardized tests of different series of commands on multiple users. We want to test with different users of the system in order to cover different parameters, like face shape/size, response time, musical knowledge, and familiarity with the UI. We plan for these tests to cover a range of scenarios, like different expected command responses (on the backend) and different distances and/or times between consecutive commands. We also plan to test edge cases in the software, like the user moving out of the camera range, or a user attempting to open a file that doesn’t exist.
    • Each test will be video-recorded, and eye-commands recognized by the backend will be printed to a file for comparison, both for accuracy (goal: 75%) and latency (goal: 500 ms).
  • For the hardware (STM32 and solenoids): We will give different MIDI files to the firmware and microcontroller. As with the software testing, we plan to test a variety of parameters, including different tempos, note patterns, and the use of rests and chords. We also plan to stress-test the hardware with longer MIDI files to see if there are compounding errors in tempo or accuracy that cannot be observed when testing with shorter files.
    • To test the latency (goal: within 10% of BPM) and accuracy (goal: 100%) of the hardware, we will record the output of the hardware’s commands on the piano with a metronome in the background (see the sketch after this list).
    • Power consumption (goal: ≤ 9W) of the hardware system will also be measured during this test.
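As a concrete reading of the latency goal: at a given BPM, the nominal beat interval is 60/BPM seconds, and “within 10% of BPM” translates to each press landing within 10% of that interval from its beat. A small sketch with illustrative numbers:

```python
# "Within 10% of BPM" check with illustrative numbers.
bpm = 120
beat_interval = 60.0 / bpm        # 0.5 s between beats at 120 BPM
tolerance = 0.10 * beat_interval  # 50 ms allowed deviation per note

offsets = [0.02, 0.04, 0.045]     # hypothetical press-vs-beat offsets (seconds)
passed = all(abs(o) <= tolerance for o in offsets)
print(f"tolerance: {tolerance * 1000:.0f} ms, pass: {passed}")
```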

We have also defined a new test for evaluating accessibility: we plan to verify that we can perform every command we make available to the user without using the mouse or keyboard, after setting up the software and hardware. An example of an edge case we would test at this stage is improper use, like attempting to send a composition to the solenoid system without the hardware being connected to the user’s computer. This should not crash the program; rather, it should be handled within the application, allowing the user to correct their use and continue using just eye-commands.

Team Status Report for 11/09/2024

This week our team worked on preparing for the interim demo, which is the week after next. We all have individual tasks to finish, but have also begun work on integrating those tasks.

Shravya began this week with MIDI-parsing code capable of accurately parsing simpler compositions (verified by cross-checking against Fiona’s MIDI compositions), and since then has identified some edge cases. These include handling rests (which she has resolved) and overlapping notes (which she is still working on). She also worked with Peter to ensure that all components of the solenoid control circuitry are functioning properly.

Fiona worked more on debugging the code to make MIDI files. She also worked on some other miscellaneous tasks (see report).

We are conversing with Marios Savvides and Magesh Kannan, computer-vision and eye-tracking biometrics experts referred by Professor Bain, for guidance on our eye-tracking system.

Right now, our biggest risk as the interim demo approaches is that we discover issues while integrating the eye-tracking, the application and the physical hardware. We are hopefully well-prepared for this, because we have been working to coordinate along the way.

Team Status Report for 11/02/2024

On Sunday, we met as a group and discussed our individual ethics assignments. We identified some ways our project could cause harm, such as malfunctioning after the user has become dependent on it for income or another need, or demonstrating racial bias, which we know is a grave issue in other facial-recognition technologies. Then, we discussed our project with our peers during class on Monday, and they pointed out further ethical considerations, such as the potential for the solenoids to break the piano and eye strain from looking at the UI for long periods. This week, the biggest risks we have been weighing are these ethical considerations.

We also worked on some tasks individually. Fiona continued to work on the MIDI backend, including integrating existing MIDI to sheet music code with our project. She also worked on the application frontend. Shravya worked to finish MIDI file parsing and set up a framework for testing this parsing feature on sample MIDI files, and began working on firmware (UART communication and actuating solenoids). Peter continues work on the eye tracking software and is considering using eye tracking cameras instead of computer vision to help increase the accuracy of the eye tracking program.

Team Status Report for 10/26/2024

This week was mostly spent on individual work. Fiona worked on the UI responses that update the MIDI file in the system, locating MIDI-to-sheet-music conversion software, and mapping coordinates to responses in the code. Shravya worked on the MIDI-to-firmware conversion code and devised small-scale testing plans to ensure the functionality of components arriving this week. Peter worked on the 3D model for the box that will hold the components of our product.

More specifics are in the individual progress reports.

Our current major risk is that we are behind schedule, but we allocated slack time at the beginning of the semester for this scenario.

Next week the group will work on the ethics assignment together by meeting to discuss our responses. 

Team Status Report for 10/20/2024

As a team, we spent most of Week 7 working on our design report. This was a time-intensive experience, as we had to do a lot of research and discuss facets of our design, testing, and purpose within our group, as well as how to convey all of that information efficiently and understandably in the report.

Because of this, our main risk right now is that we have fallen behind schedule. We had planned to have made further progress with the eye-tracking software and application at this point in the semester. However, we did allot weeks 12-14 and finals week for slack time, so we are hopeful that we have ample time to catch up and complete the tasks we have set for ourselves.

While working on the design report, we updated our Gantt chart, see below.

Global Considerations, Written by Shravya

Our solution goes beyond local or technological communities. Globally, there are millions of people living with disabilities (particularly disabilities affecting hand/arm mobility) who may not have access to specialized instruments or music-creation tools tailored to their specific needs. These people exist across various socio-economic and geographical (urban vs. remote) contexts. Our solution offers a low-cost, technologically accessible means of composing and playing music, making it a viable option not just in academic or well-funded environments, but also in regions with limited access to specialized tools. By providing compatibility with widely used MIDI files, minimal physical set-up, and an eye-tracking interface we aim to make as intuitive as possible, we hope to enable users around the world to express themselves musically without extensive training or high-end technology.

Cultural Considerations, Written by Fiona

Music is an important part of many cultural traditions. It is a tool for communication, and allows people to share stories and art across generations and between cultures. For example, many countries have national anthems that can be used to communicate what is important to that country. Broader access to music is thus important to many people, because it would allow them to participate in their culture or others, if they wished. Recognizing the importance of music for many individuals and groups, we hope that our project can be a stepping stone for more accessibility to musical cultural traditions.

Environmental Considerations, Written by Peter

By utilizing laptop hardware that users are likely to already own, we reduce the amount of electronic waste, which can be toxic and non-biodegradable [1], that our product could create. Along with this, by working to minimize our product’s power consumption, we are minimizing its contribution to pollution resulting from non-renewable energy sources.

References

[1] Geneva Environment Network. (2024, October 9). The Growing Environmental Risks of E-Waste. Geneva Environment Network. https://www.genevaenvironmentnetwork.org/resources/updates/the-growing-environmental-risks-of-e-waste/ 

Team Status Report for 10/05/2024

A lot of time this week was spent preparing for our design review presentation. This meant refining and formalizing our ideas for what the software and hardware would both look like. This is a good starting point for us as we write our design report document next week.

One of our main risks right now is the interdependency of different parts of the project. Since there are many subsystems that all rely on one another in different ways, it would be difficult to develop some of them without considering how, or whether, the others are working. To combat this risk, we have been finding ways to divide the project into more explicit sections, at least for initial design. For example, we expect to do initial testing of the eye-tracking accuracy via sub-sections of the screen, which does not require the UI to be completely finalized (see the sketch below), and initial programming of the UI with buttons to create backend functionality without needing eye-tracking to be perfectly accurate (see Fiona’s Report for more on this). We will continue to consider this in our design going forward.
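As a sketch of what that screen-sub-section testing could look like (the coordinates and screen size here are placeholders, not our final UI layout):

```python
# Screen-quadrant test for early eye-tracking accuracy (layout values are
# placeholders; the real UI regions will differ).
def gaze_quadrant(x, y, screen_w=1920, screen_h=1080):
    """Map a gaze point to one of four screen quadrants."""
    horiz = "left" if x < screen_w / 2 else "right"
    vert = "top" if y < screen_h / 2 else "bottom"
    return f"{vert}-{horiz}"

# Accuracy = fraction of frames whose predicted quadrant matches the target.
samples = [((400, 300), "top-left"), ((1500, 800), "bottom-right")]  # hypothetical
hits = sum(gaze_quadrant(*point) == target for point, target in samples)
print(f"accuracy: {hits / len(samples):.0%}")
```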

We’ve always known that we need to convert the MIDI file output from the eye-tracking system into a format suitable for the STM32, but we only fleshed out the details while preparing for our design presentation this week. This step (the very first step in the “processing path” diagram below) isn’t just a trivial file reformatting or conversion into a different language. We need to process and extract key information, like the scheduling of solenoid on/off events and the duration of each note. We will also need to filter out anything we won’t be implementing with our solenoid actuation, like note dynamics, which may have had default values assigned in the MIDI file. For maximum efficiency, we want to ensure that the STM32 handles execution, not computation, meaning it should only receive the most essential data. So, instead of parsing the raw MIDI file on the STM32, a task that would be somewhat computationally heavy, we will use Python’s Mido library on our local machine.
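A minimal sketch of that host-side reduction with Mido is below; the output format (millisecond timestamp, note number, on/off flag) is illustrative, not our final wire format:

```python
# Reducing a MIDI file to the bare on/off schedule the STM32 needs; the output
# format (ms timestamp, note, state) is illustrative, not our final wire format.
import mido

def to_schedule(path):
    schedule, now = [], 0.0
    for msg in mido.MidiFile(path):   # iteration yields delta times in seconds
        now += msg.time               # a nonzero delta with no notes held is a rest
        if msg.type in ("note_on", "note_off"):
            on = msg.type == "note_on" and msg.velocity > 0
            # Dynamics (velocity) are deliberately dropped; the solenoids only
            # need timing. Chords appear as multiple events at the same timestamp.
            schedule.append((round(now * 1000), msg.note, int(on)))
    return schedule  # the STM32 executes this list; no MIDI parsing on-board
```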

Last week we placed an order for two push-pull solenoids (Adafruit 412) for testing. We expected them to arrive this week but they have been delayed. When they do arrive, which is hopefully sometime this week, we will continue with the plan identified in our last weekly report: determining if they are sufficient for our project requirements and ordering more if they are.

Team Status Report for 09/28/2024

As a group, we spent a lot of time this week on our design review presentation. We have also made some progress on the project itself: Fiona has been familiarizing herself with an OpenCV eye-tracking program, Shravya has come up with a preliminary design of the solenoid control circuit, and Peter has been researching components to order. Each of our personal reports will delve into more detail on those fronts.

This week we ordered two push-pull solenoids (Adafruit 412) that we had shortlisted (one for testing and one as a back-up). We plan to figure out how to program them and verify that their properties (e.g., sizing, depth of press, latency and power consumption) are suitable for our use case and design requirements. If so, we will order more.

Risks

We are concerned that when users unintentionally move their heads, despite continuing to look at the same area, the system may register it as a change in eye movement and record a wrong command. Even with that risk mitigated by the wait time to confirm a command, this may make the user experience tedious, so we plan to have slack time after integrating each part of the system to deal with bugs like these that might come up. We are also hoping to do some preliminary testing of the eye-tracker with just squares on a screen (rather than our unique UI commands) to identify any problems such as these as early as possible.

Changes to Design

We were originally thinking of using a custom PCB to keep our circuit components neatly organized, but we want to pivot to using a solderable breadboard as it is easier (i.e., no need for PCB layout) and allows for more iterations.

Schedule Updates

Fiona redesigned and reordered most of her application tasks for the coming weeks.

 

Part A: Public Health, Safety, and Welfare, Written By Fiona

Music can be extremely beneficial to mental health; it can be used in therapy and is also a very common hobby. For many, music is a tool to better their mental health, like sports, reading, or other types of art. It is an important part of life for many people, not just professional musicians, and our project is intended to make the enjoyment of music more accessible, so we believe that it has the potential to better the mental health of a broader range of people.

Part B: Consideration of Social Factors, Written By Shravya

Our project provides a more inclusive way to engage with musical instruments like the piano, giving users with impaired hand/arm mobility an opportunity to express themselves and participate in an art form that might otherwise be inaccessible to them.

Beyond being a personal creative outlet, this project reflects broader trends of inclusivity and accessibility in design. The music industry, like many others, can sometimes marginalize those with physical limitations, and this product aims to reduce those barriers and expand participation. Throughout human history, music has been an avenue for fostering social connections, transcending language and cultural barriers, and it is an integral part of many social gatherings. Individuals who were previously excluded from musical activities will now be able to engage, contribute, and form connections within musical communities.

Part C: Consideration of Economic Factors, Written By Peter

We aim to reduce the price of our product without sacrificing functionality. For the housing of the electrical components that goes over the piano keys, we are doing this by 3D-printing the body. This in-house manufacturing method cuts the cost of paying a manufacturer and keeps the weight of the unit down. Additionally, once testing with breadboards is complete, PCBs of an identical design could be ordered in bulk with pre-placed parts, which can be done cheaply overseas. Finally, the UI utilizes a computer, which most people already own, further reducing the price of using our product.

Shravya’s Status Report for 09/28/24

This week, my primary focus was on preparing for the design review presentation. As part of this effort, I created the hardware system block diagram, which outlines how the different components of our project will interact with one another (attached below). Additionally, I worked on designing the electrical circuit (visible in the second image) for the solenoid control system. The design includes the obvious components, but I’ve realized that integrating MOSFET amplifiers is critical to making the circuit function properly. This is because the GPIO pin outputs a 3.3 V signal, which is too low a voltage to drive the solenoids directly; a common-source NMOS configuration can provide the necessary amplification.
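As a rough illustration of why this works (numbers assumed here, not measured): a logic-level NMOS with a gate threshold around 2 V turns fully on when the GPIO drives its gate to 3.3 V. The solenoid then sits between the supply rail (e.g., 5 V for the Adafruit 412) and the drain, so the coil current, on the order of an amp, is sourced by the supply rather than the GPIO pin, which only ever charges the high-impedance gate.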

Unfortunately, I fell slightly behind schedule this week due to two midterms and the preparation required for the design presentation. I wasn’t able to begin running the Cadence simulations, as planned, but I will prioritize this first thing next week. To catch up, I’ve already blocked out additional time to focus on running these simulations and ensure the electrical circuit I designed operates as expected.

Next week, I plan to finalize and run the simulations in Cadence, ensuring the circuit is functioning as intended. Additionally, I will focus on learning more about how Pulse Width Modulation (PWM) works and how it can be integrated into our system to improve power efficiency. I’ll be working with Peter to begin testing how one solenoid works.