General updates:
- As a team, we assembled the RPi into its case along with the display wiring and the new SD card. Danny transferred the data from the 16GB SD card to the larger 128GB SD card. Shaye attached the accelerator to the RPi, making sure the GPIO pins remained exposed. Jessie had to redo the display wiring several times in the process.
![](http://course.ece.cmu.edu/~ece500/projects/f24-teamc4/wp-content/uploads/sites/331/2024/11/Screenshot-2024-11-16-at-2.27.56 PM.png)
- Overall, the team’s goal for this week was to have an integrated system ready for the demo on Monday/Wednesday. The individual parts of the system are not fully complete, but the basic workflow and basic integration are in place, and we are confident we can finish integration next week.
- Shaye and Jessie worked to interface the Blaze model on the RPi with the tension-tracking algorithm Shaye had previously written. See Shaye’s status reports for more info on the integration.
- Danny and Jessie worked together to connect button clicks in the web application to starting/stopping video recording and starting/stopping calibration. Jessie wrote some code to start and stop recording in order to test the button-clicking mechanism (this code will not be used in the final product) and can be shown for demo purposes; she also wrote some code for starting/stopping calibration. For more information on this code, see Jessie’s status report; for more information on the web application side of the button-click integration, see Danny’s status report. A rough sketch of what such a button-triggered endpoint could look like appears after this list.
- Shaye collected more video data and analyzed it to explore additional tension-detection approaches. They also cut the gathered video data into snippets for the ground-truth Google form. See Shaye’s status report for more information.
- Building on the integrated Blaze model with tension tracking, Jessie added the buzzer feature that sounds when tension is detected. See Jessie’s status report for more information; the second sketch after this list gives a rough idea of how the buzzer could hang off the tension output.
- Danny has moved the web application code onto the RPi. He has continued trimming unnecessary content and building out the functionality the project needs. See Danny’s status report for more information.
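As a rough illustration of the button-click integration described above, the sketch below shows one way a web-app button could hit an endpoint on the RPi to start or stop recording. The framework choice (Flask), the route names, and the camera helper functions are all assumptions for illustration only; the team’s actual web application and recording code are described in Danny’s and Jessie’s status reports.

```python
# Hypothetical sketch: wiring web-app buttons to start/stop recording on the RPi.
# Flask, the endpoint names, and the camera helpers are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)
recording = False  # shared state toggled by the buttons


def start_camera_recording():
    # Placeholder for the real camera-recording call.
    print("recording started")


def stop_camera_recording():
    # Placeholder for the real camera-recording call.
    print("recording stopped")


@app.route("/record/start", methods=["POST"])
def record_start():
    global recording
    if not recording:
        start_camera_recording()
        recording = True
    return jsonify({"recording": recording})


@app.route("/record/stop", methods=["POST"])
def record_stop():
    global recording
    if recording:
        stop_camera_recording()
        recording = False
    return jsonify({"recording": recording})


if __name__ == "__main__":
    # Served on the RPi so the web app's buttons can POST to these routes.
    app.run(host="0.0.0.0", port=5000)
```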
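Similarly, the second sketch below gives a rough idea of the per-frame flow from Blaze landmarks to a tension decision to the buzzer. The wrist-angle heuristic, the landmark format, and the buzzer pin are placeholders only; Shaye’s actual tension-tracking algorithm and Jessie’s actual buzzer code differ and are covered in their status reports.

```python
# Hypothetical sketch of the per-frame pipeline: Blaze landmarks -> tension check
# -> buzzer. The tension heuristic, pin number, and landmark format are placeholders.
import math
import RPi.GPIO as GPIO

BUZZER_PIN = 18  # assumed GPIO pin for the buzzer

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUZZER_PIN, GPIO.OUT)


def wrist_angle(landmarks):
    """Toy metric: angle at the wrist formed by elbow and knuckle landmarks.

    `landmarks` is assumed to be a dict of (x, y) points; the real model
    output and indexing will differ.
    """
    ax, ay = landmarks["elbow"]
    bx, by = landmarks["wrist"]
    cx, cy = landmarks["knuckle"]
    v1 = (ax - bx, ay - by)
    v2 = (cx - bx, cy - by)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def is_tense(landmarks, threshold_deg=150.0):
    # Placeholder rule: a sharply bent wrist counts as "tense".
    return wrist_angle(landmarks) < threshold_deg


def handle_frame(landmarks):
    # Drive the buzzer from the tension decision for this frame.
    GPIO.output(BUZZER_PIN, GPIO.HIGH if is_tense(landmarks) else GPIO.LOW)
```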
Verification of Subsystems:
Stand Set-up/Take-down:
To test whether the user can set up and take down the camera stand within the targeted time, we plan to write a short set of instructions directing the user on how to set up and take down the stand. We will then time how long it takes the user to follow these instructions and successfully set up/take down the stand. The stand is considered successfully set up when the entire piano is within the camera’s view and parallel to the camera frame, and successfully taken down when all the components are placed in the tote bag. We plan to test both new users (first time setting up and taking down) and experienced users. This way, we can get a feel for how easy the instructions are to follow, as well as a sense of how long set-up would take a user if our system were part of their daily practice routine.
If the set-up/take-down time is too long, we plan to modify the instructions so that they are easier to follow (e.g., adding more pictures) and to fasten more of the system’s components together so there is less for the user to assemble. The specific modifications we make will depend on our observations (e.g., users often getting stuck at a specific step) and on user feedback (e.g., a user thought step 2 of the instructions was poorly worded).
Tension Detection Algorithm:
We have run some informal tests to gauge the effectiveness of our tension detection algorithm. We roughly tested it by collecting data from Professor Dueck’s students: we asked them to play a list of exercises both with and without tension. We then checked the output of our system to see whether it aligned with how the pianist intended to play. For more information on the results of these informal tests, see Shaye’s status reports.
To more formally test the correctness of our tension detection algorithm, we have collected more data from Professor Dueck’s students; this time we asked them to prepare a 1-minute snippet of a piece they were comfortable with and a 1-minute snippet of a piece they were still learning. Shaye has divided these recordings into 10-second snippets, which we plan to ask Professor Dueck and her students to label as tense or not; this will establish a ground truth against which to compare our system’s output. The data collected from Professor Dueck’s students is valuable because it lets us run our algorithm on a variety of pianists and pieces. Additionally, we can use this data to compare the tension detection algorithm’s output when using the MediaPipe model versus the Blaze model, to see whether there was any reduction in accuracy after converting models, and then adjust the tension detection algorithm accordingly. See Shaye’s status report for more information on how the tension detection algorithm can be tweaked.
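Once the ground-truth labels from the Google form are collected, scoring the algorithm against them can be fairly mechanical. The sketch below assumes a hypothetical CSV with one row per 10-second snippet containing a ground-truth label and the algorithm’s prediction; the actual data format and analysis will be up to Shaye.

```python
# Hypothetical sketch of scoring the algorithm against the ground-truth labels
# gathered from the Google form. The CSV layout (snippet_id, ground_truth,
# prediction with "tense"/"relaxed" values) is an assumption for illustration.
import csv
from collections import Counter


def score(csv_path):
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["ground_truth"], row["prediction"])] += 1
    total = sum(counts.values())
    correct = counts[("tense", "tense")] + counts[("relaxed", "relaxed")]
    print(f"accuracy: {correct / total:.2%} over {total} snippets")
    print("confusion (ground_truth, prediction):", dict(counts))


if __name__ == "__main__":
    score("snippet_labels.csv")  # hypothetical file name
```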
System on RPi:
We want to ensure that our system can process data fast enough and provide feedback within a reasonable time. For information on how we plan to test our system’s live feedback latency/frame rate and the system’s post-processing time, see Jessie’s status report.
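Jessie’s status report has the actual test plan; as a rough sketch, per-frame latency and effective frame rate could be measured with a simple timing loop like the one below, where `get_frame` and `process_frame` stand in for the real capture and processing steps.

```python
# Hypothetical sketch of one way to measure per-frame processing latency and
# effective frame rate on the RPi; the team's actual measurement plan is in
# Jessie's status report.
import time


def get_frame():
    return object()  # placeholder for grabbing a camera frame


def process_frame(frame):
    time.sleep(0.01)  # placeholder for landmark + tension processing


def measure(n_frames=100):
    latencies = []
    for _ in range(n_frames):
        start = time.perf_counter()
        process_frame(get_frame())
        latencies.append(time.perf_counter() - start)
    avg = sum(latencies) / len(latencies)
    print(f"avg latency: {avg * 1000:.1f} ms, ~{1 / avg:.1f} FPS")


if __name__ == "__main__":
    measure()
```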
Web Application:
We want to ensure that our web application is easy and intuitive for users. For information on how we plan to test the intuitiveness of our web application, see Danny’s status report.
Validation:
Finally, we want to ensure that our system meets our users’ needs. The main way we will do this is by polling them. Currently, we plan to poll Professor Dueck and her students on different aspects of our system. These aspects include, but are not limited to: whether our system provides accurate enough feedback to be useful, how easy it was to set up and use, whether there were any difficulties while using it and how we could improve, and whether the way the feedback was delivered was helpful for their purposes. Through this polling, we will validate whether our tension detection algorithm on the RPi provides accurate and helpful feedback for our users. Additionally, since our users interface with our system through the web application, the polling will also help us confirm that the web application is intuitive and easy to use and surface any suggestions for a better user experience.