Team Status Report for 12/07

General updates:

  • At the beginning of the week, the team prepared for the final presentation by finalizing slides and doing some practice runs.
  • Jessie and Shaye worked together to adjust the live feedback code so the tension for 2 hands could be identified. Previously, our tension detection algorithm only worked for 1 hand. For more information, see Shaye’s status report.
  • Jessie continued to work on testing the post-processing and average FPS of various piano-playing videos. For more information, view her status report.
  • Danny continued working on improving the web application and displaying the videos on the website. For more information, view his status report.
  • At the end of the week, the team also discussed the content we want to present on our poster. 

 

Unit testing:

 

UI/UX (Clarity of instructions, stand setup/takedown)

  • We tasked one of our pianists to run through the instructions on the web application. This involved setting up the stand, placing the stand behind the piano bench and adjusting the camera, navigating to the calibration and recording page for a mock recording session, and then taking down the entire system.
  • We timed how long it took the pianist to complete each step, and most steps were completed within our time limits. One step we underestimated was setting up the stand, which took longer than expected, so we will be working on that.
  • We received feedback afterwards that our instructions were not clear enough, so the pianist was not always sure what to do at each step. They also commented that pictures of each component would help clarify the instructions.
  • Since the pianist was able to complete most of the tasks within time, we are not worried about making big changes to our system. We have clarified our instructions and will add pictures to help our users. We are not currently planning to change the stand or how setup/takedown is done, since the instructions were the main complaint and are easier to change.

Hardware (latency, frame rate)

  • We created a small dataset of videos covering different types of pieces and varying lengths. We used this dataset to test both the average frame rate and the post-processing time.
  • Frame rate
    • To find the average frame rate for each video, we store the captured frame-rate samples in an array and average them at the end of the video. The base code already included frame-rate measurement: it uses a ticker to track elapsed time and, every 10 frames, computes the frame rate by dividing 10 frames by the time that has passed. We collect and average these values (see the first sketch after this list).
    • We found that our average frame rate was about 21-22 fps, which is within the targeted range. Therefore, we made no changes to our design due to the frame rate.
  • Post-processing time
    • To find the post-processing time of each video, we record a timestamp before the recorded frames are written into a video file and another timestamp afterwards (see the second sketch after this list).
    • We found that the post-processing time was well below our target processing rate of ½ (processing time equal to half the video length); the rate we achieved is around ⅙. We plan to test this more extensively with longer videos, and it is possible our code will change if those longer videos fail, which could slow down our processing rate. However, since the rate is so far below our target, we are not concerned.
    • No major design decisions were made based on these findings. However, we did change our code because post-processing did not work for longer videos: instead of storing the video's frames in an in-memory array, we now store the frames in a file. This likely slowed down the frame rate and post-processing time because of the extra reads and writes, but even with this change we remain within the target range.
  • Live feedback latency
    • To test the live feedback latency of our system, we fed clearly tense and clearly non-tense movements into the system by waving and then abruptly stopping. Our tension algorithm detects tension from horizontal angle deviation, so the waving is extremely non-tense while the abrupt stop followed by a pause is extremely tense. We ran multiple iterations of waving and stopping and recorded when the waving stopped and when the audio feedback was produced; the difference between these two timestamps is effectively our system's latency.
    • We found that the system had a latency of around 500 ms, well below our target of 1 second. These results are from testing tension in only one hand, as we are still finishing our tension detection algorithm; we expect the latency to increase with two hands because the frame rate drops when two hands are in the frame. We don't think this drop will push us over our target, though, since it is not very large (28 fps to 21 fps). Because we are meeting our target, no changes were made based on these results.
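
A minimal sketch of the frame-rate averaging described above (the capture loop and names here are simplified illustrations, not our exact code):

    import time
    import cv2

    def measure_average_fps(video_source=0, frames_per_sample=10):
        """Estimate average FPS by sampling the frame rate every 10 frames."""
        cap = cv2.VideoCapture(video_source)
        fps_samples = []                      # one FPS estimate per 10-frame window
        frame_count = 0
        window_start = time.perf_counter()
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame_count += 1
            # ... per-frame processing (hand tracking, live feedback) would happen here ...
            if frame_count % frames_per_sample == 0:
                elapsed = time.perf_counter() - window_start
                fps_samples.append(frames_per_sample / elapsed)
                window_start = time.perf_counter()
        cap.release()
        return sum(fps_samples) / len(fps_samples) if fps_samples else 0.0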
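
And a sketch of how the post-processing timestamps and the switch to file-backed frame storage fit together (here the captured frames are assumed to have been saved to disk as individual images, which is an illustration rather than our exact storage format):

    import time
    import cv2

    def postprocess(frame_paths, out_path, fps=21.0):
        """Write recorded frames into a video; return processing_time / video_length (target <= 0.5)."""
        first = cv2.imread(frame_paths[0])
        height, width = first.shape[:2]
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (width, height))
        start = time.perf_counter()                      # timestamp before frames are written
        for path in frame_paths:
            writer.write(cv2.imread(path))               # frames come from disk, not an in-memory array
        processing_time = time.perf_counter() - start    # timestamp afterwards
        writer.release()
        video_length = len(frame_paths) / fps
        return processing_time / video_length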

Model (accuracy of angles)

  • We compared Mediapipe's angle outputs against hand measurements, and Mediapipe against Blaze.
  • The agreement was acceptable, so no changes were made.
  • We produced graphs of the angles and a summary of the percent match between the outputs (a rough sketch of the percent-match computation is below).
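
A rough sketch of how a percent match between two sets of angle outputs can be computed (the 10-degree tolerance is an illustrative choice, not necessarily the threshold we used):

    def percent_match(angles_a, angles_b, tolerance_deg=10.0):
        """Percentage of paired angle measurements that agree within a tolerance."""
        assert len(angles_a) == len(angles_b) and angles_a
        matches = sum(1 for a, b in zip(angles_a, angles_b)
                      if abs(a - b) <= tolerance_deg)
        return 100.0 * matches / len(angles_a)

    # e.g. percent_match(mediapipe_angles, hand_measured_angles)
    #      percent_match(mediapipe_angles, blaze_angles)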

 

Tension detection algorithm (comparison to ground truth)

  • We ran the detection pipeline on the clips and compared its output to the ground-truth results (see the sketch after this list).
  • We compiled two datasets from the tension-labeling form we sent out: one where at least 4/6 respondents agreed on a label, and one where at least 5/6 respondents agreed.
    • We sent out 21 video clips to be labelled
    • The 5/6 dataset contained 7 videos 
    • The 4/6 dataset contained 15 videos
  • Overall, the ground-truth datasets are inconsistent and the tension detection results are inconclusive. However, given the mixed ground truth and the feedback suggesting a better angle, that angle could have been a worthwhile approach if we had more time for this project.
  • Preliminary results are as summarized: 
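
As background on how the comparison is scored (separate from the preliminary results themselves), a minimal sketch of checking each clip's pipeline output against its majority ground-truth label; the names here are illustrative, and detect_tension stands in for our pipeline:

    def score_against_ground_truth(labeled_clips, detect_tension):
        """Fraction of clips where the pipeline's tense/not-tense call matches the majority label."""
        correct = 0
        for clip_path, ground_truth_is_tense in labeled_clips:   # e.g. ("clip01.mp4", True)
            predicted = detect_tension(clip_path)                # True if the pipeline flags tension
            correct += (predicted == ground_truth_is_tense)
        return correct / len(labeled_clips)

    # accuracy_5of6 = score_against_ground_truth(dataset_5_of_6, detect_tension)
    # accuracy_4of6 = score_against_ground_truth(dataset_4_of_6, detect_tension)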

 

Integrated tests:

 

Tension on RPi: 

  • To test whether the tension detection algorithm maintains high accuracy when run on the RPi, we plan to use the same procedure as the tension detection testing described earlier, but on the RPi with the full system. We plan to run this test soon and currently have no results for it.

 

Overall workflow UI/UX testing: 

  • Because we are not quite finished integrating our system, we plan to do this testing in the near future and have yet to collect results. To test our full system, we will first have Shaye, our local pianist, run through the workflow to catch any major integration issues. Once that works, we will ask a pianist who is unfamiliar with our system to go through the workflow to see if the UI is clear.

 
