Jessie’s Status Report for 11/09

This week’s tasks:

  • CONNECTING RPI TO CAMPUS WIFI: At the beginning of the week, I successfully connected the RPi to the school's Wi-Fi by registering it on CMU-DEVICE.
  • RUNNING PROGRAM AT BOOT: I also investigated having a program run automatically at boot (so we won't need to ssh into the RPi each time we want to run a program). After some tinkering, I got a sample program that writes "hello world" to a file to run automatically at boot. I'm not yet sure whether a continuously looping program (like the one we will likely end up writing) will work the same way; I can experiment more with it next week or with the integrated code.
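For reference, one common way to launch a program at boot on a Pi is a systemd unit. This is only a sketch; the service description, user, and script path below are placeholders, not our actual setup:

```
[Unit]
Description=Run the landmark program at boot
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/backintune/main.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Saved under /etc/systemd/system/ and enabled with `sudo systemctl enable <name>.service`, systemd supervises the process itself, so a continuously looping program should start at boot the same way a run-once script does.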
  • BROKEN SD CARD: The next time I came to work on the project, I was unable to boot the RPi; it seemed the OS couldn't be found. When I tried to reflash the SD card, the card couldn't be detected at all, indicating it had failed; we suspect we broke it when putting the RPi into its case. We had another SD card on hand; however, it was a 16 GB card instead of a 128 GB one. I redid my work on the RPi with the 16 GB card (installing the necessary programs and setting a program to start automatically at boot). This would have been fine had we not planned to put video files on the Pi for testing purposes, so we will likely have to transfer the contents of the 16 GB card to another 128 GB card in the future.
  • ACCELERATED RPI TUTORIAL: I finished following the tutorial to put the hand landmark model onto the accelerated RPi
    • Overall it was pretty straightforward. The trickiest part was attaching the accelerator to the Pi; at times the Pi wouldn't detect that it was connected.
    • I was stuck for a bit because no video output popped up as the tutorial said it should. Danny helped me out, and we figured out it was because I didn't have X11 forwarding enabled when I ssh-ed into the Pi from my laptop. Once X11 forwarding was enabled, the video output was very laggy. As a sanity check, I re-ran the MediaPipe model directly on the Pi (no acceleration) like I did last week, and it had a much slower frame rate (~4 fps instead of the previously observed 10 fps). Danny helped me figure this out too: last week I had used a monitor for the video output instead of X11 forwarding. Once I connected the Pi to a monitor for video output, the accelerated RPi achieved a frame rate of around 21-22 fps. The frame-rate drop caused by the video output should not be a concern for us, since we don't need the video output for live feedback and only use the outputted landmark information (in the form of a vector) for our tension calculations.
    • Video of accelerated RPi model output: https://drive.google.com/file/d/1msm4iRN0igps-D62fNLJPaKeeLQqsShn/view?usp=sharing
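Since we only consume the landmark vector (not the video), the handoff to the tension calculations can be as simple as flattening the 21 hand landmarks into one feature vector. A minimal sketch of that idea; the function name and (x, y, z) tuple format here are my own illustration, not our integrated code:

```python
def landmarks_to_vector(landmarks):
    """Flatten hand landmarks into a single feature vector.

    landmarks: iterable of (x, y, z) tuples, e.g. the 21 normalized
    points a MediaPipe-style hand landmark model produces per hand.
    """
    vec = []
    for x, y, z in landmarks:
        vec.extend((x, y, z))
    return vec

# 21 dummy landmarks flatten to a 63-element vector
dummy = [(0.1, 0.2, 0.0)] * 21
assert len(landmarks_to_vector(dummy)) == 63
```

The downstream tension code then only needs to agree on this vector layout, regardless of whether the frames are ever rendered.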
  • ACTIVE BUZZER: I had ChatGPT quickly write some new code for the newly acquired active buzzer. There are two versions of the code on the GitHub repo that I tried, which output at different frequencies: on my branch (jessie_branch), active_buzzer.py outputs at a higher frequency and active_buzzer_freq.py at a lower one. We can tinker with this more later; I think the high frequency can be very distracting and alarming.
    • higher frequency video: https://drive.google.com/file/d/10R0AOH2a84ZJaFh7ogOY7JOEHq7ynsl4/view?usp=sharing
    • lower frequency video: https://drive.google.com/file/d/1epzOV_M6fjuQC4USLHoPRA3CS1ay1ExG/view?usp=sharing
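For illustration, the audible difference between the two versions comes down to how fast the buzzer's pin is toggled. A hedged sketch of that idea, not the actual active_buzzer.py; `gpio_write` stands in for whatever RPi.GPIO output call the real code uses:

```python
import time

def beep_schedule(freq_hz, duration_s):
    """Return (pin_state, seconds) pairs that toggle a pin at freq_hz
    for duration_s seconds; the toggle rate sets the perceived pitch."""
    half_period = 1.0 / (2 * freq_hz)
    steps = int(duration_s / half_period)
    return [(step % 2 == 0, half_period) for step in range(steps)]

def play(schedule, gpio_write, sleep=time.sleep):
    """Drive the buzzer pin through a schedule produced above."""
    for state, seconds in schedule:
        gpio_write(state)   # on real hardware: GPIO.output(PIN, state)
        sleep(seconds)
```

Separating the schedule from the pin writes makes the timing logic testable off-device, and swapping `freq_hz` reproduces the higher/lower pitch difference between the two scripts.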
  • CALIBRATION DISPLAY: The 2-inch display arrived this week! I tried to follow the display's spec page to get it set up:
    • I hooked the display up to the RPi as the page indicated; however, I was unable to get the examples to work successfully. Danny was able to get the examples to work and more information can be found in his status report. 
    • Once the examples were working, I worked with ChatGPT to write some code that mirrors the webcam output onto the display for the calibration step. The code can be found at backintune/accelerated_rpi/LCD_Module_RPI_code/RaspberryPi/python/mirror_cam.py on the jessie_branch. I had to edit the generated code a bit to properly rotate the video output for the display. The webcam mirror has a large delay and low frame rate. We don't think this is a huge issue, since the mirror will only be used during the setup/calibration step to help the user ensure their hands and the entire piano are within the frame; a high frame rate therefore isn't necessary, though the lag could be frustrating to work with. If there is time later, I can look further into optimization possibilities.
    • video of laggy display: https://drive.google.com/file/d/1eLImQROqo-vjqi8m00PNjzmGhWVeltTi/view?usp=sharing
  • I also responded to AMD’s Andrew Schmidt’s email (and Prof Bain’s Slack message) asking for feedback. 

Schedule:

I am very much on schedule and have even completed some less time-sensitive tasks. At Joshna's suggestion, we decided to combine the web-app-hosting RPi and the accelerated RPi into one Pi, so the previously planned UART interface is no longer necessary. Next week I'm hoping to make a lot of progress on full system integration.

Next week’s deliverables:

  • Make a Google form and spreadsheet to collect and organize ground truth data from Prof Dueck and her students. 
  • Interface the output of the hand landmark model on the RPi with Shaye’s code.
  • Work with Danny to interface the web app with the RPi. Specifically, try to get programs to run by clicking buttons on the web app. 
  • Start looking into how to post-process video and transfer it to the web application. 

Less time-sensitive tasks:

  • Experiment with optimizing the webcam mirroring onto the display 
