Alexa’s Status Report for 12/6/25

Before Thanksgiving break, I implemented two new features for our AIR device. First, I added a mechanism for the force sensor to send an additional notification to the main program that stops any currently playing sounds. This is most similar to muting the strings of a guitar to halt ringing notes. Second, I implemented instantaneous speed and angular-velocity sensing to determine the delay between notes in a strum. This enables users to strum slowly to increase the delay between individual notes of a chord. To compute instantaneous speed, we low-pass-filter the acceleration signal to reduce high-frequency noise, then integrate the result.
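As a rough sketch of that computation (not our exact filter constants), an exponential moving average can serve as the low-pass stage, followed by trapezoidal integration; the function name and alpha value below are illustrative:

```python
# Minimal sketch of the speed estimate described above: an exponential
# moving average as the low-pass filter, then trapezoidal integration.
# The alpha value is a placeholder, not our actual tuning.

def estimate_speed(accel_samples, dt, alpha=0.2):
    """accel_samples: accelerometer readings (m/s^2) along the strum axis,
    sampled every dt seconds."""
    filtered = accel_samples[0]
    prev = filtered
    speed = 0.0
    for a in accel_samples[1:]:
        filtered = alpha * a + (1 - alpha) * filtered  # low-pass (EMA)
        speed += 0.5 * (filtered + prev) * dt          # trapezoidal integration
        prev = filtered
    return abs(speed)
```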

In addition to preparing the final presentation content, I also conducted tests to include in both the presentation and the report. I designed experiments to evaluate strum accuracy at different BPMs and performed connection-related testing.


Alexa’s Status Report for 11/22/25

This week I worked on improving the strumming algorithm and adding functionality to our glove. I implemented stopping sounds when the touch sensor is released, so users can control how long a sound continues to play. I also made the delay between notes depend on the speed at which the user moves their hand. The speed estimates are still not very accurate, and we think more filtering and tuning is required for this to work well.
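The intended mapping looks something like the sketch below, where a slower hand produces a longer delay between the notes of a chord; the speed range and delay bounds are placeholders rather than tuned values:

```python
# Hypothetical mapping from estimated hand speed to the delay between
# individual notes of a strummed chord: slower hand -> longer delay.

def strum_delay_ms(speed, min_speed=0.1, max_speed=2.0,
                   min_delay=5.0, max_delay=120.0):
    # Clamp speed into the expected range, then interpolate inversely.
    s = min(max(speed, min_speed), max_speed)
    t = (s - min_speed) / (max_speed - min_speed)  # 0 = slow, 1 = fast
    return max_delay - t * (max_delay - min_delay)
```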

I also conducted latency experiments for our Bluetooth communication scheme. The latency of sending a BLE message from the ESP32 to the Python program is around 30 ms, a value I derived from a set of round-trip-time samples. We also thought about how to design experiments for strum-detection accuracy. Now that the pipeline is fully connected, we need to polish everything.
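One way such round-trip samples can be collected from the desktop side is sketched below, assuming (hypothetically) that the ESP32 firmware echoes each write back as a notification; the device address and UUID are placeholders:

```python
# Rough sketch of round-trip measurement with bleak, assuming the ESP32
# echoes writes back as notifications. Address and UUID are placeholders.
import asyncio
import time
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"                        # placeholder
ECHO_CHAR = "0000ffe1-0000-1000-8000-00805f9b34fb"   # placeholder UUID

async def measure_rtt(n=50):
    async with BleakClient(ADDRESS) as client:
        got_echo = asyncio.Event()

        def on_notify(_, data):
            got_echo.set()

        await client.start_notify(ECHO_CHAR, on_notify)
        samples = []
        for _ in range(n):
            got_echo.clear()
            t0 = time.perf_counter()
            await client.write_gatt_char(ECHO_CHAR, b"ping")
            await got_echo.wait()
            samples.append((time.perf_counter() - t0) * 1000)
        await client.stop_notify(ECHO_CHAR)
        # One-way latency approximated as half the average round trip.
        print(f"avg one-way ~{sum(samples) / len(samples) / 2:.1f} ms")

asyncio.run(measure_rtt())
```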

New skills or tools I’ve learned to accomplish capstone tasks:

I needed to learn several new technical skills and knowledge areas.

I learned how to code for the Arduino platform, including reading sensor data and implementing algorithms for strumming detection. I also needed to understand Bluetooth communication, specifically how to establish a connection and send data between a Python program and an ESP32 microcontroller. Additionally, I learned how to use the MediaPipe libraries for motion tracking, which involved understanding how to process and interpret the data from these libraries to integrate with my project.

To acquire this knowledge, I used a combination of strategies. I read documentation for the hardware and software tools, which helped me understand the APIs and limitations of the sensors and microcontroller. I also watched tutorial videos online, which were especially helpful for seeing practical implementations and debugging strategies. Finally, I used AI tools to clarify concepts and suggest solutions when I encountered coding challenges. Combining these strategies allowed me to learn efficiently and apply the knowledge directly to design, implement, and debug my project.


Alexa’s Status Report for 11/15/25

This week I worked on adding new features to our app and continued testing and integration with our temporary sensor after our IMU got fried. We suspect something went wrong with the Seeed Studio board’s power management, because when the IMU got fried, the Bluetooth communication also stopped working. I spent hours trying to debug the Bluetooth connectivity issue, but only got things working again by switching to our backup ESP32 board. This is important because we now know about this mode of failure for future development and debugging.

Adding the guitar mode involved finding a high-quality individual guitar note as a .wav file, then pitch-shifting the note higher and lower to produce a full scale for our library. Since our IMU broke and we didn’t have a way to get velocity input, I moved forward with implementing the delay between individual notes by adding a slider that introduces from zero to a few milliseconds of delay. Since this is already built into the code structure, adding the Bluetooth input from the IMU should be streamlined.
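A minimal sketch of how such a note library can be generated, assuming librosa and soundfile are available; the file names and semitone range are illustrative:

```python
# Build a scale from one source note by pitch-shifting it up and down.
# File names and the semitone span are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("guitar_E2.wav", sr=None)   # source note
for semitones in range(-2, 10):                  # rough span around the source
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
    sf.write(f"note_{semitones:+d}.wav", shifted, sr)
```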

Team Status Report for 11/8/25

This week, in preparation for our first testing session on Wednesday and the interim demo sessions next week, we worked hard to get the whole pipeline running together. We were able to bring up the minimum set of parts needed for strumming and gesture detection to work together.

We created a testing feedback form to prepare for testing with Prof. Dueck’s class on Wednesday. That day, all the musicians and Prof. Dueck were able to test our instrument and provide feedback for our next iteration. After collecting results from the feedback form, the overall consensus was that we need to make the instrument more sensitive and incorporate stopping the notes. We are also working to incorporate the force-sensitive resistor and the time between strums.

Alexa’s Status Report for 11/8/25

This week I successfully set up the Bluetooth communication between the ESP32 and our Python app, and incorporated the communication scheme into the app. So that the Bluetooth setup starts at the same time as the app, I use the Python library qasync to combine two tasks in the same event loop. The first task is the BLE listener that establishes a connection with our ESP32; the second is the PyQt app that Lucy built, which updates the GUI and reads hand gestures from the camera. Once the Bluetooth connection is established, the user can play sounds using IMU motion detection and select chords with the left hand via the computer camera.
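A minimal sketch of that structure, assuming qasync with PyQt5; MainWindow and ble_connect_and_listen stand in for our actual app and connection code:

```python
# Combined event loop: one qasync loop drives both the Qt GUI and the
# asyncio BLE task. The window class and BLE coroutine are placeholders.
import asyncio
import sys

from PyQt5.QtWidgets import QApplication, QMainWindow
from qasync import QEventLoop

class MainWindow(QMainWindow):
    pass  # GUI updates + camera gesture reading live here in the real app

async def ble_connect_and_listen():
    while True:                 # placeholder for connect/notify logic
        await asyncio.sleep(1)

app = QApplication(sys.argv)
loop = QEventLoop(app)          # qasync loop bridging Qt and asyncio
asyncio.set_event_loop(loop)

window = MainWindow()
window.show()

with loop:
    loop.create_task(ble_connect_and_listen())  # task 1: BLE listener
    loop.run_forever()                          # task 2: the Qt app itself
```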

To incorporate the IMU data, I’m using the notification scheme in Bluetooth Low Energy (BLE). One message is sent every time an event is signaled, which triggers a customizable handler for the notification. This is much simpler than a reader/writer scheme where we stream data. To force the handler to run promptly in the event loop, I surface the notification as a signal inside the PyQt app, which lets the handler run ahead of other tasks that are already scheduled.
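A sketch of that bridge, assuming PyQt5 and bleak; the class, signal name, and UUID are hypothetical stand-ins:

```python
# Bridge a BLE notification into the Qt event loop via a signal.
from PyQt5.QtCore import QObject, pyqtSignal

IMU_CHAR = "0000ffe2-0000-1000-8000-00805f9b34fb"  # placeholder UUID

class ImuBridge(QObject):
    imu_event = pyqtSignal(bytes)   # emitted once per BLE notification

async def subscribe(client, bridge):
    # client: a connected bleak BleakClient; bridge: an ImuBridge whose
    # imu_event signal is connected to a slot in the PyQt app.
    def on_notify(_, data: bytearray):
        # Emitting a signal hands the payload to a slot that Qt schedules
        # promptly on the event loop, ahead of longer-running tasks.
        bridge.imu_event.emit(bytes(data))

    await client.start_notify(IMU_CHAR, on_notify)
```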

Alexa’s Status Report for 11/1/25

This week I spent a lot of time trying to establish a solid wireless connection between the ESP32 and our Python program. Initially we planned to use ESP-NOW, a lightweight communication protocol for sending packets, but I found out that its Python support exists only in MicroPython environments on microcontrollers. It’s therefore best for device-to-device communication, but it can’t talk to our Python program running on a desktop operating system. The next easy option to set up software-wise would have been Wi-Fi, but since CMU Wi-Fi requires a certificate, and devices without browsers need to connect to CMU-DEVICE rather than CMU-SECURE, I didn’t see an easy way to communicate over Wi-Fi.

So, I set up the script for Bluetooth Low Energy (BLE) communication on the ESP32. It seems to be working on the ESP32 side, but I still can’t get the Python program to connect. The next step is to use a BLE scanner on my phone to debug this issue.
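A desktop-side sanity check can also help here; a minimal sketch using bleak’s scanner to confirm the ESP32’s advertisement is visible at all:

```python
# List advertising BLE devices to check whether the ESP32 shows up.
import asyncio
from bleak import BleakScanner

async def scan():
    for device in await BleakScanner.discover(timeout=5.0):
        print(device.address, device.name)

asyncio.run(scan())
```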

Team Status Report for 10/25/25

This week, we’re making solid progress on our project and staying on schedule, thanks to everyone’s individual contributions. In preparation for Prof. Dueck’s class on Wednesday, we had several parts of our project ready to present and test. The main part was the hand-gesture algorithm developed by Lucy. Here is a video of the hand-gesture algorithm. We also showcased our designs for the right-hand motion strumming glove.

After testing the hand-gesture computer vision component, we received positive feedback regarding its ease of learning, accessibility for users with varying levels of mobility, and low latency. Another key activity this week was recording voice samples from the vocalists, which will be incorporated into the instrument’s sound output. Additionally, the music majors shared ideas on improving the naturalness of chord progressions. We believe this addition can be implemented purely in software and will enhance our project.


Alexa’s Status Report for 10/25/25

For our project, I worked on the wireless communication from the ESP32 to the Python program on the computer. We want a low-latency way of communicating IMU and touch-sensor data to our Python program for further calculation. In our design report, we explain why we chose the ESP-NOW protocol: its ease of use with the ESP32 and its lightweight setup. It doesn’t require a router and allows peer-to-peer communication between the board and a Python listener. I wrote the Arduino code for transmitting basic structs, as well as the Python setup for listening. The next step is to get the testing setup working on our ESP32 board and measure basic latency values to see whether this approach meets our needs.
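Whatever the transport, the Python listener needs to decode the packed structs; below is a sketch assuming a hypothetical packet layout of three float accelerometer axes plus a touch flag, matching a packed struct on the Arduino side:

```python
# Decode a packed little-endian C struct from the board. The layout
# string is illustrative, not our final packet format.
import struct

PACKET_FMT = "<fffB"  # ax, ay, az (float32), touch (uint8)

def decode_packet(payload: bytes):
    ax, ay, az, touch = struct.unpack(PACKET_FMT, payload)
    return {"accel": (ax, ay, az), "touch": bool(touch)}
```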
This week we also all worked in Prof. Dueck’s class. After this week’s class and discussion, I’m preparing a form for our next session with the music students. The form will be used to analyze user experience, and we hope to get feedback on our general setup in two weeks.

Alexa’s Status Report for 10/18/25

After getting feedback that our design needed more implementation details and stronger technical support, I spent some time researching Wi-Fi communication protocols for the ESP32 and ended up choosing ESP-NOW for our communication layer.

I also built an early version of our left-hand detection system using MediaPipe, turning it into a small Python app that could detect which finger is bent on one hand. While testing it, we realized that the one-finger chord-playing idea was a bit too tricky in practice, so we decided to pivot to a simpler, and perhaps more intuitive, approach for musicians: matching hand-gesture numbers to chord numbers within a key.
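A minimal sketch of the gesture-number idea with MediaPipe Hands, counting extended fingers by comparing fingertip and knuckle landmarks (the simple y-comparison assumes an upright hand, and the thumb is ignored):

```python
# Count extended fingers from MediaPipe hand landmarks as a proxy for
# the chord number. Landmark indices follow the MediaPipe hand model.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
PIPS = [6, 10, 14, 18]   # corresponding middle knuckles

def count_extended(landmarks):
    # Image y grows downward, so an extended fingertip sits above its knuckle.
    return sum(landmarks[t].y < landmarks[p].y for t, p in zip(TIPS, PIPS))

with mp_hands.Hands(max_num_hands=1) as hands:
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            print("chord number:", count_extended(lm))
    cap.release()
```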

Alexa’s Status Report for 10/4/25

This week, along with completing our design presentation, I focused on ordering hardware and on software integration for our project. On the hardware side, I placed orders for key components, including the ESP32 microcontroller, an IMU (inertial measurement unit), and capacitive touch sensors. After more research, I also identified and ordered a backup IMU that meets our project requirements and has extensive documentation and usage resources online, which should help speed up integration and troubleshooting.

On the software side, I successfully integrated the MediaPipe hand-tracking program with our frontend HTML interface. The system can now detect a left hand in real time through the web application, providing a foundation for finger-based chord selection. The next step will be to identify which specific finger is selected; once finger-level detection is implemented, we can start playing the different notes of a chord.
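A sketch of restricting detection to the left hand, using the handedness classification MediaPipe returns alongside the landmarks (given a result object from Hands.process, as in the earlier sketch); note that the label reflects the possibly mirrored camera image:

```python
# Filter MediaPipe Hands results down to the left hand only.
def left_hand_landmarks(result):
    if not result.multi_hand_landmarks:
        return None
    for landmarks, handedness in zip(result.multi_hand_landmarks,
                                     result.multi_handedness):
        # Label may be mirrored depending on whether the frame is flipped.
        if handedness.classification[0].label == "Left":
            return landmarks
    return None
```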