Team Status Report for Apr 29th

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at this moment is building the gloves. Originally we planned to sew the chips onto the gloves. Since none of us know how to sew, we ended up deciding to hot-glue the chips to Velcro tape, which is then taped to our wool gloves. The tape also acts as a buffer to prevent the pins on the chips from poking through the gloves and causing harm. As a contingency plan, we could ask Karen’s friend to teach us to sew, but it has been hard to set a time to meet given our schedules.

  • Were any changes made to the existing design of the system? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There was no change to our design.

  • Provide an updated schedule if changes have occurred.

The schedule has not changed.

  • Component you got working.

The radio chips have been integrated with the synthesizer. The left hand’s pitch angle can now control the volume, while the right hand’s roll angle can now control the pitch bend. We will post a video here once we get to the lab tomorrow.

Oscar’s Status Report for Apr 29th

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

I tried to attach the chips to the gloves last week. I tried zip ties, but they seemed too thick to go through the gloves. I also tried sewing the chips on, but the pins on the back of each chip can sometimes poke through the gloves, making them uncomfortable to wear. I now plan to hot-glue the chips to 3M Velcro strips and then stick those onto the gloves. That way, I can rearrange the components easily and take out the batteries for charging.

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule?

I am behind schedule. I planned to make one glove this week, but sewing is harder than expected. I did not finish building a glove, and I was also out of town last weekend. I will spend most of tomorrow in the lab with Yuqi to finish building the gloves.

  • What deliverables do you hope to complete in the next week?

By the end of Thursday, I hope to build both gloves.

By the end of Friday, I hope to merge my code base with Karen’s and clean up dead code.

By the end of Sunday, I hope to finish all integration testing and get our system ready for the final demo.

Team Status Report for Apr 22nd

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at this moment is the radio circuits. According to Yuqi, they work when powered by Arduino Unos connected to a laptop. However, they did not work when I tried to power them using the Arduino Nanos. Yuqi and Oscar will be testing the radio chips with bench power supplies and with power regulators plus batteries tomorrow. There are two contingency plans. First, we might end up putting Arduino Unos on the gloves to power the chips. Second, we could buy several power regulator chips and power the radio chips directly.

  • Were any changes made to the existing design of the system? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There was no change to our design, but we might change the way we power our radio chips. This might mean we would either buy extra Arduino Unos or power regulator chips in the future.

  • Provide an updated schedule if changes have occurred.

The schedule has not changed.

  • Component you got working.

The radio chips are now working. Two transmitters can successfully send one integer each to the receiver. However, the transmitters are all powered by Arduino Unos, which are in turn powered by the laptop.

Oscar’s Status Report for Apr 22nd

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

I tried to integrate the radio chips with my synthesizer last week. However, I could not get them to work. I have verified that neither the chips nor my Arduino Nano are faulty. Also, since the chips worked when Yuqi tested them with an Arduino Uno and bypass capacitors, I think the problem lies with the Arduino Nano’s current draw limit. The radio chips might need to draw current directly from the batteries instead of from the Nano.

Update (Sunday, 4/23)

After some debugging, the radio chips are working, and they can be powered directly by the Nanos. Here’s a demo video. Although the Nanos are powered by USB cables in the video, we have also verified that they can be powered by two 3.7 V LiPo batteries in series. Since the radio chips now work with shorter wires connecting them to the Nanos, I think the long wires and the breadboard I previously used introduced too much noise.

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule?

I am behind schedule. I thought I could debug and integrate the radio circuits with my synthesizer this week. To catch up, I will meet with Yuqi tomorrow and fix the radio circuits.

  • What deliverables do you hope to complete in the next week?

I hope to debug and integrate the radio circuits and build one glove next week.

Team Status Report for April 8th

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at this moment is our integrated system’s speed not meeting our time metric requirements. The latency between gesture input and sound output is relatively high, and there is a clear lag that can be felt by users. Currently we are changing to a color-based hand tracking system to reduce the lag of the hand tracking part, and to wavetable synthesis to reduce the lag of the synthesizer. Because we are essentially using convolution and a filter to track colors in a video frame, we can lower the resolution of the image and/or search patches of the image to speed up the process.
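The report doesn’t show the filter itself, so the idea can be sketched in NumPy. This is a minimal, hypothetical version that uses a per-channel color threshold as the “filter” and pixel subsampling as the resolution reduction; the actual system’s filter may differ.

```python
import numpy as np

def find_color_centroid(frame_rgb, target, tol=30, step=2):
    """Centroid of pixels whose color is within `tol` of `target`.

    Subsampling every `step`-th pixel is the resolution trick mentioned
    above: the filter runs on only 1/step^2 of the pixels.
    """
    small = frame_rgb[::step, ::step].astype(np.int16)
    mask = np.all(np.abs(small - np.asarray(target)) <= tol, axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # target color not visible in this frame
    # Scale the centroid back to full-resolution coordinates.
    return (float(xs.mean() * step), float(ys.mean() * step))

# Synthetic 320x240 frame with a green patch standing in for the finger cot.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:120, 150:170] = (0, 255, 0)
center = find_color_centroid(frame, target=(0, 255, 0))
```

With `step=2` the mask is computed on a quarter of the pixels, which is where the speed-up comes from.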

  • Were any changes made to the existing design of the system? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Instead of using a hand-tracking model via MediaPipe, we ended up reverting to the initial design, where we use color to locate the hand. We reduced the number of colored targets from 3 to 1 because it makes classification easier and the user can figure out what sound they are producing sooner. We also bought a webcam so that we don’t have to tune our color filters for each laptop’s built-in webcam. Besides the financial cost, there were no additional costs, as the webcam has already been integrated into our system (both on Windows and macOS).

  • Provide an updated schedule if changes have occurred.

The schedule is the same as the new Gantt chart from last week.

  • Component you got working.

Now we have a basic system that allows the user to produce sound by moving their hands across the screen. The system will track the user’s middle finger through color differences (users will wear a colored finger cot) and produce the note corresponding to the finger location. So far the system supports 8 different notes (8 different quadrants on the screen). Compared to last week, this system now supports sampling arbitrary instrument sounds and dual channel audio.
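The report only says the screen is split into 8 regions, so the grid layout and note choices below are illustrative assumptions, but the mapping from finger position to note could look something like:

```python
# Assumed 4x2 grid and C-major scale; the report only says "8 quadrants".
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI notes C4..C5

def position_to_note(x, y, width, height, notes=C_MAJOR):
    """Map a tracked fingertip position to one of 8 screen regions."""
    col = min(int(x * 4 / width), 3)   # 4 columns across the screen
    row = min(int(y * 2 / height), 1)  # 2 rows down the screen
    return notes[row * 4 + col]
```

The `min(..., 3)` and `min(..., 1)` clamps keep a fingertip on the very edge of the frame inside the last region instead of indexing out of range.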

Oscar’s Status Report for April 8th

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

I rewrote the synthesis section of my synthesizer. Before, it used additive synthesis (adding sine waves of different frequencies); now it uses wavetable synthesis (sampling an arbitrary periodic wavetable). I also added dual-channel support to my synthesizer. With wavetable synthesis, I only need to perform two lookups in the wavetable and one linear interpolation to generate a sample in the audio buffer. Previously, I had to sum the results of multiple sine functions just to generate one sample. Here’s a demo video.

In short, compared to additive synthesis, wavetable synthesis is much faster and can mimic an arbitrary instrument more easily.
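The two-lookups-plus-interpolation idea can be sketched as follows. This is a simplified, single-channel version (the table size, sine wavetable, and function names are illustrative, not the actual synthesizer code):

```python
import numpy as np

TABLE_SIZE = 2048
# One period of the waveform. A sine is used here, but the table could
# hold one period sampled from any instrument recording instead.
wavetable = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def fill_buffer(freq, sample_rate, n_samples, phase=0.0):
    """Fill one audio buffer by wavetable lookup.

    Each sample costs two table reads plus one linear interpolation,
    versus evaluating and summing many sines in additive synthesis.
    """
    out = np.empty(n_samples)
    step = freq * TABLE_SIZE / sample_rate  # table increment per sample
    for n in range(n_samples):
        i = int(phase)
        frac = phase - i
        out[n] = (1 - frac) * wavetable[i] + frac * wavetable[(i + 1) % TABLE_SIZE]
        phase = (phase + step) % TABLE_SIZE
    return out, phase

buf, _ = fill_buffer(freq=440.0, sample_rate=44100, n_samples=256)
```

Returning the updated `phase` lets consecutive buffers continue the waveform without an audible click at the boundary.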

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule.

I am a little behind schedule. This week, Karen and I decided to use color tracking instead of hand tracking to reduce the lag of our system, so we are behind on our integration schedule. However, I will write a preliminary color-tracking program and integrate it with my synthesizer tomorrow in preparation for the full integration later.

  • What deliverables do you hope to complete in the next week?

I am still hoping to integrate the radio transmission part into our system.

  • Now that you are entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

We ran a basic latency test after we merged our synthesizer and hand tracking program. Because the lag was unbearable, we decided to switch to color tracking and wavetable synthesis. Next week, I will be testing the latency of the new synthesizer. Using a performance analysis tool, I found that the old synthesizer takes about 8 ms to fill the audio buffer and write it to the speaker. Next week, I will make sure that the new synthesizer has a latency under 3 ms, which leaves the color tracking system about 7 ms (the 10 ms budget minus 3 ms) to process each video frame.
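The report doesn’t name the performance analysis tool, but a simple timing harness for the buffer-fill test could look like this sketch, which reports the worst case rather than the average since a single slow fill is what causes an audible glitch:

```python
import time

def worst_case_ms(fill, n_trials=200):
    """Time a buffer-fill callable and return the worst case in ms."""
    worst = 0.0
    for _ in range(n_trials):
        t0 = time.perf_counter()
        fill()
        worst = max(worst, (time.perf_counter() - t0) * 1e3)
    return worst

# Stand-in workload; the real test would call the synthesizer's fill.
latency = worst_case_ms(lambda: sum(x * x for x in range(1000)))
```

Against the 3 ms target, the pass/fail check is simply `worst_case_ms(fill) < 3.0`.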

Team Status Report for April 1

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at this moment is our integrated system’s speed not meeting our time metric requirements. The latency between gesture input and sound output is relatively high, and there is a clear lag that can be felt by users. Currently, we are considering threads and other parallel processing methods to reduce the latency created by the hand tracking computation and the audio-buffer write function, which contribute most of the delay. We are also looking at faster tracking models.

  • Were any changes made to the existing design of the system? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There were no changes to the existing design of the system. All issues so far seem resolvable with the current approach and design.

  • Provide an updated schedule if changes have occurred.

The updated schedule is shown below in the new Gantt chart.

  • Component you got working.

Now we have a basic system that allows the user to produce sound by moving their hands across the screen. So far the system supports 8 different notes (8 different quadrants on the screen). Here’s a demo video.

Oscar’s Status Report for April 1

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

I merged most of my repo with Karen’s. Right now, our program uses the index finger’s horizontal position to determine the pitch and the vertical position to determine the volume. I haven’t integrated the code I wrote for communicating with the gyroscope chip yet because of our performance issue. Right now, tracking and playing notes simultaneously is a little laggy. Here’s a demo video.

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule.

I am a little behind schedule this week. I was expecting to merge all of our code bases together. I will be working extra time tomorrow (Apr 2nd) to reduce the lag and integrate the serial communication part into our code base.

  • What deliverables do you hope to complete in the next week?

Next week, I hope to integrate the radio transmission part into our system.

Team Status Report for Mar 25

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk right now is the integration for the interim demo. Since the basic functionalities of the synthesizer and the tracking program have been implemented, we will start early and integrate these two parts next week. However, if there are some issues with python libraries, we will work to support either Windows or macOS first and make both compatible after the interim demo.

  • Were any changes made to the existing design of the system? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There were no changes to the existing design of the system.

  • Provide an updated schedule if changes have occurred.

So far there is no change to our current schedule. However, depending on the integration task next week, our schedule might change.

  • Component you got working.

The gyroscope chip can now control the synthesizer’s volume as well as bend its pitch. Here’s a demo video.

Oscar’s Status Report for Mar 25

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

This week I added a pitch bend function to my synthesizer. Now, the roll angle of the gyroscope chip can bend the pitch up or down by 2 semitones. Here’s a demo video. I spent quite some time lowering the latency of serial communication. I had to design a data packet that sends the volume (pitch angle) and pitch bend (roll angle) information in one go. Otherwise, the overhead of calling serial write and serial read twice creates pauses between writes to the audio buffer and stutters in the audio output.
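The report doesn’t give the packet layout, so the layout below is a hypothetical one: a sync byte for framing, a 0–127 volume, and a signed pitch bend in cents (±200 covers the 2-semitone range). The point is just that both values fit in one fixed-size message, so one serial read per audio buffer suffices.

```python
import struct

PACKET_FMT = "<BBh"  # sync byte, volume, pitch bend (little-endian, 4 bytes)
SYNC = 0xAA

def pack_state(volume, bend_cents):
    """Encode volume and pitch bend into one fixed-size packet."""
    return struct.pack(PACKET_FMT, SYNC, volume, bend_cents)

def unpack_state(data):
    """Decode a packet, checking the sync byte to detect lost framing."""
    sync, volume, bend_cents = struct.unpack(PACKET_FMT, data)
    if sync != SYNC:
        raise ValueError("lost packet framing")
    return volume, bend_cents
```

On the receiving side, a single `serial.read(4)` followed by `unpack_state` replaces the two round trips described above.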

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule.

I am on schedule. However, the integration task next week will be more challenging.

  • What deliverables do you hope to complete in the next week?

Next week, I will be working with Karen to integrate all the software components together as we prepare for the interim demo. If we have some extra time, I will research ways to create oscillators based on audio samples of real instruments instead of mathematical functions.