Oscar’s Status Report for April 8th

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

I rewrote the synthesis section of my synthesizer. Previously, it used additive synthesis (adding sine waves of different frequencies); now it uses wavetable synthesis (sampling an arbitrary periodic wavetable). I also added dual-channel support to my synthesizer. With wavetable synthesis, I only need to perform two look-ups in the wavetable and one linear interpolation to generate a sample in the audio buffer. Previously, I had to sum the results of multiple sine functions just to generate a single sample. Here’s a demo video.

In short, compared to additive synthesis, wavetable synthesis is much faster and can mimic an arbitrary instrument more easily.
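
For reference, here is a minimal sketch in Python of the per-sample look-up-and-interpolate idea. The table size, sample rate, and saw-shaped wavetable are illustrative assumptions, not the values from my actual synthesizer.

    import numpy as np

    # Illustrative parameters; not the values from the real synthesizer.
    TABLE_SIZE = 2048
    SAMPLE_RATE = 44100

    # One period of an arbitrary waveform (a saw wave as a stand-in).
    wavetable = np.linspace(-1.0, 1.0, TABLE_SIZE, endpoint=False)

    def fill_buffer(frequency, phase, num_samples=512):
        """Generate num_samples samples by stepping a phase accumulator
        through the wavetable, linearly interpolating between entries."""
        out = np.empty(num_samples)
        step = frequency * TABLE_SIZE / SAMPLE_RATE  # table indices per sample
        for i in range(num_samples):
            idx = int(phase)                # first look-up
            nxt = (idx + 1) % TABLE_SIZE    # second look-up (wraps around)
            frac = phase - idx              # position between the two entries
            out[i] = (1.0 - frac) * wavetable[idx] + frac * wavetable[nxt]
            phase = (phase + step) % TABLE_SIZE
        return out, phase

    buf, phase = fill_buffer(440.0, 0.0)  # one buffer of a 440 Hz tone

Because each sample costs only two table reads and one blend, the per-sample cost stays constant no matter how many harmonics the waveform contains, which is where the speedup over additive synthesis comes from.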

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule?

I am a little behind schedule. This week, Karen and I decided to use color tracking instead of hand tracking to reduce the lag of our system, so we are behind on our integration schedule. However, tomorrow I will write a preliminary color-tracking program and integrate it with my synthesizer in preparation for the actual integration later.

  • What deliverables do you hope to complete in the next week?

I am still hoping to integrate the radio transmission part into our system.

  • Now that you are entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

We ran a basic latency test after we merged our synthesizer and hand tracking program. Because the lag was unbearable, we decided to switch to color tracking and wavetable synthesis. Next week, I will test the latency of the new synthesizer. Using a performance analysis tool, I found that the old synthesizer takes about 8 ms to fill up the audio buffer and write it to the speaker. Next week, I will make sure that the new synthesizer has a latency under 3 ms, which, out of our 10 ms end-to-end budget, leaves the color tracking system about 10 − 3 = 7 ms to process each video frame.
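
To illustrate how I plan to measure this, the sketch below times many buffer-fill cycles with Python’s time.perf_counter and averages them; fill_and_write here is a stand-in stub, not the synthesizer’s real function.

    import time
    import numpy as np

    def fill_and_write(buffer_size=512):
        """Stand-in for one synthesis cycle: fill a buffer and hand it to
        the audio device. The real call would write to the speaker; this
        stub only does the allocation so the script runs anywhere."""
        return np.zeros(buffer_size)

    # One cycle is too noisy to time on its own, so average over many runs.
    N = 1000
    start = time.perf_counter()
    for _ in range(N):
        fill_and_write()
    elapsed_ms = (time.perf_counter() - start) * 1000 / N
    print(f"average buffer-fill latency: {elapsed_ms:.3f} ms")  # target: < 3 ms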

Karen’s Status Report for April 8

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

This week I worked on switching from a hand tracking model (using mediapipe) to a color tracking model, because mediapipe is relatively slow: it keeps track of many points on the hand, while we only need to know where the hand is. Instead, we use colored tape and finger cots to track the position of the user’s middle finger. I worked mainly on adjusting the RGB values used to identify each color (so far the system should support red, green, yellow, blue, and purple). In the end, we probably won’t need that many colors, but the extras are useful for testing (and for the extended case where the user’s clothing matches one color, requiring a differently colored finger to be tracked).
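
For reference, here is a minimal sketch of a tracking loop of this kind with OpenCV. In our code we tune per-color RGB values; this sketch thresholds in HSV space instead, which is a common alternative, and the green bounds are illustrative rather than our tuned numbers.

    import cv2
    import numpy as np

    # Illustrative HSV bounds for one tracked color (a green finger cot).
    LOWER = np.array([40, 80, 80])
    UPPER = np.array([80, 255, 255])

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        # The centroid of the matching pixels approximates the finger position.
        m = cv2.moments(mask)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (cx, cy), 8, (0, 0, 255), -1)
        cv2.imshow("color tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

Supporting an additional color is just another pair of bounds and another mask, which is why we can test with five colors and drop the extras later.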

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up to the project schedule?

I think my progress is still a little behind, because changing from hand tracking to color tracking was not part of our intended schedule. However, this change does address the time-delay problem, so it is a speedup in a different part of the project. Next week, because of Carnival, there will be fewer classes and therefore more time for work.

  • What deliverables do you hope to complete in the next week?

Next week I will try to get the color tracking fully integrated with the synthesizer (with note-area classification) and continue to work on the gesture recognition section.

  • Now that you are entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

Tests that I am currently running or have finished include color detection accuracy (using differently colored objects under different lighting settings to see if the system can still find the assigned color) and color detection speed (the time elapsed between color input, i.e., locating the middle finger, and producing the sound output for the corresponding screen region). Tests to be conducted in the following week include battery testing and gesture recognition accuracy, which depend on unfinished components.

Team Status Report for April 1

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at this moment is our integrated system’s speed not meeting our timing requirements. The latency between gesture input and sound output is relatively high, and there is a clear lag that users can feel. Currently, we are thinking about using threads and other parallel-processing methods to reduce the latency, since the hand tracking computation and the audio-buffer write together contribute most of the delay. We are also looking at faster tracking models.
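
As a rough sketch of the threading idea (the sleeps below are stand-ins for the real tracking and audio work, not measured numbers):

    import threading
    import time

    def track_frame():
        time.sleep(0.010)      # stand-in: pretend a frame takes ~10 ms
        return (0.5, 0.5)      # stand-in: normalized (x, y) hand position

    def fill_audio_buffer(pos):
        time.sleep(0.003)      # stand-in: pretend one buffer takes ~3 ms

    latest = [(0.5, 0.5)]      # single-slot mailbox for the newest position

    def tracker_loop():
        while True:
            latest[0] = track_frame()     # overwrite with the newest position

    def synth_loop():
        while True:
            fill_audio_buffer(latest[0])  # audio never waits on the camera

    threading.Thread(target=tracker_loop, daemon=True).start()
    synth_loop()               # runs until interrupted (Ctrl-C)

Keeping only the newest position means the audio loop never blocks on a slow video frame; it simply reuses the last known position until a fresh one arrives.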

  • Were any changes made to the existing design of the system? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There were no changes to the existing design of the system. All issues so far seem resolvable using the current approach and design.

  • Provide an updated schedule if changes have occurred.

The updated schedule is shown below in the new Gantt chart.

  • Component you got working.

Now we have a basic system that allows the user to produce sound by moving their hand across the screen. So far the system supports 8 different notes (8 different regions on the screen). Here’s a demo video.

Karen’s Status Report for April 1

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

This week I worked with Oscar to get the hand tracking module integrated with the synthesizer. So far, we have been able to get the two components working together, although the result is slightly slower than expected. We are still working on improving the speed.

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up to the project schedule?

I think my progress is slightly behind because some of the integration work is not working as intended, and I did not have much time last week due to exams. I will be working on this class for most of next week.

  • What deliverables do you hope to complete in the next week?

Next week I will try to get the gesture recognition section of the software to work properly and to work together with the gesture mapping section. In addition, I will work with Oscar to make the synthesizer–gesture recognition system faster.

Oscar’s Status Report for April 1

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

I merged most of my repo with Karen’s. Right now, our program uses the index finger’s horizontal position to determine the pitch and its vertical position to determine the volume. I haven’t integrated the code I wrote for communicating with the gyroscope chip yet because of our performance issue: right now, tracking and playing notes simultaneously is a little laggy. Here’s a demo video.
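
For reference, a minimal sketch of that position-to-sound mapping; the eight-note C major scale and the normalized coordinates are illustrative assumptions, not our exact mapping.

    # Eight notes of a C major scale, one per horizontal strip of the frame.
    C_MAJOR = [261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88, 523.25]

    def map_position(x_norm, y_norm):
        """x_norm, y_norm in [0, 1): the horizontal position picks one of
        the eight notes; the vertical position sets the volume."""
        note = min(int(x_norm * len(C_MAJOR)), len(C_MAJOR) - 1)
        volume = 1.0 - y_norm  # top of the frame = loudest
        return C_MAJOR[note], volume

    print(map_position(0.55, 0.25))  # -> (392.0, 0.75)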

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule?

I am a little behind schedule this week. I was expecting to merge all of our code bases together. I will be working extra time tomorrow (Apr 2nd) to reduce the lag and integrate the serial communication part into our code base.

  • What deliverables do you hope to complete in the next week?

Next week, I hope to integrate the radio transmission part into our system.

Yuqi’s Status Report for March 25

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

This week I continued working on the code for the radio transmitter. I wrote the code following the instructions, and it compiled successfully.

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule?

I am behind schedule because the parts arrived late (by about 2-3 weeks). I will spend more time to catch up with the schedule.

  • What deliverables do you hope to complete in the next week?

Next week, I will upload the code to the radio transmitter. If the radio transmitter works, I will build the circuit for our gloves.

Karen’s Status Report for March 25

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

This week I worked mostly on creating the hand tracking module using Python, some external libraries, and existing tracking models. So far I am not really sure how well the gesture recognition portion works, because I have only worked on the tracking part that tells where the hand is and which area of the screen it belongs to (i.e., which note it is assigned to).
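
A minimal sketch of the tracking loop, assuming mediapipe’s Hands solution; the eight-region split and the use of the index-fingertip landmark are illustrative assumptions.

    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    NUM_REGIONS = 8  # illustrative: one note per vertical strip of the screen

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Landmark 8 is the index fingertip; coordinates are normalized.
            tip = results.multi_hand_landmarks[0].landmark[8]
            region = min(int(tip.x * NUM_REGIONS), NUM_REGIONS - 1)
            print(f"fingertip at ({tip.x:.2f}, {tip.y:.2f}) -> note {region}")
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()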

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up to the project schedule?

I am so far still slightly behind because I have only gotten half of the modules created and working; I still need to do more work tomorrow.

  • What deliverables do you hope to complete in the next week?

Next week I will work with Oscar on integrating the software components of the system. Mainly, we will try to combine sound production with gesture input so that the sound is controlled by detection.

Team Status Report for Mar 25

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk right now is the integration for the interim demo. Since the basic functionalities of the synthesizer and the tracking program have been implemented, we will start early and integrate these two parts next week. However, if there are issues with Python libraries, we will work to support either Windows or macOS first and make both compatible after the interim demo.

  • Were any changes made to the existing design of the system? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There were no changes to the existing design of the system.

  • Provide an updated schedule if changes have occurred.

So far there is no change to our current schedule. However, depending on how the integration task goes next week, our schedule might change.

  • Component you got working.

The gyroscope chip can now control the synthesizer’s volume as well as bend its pitch. Here’s a demo video.

Oscar’s Status Report for Mar 25

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

This week I added the pitch bend function to my synthesizer. Now, the roll angle of the gyroscope chip can bend the pitch up or down by 2 semitones. Here’s a demo video. I spent quite some time lowering the latency of serial communication. I had to design a data packet that sends the volume (pitch angle) and pitch bend (roll angle) information in one go; otherwise, the overhead of calling serial write and serial read twice creates pauses between writes to the audio buffer and makes the audio output stutter.
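
To illustrate the single-packet idea, here is a minimal sketch using Python’s struct module; the field layout and the fixed-point scaling are illustrative, not our actual wire format.

    import struct

    # Two little-endian int16s: pitch angle and roll angle, in hundredths
    # of a degree, so both values travel in one 4-byte serial message.
    PACKET_FMT = "<hh"

    def encode(pitch_deg, roll_deg):
        return struct.pack(PACKET_FMT,
                           round(pitch_deg * 100), round(roll_deg * 100))

    def decode(packet):
        pitch_raw, roll_raw = struct.unpack(PACKET_FMT, packet)
        return pitch_raw / 100.0, roll_raw / 100.0

    pkt = encode(12.5, -3.25)   # one serial write instead of two
    print(decode(pkt))          # -> (12.5, -3.25)

On the synthesizer side, the decoded roll angle then maps to a frequency multiplier of 2^(semitones/12), with semitones clamped to the ±2 range.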

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up with the project schedule?

I am on schedule. However, the integration task next week will be more challenging.

  • What deliverables do you hope to complete in the next week?

Next week, I will be working with Karen to integrate all the software components together as we prepare for the interim demo. If we have some extra time, I will research ways to create oscillators based on audio samples of real instruments instead of mathematical functions.

Karen’s Status Report for March 18

  • What did you personally accomplish this week on the project? Give files or
    photos that demonstrate your progress.

This week I mainly focused on understanding the recognition models and setting up the system on my computer. I already have the yoha demo set up on my computer and am working on how to integrate it with what already exists.

  • Is your progress on schedule or behind? If you are behind, what actions will be
    taken to catch up to the project schedule?

I am still a little behind schedule because of the ethics assignment and some technical difficulties that came out of nowhere ;-; So far I have decided to switch to a different device to hopefully avoid the issue.

  • What deliverables do you hope to complete in the next week?

Next week I hope to get a gesture recognition model bound to the UI and to start getting hand tracking synced with the synthesizer.