Nish Status Report 4/29

What did you personally accomplish this week on the project? Give files or
photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I’ve been working on redoing the app interface as a desktop version for easier CV + Bluetooth integration. I also spent a lot of time learning how to do the Bluetooth integration and getting that set up to pass to Caroline, who is working on the actual glove.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Slightly behind, because I got sick and wasn’t able to get out of bed for a few days! But I’m getting back on track; I don’t really have a choice, since the demo is due in a week. I should have the CV + app integrated by Sunday night or Monday morning, and will then spend the rest of the time testing and integrating Bluetooth.

What deliverables do you hope to complete in the next week?

Finish up the CV + app integration, then Bluetooth integration and testing.

Nish Status Report

What did you personally accomplish this week on the project? Give files or
photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I tried to fully solder the glove so we would be able to show it. It did not work: I am really bad at soldering, and soldering a flexible protoboard is terrible. I spent a long time on it, snipped the whole thing at the end, and have asked to order a new Arduino because I ruined this one pretty badly. I spent a long time trying to desolder it, but then figured we have the budget anyway.

^^ The failed soldering, which was painstaking.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is currently behind. To catch up, I will have the full integration between the apps done tonight and have the Python interpreter in the package tomorrow.

What deliverables do you hope to complete in the next week?

This coming week, on Tuesday I will have the two halves of the app merged, and by Friday I will have all the Python code for the CV running in the app. We recognize that there may be a delay with the frame processing, but hopefully the interpreter will still run in under 100 ms.

 

Nish Status Report

What did you personally accomplish this week on the project? Give files or
photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I made the initial frame of the piano app and a camera demo. I made sure that you can hide the camera view while still sending frames, and got buttons to attach to different actions and work. In the frame of the piano app, I worked on pulling up the interface with two octaves on the screen. Currently, you can play the piano with touch input. Later, this touch input will be replaced with Bluetooth input that mimics the touch.
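As a rough illustration of the touch handling, mapping a touch's x-coordinate to one of the 24 displayed keys can be sketched as below (in Python for brevity; the real app does this in Swift, and the uniform key widths, function name, and base octave are illustrative assumptions):

```python
# Sketch: map a touch's x-coordinate to one of 24 keys (two octaves)
# laid out edge to edge. Assumes uniform key widths for simplicity;
# a real layout would treat black and white keys differently.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def touch_to_note(x: float, screen_width: float, base_octave: int = 4) -> str:
    """Return the note name under an x-coordinate, for 2 octaves of keys."""
    keys = 2 * 12  # two octaves
    index = min(int(x / screen_width * keys), keys - 1)
    octave = base_octave + index // 12
    return f"{NOTE_NAMES[index % 12]}{octave}"
```

Swapping the touch source for Bluetooth input later only changes where `x` (or the key index) comes from; the lookup stays the same.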

 

Here is a photo of the piano frame:

 

 

Some images of the camera demo:

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Yes, we are on schedule according to our Gantt chart.

What deliverables do you hope to complete in the next week?

 

I will merge the two demos into one app in Xcode, which will take some nifty little hacking. Also, I will create a calibration screen and convert the frames into the appropriate format to forward to the Python code.

 

Now that you are entering the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

 

In the app, I will be making sure that we have 2 playable octaves (tested by counting the keys displayed). We will need to add a user input that lets them change which octaves are displayed. Other tests are adapted from our design report:

Tests for Volume

– Measure the volume output in Xcode for each note and make sure they are all within the same decibel level (built into Xcode; can also be measured with an external phone’s microphone)

Tests for Multinote Volume

– Same test as above, but when playing multiple notes at a time.
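The decibel comparison for these two tests might work as in the sketch below, assuming we can export per-note amplitude measurements; the 1 dB tolerance and function names are placeholders, not values from our design report:

```python
import math

def amplitude_to_db(amplitude: float, reference: float = 1.0) -> float:
    """Convert a linear amplitude to decibels relative to a reference."""
    return 20 * math.log10(amplitude / reference)

def within_tolerance(amplitudes, tolerance_db=1.0):
    """True if all measured note amplitudes fall within tolerance_db
    of each other, i.e. every note plays at roughly the same level."""
    dbs = [amplitude_to_db(a) for a in amplitudes]
    return max(dbs) - min(dbs) <= tolerance_db
```

The same check applies to the multinote case: measure the combined output while chords are held and compare the readings across chords.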

Tests for playback

Because we will be playing multiple notes at the same time, we want to have a fast enough playback time for the notes played. First, we will test our playback speed, playing at least 8 notes over 2 octaves. We will start the timer when the Arduino registers a pressed key, and then see how long it takes to reach the app and call the command to play the speaker. All of this should happen in under 100 ms. If it doesn’t, we will need to alter how our app’s threads handle and prioritize input, or change our baud rate.
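The timing side of this test could be prototyped as below; `handle_key_press` is a hypothetical stand-in for the press-to-playback path, since the real measurement spans the Arduino and the app:

```python
import time

LATENCY_BUDGET_S = 0.100  # 100 ms end-to-end budget, key press to speaker command

def measure_latency(handle_key_press):
    """Time one key-press handler and report whether it fits the budget.

    handle_key_press is a placeholder for the path from the Arduino
    registering a press to the app issuing the play command.
    Returns (elapsed_seconds, within_budget).
    """
    start = time.perf_counter()
    handle_key_press()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= LATENCY_BUDGET_S
```

Running this over at least 8 notes and keeping the worst case under the budget matches the test described above.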

Tests for time delay

  • We want our product to behave as similarly to a real piano as possible, so we want the way our notes fade to accurately reflect how notes fade out on a real piano. We need to make sure that playing successive notes quickly allows each note to fade and layers the next note on top, in addition to summing the sound levels. We will also compare the sound to a real piano, using a metronome and timer to see how long each note rings out on our piano versus a real one. Our goal is to have the keys fade out within 0.5 seconds, although our fade may be more linear than a real piano’s.
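The 0.5-second linear fade described above can be expressed as a simple gain curve (a sketch; the real fade shape will be tuned by ear against a real piano):

```python
def fade_gain(t: float, fade_time: float = 0.5) -> float:
    """Linear gain for a released note: 1.0 at the moment of release,
    falling to 0.0 once fade_time seconds have elapsed."""
    if t <= 0:
        return 1.0
    return max(0.0, 1.0 - t / fade_time)
```

Layering successive notes then amounts to summing each note's samples scaled by its own `fade_gain`, which is the behavior the test checks for.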

 

Team Status Report April 8th

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk for the CV portion of the project is thresholding the image to obtain the coordinates for the warping function in any lighting environment. The thresholding values will never be 100% accurate, which leaves a possibility that the warping could fail. The backup plan is to have the user click to choose their points instead, or to show an error message that tells the user to move the setup into better lighting.
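The error-message fallback could be gated by a rough lighting check like the following sketch; the brightness thresholds are guesses that would need tuning on real frames:

```python
def lighting_ok(gray_pixels, low=40, high=215):
    """Rough check that a grayscale frame (intensities 0-255) is neither
    too dark nor blown out before we attempt thresholding.

    gray_pixels is a flat list of pixel intensities; the low/high
    cutoffs are illustrative and would be tuned empirically.
    """
    mean = sum(gray_pixels) / len(gray_pixels)
    return low <= mean <= high
```

If the check fails, the app would prompt the user to improve the lighting (or fall back to clicking the four warp points by hand).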

Another significant risk is the Arduino Nano BLE. In the past week, we have had issues with connectivity, powering the circuit correctly, and receiving accurate pressure sensor information. Because our project relies heavily on the glove being able to tell the computer which finger is pressed and how hard, not having this information would put the project at serious risk. The contingency plan is to connect the pressure sensors to an Arduino Uno and transfer the information through the Arduino IDE.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The change made on the CV side was trying out an external package (mediapipe) to help identify the hands and fingers of a person in a given image. This is much easier than color-thresholding fingertips.

Team Status Report 4/1/2023

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk is how we are going to integrate our computer vision with our Xcode platform. Specifically, we have decided not to use Kivy, due to its constraints. We have developed our app and computer vision algorithms separately, but have yet to integrate them. We hope that our source, a Python package that works in Xcode to embed the Python language in apps, will work, but we have not yet tested this. If it doesn’t, we will work on translating our code from Python into C++, as there is an OpenCV package that will work but may require more understanding of C++. Overall, this is our greatest risk. Since we are already learning Swift due to Kivy’s limitations, we hope not to have to learn how to write computer vision code in C++ as well. However, if we need to, this is still an option, as we have already developed our parameters through Lee’s work with OpenCV in Python.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have decided not to use Kivy for our interface. This is because we discovered that although Kivy is a good inter-platform development tool, it does have limitations. Specifically, it is unable to access the iPhone’s hardware, such as the BLE, speakers, and camera, which are all integral to our project. Thus, we are now learning SwiftUI in order to develop our interface. We will use a Python package, described above, to run our OpenCV code, but our interface and camera access will be in Swift.

Nish’s Status Report 4/1/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I managed to get Xcode running, which took several hours to load and run. I am able to deploy code to my phone directly to test out the camera functionality. The image below shows hello world deployed to the phone. Currently, I am working on pulling up the camera itself through a button.

I also worked a lot this week on reframing how to approach the app’s development. I watched several crash courses on how to code in Swift after deciding that we should in fact try to code everything in Swift rather than relying on Kivy to launch in Xcode. I found a resource, PySwift, that will allow us to call our OpenCV code in Python from Xcode, so we will be able to keep that portion of our project, but we will also need to learn Swift to code our UI.

The image here shows a screenshot of linking our phone’s systems to Xcode, which itself took about 6 hours of work to get right. We are also almost done with pulling up the camera specifically on an iPhone 12, fixing bugs from last week as we test on a real device.

 Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think we are on time, as we are each working individually to produce our interim demo.

 What deliverables do you hope to complete in the next week?

 

I hope to integrate the camera with a real interface and speakers so that we can show a working interface for our interim demo, even though the CV won’t be integrated. By the interim demo, we should be able to show 1) BLE transmission through Caroline’s work, 2) pulling up the camera and putting it away, and 3) playing sound through the speakers.

Nish’s Status Report 3/25/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, we met a few times to discuss our progress. I was mainly working on the interface. I started to build the Kivy app; I built it in separate parts and am working on integrating these features together. Below is a screenshot of some of the output, with camera properties configured to run a square on our screen. We will only be able to test this once we wrap it in Xcode (the next step). Individually, we are able to access the camera and the speakers with our code. We also tested our Arduino Nano to make sure it can connect to our iPhones.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Yes, I think we are on track at this point. For our interim demo, we will be showing our CV algorithm. As long as I continue to make progress, we might even be able to integrate our CV into the interface before the demo date.

What deliverables do you hope to complete in the next week?

On Monday, I will be able to show the Kivy interface with the speakers, camera, and BLE transmission working. After that, by Friday I hope to create the calibration screen, and then work on hiding the camera as we begin to create the keys interface while running CV in the background.

Nish’s Status Report 3/18

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I evaluated the ethics of our project in the Ethics assignment. We feel that overall, our project does not pose serious ethical dilemmas. However, it solidified our idea that we only want to target (somewhat) experienced piano players, as we don’t want to cause bad hand posture and technique in younger/beginner users if they attempt to play the piano on a flat surface for a long time.

 

I also worked for a bit on getting our oscillators and synth sounds. However, this work is now obsolete given that we will just use an existing package. I’ve been looking through and testing different audio and MIDI packages available in Python. pypiano and fluidsynth are the ones we are now considering, and we will narrow them down when we work on audio processing again later in our timeline.

 

I also spent a large amount of time learning how the MIDI protocol works, and working through tutorials to practice sending and receiving MIDI messages and writing them to a file. I used a bit of an arrangement of Dandelions that I had been working on, wrote out MIDI messages, and tried to play them back from a .mid file, using a few resources such as this one, which uses MATLAB to actually write the messages to the file:

https://www.mathworks.com/help/audio/ug/convert-midi-files-into-midi-messages.html
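For reference, raw MIDI note-on/note-off messages are just three bytes each and can be built directly, as in this minimal sketch (our actual code will go through a MIDI package rather than hand-packing bytes):

```python
# MIDI channel-voice messages: a status byte (message type in the high
# nibble, channel 0-15 in the low nibble) followed by two 7-bit data bytes.
NOTE_ON = 0x90
NOTE_OFF = 0x80

def note_on(note: int, velocity: int, channel: int = 0) -> bytes:
    """Build a raw 3-byte MIDI note-on message (note 60 is middle C)."""
    return bytes([NOTE_ON | channel, note & 0x7F, velocity & 0x7F])

def note_off(note: int, channel: int = 0) -> bytes:
    """Build the matching note-off message (velocity 0)."""
    return bytes([NOTE_OFF | channel, note & 0x7F, 0])
```

A .mid file wraps sequences of these messages with delta-time stamps, which is the part the MATLAB resource above walks through.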

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

After evaluating our risks (as seen in the group report), we feel a little behind, as a lot of the work I was trying to do this week is now obsolete. To get back on schedule, we are focusing on having something working for our interim demo. That will require our interface to be up and running, so Caroline and I will be working together next week to build up the structure of the app in Kivy, and hopefully be able to launch it through Xcode as well.

What deliverables do you hope to complete in the next week?

Next week, I hope to have launched our app on our iPhone, such that it shows a calibration page and accesses the camera.

Team Status Report 3/18

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

I think our most significant risks lie in how we are going to integrate our different components and subsystems. In order to work in parallel, we have different branches in our repo for CV, Interface, and Audio, and plan on integrating them in the master branch. However, when CV and Audio are integrated together, we will probably have to write in some threading capability so that CV can run continually while MIDI timing is taken with the pressure sensor, so we will require a timing mechanism.

In order to mitigate this, we will take a few steps. First, we will edit points in our schedule so that we merge each CV milestone into our interface. This will require us to set up the bare bones of the interface rather quickly, so we will push the audio details later. In our interface, we will set up a software timer that runs/is checked as necessary, with a minimum BPM of 30, so we will always merge the CV milestones with a timer in place; software interrupts triggered by a note being played via the pressure sensors will check the timer and send the MIDI message.

Our contingency plan is to have a MIDI message sent on each beat. This is not ideal, as we may have a note that starts on a half beat, but given that we are focusing on note accuracy + volume and the interface in this project, we feel it would be okay as a contingency plan.
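The contingency plan of sending a MIDI message on each beat amounts to quantizing note timestamps to beat boundaries, as in this sketch (the helper name is illustrative):

```python
def quantize_to_beat(t: float, bpm: float = 30.0) -> float:
    """Snap a note's timestamp (seconds) to the nearest whole beat.

    Under the contingency plan, MIDI messages go out only on beat
    boundaries; at our minimum tempo of 30 BPM a beat lasts 2 seconds,
    so a note on a half beat would shift by up to a second.
    """
    beat = 60.0 / bpm
    return round(t / beat) * beat
```

This makes the trade-off explicit: timing resolution degrades to one beat, but note identity and volume, our priorities, are unaffected.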

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Given the feedback on our design report, we have decided to change our implementation of the audio processing flow. We will not spend time trying to figure out the frequency spectrum of each note, but rather use a piano package with MIDI available through Python to render our sounds. We are currently exploring a few different options for these packages. This will reduce a lot of workload by using work already available to us rather than starting from scratch and trying to build deep sounds. Although we received feedback suggesting we use a sound card since we are under budget, 1) we want to play the sounds through the phone for portability, as we expect our user to be using their laptop for the actual score writing, and 2) we want to minimize cost to the user, as outlined in our use case.

Nish’s Status Report 3/11

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

In the past week, I worked on the design report. In particular, I worked on the Design Trade, Architecture, and System Overview sections. I made the block diagrams and also created the audio processing flow, as we switched our focus from PCB design (hardware) to audio processing after choosing the Arduino Nano. I worked a lot on researching how synthesizers are made and finding the software we will need to implement them, such as pygame midi and the Oscillator packages, and on understanding how to shape our sound waves to sound like a piano.

 

Here are some of the diagrams I created: 

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think we are on schedule. Most of our work is now in the software/audio realm, so we are able to focus on prototyping and understanding those applications next week even though we haven’t gotten our Arduino Nano yet.

What deliverables do you hope to complete in the next week?

Next week, I hope to get a dummy Kivy app running that can make all 49 pitches we are hoping to include in our piano, and to shape the waves using the Sustain and Release modulators discussed in our design report, experimenting with what gets the most realistic sound.
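A sustain-plus-release envelope of the kind described could be sketched as follows; the sustain level, durations, and decay constant are placeholder values to experiment with, not numbers from our design report:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def piano_like_samples(freq, sustain_s=0.2, release_s=0.5, sustain_level=0.8):
    """Generate one note's samples: flat gain during the sustain phase,
    then an exponential decay toward zero during release.

    All parameter values here are starting points to tune by ear.
    """
    samples = []
    n = int((sustain_s + release_s) * SAMPLE_RATE)
    for i in range(n):
        t = i / SAMPLE_RATE
        if t < sustain_s:
            gain = sustain_level
        else:
            # exponential release; the -5 decay constant is a guess
            gain = sustain_level * math.exp(-5 * (t - sustain_s) / release_s)
        samples.append(gain * math.sin(2 * math.pi * freq * t))
    return samples
```

Experimenting would then mean varying the decay constant and release time and comparing the result against a real piano recording.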