Nish's Status Report 4/8/2023

What did you personally accomplish this week on the project? Give files or
photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I made the initial frame of the piano app and a camera demo. I made sure that the camera view can be hidden while still sending frames, and attached buttons to different actions and got them working. In the frame of the piano app, I worked on pulling up the interface with two octaves on the screen. Currently, you can play the piano with touch input. Later, this touch input will be replaced with Bluetooth input that mimics the touch.

Here is a photo of the piano frame:

Some images of the camera demo:

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Yes, we are on schedule according to our Gantt chart.

What deliverables do you hope to complete in the next week?

I will merge the two demos into one app in Xcode, which will take some nifty hacking. I will also create a calibration screen and convert the frames into the appropriate format to forward to the Python code.

Now that you are entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

In the app, I will be making sure that we have two octaves playable (verified by counting the keys displayed on screen). We will also need to add a user control to change which octaves are displayed. Other tests are adapted from our design report:

Tests for Volume

– Measure the volume output for each note and make sure they are all within the same decibel range (built into Xcode; can also be measured with an external phone's microphone). A measurement sketch follows.
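
As a rough sketch of how this check could be automated offline, assuming we record each note from the app to a WAV file (the file names and the 3 dB tolerance are placeholders, not values from our design report):

```python
import numpy as np
import soundfile as sf  # pip install soundfile

def rms_dbfs(path):
    """Return the RMS level of a recording in dB relative to full scale."""
    samples, _ = sf.read(path)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)  # mix stereo down to mono
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(rms + 1e-12)  # epsilon avoids log(0)

# hypothetical per-note recordings captured from the phone's speaker
levels = {note: rms_dbfs(f"{note}.wav") for note in ["C4", "D4", "E4", "F4"]}
spread = max(levels.values()) - min(levels.values())
print(levels)
print(f"spread across notes: {spread:.1f} dB")  # flag anything over ~3 dB (assumed tolerance)
```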

Tests for Multinote Volume

– Same test as above, but with multiple notes played at a time.

Tests for Playback

Because we will be playing multiple notes at the same time, we want a fast enough playback time for the notes played. First, we will test our playback speed, playing at least 8 notes over 2 octaves. We will start the timer when the Arduino registers a pressed key, and then see how long it takes for the event to reach the app and call the command to play the speaker. All of this should happen in under 100ms. If it doesn't, we will need to alter how our app's threads handle and prioritize input, or change our baud rate.
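
Before full integration, a small host-side harness could approximate this measurement over a wired serial link, timing the gap between the Arduino's key-press message and the play call. This is only a sketch: the port name, baud rate, and `KEY_DOWN` message format are assumptions, and it measures the host-side portion rather than the full BLE path.

```python
import time
import serial  # pip install pyserial

def play_note():
    pass  # stand-in for the app-side call that triggers the speaker

# port name, baud rate, and message format are assumptions for this sketch
port = serial.Serial("/dev/tty.usbmodem14101", 115200, timeout=1)
while True:
    line = port.readline().decode(errors="ignore").strip()
    if line == "KEY_DOWN":
        t0 = time.perf_counter()
        play_note()
        latency_ms = (time.perf_counter() - t0) * 1000
        print(f"host-side latency: {latency_ms:.1f} ms")  # budget: under 100 ms end to end
```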

Tests for Time Delay

  • We want our product to behave as similarly to a real piano as possible, so we want our note fade to accurately reflect how notes actually fade out on a real piano. We need to make sure that playing successive notes quickly lets each note fade while layering the next note on top, in addition to adding the sound levels together. We will also compare the sound to a real piano, using a metronome and timer to see how long each note rings out on our piano versus a real piano. Our goal is to have the keys fade out within 0.5 seconds, although our fade may be more linear than a real piano's. A measurement sketch follows.
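
The fade time could be estimated from a recording of a single key press, for example with a sketch like this (the file name and the -30 dB "faded" cutoff are assumptions; a smoothing window over the envelope would make it more robust):

```python
import numpy as np
import soundfile as sf

samples, rate = sf.read("single_note.wav")  # hypothetical recording of one key press
if samples.ndim > 1:
    samples = samples.mean(axis=1)
envelope = np.abs(samples)
peak = envelope.argmax()
# treat the note as "faded" once it drops 30 dB below its peak (assumed cutoff)
cutoff = envelope.max() * 10 ** (-30 / 20)
below = np.nonzero(envelope[peak:] < cutoff)[0]
fade_s = below[0] / rate if below.size else float("inf")
print(f"fade time: {fade_s:.2f} s")  # target: under 0.5 seconds
```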

 

Team Status Report 4/8/2023

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk for the CV portion of the project is thresholding the image to obtain the coordinates for the warping function in any lighting environment. The thresholding values will never be 100% accurate in every environment, which puts this part of the project at risk. The backup plan is to have the user click to choose their points instead, or to show an error message telling the user to move the setup into better lighting.
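
A minimal sketch of what the click-to-choose fallback could look like in OpenCV (the image path and the assumption that the user clicks the four corners in top-left, top-right, bottom-right, bottom-left order are placeholders):

```python
import cv2
import numpy as np

points = []

def select_corner(event, x, y, flags, param):
    # record up to four user-chosen corner points
    if event == cv2.EVENT_LBUTTONDOWN and len(points) < 4:
        points.append((x, y))

img = cv2.imread("keyboard.jpg")  # hypothetical camera frame
cv2.namedWindow("calibrate")
cv2.setMouseCallback("calibrate", select_corner)
while len(points) < 4:
    cv2.imshow("calibrate", img)
    if cv2.waitKey(20) == 27:  # Esc aborts calibration
        break
cv2.destroyAllWindows()
src = np.float32(points)  # feed into cv2.getPerspectiveTransform as usual
```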

Another significant risk is the Arduino Nano BLE. In the past week, we have had issues with connectivity, powering the circuit correctly, and receiving accurate pressure sensor information. Because our project relies heavily on the gloves being able to tell the computer which finger is pressed and how hard, losing this information would definitely put the project at risk. The contingency plan is to connect the pressure sensors to an Arduino Uno and transfer the information through the Arduino IDE.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The change made on the CV side was trying out an external package (MediaPipe) to help with identifying the hands and fingers of a person in a given image. This is much easier than color-thresholding fingertips.
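
A minimal sketch of the MediaPipe approach, using its hand-landmark API (the frame path is hypothetical):

```python
import cv2
import mediapipe as mp  # pip install mediapipe

mp_hands = mp.solutions.hands
with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    img = cv2.imread("frame.jpg")  # hypothetical camera frame
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        h, w = img.shape[:2]
        for hand in results.multi_hand_landmarks:
            # landmarks come back normalized to [0, 1]; scale to pixels
            tip = hand.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
            print("index fingertip at", int(tip.x * w), int(tip.y * h))
```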

Caroline’s Status Report for 4/1/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I finalized the orders for the pressure sensors. I also started to work on the Xcode interface. With Nish's help, I was able to deploy sample test code onto my phone from my laptop through Xcode. I found a video online that walks through how to communicate with the Arduino Nano BLE and started to create a file that can receive Bluetooth information from it.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think that our overall project is on schedule, but I am currently a little behind on my work. I had a very busy week but will be putting in more work next week to get back on track.

What deliverables do you hope to complete in the next week?

Next week, I will focus on being able to receive and communicate information with the Arduino Nano BLE. In addition, if we receive our pressure sensors next week, I will assemble the circuit for the gloves.

Team Status Report 4/1/2023

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk is how we are going to integrate our computer vision with our Xcode platform. Specifically, we have decided not to use Kivy due to its constraints. We have developed our app and computer vision algorithms separately, but have yet to integrate them. We hope that our source, a Python package that works in Xcode to embed Python code in apps, will work, but we have not yet tested this. If it doesn't work, we will translate our code from Python into C++, as there is an OpenCV package that will work but may require more understanding of C++. Overall, this is our greatest risk. Since we are already learning Swift due to Kivy's limitations, we hope not to have to learn to write computer vision code in C++ as well. However, if we need to, this is still an option, as we have already developed our parameters through Lee's work with OpenCV in Python.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have decided not to use Kivy for our interface. This is because we discovered that although Kivy is a good cross-platform development tool, it has limitations. Specifically, it is unable to access the iPhone's hardware, such as the BLE radio, speakers, and camera, which are all integral to our project. Thus, we are now learning SwiftUI in order to develop our interface. We will use the Python package described above for our OpenCV code, but our interface and camera access will be in Swift.

Nish’s Status Report 4/1/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I managed to get Xcode running, which took several hours to load and run. I am able to deploy code directly to my phone to test out the camera functionality. The image below shows "Hello World" deployed to the phone. Currently, I am working on pulling up the camera itself through a button.

I also worked a lot this week on reframing how to approach the app's development. I watched several crash courses on how to code in Swift after deciding that we should in fact try to code everything in Swift rather than relying on Kivy to launch in Xcode. I found a resource, PySwift, that will allow us to call our OpenCV code in Python from Xcode, so we will be able to keep that portion of our project, but we will also need to learn Swift to code our UI.

The image here shows a screenshot of linking our phone's systems to Xcode, which itself took about 6 hours of work. We are also almost done with pulling up the camera on an iPhone 12 specifically, fixing bugs from last week as we test on a real device.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think we are on time, as we are each working individually to produce our interim demo.

What deliverables do you hope to complete in the next week?

I hope to integrate the camera with a real interface and speakers so that we can show a working interface for our interim demo, even though the CV won't be integrated. By the interim demo, we should be able to show 1) BLE transmission through Caroline's work, 2) pulling up the camera and putting it away, and 3) playing sound through the speakers.

Lee’s Status Report for 4/1/2023

What did you personally accomplish this week on the project? Give files or
photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This past week, I worked during class on Monday to unpack the Amazon package that arrived: the gooseneck phone-holder tripod desk stand with the ring light. We found that the ring light made insignificant changes to the lighting of the surface, and that the lighting in Ansys TechSpark couldn't be completely representative of all lighting conditions.

But the stand worked great: it could be lifted high enough to take a bird's-eye photo of the surface below showing the entirety of the four octaves. This is good, since the captured image can simply display all the octaves and we don't have to do extra work to accommodate a selection of octaves while the user is playing; we can handle all of them at once.

I created a printout of what I wanted the piano keyboard design to look like, going through multiple iterations. I wanted to make sure the corners had a distinct enough visual for feature matching, so that I could find their coordinates and then do a warp perspective to get the four corners of the piano from the overhead camera view. This proved tricky. First, the piano layout can't fit on an 8.5 x 11 sheet of paper, so I had to print out two sheets and then meticulously tape them together to combine a total of four octaves on the keyboard layout.

Feature matching proved not to be accurate enough, so I decided to experiment with red border contour identification using the cv.findContours function. This took some time to figure out in practice, but I was able to get the moments of the contours to identify the centroids of the thresholded objects found in the image, which were the four corners. The center points of these centroids were good enough to get the four corners and then apply a warp perspective function. Since the result wasn't a tight bounding border around the piano, and there was still some extra slack because the warp used only the centers of the centroids rather than the marker edges, I ended up cropping the post-warp image and then adding the gray border that I used in last week's segmentation algorithm.
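
A condensed sketch of that pipeline (the red HSV ranges, the file name, and the output size are assumptions, not my exact values):

```python
import cv2
import numpy as np

img = cv2.imread("keyboard_printout.jpg")  # hypothetical overhead photo
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# red wraps around the hue axis, so threshold two bands (ranges are assumptions)
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# take the four largest red blobs and compute their centroids from moments
centroids = []
for c in sorted(contours, key=cv2.contourArea, reverse=True)[:4]:
    m = cv2.moments(c)
    if m["m00"]:
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

# order the corners TL, TR, BR, BL with a simple top/bottom split
centroids.sort(key=lambda p: p[1])
top = sorted(centroids[:2], key=lambda p: p[0])
bottom = sorted(centroids[2:], key=lambda p: p[0], reverse=True)
src = np.float32(top + bottom)

w, h = 1600, 400  # assumed output size for the flattened keyboard
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
warped = cv2.warpPerspective(img, cv2.getPerspectiveTransform(src, dst), (w, h))
```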

I tried this out on test images from online and it has worked well so far, but it requires some additional polishing. My hope is that there shouldn't be too many issues doing this with real photos in person.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is behind schedule, as I did not account for the warp perspective in my schedule. If necessary, I could take the easier route to make sure demos go smoothly by having the user select the points themselves in the interface, removing most potential for error. I also need to rearrange the order of the segmentation to map to the order of the actual keys, including sharps (e.g., the C key and C# key). Lastly, for the interim demo, I need to show that we can threshold the entire hand in the image, which shouldn't be too difficult since I'm not focusing on specific fingers right now.

What deliverables do you hope to complete in the next week?

By next Wednesday, for the interim demo, I hope to have the items above completed. As a recap, this means polishing up the warp perspective, ordering the segmentation keys, thresholding an entire hand in the image, and lastly having this work on real-life photos taken from the top-down phone holder.

Lee’s Status Report for 3/25/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This past week I worked on the CV code to segment out both black and white keys. I made several changes to my code and I will go through them here:

  1. Resize the image to make things consistent (scaled by a factor so that every image has the same number of pixels while keeping the same aspect ratio).
  2. Add a gray border to the image to help with thresholding. It can't be black or white, because then the border would be thresholded together with the black or white keys, which we don't want.
  3. Convert to grayscale, blur, and run edge detection, then apply binary thresholding to the edge map. This gives us areas dividing all the keys at their borders, seen here:
  4. Segment the image so that each of the white spaces is assigned an enumerated label. I used the cv.connectedComponentsWithStats function for this task. I converted these labels to grayscale values between 0 and 255, normalized with an even distribution. The output image has shades of gray, each value essentially representing a segmented region for each piano key, which I call "sectors":
  5. I then used a mouse callback function, cv.setMouseCallback, which lets me create a window for the image where my mouse can read information from the 2D label matrix. From that value I can look up the respective sector information for that segmented region in a list I created to store it. When the user hovers over a key, the enumerated number of that sector shows up. Here is some of my code and the terminal output showing this (a condensed sketch of these steps also follows the list):
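
Here is a condensed sketch of steps 2 through 5 (the blur kernel, Canny thresholds, and border width are assumptions, not the exact values I used):

```python
import cv2
import numpy as np

img = cv2.imread("keyboard_warped.jpg")  # hypothetical post-warp image
# step 2: gray border so the frame isn't merged with the black or white keys
img = cv2.copyMakeBorder(img, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=(128, 128, 128))

# step 3: grayscale, blur, edge detection, then binary-threshold the edge map
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY_INV)

# step 4: label each enclosed region ("sector") and spread the labels over 0-255
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
shaded = (labels * (255 // max(n - 1, 1))).astype(np.uint8)

# step 5: report the sector under the mouse cursor
def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        print("sector:", labels[y, x])

cv2.namedWindow("sectors")
cv2.setMouseCallback("sectors", on_mouse)
cv2.imshow("sectors", shaded)
cv2.waitKey(0)
```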

Per last week, last Monday I ordered the phone mount that I decided to go with. It has arrived at the ECE corridor, so I can try it out this coming Monday.

Lastly, I worked on finishing the ethics assignment this week too. This includes having the task 3 discussion with my team in addition to going to the ethics lecture for task 4.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule. I need to make some small adjustments to the image sizes so that the test images are more consistent. I will then start testing with real images using the new phone holder stand and the ring light, and make further adjustments there.

What deliverables do you hope to complete in the next week?

By next week, according to the schedule, I plan on incorporating the hands. Essentially, I will take a shot of the piano with no fingers on it to use as a background. Then, when the colored tips of the gloves hover over a piano key, the CV will easily be able to tell that the colored region overlaps a segmented region, and so determine which piano key the finger is over.
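
A sketch of that overlap check (the glove-tip color range and the saved sector map are assumptions):

```python
import cv2
import numpy as np

labels = np.load("sector_labels.npy")  # hypothetical sector map saved from the segmentation step
frame = cv2.imread("playing_frame.jpg")  # hypothetical frame with a gloved hand
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# threshold the colored glove tips (a green-ish range is assumed here)
tip_mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))

# which sector do the tip pixels mostly fall in?
overlap = labels[tip_mask > 0]
overlap = overlap[overlap > 0]  # ignore the background label
if overlap.size:
    print("finger over sector", np.bincount(overlap).argmax())
```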

Team Status Report 3/25/2023

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Our most significant risks come from our ethics discussions this past week. Some groups gave ethical critiques of our design; if these were not addressed, a worst-case scenario could certainly jeopardize the success of the project for prospective users. For computer vision, having a camera filming could introduce privacy concerns. If our design simply doesn't work, the user may just go back to using an actual piano. If for some reason the battery shorts or explodes, it could injure the user. If played for extensive periods of time without breaks, the product could bring fatigue and physical pain to the user. Lastly, if the user isn't given some information before using the product, it may fail in environments that aren't flat or that have low lighting. The user may also wear the glove backwards or incorrectly, or be unable to pair it with their phone interface if they aren't familiar with the device. They may not be able to pair at all if there are too many Bluetooth devices nearby.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We did not make any changes to the existing design of the system since last week.

Caroline’s Status Report for 3/25/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I finished testing the pressure sensors. They are able to differentiate between the low, medium, and high pressure that we want for our project. They are relatively accurate and work well for our project. We will be ordering them from Amazon in the next week, after Nish and Lee test out the pressure sensors and see how they work for themselves.
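
For reference, a sketch of how the low/medium/high bucketing could be checked from the Arduino's serial output (the port, baud rate, and ADC thresholds are placeholders, not our calibrated values):

```python
import serial  # pip install pyserial

def classify(adc_value):
    # thresholds on the 0-1023 ADC range are placeholders
    if adc_value < 300:
        return "low"
    if adc_value < 700:
        return "medium"
    return "high"

port = serial.Serial("/dev/tty.usbmodem14101", 9600, timeout=1)  # assumed port/baud
while True:
    line = port.readline().decode(errors="ignore").strip()
    if line.isdigit():
        print(line, "->", classify(int(line)))
```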

Video of pressure sensors: https://drive.google.com/file/d/1XjYCct6e8iyqOXaQRiQNeGvuUmjMI3tv/view?usp=sharing

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think that we are on track. We are making significant progress and will have a working project by interim demo.

What deliverables do you hope to complete in the next week?

Next week, I will focus on getting Kivy and OpenCV up and ready to go. On Monday, Nish and I will try to open the camera and receive BLE information in our interface. If we don't finish on Monday, we will use Wednesday as our buffer.

Nish’s Status Report 3/25/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, we met a few times to discuss our progress. I mainly worked on the interface. I started to build the Kivy app, building it in separate parts and now working on integrating these features together. Below is a screenshot of some of the output, with camera properties configured to render a square on our screen. We will only be able to test this once we wrap it in Xcode (the next step). Individually, we are able to access the camera and the speakers with our code. We also tested our Arduino Nano to make sure it can connect to our iPhones.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Yes, I think we are on track at this point. For our interim demo, we will be showing our CV algorithm. As long as I continue to make progress, we might even be able to integrate our CV into the interface before the demo date.

What deliverables do you hope to complete in the next week?

On Monday, I will be able to show the Kivy interface with the speakers, camera, and BLE transmission working. After that, by Friday, I hope to create the calibration screen and then work on hiding the camera as we begin to create the keys interface while using CV in the background.