Team Status Report April 8th

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk for the CV portion of the project is thresholding the image to obtain the coordinates for the warping function in any lighting environment. The thresholding values will never be 100% accurate, so this part of the project remains at risk. The backup plan is to have the user click to choose the points instead, or to show an error message telling the user to re-take the image in better lighting.
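
A minimal sketch of the click-to-select fallback, using OpenCV's mouse callback (the window name and function are illustrative, not existing project code):

```python
import cv2
import numpy as np

def pick_warp_points(img, n=4):
    """Let the user click the keyboard corners instead of detecting them.

    Hypothetical fallback helper; not part of the existing project code.
    """
    points = []

    def on_click(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN and len(points) < n:
            points.append((x, y))

    cv2.namedWindow("select corners")
    cv2.setMouseCallback("select corners", on_click)
    while len(points) < n:
        preview = img.copy()
        for p in points:
            cv2.circle(preview, p, 5, (0, 0, 255), -1)  # mark clicked points
        cv2.imshow("select corners", preview)
        if cv2.waitKey(20) == 27:  # Esc aborts early
            break
    cv2.destroyWindow("select corners")
    return np.array(points, dtype=np.float32)
```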

Another significant risk is the Arduino Nano BLE. In the past week, we have had issues with connectivity, with powering the circuit correctly, and with receiving accurate pressure-sensor readings. Because our project relies heavily on the gloves sending the computer information about which finger is pressed and how hard, losing this information would definitely put the project at risk. The contingency plan is to connect the pressure sensors to an Arduino Uno and transfer the information through the Arduino IDE.
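
On the computer side, the Uno fallback could be read over USB serial with pyserial; this is a hedged sketch that assumes the Uno prints one comma-separated set of readings per line (the port name and line format are assumptions):

```python
import serial  # pyserial

# Port name is an assumption; on macOS it would look like /dev/cu.usbmodemXXXX.
PORT = "/dev/ttyUSB0"

# Assumes the Uno sketch prints one comma-separated set of analog readings
# per line, e.g. "512,300,87,640,15" (one value per finger).
with serial.Serial(PORT, 9600, timeout=1) as uno:
    while True:
        line = uno.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # timeout with no data; keep polling
        readings = [int(v) for v in line.split(",")]
        print(readings)
```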

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The change made on the CV side was trying out an external package (mediapipe) to help identify the hands and fingers of a person in a given image. This is much easier than color-thresholding fingertips.
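
A minimal sketch of the mediapipe approach, assuming the legacy Hands solution API (the filename and confidence values are placeholders):

```python
import cv2
import mediapipe as mp

# Detect hands in a single image and report fingertip pixel coordinates.
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2,
                                 min_detection_confidence=0.5)
img = cv2.imread("frame.jpg")
results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

FINGERTIPS = [4, 8, 12, 16, 20]  # thumb through pinky landmark indices
if results.multi_hand_landmarks:
    h, w = img.shape[:2]
    for hand in results.multi_hand_landmarks:
        # Landmarks are normalized to [0, 1]; scale to pixel coordinates.
        tips = [(int(hand.landmark[i].x * w), int(hand.landmark[i].y * h))
                for i in FINGERTIPS]
        print(tips)
```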

Caroline’s Status Report for 4/1/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I finalized the orders for the pressure sensors. I also started to work on the Xcode interface. With Nish's help, I was able to deploy sample test code from my laptop onto my phone through Xcode. I found a video online that walks through how to communicate with the Arduino Nano BLE, and I started to create a file that would be able to receive Bluetooth information from it.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think that our overall project is on schedule, but I am currently a little behind on my work. I had a very busy week but will be putting in more work next week to get back on track.

What deliverables do you hope to complete in the next week?

Next week, I will focus on receiving information from and communicating with the Arduino Nano BLE. In addition, if we receive our pressure sensors next week, I will assemble the circuit for the gloves.

Team Status Report 4/1/2023

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk is integrating our computer vision with our Xcode platform. Specifically, we have decided not to use Kivy due to its constraints. We have developed our app and our computer vision algorithms separately, but have yet to integrate them. We hope that our chosen resource, a Python package that lets apps built in Xcode call Python code, will work, but we have not yet tested it. If it doesn't work, we will translate our code from Python into C++, as there is an OpenCV package that would work but may require a deeper understanding of C++. Overall, this is our greatest risk. Since we are already learning Swift due to Kivy's limitations, we hope to avoid also having to write computer vision code in C++. However, if we need to, this is still an option, as we have already developed our parameters through Lee's work with OpenCV in Python.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have decided not to use Kivy for our interface. Although Kivy is a good cross-platform development tool, we discovered that it has limitations: specifically, it is unable to access the iPhone's hardware, such as the BLE radio, speakers, and camera, all of which are integral to our project. Thus, we are now learning SwiftUI in order to develop our interface. We will use a Python package to run our OpenCV code, as described above, but our interface and camera access will be written in Swift.

Nish’s Status Report 4/1/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I managed to get Xcode running, which took several hours to install and set up. I am able to deploy code directly to my phone to test out the camera functionality. The image below shows Hello World deployed to the phone. Currently, I am working on pulling up the camera itself through a button.

I also spent a lot of time this week reframing how to approach the app's development. I watched several crash courses on how to code in Swift after we decided that we should try to code everything in Swift rather than relying on Kivy to launch in Xcode. I found a resource, PySwift, that will allow us to call our OpenCV code in Python from Xcode, so we will be able to keep that portion of our project, but we will also need to learn Swift to code our UI.

The image here shows a screenshot of linking my phone's systems to Xcode, which itself took about six hours of work. We are also almost done with pulling up the camera on an iPhone 12 specifically, fixing bugs from last week as we test on a real device.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think we are on time, as we are each working individually to produce our interim demo.

What deliverables do you hope to complete in the next week?

I hope to integrate the camera with a real interface and speakers so that we can show a working interface for our interim demo, even though the CV won't be integrated yet. By the interim demo, we should be able to show 1) BLE transmission through Caroline's work, 2) pulling up the camera and putting it away, and 3) playing sound through the speakers.

Lee’s Status Report for 4/1/2023

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This past week, I worked during class on Monday to unpack the Amazon package containing the gooseneck phone-holder tripod desk stand with the ring light. We found that the ring light made insignificant changes to the lighting of the surface, and that the lighting in Ansys Techspark couldn't be completely representative of all lighting conditions.

The stand itself worked great, and could be lifted high enough to take a bird's-eye photo of the surface below that shows the entirety of the four octaves. This is good because the captured image can simply display all the octaves at once, so we don't have to do extra work to accommodate a selection of octaves while the user is playing.

I created a printout of what I wanted the piano keyboard design to look like, going through multiple iterations. I wanted to make sure that the corners had a distinct enough visual marker for feature matching, so that I could find their coordinates and then apply a perspective warp from the four corners of the piano in the overhead camera view. This proved tricky: for one, the piano layout can't fit on an 8.5 x 11 sheet of paper, so I had to print out two sheets and meticulously tape them together to form the full four-octave keyboard layout.

Feature matching proved not to be accurate enough, so I decided to experiment with red-border contour identification using the cv.findContours function. This took some time to figure out in practice, but I was able to use the moments of the contours to identify the centroids of the thresholded objects found in the image, which were the four corners. The centers of these centroids were good enough to feed the four corners into a perspective-warp function. Since the border wasn't a tight bound on the piano and there was still some slack (only the center of each centroid was used, not the rest of the marker), I ended up cropping the post-warp image and then adding the gray border that I used in last week's segmentation algorithm.
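
A minimal sketch of this corner-detection and warp pipeline (the threshold values, output size, and function name are illustrative assumptions, not the exact project code):

```python
import cv2
import numpy as np

def rectify_keyboard(img, out_w=1600, out_h=400):
    """Warp the keyboard to a top-down view using four red corner markers.

    Assumes the markers are the four largest red blobs in the frame;
    thresholds and output size are illustrative.
    """
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in HSV, so combine two ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the four largest blobs and take their centroids via image moments.
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:4]
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    pts = np.array(pts, dtype=np.float32)  # expects exactly four points

    # Order the corners: top-left, top-right, bottom-right, bottom-left.
    s = pts.sum(axis=1)               # x + y: smallest at TL, largest at BR
    d = np.diff(pts, axis=1).ravel()  # y - x: smallest at TR, largest at BL
    src = np.array([pts[s.argmin()], pts[d.argmin()],
                    pts[s.argmax()], pts[d.argmax()]], dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)

    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (out_w, out_h))
```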

I tried this out on test images from online, and it has worked well so far, though it still needs some polish. My hope is that there won't be too many issues doing this with real photos taken in person.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is behind schedule, as I did not account for the perspective warp in my schedule. If necessary, I could take the easier route of having the user select the points themselves in the interface, which removes most of the potential for error and would make demos go more smoothly. I also need to rearrange the order of the segmentation to map to the order of the actual key notes, including sharps (i.e., the C key and the C# key); see the sketch below. Lastly, for the interim demo, we were asked to show that we can threshold the entire hand in the image, which shouldn't be too difficult since I'm not focusing on specific fingers right now.
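
As a small sketch of that key-ordering step, one option is to precompute the note names in left-to-right keyboard order so that segmented key i maps directly to note i (the starting octave here is an assumption):

```python
# Precompute note names in left-to-right keyboard order so that
# segmented key i maps directly to NOTES[i]. The starting octave (3)
# is an assumption; adjust to match the printed keyboard.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
NOTES = [f"{name}{octave}" for octave in range(3, 7) for name in CHROMATIC]
print(NOTES[:13])  # ['C3', 'C#3', ..., 'B3', 'C4']
```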

What deliverables do you hope to complete in the next week?

By next Wednesday's interim demo, I hope to have the items above completed. As a recap, that means polishing the perspective warp, ordering the segmentation keys, thresholding an entire hand in the image, and lastly having all of this work on real-life photos taken from the top-down phone holder.