Katherine’s Status Report for April 28th

What did you personally accomplish this week on the project?

I got symbol detection working!!! This was such a difficult journey, but I think it will make our project better. Its accuracy is not great yet, but I think I can improve it a lot with more negative images and more training. I also had the idea to combine the color detection and symbol detection to make it really accurate! This would hopefully mitigate issues like detecting lips as red, and it would also compensate for the symbol detection not being 100% accurate. I would do this by lining the symbol with red and then testing whether the color detection and symbol detection overlap; if they do, it is most likely the correct symbol. I also fixed an issue in the fabrication design: when I made the new design with the 6 mm acrylic, I made a mistake on the front plate that caused it to tilt upwards. This is a pretty simple fix, so I corrected the file and cut a new one with Suna.
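To make the color + symbol combination idea concrete, here is a minimal sketch of the overlap check, assuming the detection is done with OpenCV in Python; the HSV range, the threshold, and the function name are placeholders I made up for illustration, not the final values.

```python
import cv2
import numpy as np

# Hypothetical sketch of the color + symbol overlap check: a cascade
# detection is only accepted if its bounding box also contains enough
# red pixels from the color mask.
RED_LOWER = np.array([0, 120, 70])     # HSV lower bound for red (needs tuning)
RED_UPPER = np.array([10, 255, 255])   # HSV upper bound for red (needs tuning)

def confirm_detections(frame_bgr, boxes, min_red_fraction=0.05):
    """Keep only the detections whose box overlaps the red color mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    red_mask = cv2.inRange(hsv, RED_LOWER, RED_UPPER)
    confirmed = []
    for (x, y, w, h) in boxes:
        region = red_mask[y:y + h, x:x + w]
        if region.size and (region > 0).mean() >= min_red_fraction:
            confirmed.append((x, y, w, h))
    return confirmed
```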

Back to the symbol detection — I had so much trouble getting this to work because recent versions of OpenCV no longer include the training commands I needed, since the developers have been prioritizing other features. I therefore had to revert to an older version of OpenCV (below 3.4), which led to a lot of issues. I tried using a virtual environment on my computer, as well as downloading Anaconda to get the older version that way, but kept running into packages that weren't the right version. Finally, I found a video about this issue, but they were working on Windows. I tried to convert what they were doing to Mac commands, but they were using resources that weren't compatible, so in the end I got onto Virtual Andrew and sent all of my data over. There, I was finally able to install the version of OpenCV I needed and run the training and create-samples commands. I started with a smaller number of photos than I probably should have, just to see if I could get the cascade file, so there is definitely room for improvement. This first image is from when I first got it working, and you can see it is detecting a lot of things beyond just the correct symbol.
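For reference, the training step itself boils down to two OpenCV tools, opencv_createsamples and opencv_traincascade. The sketch below (wrapped in Python just to show the order of operations) uses placeholder filenames, sizes, and sample counts rather than the exact values I used.

```python
import subprocess

# Step 1: pack positive samples into a .vec file from the symbol image and
# the list of negative/background images (all paths here are placeholders).
subprocess.run([
    "opencv_createsamples",
    "-img", "star_symbol.png",
    "-bg", "bg.txt",
    "-vec", "star.vec",
    "-num", "500",
    "-w", "24", "-h", "24",
], check=True)

# Step 2: train the cascade itself; this is the long-running part and it
# writes cascade.xml into the output directory.
subprocess.run([
    "opencv_traincascade",
    "-data", "cascade_out",
    "-vec", "star.vec",
    "-bg", "bg.txt",
    "-numPos", "450", "-numNeg", "900",
    "-numStages", "10",
    "-w", "24", "-h", "24",
], check=True)
```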

I began messing with the settings for what sizes it should be detecting, to get rid of some of the unreasonably sized detections, and got this:

This is a little better. Combining it with color detection and adding more training samples should make it much better.
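The size filtering is just a matter of passing bounds to detectMultiScale; something along these lines, where the cascade path, neighbor count, and size limits are placeholder values rather than my final ones.

```python
import cv2

# Load the trained cascade and constrain detection sizes (placeholder values).
cascade = cv2.CascadeClassifier("cascade_out/cascade.xml")

def detect_symbol(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=6,        # higher values reject more spurious hits
        minSize=(40, 40),      # ignore unreasonably small detections
        maxSize=(200, 200),    # ...and unreasonably large ones
    )
```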

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

On schedule.

What deliverables do you hope to complete in the next week?

Fine-tuning the cascade file and combining it with color detection (which shouldn't be very hard) to get better detection of the symbol. Integration with Lance's part still needs to be done this week as well.

 

Katherine’s Status Report for April 22

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I worked on materials for the mounting system and on symbol detection. At first, I was thinking of having part of it be plywood so we could get the material more easily, but I found 6 mm acrylic in the scrap bin that we can use. I then chose an adhesive that I think will work a lot better than what I tried last time, since the last prototype fell apart very easily.

I have been running into a lot of issues with symbol detection, but decided to make my own Haar Cascade again. There are a lot of difficult parts to this, and I have had to find workarounds for the existing tutorials, so it has been pretty time-intensive. One issue is just collecting negative images to train the cascade. It seems like there are much stricter permissions on huge image datasets now, which makes sense for privacy reasons, but it has made my life very difficult. However, I finally found one that worked and was able to create a large directory of negative images. I was then finally able to produce the file I needed for the detection, but it is detecting the symbol very poorly, so I am working on troubleshooting that.
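Once the directory of negative images exists, the background list that the training tools expect can be generated with a few lines; this is a rough sketch, and the "negatives" directory name and "bg.txt" filename are placeholders.

```python
import os

# Write one negative-image path per line into bg.txt for cascade training.
with open("bg.txt", "w") as f:
    for name in sorted(os.listdir("negatives")):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            f.write(os.path.join("negatives", name) + "\n")
```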

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Nope, not behind!

What deliverables do you hope to complete in the next week?

Tomorrow, I am going to laser cut the mounting system and put it together. The rest of the week, I am going to work on getting symbol detection working well enough to be used.

Katherine’s Status Report for April 8th

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I started by laser-cutting the mounting system and putting it together. I tried using hot glue, which didn't prove to be a good method, so I am going to use superglue next time. We also discovered that the 3 mm acrylic does not hold up well under the weight of the solenoids, so after the demo, I worked on a redesign using 6 mm acrylic. I have had a lot of past projects where I had a hard time keeping acrylic together, so I am considering cutting certain pieces out of 6 mm wood instead, because wood glue holds together very well. In addition to redesigning the mounting system, I kept working on object detection. I have decided to make the Haar Cascade for the symbols using the OpenCV tools to train them. I also changed the design of the symbols to make them symmetrical around the center so they are more easily recognized in different positions. I am currently in the process of training the cascade.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule.

What deliverables do you hope to complete in the next week?

Finish the object detection!! And get started on distance measurements so that I can provide feedback to the user.

Now that you are entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

I am planning to run a test (once my symbol detection starts working) where I measure the time between when I put the symbol in the note area and when it is recognized. To do this, I am going to have code that records the time at which it recognizes the symbol, starting with the symbol already in the right place. This needs to be less than a second to meet the use case requirements.
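A rough sketch of how that timing test could be set up, assuming the trained cascade from the earlier reports; the cascade path and detection parameters are placeholders.

```python
import time
import cv2

# Rough sketch of the latency test: start with the symbol already placed in
# the note area, then record how long it takes until the first detection.
cascade = cv2.CascadeClassifier("cascade_out/cascade.xml")
cap = cv2.VideoCapture(0)

start = time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=6)
    if len(boxes) > 0:
        print(f"Detected after {time.time() - start:.3f} s")  # must be < 1 s
        break
cap.release()
```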

 

Katherine’s Status Report for April 1st

What did you personally accomplish this week on the project?

This week, I worked on creating the design to mount the solenoids above the piano, and worked more on object detection. I took lots of measurements of the piano and used online specs for the solenoids to figure out a design, which I created in Fusion360. I will attach an image once the issue with uploading photos is fixed, but for now, I put the image in the Slack. I exported DXFs and will cut everything out and put it together on Monday with the acrylic that I picked up last Wednesday. I also spent more time trying to figure out how to get the symbol detection working, and had very limited success. One method I tried was template matching, scaling the template up until it was larger than the screen in order to account for different scales of the symbol, but this was taking too much time and causing a lag in the video. Another detector was only able to detect the symbol about 5% of the time. Finally, I tried some shape detection, and that also had a very hard time figuring out where even the star was. I have been a bit frustrated with this, but I may make another attempt at creating a Haar Cascade after doing more reading.
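For reference, the multi-scale template matching I tried looked roughly like the sketch below; the template path, the scale list, and the function name are placeholders. Looping over many scales on every frame is what made it too slow for live video.

```python
import cv2

# Sketch of multi-scale template matching: resize the template over a range
# of scales and keep the best match found in the frame.
template = cv2.imread("star_symbol.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

def find_symbol(frame_gray):
    best_val, best_loc, best_size = 0.0, None, None
    for scale in (0.5, 0.75, 1.0, 1.5, 2.0, 3.0):
        resized = cv2.resize(template, None, fx=scale, fy=scale)
        if resized.shape[0] > frame_gray.shape[0] or resized.shape[1] > frame_gray.shape[1]:
            break  # the template has grown larger than the frame, stop scaling up
        result = cv2.matchTemplate(frame_gray, resized, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best_val:
            best_val, best_loc, best_size = max_val, max_loc, resized.shape[::-1]
    return best_val, best_loc, best_size
```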

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule, using the updated Gantt chart from last week.

What deliverables do you hope to complete in the next week?

Putting together the mounting system and getting the symbol detection working so that I can shift from color detection to detecting symbols.

Katherine’s Status Report for March 25

What did you personally accomplish this week on the project?

This week, I worked on a couple of things. First, Lance and I spent time discussing how we were going to integrate our parts of the project and how my data should be passed to him. Next, I brainstormed how to produce the mounting system for the solenoids. I have an IDeATe minor, so I have taken a lot of classes using the laser cutter, 3D printers, etc., and have a lot of design experience because of that, so we figured I would build the mounting system. (I decided on 3 mm clear acrylic so that you can see inside.) This wasn't originally in the Gantt chart, so that has been altered.

Finally, I have been working on providing feedback to the user. This has become more of an ordeal than originally planned because I am also switching to detecting a symbol rather than detecting color. This makes the device a lot better because the background shouldn't matter as much, and it is also easier to place on someone. (This was based on last week's meeting.) I designed two symbols to be used; for some reason WordPress won't let me upload anything right now, so I will just describe them for now and try to add them later. One is a star with an R in the middle, and the other is a hexagon with an L in the middle. I want to be able to detect these symbols and use them to trigger the notes, instead of color. I did a lot of research on how best to do this. It's more complicated than I originally expected, but I am currently working on an approach that I think will work: I am going to create a custom Haar Cascade file for the two symbols and use it in my OpenCV code. I tried several other methods, like template matching, but they failed because they could not handle the image at different scales, or had other similar problems. Finally, I decided to provide feedback to the user using the object detection that I was already developing. I found examples of people calculating depth based on faces, so I think I could implement something similar with my symbols and provide real-time feedback for moving closer or farther based on that. Because this depends on my symbol recognition implementation, which is taking more time than expected, I need to push the Gantt chart back about a week.
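The depth idea borrowed from the face-distance examples is basically triangle similarity: calibrate a focal length once at a known distance, then convert the detected symbol's pixel width into a distance estimate. A minimal sketch, where every constant is a made-up placeholder that would come from calibration:

```python
# Distance estimate from apparent symbol size (all constants are placeholders).
SYMBOL_WIDTH_CM = 8.0        # real printed width of the symbol
FOCAL_LENGTH_PX = 600.0      # calibrated once at a known distance

def estimate_distance_cm(detected_width_px):
    # Triangle similarity: distance = (real width * focal length) / pixel width
    return (SYMBOL_WIDTH_CM * FOCAL_LENGTH_PX) / detected_width_px

def feedback(distance_cm, target_cm=150.0, tolerance_cm=20.0):
    if distance_cm > target_cm + tolerance_cm:
        return "move closer"
    if distance_cm < target_cm - tolerance_cm:
        return "move farther back"
    return "good distance"
```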

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Technically, I am behind schedule because the feedback to the user is not working yet due to the change in mechanism, but my schedule originally ended pretty early, so I am going to edit the Gantt chart to incorporate the new work — making the symbol detection and creating the mounting system.

What deliverables do you hope to complete in the next week?

Next week, I am planning on completing the build of the mounting system and symbol detection.

Katherine’s Status Report for March 18

What did you personally accomplish this week on the project?

I finished up the generative mode mapping so it efficiently detects whether a pattern that we have defined was triggered by the user. The patterns can be any length and we can determine what to play based on them – this is where Lance’s work will come in. I also worked on creating a transition between the modes, so the user can go from the note-playing mode to the generative mode by signaling in a box on the bottom.
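As a rough illustration of the pattern-trigger check (the pattern names and cell sequences here are made up, not our real ones), the idea is to fire whenever the end of the recorded position sequence matches a defined pattern, whatever its length:

```python
# Example patterns, keyed by name, as sequences of grid-cell indices (made up).
PATTERNS = {
    "sweep_right": [0, 1, 2, 3, 4],
    "zigzag": [0, 6, 2, 8],
}

def matched_pattern(positions):
    """Return the name of the first pattern matching the end of the sequence."""
    for name, pattern in PATTERNS.items():
        if len(positions) >= len(pattern) and positions[-len(pattern):] == pattern:
            return name
    return None
```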

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule.

 

What deliverables do you hope to complete in the next week?

Next week, I am working on providing visual feedback to the user that tells them whether they need to come closer to the camera or stand farther back. Based on feedback from our meeting on Wednesday, I am also planning to try switching the detection to use symbols instead of colors, so that there is less of an issue with picking up background colors and it is easier for people to use no matter what color they are wearing.

Katherine’s Status Report for March 11

What did you personally accomplish this week on the project?

Besides working on the Design Document, I worked on the Generative mode for the project. For this mode, there is a 5×5 grid (as shown in a previous post). The goal is to have the user be able to make certain gestures by passing through boxes in a pattern and having this generate a more abstract pattern of notes by matching to a pattern we have already determined. I implemented some sample patterns and worked on detecting where red was in the grids. Next, I have a position array that updates when the user’s hand (or wherever they have designated the color) has spent a certain amount of time in a grid box. This position array is compared to the patterns and is also shortened to keep it efficient. This shows it working in the terminal:
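Alongside the terminal output, here is a small Python sketch of the position-array bookkeeping described above; the dwell time and history length are placeholder values, not the ones I settled on.

```python
import time

DWELL_SECONDS = 0.5    # how long the color must sit in a cell before it counts
MAX_HISTORY = 10       # roughly the longest defined pattern

positions = []
_current_cell = None
_cell_entered_at = 0.0

def update_positions(cell):
    """Append the current grid cell once the hand has dwelled there long enough."""
    global _current_cell, _cell_entered_at
    now = time.time()
    if cell != _current_cell:
        _current_cell, _cell_entered_at = cell, now
        return
    if now - _cell_entered_at >= DWELL_SECONDS and (not positions or positions[-1] != cell):
        positions.append(cell)
        del positions[:-MAX_HISTORY]   # keep the array short so comparisons stay cheap
```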

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am currently on schedule.

What deliverables do you hope to complete in the next week?

Next week, I plan to complete the generative mode note generation — I am going to add more patterns of varying length and start testing it by assigning the patterns to notes I can play using the musicalbeeps python library. I am also going to implement the user-controlled switch from normal note generation to the generative mode screen.

 

Katherine’s Status Report for Feb 25

This week, I worked on fine-tuning the key detection and actually producing sound, so that it responds naturally to 'key presses.' This week was presentations, so I wasn't able to get as much done as normal, but I spent a while trying to get some Python libraries that produce sound to work (I ran into some unexpected issues with versions and installation). I finally got one working, so now when someone puts their hand in a note, it waits an appropriate amount of time (I added a counter for this), then presses the note and plays the actual sound on my computer (super exciting!). It looks very cool on video, but to show this I put the terminal output below, showing how the notes are generated after I move in the box.
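In addition to the terminal output, here is a minimal sketch of the counter-plus-sound idea, assuming the musicalbeeps Player API; the frame threshold, note names, and helper function are placeholders rather than the real grid code.

```python
import musicalbeeps

PRESS_FRAMES = 15                     # roughly 0.5 s at 30 fps (placeholder)
player = musicalbeeps.Player(volume=0.3)
counters = {"C4": 0, "D4": 0, "E4": 0}

def update_note(note, hand_in_box):
    """Count consecutive frames with the hand in the note box; play once pressed."""
    if hand_in_box:
        counters[note] += 1
        if counters[note] == PRESS_FRAMES:
            player.play_note(note, 0.4)   # play the note for 0.4 s
    else:
        counters[note] = 0
```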

So far, I am on schedule.

Next week, I am hoping to map gestures and get generative mode working for certain patterns.

Katherine’s Status Report for 02/18

This week, I worked on generating the on-screen grid for the playing mode and the generative mode, generating notes in playing mode, and finalizing how each of these will work. For the note mode, I decided to implement a delay to determine whether someone is actually pressing a key or chord: a counter starts as soon as the color enters one of the note/chord regions, and if a certain amount of time passes, the note is generated. There is a "button" at the bottom, shown in purple in the attached photo, that will switch to generative mode if either hand is inside the box for 2 seconds. Generative mode is shown as a grid, and we are going to store patterns defined by the order of grid boxes passed through. By constantly recording which grid boxes the person moves their hand through, we can determine whether any patterns have been matched and use that to generate a sequence of keys. Next, I am going to try to hook up sound to test how the playing will actually sound, and work more on the generative mode. I am currently a bit off schedule since we didn't account for initial work in the schedule, but the schedule needs to be reworked anyway after making a lot of changes to how we are going to approach the project. There was a lot of open time in my section before, so it will be easy to shift everything down a week and add some more into the schedule.
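A quick sketch of the mode-switch "button" logic described above (the box coordinates, hold time, and mode names are placeholders): the mode only toggles once a hand has stayed inside the button box for the full two seconds.

```python
import time

BUTTON_BOX = (200, 400, 120, 60)      # x, y, w, h of the purple button region (placeholder)
HOLD_SECONDS = 2.0

mode = "notes"
_button_entered_at = None

def point_in_box(px, py, box):
    x, y, w, h = box
    return x <= px <= x + w and y <= py <= y + h

def update_mode(hand_points):
    """Toggle between note mode and generative mode after a sustained press."""
    global mode, _button_entered_at
    inside = any(point_in_box(px, py, BUTTON_BOX) for px, py in hand_points)
    now = time.time()
    if not inside:
        _button_entered_at = None
    elif _button_entered_at is None:
        _button_entered_at = now
    elif now - _button_entered_at >= HOLD_SECONDS:
        mode = "generative" if mode == "notes" else "notes"
        _button_entered_at = None
    return mode
```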

Shown in the images are note mode and generative mode. I have my finger over the camera so that the graphic can be seen more easily.

Katherine’s Status Report for 02/11

This week, I worked on learning how to use OpenCV and looking for ways to track color in the video. First, I worked on getting a window to pop up using OpenCV so that users can see themselves move and therefore where their movements land on the screen, which will be important later. Next, I worked on figuring out how to track colors in the video, since we want to use colors to see where someone's hands are on the screen. The photo shows the code correctly identifying my jacket as red in multiple places. I am able to identify red, green, and blue in a picture, and the next step will be correlating those detections to where they are on the screen. I don't predict this will be very difficult, and I am hoping to get it done tomorrow. My progress is on schedule so far, and for next week I want to work on making sure gestures are recognized by where they are on the screen.
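For reference, the color tracking boils down to thresholding each frame in HSV; a minimal sketch for red (the HSV ranges are typical starting values, not my tuned ones):

```python
import cv2
import numpy as np

# Open the camera, show the live frame, and show a mask of where red appears.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so it needs two ranges combined.
    mask = cv2.bitwise_or(
        cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255])),
        cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255])),
    )
    cv2.imshow("camera", frame)
    cv2.imshow("red mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```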