Lance’s Status Report for March 25th

What did you personally accomplish this week on the project?

This week, I finished up a bit of platform integration code, and after working with Katherine to discuss software integration, I decided it would be best to go back and rewrite large portions of my own code. This is both for ease of integration and to ensure that the code is easy for someone else to maintain, modify, and work on. I’ve gotten through parts of it, mainly the Python end of things: the code that manages sending messages to the Arduino and representing notes.

The note representation hasn’t changed much, but it has been refined. For testing purposes, it was mostly just a number, a length, and a checksum. Now, each message is a list containing a length, a list of notes represented as their MIDI values, and finally a checksum. The current implementation has better support for chords; the previous one simply relied on the Arduino interpreting notes fast enough to queue them into the same time step. The clocks of the computer and Arduino won’t be synced, so there’s no need to add a start value (rather, it would be difficult to, and it is currently unnecessary given the speed of the current pipeline).

As for the broader representation: notes will be sorted into measures, which are grouped into phrases. There will be one constantly updating phrase that contains eight measures. Measures have no set length (in Python), but do have a maximum length of timeSignatureNumerator*timeSignatureDenominator*8. This means that in 4/4 you can have four sets of eight 32nd notes, which is generally the smallest subdivision anyone will have to deal with on a daily basis. Measures are ordered, as are phrases, but note lists are not. However, it is important to ensure that there are no duplicate notes.
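As a rough sketch, the message layout might look like the following in Python (the helper names and the checksum scheme here are illustrative assumptions, not the exact implementation):

```python
# Illustrative sketch of the message layout described above; the helper
# names and the checksum scheme are assumptions, not the exact implementation.

def make_note_message(midi_notes, length):
    """Build one event: [length, [MIDI note values], checksum]."""
    notes = sorted(set(midi_notes))          # chords are several notes; no duplicates
    checksum = (length + sum(notes)) % 256   # simple additive checksum
    return [length, notes, checksum]

def max_measure_length(ts_numerator, ts_denominator):
    """Maximum measure length, per the rule above."""
    return ts_numerator * ts_denominator * 8

# A C-major triad (C4, E4, G4) as one timed event:
msg = make_note_message([60, 64, 67], length=8)
```

Passing the notes through a set before sorting also enforces the no-duplicates rule mentioned above.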

If I decide to make a version that does not enforce strict timing, I can just add a flag to the note list that determines whether the notes are played immediately or sent to the phrase-and-measure list set.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I had initially planned on finishing integration and testing around this time, so it’s debatable whether I’m on track, but I plan on finishing as much as I possibly can before the interim demo.

What deliverables do you hope to complete in the next week?

I plan on expanding upon what I already have and making it work with Katherine’s code, as well as letting the Arduino sequencer interact with the solenoids.

Katherine’s Status Report for March 25

What did you personally accomplish this week on the project?

This week, I worked on a few things. First, Lance and I spent time discussing how we were going to integrate our parts of the project and how my data should be passed to him.

Next, I brainstormed how to produce the mounting system for the solenoids. I have an IDeATe minor, so I have taken a lot of classes using the laser cutter, 3D printers, etc., and have a lot of design experience because of that, so we figured I would build the mounting system. (I decided on 3mm clear acrylic so that you can see inside.) This wasn’t originally in the Gantt chart, so that has been altered.

Finally, I have been working on providing feedback to the user. This has become more of an ordeal than originally planned because I am also switching to detecting a symbol rather than using color detection. This makes the device a lot better because the background shouldn’t matter as much, and it is also easier to place on someone. (This change was based on last week’s meeting.) I designed two symbols to be used; for some reason WordPress won’t let me upload anything right now, so I will just describe them for now and try to add them later. One is a star with an R in the middle, and the other is a hexagon with an L in the middle. I want the system to detect these symbols and use them to trigger the notes, instead of color. I did a lot of research on how best to do this. It’s more complicated than I originally expected, but I am currently working on an approach that I think will work: I am going to create a custom Haar cascade file for the two symbols and use that in my OpenCV code. I tried several other methods, like template matching, but they failed because they couldn’t handle the image at different scales, among other problems. Finally, I decided to provide feedback to the user using the object detection I was already developing.
I found examples of people calculating depth based on faces, so I think I could implement something similar with my symbols and provide real-time feedback on moving closer or farther. Because this depends on my symbol-recognition implementation, which is taking more time than expected, I need to push the Gantt chart back about a week.
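Those face-based depth examples rely on triangle similarity: given the printed symbol’s real width and a calibrated focal length, the symbol’s apparent width in pixels gives an approximate distance. A sketch of the idea (all constants are placeholders that would need real calibration):

```python
# Triangle-similarity distance estimate, as in the face-depth examples above.
# Both constants are illustrative placeholders; the real values would come
# from measuring the printed symbol and calibrating the camera.

KNOWN_WIDTH_CM = 5.0     # printed width of the symbol
FOCAL_LENGTH_PX = 600.0  # calibrated: (pixel_width * known_distance) / known_width

def estimate_distance(pixel_width):
    """Distance to the symbol, in cm, from its apparent width in pixels."""
    return (KNOWN_WIDTH_CM * FOCAL_LENGTH_PX) / pixel_width

def feedback(pixel_width, target_cm=100.0, tolerance_cm=15.0):
    """Tell the user to move closer, move back, or stay put."""
    d = estimate_distance(pixel_width)
    if d > target_cm + tolerance_cm:
        return "move closer"
    if d < target_cm - tolerance_cm:
        return "move back"
    return "good distance"
```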

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Technically, I am behind schedule because the feedback to the user is not working yet due to the change of mechanism. However, my schedule ended fairly early, so I am going to edit the Gantt chart to incorporate the new work: making the symbol detection and creating the mounting system.

What deliverables do you hope to complete in the next week?

Next week, I am planning on completing the build of the mounting system and symbol detection.

Team Status Report for March 25th

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

HW: I am a little concerned about my ability to implement this HW system without burning components and possibly running out of money. Also, for some reason, when I soldered the components to the protoboard, the circuit wasn’t working as well; it’s unclear why. For now, we are using the breadboard.

SW: I am concerned about creating the dataset needed to detect my symbols. However, my contingency plan is to fall back on the color detection that is already working.

Music SW: I’m vaguely worried that, in changing so many things around for both integration and improvements, I may break the code, leading to large delays without rollbacks. I have been backing up my code, but I’m not satisfied with its current performance.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

HW: I originally wanted to solder my components, but for the upcoming demo we will just use the breadboard. There is no change in cost.

SW: I changed how we detect movement, because using color was difficult if there were matching colors in the background or on the person’s clothes. Now I am trying to use symbols. No additional costs.

Music SW: I changed a lot of the internal workings of the code, mostly for ease of representation and modularity, but overall this has not changed any large portions of the project.

Provide an updated schedule if changes have occurred.

HW: N/A

SW: I can’t upload photos in WordPress for some reason; it just gives an error. My schedule ended around now anyway, so I am adding a week for symbol detection and creating the mounting system, and the following week for more advanced feedback using the symbol detection.

Music SW: My schedule has an added week for code refinement, but overall it has not changed much.

Sun A’s Status Report for March 25th

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I spent most of my time figuring out how to build the hardware system for the upcoming demo. I am planning on building a full-octave system with the ability to play 4 actuators simultaneously. As of now, I CAN play 4 actuators, but I can’t seem to control them individually. Also, in the process of implementing this, I burned out 4 of the actuators, so I will have to order more for future use… (I will be going back to Tech Spark tomorrow to figure out the individual control, though.)

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

So far, it’s going fine. I might be on a time crunch, though, if I can’t figure out by tomorrow why my actuators are all controlled by one signal.

 

What deliverables do you hope to complete in the next week?

First, I would like to finish building the full octave system by Wednesday. Then I can focus on building a mounting system.

 

Katherine’s Status Report for March 18

What did you personally accomplish this week on the project?

I finished up the generative mode mapping so it efficiently detects whether a pattern that we have defined was triggered by the user. The patterns can be any length and we can determine what to play based on them – this is where Lance’s work will come in. I also worked on creating a transition between the modes, so the user can go from the note-playing mode to the generative mode by signaling in a box on the bottom.

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule.

 

What deliverables do you hope to complete in the next week?

Next week, I am working on providing visual feedback that tells the user whether they need to come closer to the camera or stand farther back. Based on feedback from our meeting on Wednesday, I am also planning to switch the detection to use symbols instead of colors, so that there is less issue with picking up background colors and it is easier for people to use no matter what they are wearing.

Lance’s Status Report for March 18th

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I finished up the interfacing code for having our computer talk to the Arduino. It sends data in chunks, where certain symbols (like ! or @) delineate portions of data. This way, we can ensure that the data we send is accurate by marking different blocks of data and including an ending checksum. It also allows for easy chord representation and reading (parse through bytes until you see a block footer, add every note in the block to a “play next” list). This code works on a small scale, but I’ll need to make sure it can work with large amounts of data as well.
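A rough Python sketch of the framing idea (the ‘@’ and ‘!’ roles and the exact byte layout here are illustrative; only the general scheme of delimiter symbols plus a trailing checksum reflects the real code):

```python
# Sketch of the block-and-checksum framing described above. The '@' and '!'
# symbols come from the description, but this exact layout is illustrative,
# not the actual protocol.

def encode_chunk(chords):
    """chords: lists of MIDI notes that share a time step.
    Each chord block ends with '@'; the chunk ends with '!' plus a checksum."""
    body = ""
    for chord in chords:
        body += ",".join(str(n) for n in chord) + "@"
    checksum = sum(body.encode()) % 256
    return body + "!" + str(checksum)

def decode_chunk(chunk):
    """Parse a chunk back into chords, verifying the trailing checksum."""
    body, _, tail = chunk.rpartition("!")
    if sum(body.encode()) % 256 != int(tail):
        raise ValueError("checksum mismatch")
    # Parse through the data until each block footer ('@'); every note in a
    # block goes on the same "play next" list.
    return [[int(n) for n in block.split(",")]
            for block in body.split("@") if block]
```

A round trip through the two functions is a quick sanity check of the framing before testing against the Arduino itself.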

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As it stands, I’m currently on track.

What deliverables do you hope to complete in the next week?

I want to tighten up my algorithms and ensure they produce at least somewhat nice-sounding results. This involves tweaking the Bayesian Updating and ensuring that the note queue functions properly at all times. I’m also going to add phrase support for the chord estimation, so that we can generate cycles instead of walking along one never-ending chord line. This also helps with deciding on weights for certain chord probabilities.

Team’s Status Report for March 18th

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

HW: Before this week, my biggest concern was not having enough power, but I realized that the power supply I use has +30 V and -30 V outputs at 2 A each, with a maximum power rating of 120 W. So I can combine the 30 V/2 A and -30 V/2 A outputs to create a 120 W power source for the circuit. Now my only concerns are how I am going to build the mounting system and how we are going to integrate the generative mode. If the generative mode takes too long to integrate, we may have to resort to only the simple note-playing mode for the interim demo.

Music SW: The concern this week was data transfer. There are ways to send data to an Arduino over serial, but this data generally isn’t marked as anything specific. So we needed a way to ensure we can read in data and send it down the proper paths (note queue, actuator system, etc.). Generative mode needs a bit of work, but the hope is that it will be done in time for the interim demo (no point in just hoping, though; I’m also working on finishing it and setting up a data pipeline!).

SW (CV): My biggest concern at this point is detection without environmental interference causing an issue; the plan is to switch to symbol detection. We could have a specific symbol drawn on a small piece of paper that is then stuck on a hand, for example, which OpenCV uses to track where the user wants to signal. This would solve the issue of someone wearing the signaling color or someone in the background wearing it. The contingency plan would be to go forward with color signaling, since that is working, and just isolate the environment with a black backdrop.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

HW: Not yet!

Music SW: Ditto.

SW CV: No.

Provide an updated schedule if changes have occurred.

HW: None!

Music SW: Same!

Software: None, except working to switch to symbol detection this week in addition to the feedback.

This is also the place to put some photos of your progress or to brag about a component you got working.

HW: I was able to power three solenoids simultaneously!

Music SW: Nothing to see, but I set up some pretty nice data structures for data transfer, at least in my opinion.

SW: It looks cool on video, but image captures don’t look much different. Both normal note generation and generative mode are working, and you can switch between them, which is cool!!

Sun A’s Status Report for March 18th

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I worked on replicating my test circuit to have 3 solenoids (all powered at the same time). During this process, I noticed that the solenoids were getting hot, but that may be due to having them on constantly for a few minutes. In reality, these solenoids will not be powered for 3+ minutes straight, so hopefully we will not have to find a solution for overheating parts. I also tried to solder some cables onto the solenoids, because the solenoids’ own cables are flimsy.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on track!

What deliverables do you hope to complete in the next week?

I am hoping to finish creating (and soldering) the 7-solenoid circuit onto the protoboard by next week.

Katherine’s Status Report for March 11

What did you personally accomplish this week on the project?

Besides working on the Design Document, I worked on the generative mode for the project. For this mode, there is a 5×5 grid (as shown in a previous post). The goal is for the user to make gestures by passing through boxes in a pattern, which generates a more abstract pattern of notes by matching against a pattern we have already defined. I implemented some sample patterns and worked on detecting where red was in the grid. I also have a position array that updates when the user’s hand (or wherever they have designated the color) has spent a certain amount of time in a grid box. This position array is compared against the patterns and is also shortened to keep matching efficient. This shows it working in the terminal:
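The position-array matching can be sketched roughly as follows (the box numbering, the sample patterns, and all names are illustrative placeholders, not the actual code):

```python
# Sketch of the position-array matching described above; the box numbering,
# the sample patterns, and all names are illustrative placeholders.
# Grid boxes are numbered 0-24 on the 5x5 grid.

PATTERNS = {
    (0, 6, 12, 18, 24): "diagonal_sweep",  # top-left to bottom-right
    (2, 7, 12): "center_drop",
}

MAX_HISTORY = max(len(p) for p in PATTERNS)

def update_positions(positions, new_box):
    """Record the box the hand has dwelt in; trim so matching stays efficient."""
    positions.append(new_box)
    del positions[:-MAX_HISTORY]
    return positions

def match_pattern(positions):
    """Return the name of any defined pattern the recent positions end with."""
    for pattern, name in PATTERNS.items():
        if tuple(positions[-len(pattern):]) == pattern:
            return name
    return None
```

Because patterns can be any length, the matcher only needs to compare each pattern against the tail of the (trimmed) position history.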

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am currently on schedule.

What deliverables do you hope to complete in the next week?

Next week, I plan to complete the generative mode note generation — I am going to add more patterns of varying length and start testing it by assigning the patterns to notes I can play using the musicalbeeps python library. I am also going to implement the user-controlled switch from normal note generation to the generative mode screen.

 

Lance’s Status Report for March 11th

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours):

Before Spring Break, I implemented a somewhat simplified version of Bayesian Updating. This will allow us to predict chords as the user plays. One thing to note is that the probability of every chord that doesn’t contain the played note will slightly decrease, but the values in the array are normalized between 0 and 1, so this isn’t apparent. This also means that the selected chord will always have a probability of “1.0” with this algorithm. Although that simplifies some aspects (like visually picking out the right chord), it can be slightly misleading, as we aren’t really “sure” that the most probable chord is the chord being played over.
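In rough Python, the update works like this (the chord set here is the seven diatonic triads in C major, and the miss factor is an illustrative stand-in for the actual likelihood values):

```python
# Simplified Bayesian-style update over the seven diatonic triads in C major.
# The miss factor is an illustrative stand-in for the real likelihoods.

CHORDS = {  # triad name -> pitch classes (C = 0)
    "C": {0, 4, 7}, "Dm": {2, 5, 9}, "Em": {4, 7, 11}, "F": {5, 9, 0},
    "G": {7, 11, 2}, "Am": {9, 0, 4}, "Bdim": {11, 2, 5},
}

def update(probs, pitch_class, miss_factor=0.8):
    """Scale down chords that don't contain the played note, then divide by
    the maximum so the leading chord always reads exactly 1.0."""
    for name, tones in CHORDS.items():
        if pitch_class % 12 not in tones:
            probs[name] *= miss_factor
    top = max(probs.values())
    for name in probs:
        probs[name] /= top
    return probs

probs = {name: 1.0 for name in CHORDS}  # flat prior
update(probs, 0)                        # the user plays a C
```

After one played C, the chords containing C (C, F, Am) lead at 1.0 while the rest have dropped slightly, which is the behavior described above.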

Some output from the algorithm can be seen below (array indices are [C, D, E, F, G, A, B]):

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Thankfully, I am on track and can immediately begin working on integrating all of the musical systems together. This includes computer communication with the Arduino.

What deliverables do you hope to complete in the next week?

I plan to integrate the Bayesian Updating and general signal transmission code with the Arduino’s actuator code.