Lance’s Status Report for Feb. 11th

Over the past week I’ve been weighing options for melody generation. There is a plethora of options, owing to the nature of music as an art form.

If we choose to let the user directly control the notes they play, then the form is simple. Given some input (grid coordinates for now), we can tell the hardware to press a specific note on a keyboard. The question then becomes: how do we make it interesting? If you’re stuck always playing exactly the note you’re at, it’s not going to sound like music. Well, it will, but it could always be more interesting. So, let’s say that the other axis of your grid determines how fast the notes are playing, their subdivision. This is fine (and it definitely adds more spice), but there are still some key elements missing. What if we want to “play” a musical rest? What if we want to jump over some number of notes, say, an octave? There are definitely options for this. For example, in the case that we have colored gloves, we could use the absence of color to indicate a rest, which could then double as a way to make note jumps. This solution should be incredibly intuitive, but difficult to play skillfully at first, which could be discouraging.
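The direct-control scheme above can be sketched in a few lines. This is a hypothetical mapping, not our actual hardware interface: one grid axis picks a pitch from a fixed scale, the other picks a subdivision, and a missing color reading (`None`) stands in for a rest. The scale, subdivision values, and function name are all placeholders.

```python
# Hypothetical sketch: grid coordinates -> (note, subdivision) events.
# A missing color reading (None on either axis) is treated as a rest.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI notes, C4..C5
SUBDIVISIONS = [1, 2, 4, 8]  # whole, half, quarter, eighth notes

def grid_to_event(x, y):
    """Map one grid reading to a playable event.

    x selects the pitch, y selects how fast notes repeat.
    Returns ("rest", None) when no colored glove is detected.
    """
    if x is None or y is None:
        return ("rest", None)
    note = C_MAJOR[x % len(C_MAJOR)]
    subdivision = SUBDIVISIONS[y % len(SUBDIVISIONS)]
    return (note, subdivision)
```

Because rests are just the absence of input, a jump is simply two non-adjacent grid readings separated by a rest, which matches the glove idea above.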

If we want to procedurally generate melodies between two notes at some point, we have a few options. The simplest is to say, “What is the subdivision? If you give me that, I’ll create a string of notes that run up (or down) from the first to the second note, in the amount of time you give me. If there are too many intervals between the two notes to be played, I’ll skip over some of the boring ones. If there aren’t enough, I’ll make some longer.” This works, and we can even add random variations. For example, we could randomly choose to generate a “harmonic enclosure,” a figure in which you land on a target note after first playing the notes above and below it. We can add plenty of other flairs to our melodies like this. This solution is very versatile, but it may be difficult to build a smooth user experience around it.
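The run-generation idea can be sketched as a resampling problem, which is how I’m currently thinking about it: take the scale path between the two notes, then stretch or compress it to fit the number of time slots the subdivision gives you. The scale and function name below are assumptions for illustration, and the skip/hold behavior is one simple policy among many (it skips evenly rather than picking the “boring” notes).

```python
# Hypothetical sketch: fit a scale run between two notes into a fixed
# number of time slots. Too many scale steps -> skip some; too few ->
# repeat (hold) notes so the run still fills the time.

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, MIDI

def run_between(start, end, slots):
    """Return `slots` notes running from start to end along the scale."""
    i, j = SCALE.index(start), SCALE.index(end)
    path = SCALE[i:j + 1] if i <= j else SCALE[j:i + 1][::-1]
    if slots <= 1:
        return [end]
    # Resample the path onto `slots` evenly spaced indices.
    return [path[round(k * (len(path) - 1) / (slots - 1))]
            for k in range(slots)]
```

A variation like the harmonic enclosure would then be a post-processing pass that, with some probability, replaces the final note with the neighbor-above, neighbor-below, target sequence.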

The last solution is something I need to further consider, but it involves generating a melody based on a “mood” determined from gestures. We could possibly do this by parameterizing certain musical intervals with various emotions, and then selecting notes based on that. This gives the user a lot of leeway, but it also greatly increases the amount of effort needed to create something musically coherent and satisfying to play and listen to. Sure, resolving from the 7th scale degree up to the tonic is satisfying, as is a ii–V–I progression, but everything can be used in so many different ways that it’s hard to say what the best outcome would be. This solution, if implemented properly, would be extremely fun to use, but it definitely sacrifices some of the user’s autonomy.
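One way to start parameterizing intervals by emotion is a weighted draw: each mood assigns weights to a set of intervals, and the next note is chosen relative to the previous one. The mood names, interval choices, and weights below are all placeholders I made up for illustration, not tuned values.

```python
# Hypothetical sketch: pick the next note by drawing an interval
# (in semitones) from a mood-specific weight table. Moods and weights
# here are illustrative placeholders, not tuned mappings.
import random

MOOD_INTERVALS = {
    "bright": {4: 3, 7: 3, 5: 2, 2: 1},   # major 3rds and 5ths
    "tense":  {1: 3, 6: 3, 10: 2, 3: 1},  # minor 2nds and tritones
}

def next_note(prev, mood, rng=random):
    """Choose the next MIDI note relative to `prev` for a given mood."""
    weights = MOOD_INTERVALS[mood]
    interval = rng.choices(list(weights), weights=list(weights.values()))[0]
    direction = rng.choice([1, -1])  # melodies should move both ways
    return prev + direction * interval
```

Making the output coherent would take much more than this (key constraints, resolution rules, phrase shape), which is exactly the effort cost mentioned above.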

As it stands, I am on track to start with pitch selection algorithms and continue with musical progression generation.
