Creative AI, part 2: Music

Although I love music and have some musical training, computer-generated music is very much NOT my area of research, so this will be brief. Don't confuse "brief" with a lack of material, though; computer music researchers are doing AWESOME things.

One very well-established computer composer is Emmy. Created by David Cope, Emmy (short for Experiments in Musical Intelligence) can analyze music written by humans and then generate new music designed to sound similar.

Here’s a video of Emmy imitating Vivaldi:

And one of Emmy imitating Beethoven:

A few things should be kept in mind as you listen to these. One is that the music in the videos is synthesized rather than played on real instruments, so the timbre sounds artificial; that's different from having an artificial-sounding melody. A more substantive criticism of Emmy is that she imitates existing styles instead of doing something original. You'll remember, perhaps, that we talked about this in the visual art post: if a computer can only imitate styles that are programmed into it, instead of making its own decisions, it could be seen as less creative.

So are there computers that compose music from scratch and have it played on real instruments? Yes! Here's an example: Iamus, from the University of Málaga in Spain, which composes in a modernist (and dissonant) classical style. Although the musicians in this video are human, the music was composed entirely by Iamus.

Not all computer music is classical, either. One very fruitful area of research is jazz. Because jazz among humans relies so heavily on improvisation, most jazz computer projects seek to create computers that can improvise, either by themselves or in response to a human. In effect, a human and an ideal jazz computer could “jam” together, each responding to the other’s improvisations in an interesting way.

Or, a computer and another computer can jam. In the video below, the ZOOZbeat (a phone program that improvises music based on the user’s gestures) jams with Shimon, a marimba-playing robot:

What about pop music? Can computers do that? Sure! One example is Darwin Tunes, which uses a genetic algorithm to try to find the most successful pop hooks. The video below, playing many examples of these hooks, sounds… not particularly hooky to me, but it’s certainly an interesting idea:
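The evolutionary idea behind Darwin Tunes can be sketched in a few lines of Python. This is a toy, not the real project: Darwin Tunes scored hooks by listener ratings, whereas the stand-in fitness function below just rewards in-scale notes and small melodic leaps.

```python
import random

random.seed(0)

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, MIDI note numbers

def random_hook(length=8):
    # a "genome" here is just a list of MIDI pitches
    return [random.choice(range(55, 80)) for _ in range(length)]

def fitness(hook):
    # toy stand-in for listener ratings: reward in-scale notes and small leaps
    in_scale = sum(1 for n in hook if n in SCALE)
    smooth = sum(1 for a, b in zip(hook, hook[1:]) if abs(a - b) <= 4)
    return in_scale + smooth

def evolve(pop_size=30, generations=50):
    pop = [random_hook() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection: top half breeds
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]          # crossover: splice two parents
            if random.random() < 0.3:          # mutation: tweak one note
                child[random.randrange(len(child))] = random.choice(range(55, 80))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Since the top half of each generation survives unchanged, the best hook's fitness can only go up over time; what made Darwin Tunes interesting is that its "fitness" was real people clicking ratings.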

Finally, those on the avant-garde of computer music are asking questions about how computers can change how humans make and think about music. One way of doing this is to design new instruments, or even new tools for making new instruments. The Wekinator is one such tool; it allows a computer to learn to make a wide variety of sounds based on a user's gestures. You might have noticed that the ZOOZbeat, which I mentioned a minute ago, does something quite similar. Here's an example of an ambient piece being performed by humans gesturing to a Wekinator instrument:
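The core trick in a tool like the Wekinator is supervised learning: you demonstrate a few gesture-to-sound pairings, and the computer interpolates for new gestures. Here's a minimal sketch of that idea using nearest-neighbor lookup; the feature and parameter names are hypothetical, not the Wekinator's actual interface.

```python
# Toy training data: (gesture features) -> (synth parameters).
# In a real system the features might come from a camera or accelerometer.
TRAIN = [
    ((0.1, 0.2), {"pitch": 220.0, "brightness": 0.2}),
    ((0.8, 0.3), {"pitch": 440.0, "brightness": 0.5}),
    ((0.5, 0.9), {"pitch": 330.0, "brightness": 0.9}),
]

def predict(gesture):
    # nearest-neighbor mapping: use the parameters of the closest
    # training gesture (squared Euclidean distance)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(TRAIN, key=lambda pair: dist(pair[0], gesture))
    return nearest[1]

# a new gesture close to the second training example
params = predict((0.75, 0.25))
```

Real mapping tools smooth between examples (with regression or neural networks) rather than snapping to the nearest one, which is what makes the resulting instruments feel continuous and playable.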

If that’s not avant-garde enough for you, here’s a video of a person using a Wekinator to play music on tree bark:

Snakes can play Wekinator instruments too:

You’re welcome.