Computers in a jazz ensemble? Inventing improvisational AI

Professor Shlomo Dubnov of UC San Diego is among researchers who recently received an Advanced Grant from the European Research Council to teach computers how to improvise musically.

Go to any jazz club and watch the musicians. Their performances are dynamic and improvised; they invent as they go, having entire conversations through their instruments. Can we give computers the same capabilities?

To answer this question, Professor Shlomo Dubnov of the University of California San Diego, who holds appointments in the Department of Music, the Department of Computer Science and Engineering, and the Qualcomm Institute, and Gérard Assayag, a researcher at the Institute for Research and Coordination in Acoustics/Music in Paris, recently received a European Research Council Advanced Grant of 2.4 million euros (about $2.8 million). Dubnov and Assayag will work with other international partners on Project REACH: Raising Co-creativity in Cyber-Human Musicianship, which aims to teach computers how to improvise musically.

The conversation

Almost all human interactions are improvised, with each party modifying their responses based on real-time feedback. But computers, even with the most sophisticated AI, are limited by their programming. And while chatbots and other online tools are getting better at conversing fluently, they can hardly improvise on the fly.

“There are various machine learning applications today for creating art,” Dubnov said. “The question is: can we go beyond these tools and generate a kind of autonomous creativity?”

In this scenario, computers would do more than learn musical styles and reproduce them. They would innovate on their own, produce longer and more complex sounds, and make their own decisions based on the feedback they received from other musicians.

“Not only should the human musician be interested in what the machine is doing; the machine should be interested in what the person is doing,” Dubnov said. “It must be able to analyze what is happening and decide when to improvise with its human partners and when to improvise alone. It needs agency.”

The struggle to define music

The REACH project relies largely on reinforcement learning, a form of machine learning in which a reward signal motivates the computer's choices. But that approach requires a well-defined measure of quality. For example: what makes music good?

In other words, REACH must teach computers an aesthetic, something that humans themselves don’t fully understand. Musically, what is good or bad? What makes Miles Davis’ Kind of Blue, Beethoven’s Ninth Symphony and The Beatles’ Abbey Road so captivating?
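The article does not describe REACH’s actual algorithms, but the core loop of reinforcement learning that it mentions can be sketched in a few lines. In this toy illustration (all names and numbers are hypothetical), an agent repeatedly picks one of three “notes,” and a stand-in `aesthetic_reward` function plays the role of the hard-to-define “is this good music?” signal that the project must ultimately learn or specify:

```python
import random

def aesthetic_reward(note_choice):
    # Stand-in for the hard part: a scalar "how good was that?" signal.
    # In real music no such agreed-upon reward function exists,
    # which is exactly the challenge the article describes.
    preferred = {0: 0.2, 1: 0.9, 2: 0.5}  # hypothetical preferences
    return random.gauss(preferred[note_choice], 0.1)

def epsilon_greedy_agent(n_steps=2000, epsilon=0.1, seed=42):
    """Learn which of three actions earns the highest average reward."""
    random.seed(seed)
    estimates = {a: 0.0 for a in range(3)}  # running reward estimates
    counts = {a: 0 for a in range(3)}
    for _ in range(n_steps):
        if random.random() < epsilon:
            action = random.randrange(3)               # explore
        else:
            action = max(estimates, key=estimates.get)  # exploit
        reward = aesthetic_reward(action)
        counts[action] += 1
        # Incremental running-mean update of the reward estimate.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

estimates = epsilon_greedy_agent()
best_note = max(estimates, key=estimates.get)
```

After enough steps the agent settles on whichever choice the reward function favors. The point of the sketch is how much work the reward function is doing: swap in a reward that captures musical quality and the same loop becomes a (very crude) improviser, and defining that reward is precisely what the article calls the project’s biggest scientific challenge.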

“If we produce sounds with the intention of making music, that’s music,” Dubnov said. “This means that, to become musical, the computer must have its own intention.”

In a sense, the project is about self-discovery. By teaching machines to improvise well, the REACH group will arrive at more precise definitions of music itself. It all comes down to the methods behind reinforcement learning: the team will have to teach the computers to make the right choices.

“This will be our biggest scientific challenge,” Dubnov said. “How do you give the machine an intrinsic motivation when it is so difficult to specify what counts as good?”

Ada J. Kenney