As Mr. Eck says, these systems are at least approaching the point — still many, many years away — when a machine can instantly build a new Beatles song or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different. But that end game — as much a way of undermining art as creating it — is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.
In the 1990s, at that juke joint in New Mexico, Mr. Eck combined Johnny Rotten and Johnny Cash. Now, he is building software that does much the same thing. Using neural networks, he and his team are crossbreeding sounds from very different instruments — say, a bassoon and a clavichord — creating instruments capable of producing sounds no one has ever heard.
Much as a neural network can learn to identify a cat by analyzing hundreds of cat photos, it can learn the musical characteristics of a bassoon by analyzing hundreds of notes. It creates a mathematical representation, or vector, that identifies a bassoon. So, Mr. Eck and his team have fed notes from hundreds of instruments into a neural network, building a vector for each one. Now, simply by dragging a slider across a screen, they can combine these vectors to create new instruments. One may be 47 percent bassoon and 53 percent clavichord. Another might switch the percentages. And so on.
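In code, that blending is nothing more than vector arithmetic. The sketch below is a minimal illustration, not Google's NSynth code: encode and decode are hypothetical stand-ins for the trained neural networks, and the vector size is invented.

    import numpy as np

    rng = np.random.default_rng(0)
    LATENT = 16  # invented latent size; the real model's differs

    def encode(audio: np.ndarray) -> np.ndarray:
        """Placeholder for the trained encoder: audio in, vector out."""
        return rng.standard_normal(LATENT)

    def decode(z: np.ndarray) -> np.ndarray:
        """Placeholder for the trained decoder: vector in, audio out."""
        return np.zeros(16000)  # a second of silence stands in for synthesis

    bassoon = encode(np.zeros(16000))     # a recorded bassoon note would go here
    clavichord = encode(np.zeros(16000))  # likewise for the clavichord

    # The slider position sets the mix: 47 percent bassoon, 53 percent clavichord.
    alpha = 0.47
    hybrid = alpha * bassoon + (1 - alpha) * clavichord

    new_sound = decode(hybrid)  # a note from an instrument that never existed

The point of the sketch is how little machinery sits between the two instruments once each has been reduced to a vector: the "new instrument" is one line of weighted addition.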
For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Mr. Eck and his team are combining them to form something that didn’t exist before, creating new ways for artists to work. “We’re making the next film camera,” Mr. Eck said. “We’re making the next electric guitar.”
Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other A.I. techniques. “This work has exploded over the last few years,” said Adam Ferris, a photographer and artist in Los Angeles. “This is a totally new aesthetic.”
In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.
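That feedback loop fits in a few lines. What follows is a condensed sketch of the technique, not Google's released DeepDream code: it assumes any pretrained convolutional network from torchvision, and the layer index and step size are arbitrary choices.

    import torch
    import torchvision.models as models

    cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    LAYER = 20  # a mid-level layer: textures below it, whole objects above it

    def dream(img: torch.Tensor, steps: int = 20, lr: float = 0.05) -> torch.Tensor:
        img = img.clone().requires_grad_(True)
        for _ in range(steps):
            act = img
            for i, module in enumerate(cnn):
                act = module(act)
                if i == LAYER:
                    break
            # Amplify whatever the layer already "sees" in the image...
            loss = act.norm()
            loss.backward()
            with torch.no_grad():
                # ...by nudging the pixels in the direction that excites it,
                img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
                img.grad.zero_()
            # then repeat: fur that faintly resembles an eyeball becomes one.
        return img.detach()

    # Usage: dream(photo) where photo is a normalized (1, 3, H, W) tensor
    # made from an ordinary photograph.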
At the same time, a number of artists — like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris — are exploring neural networks in other ways. In January, Mr. Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.
The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. “Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it on the canvas beneath him,” she said, “these new computational techniques create a broader palette for artists.”
A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own Kanji, the logographic Chinese characters that are not so much written as drawn.
Soon, Mr. Eck and other Googlers spotted the blog, and now Mr. Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw. By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They don’t copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like.
Then, you ask them to, say, draw a pig with a cat’s head, or to visually subtract a foot from a horse or sketch a truck that looks like a dog or build a boat from a few random squiggly lines. Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Mr. Ha. A.I. isn’t just creating new kinds of art; it’s creating new kinds of artists.
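Underneath, those requests come down to arithmetic on the same kind of learned vectors. The sketch below shows only that idea; encode_sketch and draw are hypothetical placeholders, and the real SketchRNN operates on sequences of pen strokes, not on anything this simple.

    import numpy as np

    rng = np.random.default_rng(0)
    LATENT = 128  # assumed latent size; the real model's may differ

    def encode_sketch(strokes) -> np.ndarray:
        """Placeholder for the trained encoder: pen strokes in, vector out."""
        return rng.standard_normal(LATENT)

    def draw(z: np.ndarray) -> None:
        """Placeholder for the decoder, which turns a vector back into strokes."""
        print(f"decoding a sketch from a {z.shape[0]}-dimensional vector")

    horse = encode_sketch([...])     # stroke sequences would go here
    foot = encode_sketch([...])
    pig = encode_sketch([...])
    cat_head = encode_sketch([...])

    draw(horse - foot)    # "visually subtract a foot from a horse"
    draw(pig + cat_head)  # "a pig with a cat's head"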
By Cade Metz
https://www.nytimes.com/2017/08/14/arts/design/google-how-ai-creates-new-music-and-new-artists-project-magenta.html