07 May 2020
Artificial Intelligence can perform many routine tasks faster and with greater stamina than human workers – a realisation that has now taken hold in many companies. Increasingly, however, AI is also being used in artistic fields, even being set loose to generate music or paintings. Are these artificial creatives going to render human artists obsolete?
Going once, going twice, sold: a portrait entitled “Edmond de Belamy” went under the hammer at Christie’s auction house on 26 October 2018 for 432,500 dollars. A considerable sum for a picture from the brush of an unknown artist – one that was, in all honesty, not the most exciting work the world had ever seen, and that had previously been valued at between 7,000 and 10,000 dollars. What was extraordinary, however, was how the picture came to be, as evidenced by the artist’s signature in the lower right corner: “min G max D Ex[log(D(x))] + Ez[log(1-D(G(z)))]” – the formula of the algorithm that produced the portrait.
The creative force behind the work is the Parisian collective Obvious, which had already sold a painting from its Belamy series to an art collector in February 2018 for 10,000 dollars. The painting auctioned at Christie’s was based on a dataset of 15,000 portraits from the 14th to the 20th century which had been fed into the system. One part of the algorithm, the generator, continuously produced new images based on these data. The other part, the discriminator, was programmed to sift through the generated images until it could no longer discern any difference from a human-made work.
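That two-part tug of war is what the signature on the canvas describes: a generative adversarial network (GAN), in which the generator G is trained to minimise, and the discriminator D to maximise, the very expression quoted above. Below is a minimal sketch of the training loop, assuming PyTorch; the network shapes and names are illustrative placeholders, not the collective’s actual code:

```python
# Minimal GAN training loop (an illustrative sketch, not Obvious's code).
# The generator G produces images from random noise; the discriminator D
# learns to tell them apart from real portraits. Each step plays one round
# of the min-max game written in the painting's "signature" formula.
import torch
import torch.nn as nn

LATENT, IMG = 64, 784  # assumed latent size and flattened image size

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):  # real_images: (batch, IMG), scaled to [-1, 1]
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0,
    # i.e. maximise E_x[log D(x)] + E_z[log(1 - D(G(z)))].
    fake = G(torch.randn(batch, LATENT)).detach()
    loss_d = bce(D(real_images), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool D into labelling fakes as real.
    fake = G(torch.randn(batch, LATENT))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Training stops, in effect, when the discriminator can no longer beat chance – the point at which the generated portraits become indistinguishable from the human-made ones in the dataset.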
The portrait “Edmond de Belamy”: “painted” by an algorithm. (© Getty Images)
The purchase price of the portrait, which is somewhat reminiscent of a half-hearted preliminary study in the style of Édouard Manet, can hardly be attributed to the artistic value of the painting – but then, artistic value has never been the ultimate benchmark on the art market. The concept of art is redefined every few generations, said Erin-Marie Wallace, whose company Rare-Era Appraisals values works of art, on US radio station NPR. “We’re redefining what art actually is for the 21st century. Art is measured by what people are willing to pay for it.”
“The Next Rembrandt”
Some time before “Edmond de Belamy”, another AI-generated painting had drawn public attention: in 2016 – almost 350 years after the death of the most important artist of the Dutch Baroque – “The Next Rembrandt” was created at Delft University of Technology. A team of data analysts, software engineers, AI specialists and art experts had spent 18 months scanning the master’s 346 paintings in 3-D and collating them in a database. The result was a mountain of data totalling more than 15 terabytes.
In the next step, the “typical”, or most common, subject of a Rembrandt portrait was selected: a middle-aged man. This is hardly surprising: after all, the Dutch artist was often commissioned to paint the portraits of well-heeled citizens. Then, all the portraits matching this profile were compared to generate the typical Rembrandt nose, the typical Rembrandt eye or the typical Rembrandt mouth. In the final step, the “painting” was generated on a 3-D printer from 148 million pixels with thirteen layers of paint.
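In essence, each “typical” feature is a statistical aggregate over all the portraits that match the chosen profile. Here is a toy sketch of that idea in Python – feature extraction and alignment are assumed to have happened already, and the actual project used far more sophisticated geometric analysis:

```python
# Toy illustration of the "typical feature" idea behind The Next Rembrandt:
# average an aligned facial-feature crop (say, the nose region) across all
# portraits matching the chosen profile of a middle-aged man.
import numpy as np

def typical_feature(crops: list) -> np.ndarray:
    """Pixel-wise average of aligned feature crops from matching portraits."""
    stack = np.stack(crops)      # shape: (n_portraits, height, width)
    return stack.mean(axis=0)    # the "typical Rembrandt nose"

# Hypothetical usage with stand-in data in place of real scanned crops.
noses = [np.random.rand(64, 48) for _ in range(300)]
avg_nose = typical_feature(noses)
print(avg_nose.shape)  # (64, 48)
```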
At first glance, or from a distance, the image might pass muster as a “real” Rembrandt. Upon closer inspection, however, it looks more like the work of a hard-working but less gifted student who has skilfully imitated the style of the master. The expression and the timeless human depth that make a Rembrandt portrait extraordinary are nowhere to be found in the computer-generated image. Nor is the originality with which the artist from Leiden established new forms of representation and painting techniques, thereby blazing the trail followed by successive generations of artists. “The project failed to create a new Rembrandt – only Rembrandt could do that,” admitted project manager Bas Korsten. But it was “an excellent opportunity for people to understand what it was that made Rembrandt Rembrandt. We’ve found a way to keep the great master alive.”
Composed by computer
Rembrandt is by no means the only pioneering artist whom AI has been instructed to tackle by its human programmers. If you want to know whether you can distinguish a real chorale by Johann Sebastian Bach from a computer-generated equivalent, you can test your skills at BachBot.com. Behind the site is a research project conducted at the University of Cambridge to examine the style of the great composer. The scientists’ stated aim was to build an Artificial Intelligence that could generate and harmonise chorales in such a way that listeners would be unable to distinguish them from Bach’s own work.
Music-savvy computer scientists had tried their hand at Baroque chorales – short and comparatively simple pieces of music – as early as the 1980s. To do so, they fed countless hand-written rules into their computers. The developers of BachBot, by contrast, relied on deep learning, i.e. self-learning systems of the kind used today in image recognition, and to which the name of the comparable DeepBach project alludes.
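Stripped of the musical detail, the deep-learning approach amounts to next-token prediction: instead of hand-coded rules of harmony, a network learns from Bach’s own chorales which note is likely to follow a given context. A minimal sketch of such a sequence model, again assuming PyTorch; the vocabulary size, dimensions and training data are invented placeholders rather than BachBot’s actual architecture:

```python
# Minimal next-note language model in the spirit of BachBot/DeepBach:
# predict the next musical token (note, rest, chord symbol) from the
# preceding ones, with no explicit rules of voice leading or harmony.
import torch
import torch.nn as nn

VOCAB = 128  # assumed number of distinct note/rest tokens

class ChoraleLM(nn.Module):
    def __init__(self, vocab=VOCAB, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, tokens):   # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h)      # logits over the next token at each step

model = ChoraleLM()
tokens = torch.randint(0, VOCAB, (8, 32))  # stand-in for tokenised chorales
logits = model(tokens[:, :-1])             # predict token t+1 from earlier tokens
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1)
)
loss.backward()  # one gradient computation; an optimiser update would follow
```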
The proof of the musical pudding is in the listening. In our own attempt at the BachBot challenge, we guessed right 80 percent of the time. But the decision was often on a razor’s edge, and we mostly had to listen to the pieces multiple times. The creators of DeepBach tested their computer-generated chorales on 1,600 listeners, about a quarter of whom were trained musicians. More than half of the listeners did indeed mistakenly attribute pieces by DeepBach to Bach's own pen.
But while algorithms can pull off quite convincing results with comparatively simple chorales, they are completely out of their depth when it comes to Bach’s fugues, says composer and scientist François Pachet, who was involved in DeepBach. The system is “not able to see what we call the ‘higher-level structure’ of a composition,” Pachet said on the SWR radio station. “Fugues, for example, have motifs, i.e. melodies, which appear again and again in different variations throughout the piece. DeepBach can’t recognise these – yet.”
An AI with a recording contract
Endel is not necessarily concerned with “higher-level” compositional structures or complex melodic variations. Be that as it may, this AI has notched up an achievement that many talented young musicians can only dream of: in March 2019, the software, developed in Berlin by artists, programmers and scientists, was awarded a record contract by music giant Warner Music.
Endel generated 600 short pieces with titles such as “Sunny Afternoon”, “Clear Evening” or “Foggy Morning”. At a stretch, a well-disposed reviewer might concede that the ethereal washes of sound are a little reminiscent of Brian Eno’s electronic ambient music; to radio presenter Matthias Hacker from Bayerischer Rundfunk, however, they sound more like the bland background noise you get in a spa or a dentist’s waiting room. And that is pretty much what Endel is programmed to churn out: the AI generates “mood music” – background sounds for relaxing, concentrating, falling asleep or exercising, at least according to the four categories of the associated app.
Endel doesn’t make music in the traditional sense and is certainly not designed to replace human musicians, said CEO Oleg Stavitsky on the Deutschlandfunk radio station. The creative impulse behind the algorithm is composer Dmitry Evgrafov, who feeds the system with music samples.
Aiva – a soundtrack at the touch of a button
“A Call to Arms” is the name of the piece that graphics card manufacturer Nvidia used to accompany its keynote address at CES 2019 in Las Vegas. In places, the music composed by the AI Aiva almost conjures up the distant spirit of Kevin Costner riding through a digital version of “Dances with Wolves”. Other pieces, such as “Free Spirit”, also sound as though you have heard them somewhere before, and more than once. Aiva’s musical spectrum ranges from reflective piano to symphonic bombast, hitting one emotional button or another without ever being edgy or challenging.
Aiva generates something like the average musical score – a soundtrack for films or computer games with a tight budget and limited ideas. “Automated systems will ultimately only threaten musicians who have the emotional range of a bot,” tweeted composer Holly Herndon, who experiments with AI in her own music. For Aiva co-founder Pierre Barreau, the main point of AI is to relieve human musicians of some of the donkey work. The field of music composition will remain dominated by human beings in the long run, was Barreau’s reassuring answer to the anxious question posed by a concerned musician on YouTube. “AI is just another assistant, a tool that allows people to be creative, just like synthesizers and virtual instruments.”
The human composer has the last word
For Orm Finnendahl, Professor of Composition at the Frankfurt University of Music and Performing Arts, AI is primarily a source of inspiration. But simply by developing the algorithms and selecting the musical parameters, it is the professor himself who sets the direction for the computer. And the last word on whether a computer-generated sequence is incorporated into the final product is, of course, his, Finnendahl told dpa. “AI is capable of amazing acts of imitation. But what makes art is the original idea behind it – and AI can’t even begin to depict that.”
AI may admittedly be able to detect patterns and recombine rules at the touch of a button. But it is a long way from mastering whatever it is that defines art – friction, surprise, unexpected twists and turns. And, unlike real artists, it doesn’t translate feelings or thoughts into music; all it can do is compose using the emotions and experiences that people have already transposed into music.
Computerised scripts
Just how little AI actually understands about contexts of meaning becomes clear when an algorithm is programmed to write a screenplay. Computers can already generate readable sports or weather reports from text modules and data; this works because the text structure is formulaic and precisely defined, as the sketch below illustrates.
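Such data-to-text reports are, at heart, templates filled in from structured data – which is exactly why they work so reliably. A minimal sketch; the field names and wording are invented for illustration:

```python
# Minimal template-based report generation, the kind used for automated
# sports or weather bulletins: structured data slotted into fixed phrasing.
def weather_report(data: dict) -> str:
    trend = "rise" if data["high"] > data["prev_high"] else "fall"
    return (
        f"{data['city']} will see {data['sky']} skies on {data['day']}, "
        f"with temperatures expected to {trend} to {data['high']} degrees."
    )

print(weather_report({
    "city": "Frankfurt", "day": "Friday", "sky": "overcast",
    "high": 18, "prev_high": 21,
}))
# -> Frankfurt will see overcast skies on Friday, with temperatures
#    expected to fall to 18 degrees.
```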
What emerges when you let AI loose on more open text forms can be seen in the short film “Sunspring”. Film-maker Oscar Sharp and AI expert Ross Goodwin fed a neural network a diet of sci-fi and superhero movie scripts. On this basis, the algorithm learned which words often follow which, and what stage directions look like, and generated its own script, which was then performed by real actors. The result is a loosely connected sequence of random utterances – sometimes amusingly absurd, mostly simply meaningless, and still light years away even from the first creative endeavours of a seven-year-old Star Wars fan.
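“Sunspring” was actually written by a recurrent neural network (an LSTM), but the core mechanism – learning which words tend to follow which – can be illustrated with an even simpler word-level Markov chain. The corpus below is a made-up stand-in for the pile of screenplays the project used:

```python
# Word-level Markov chain: a drastically simplified stand-in for the LSTM
# behind "Sunspring". It learns only which word tends to follow which,
# which is why its output is locally plausible but globally meaningless.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for every word, the words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, start: str, length: int = 20) -> str:
    """Walk the chain, picking each next word at random from observed followers."""
    word, out = start, [start]
    for _ in range(length):
        options = chain.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

# Stand-in "corpus"; the real project trained on dozens of screenplays.
scripts = "he looks at her she looks at the stars he looks away in silence"
print(generate(train(scripts), start="he"))
```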