Two Rutgers computer scientists, professor Ahmed Elgammal and PhD candidate Babak Saleh, recently trained a computer to analyze over 62,000 paintings and then rank which ones are the most creative in art history. The work, which will be presented as a paper (“Quantifying Creativity in Art Networks”) at the upcoming International Conference on Computational Creativity (ICCC) in Park City, Utah, later this month, has a number of profound implications for the way we think about human creativity.
Most notably, it means that computers could soon be able to judge how creative humans are, instead of the other way around. In this case, the researchers focused on just two parameters, originality and influence, as a measure of creativity. The most creative paintings, they theorized, should be those that were unlike any that had ever appeared before, and they should have lasting value in terms of influencing other artists.
The computer – without any hints from the researchers (e.g. “Keep an eye on some guy named Picasso starting around 1907”) – actually fared pretty well, selecting many of the paintings that art historians have designated as the greatest hits, among them a Monet, a Munch, a Vermeer and a Lichtenstein. And it even dismissed a few famous artworks — a charcoal drawing by Dürer, for example — as being too derivative. (A computer art snob!)
The more that computers are able to recognize and judge creativity, the more they will be able to take on roles within the art world that once belonged solely to humans. Think about the role of the art curator at a museum or gallery, which is to select paintings that are representative of a particular style or to highlight paintings that have been particularly influential in art history.
That’s essentially what the computer algorithm from Rutgers University did — it was able to pick out specific paintings by Picasso that were his “greatest hits” within specific time periods, such as his Blue Period (1901-1904). And it was able to isolate Picasso’s works that have been the most influential over time, such as “Les Demoiselles d’Avignon.” In an earlier experiment, the Rutgers researchers were able to train a computer to recognize similarities between different artists.
Moreover, the computer was able to perform a nifty little trick that humans can’t. As described by Elgammal and Saleh, the “Time Machine Experiment” was a unique way to compare how well certain paintings would have fared had they been painted a few years earlier or a few years later. The idea, of course, is that moving a highly original painting back in time, even by a few years, should significantly boost its creativity score.
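To make the intuition concrete, here is a hypothetical sketch of the “Time Machine Experiment” idea, not the authors’ actual algorithm: score a painting as originality (dissimilarity to everything earlier) plus influence (average similarity to everything later), then shift its date and watch the score change. The toy corpus, concept tags, and Jaccard similarity below are all made up for illustration.

```python
def jaccard(a, b):
    """Similarity between two sets of visual-concept tags, in [0, 1]."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def creativity_score(target, target_year, corpus, similarity):
    """Hypothetical score: originality w.r.t. earlier works + influence on later ones."""
    earlier = [w for (y, w) in corpus if y < target_year]
    later = [w for (y, w) in corpus if y > target_year]
    # Originality: 1 minus the closest match among earlier paintings.
    originality = 1.0 - max((similarity(target, w) for w in earlier), default=0.0)
    # Influence: how strongly later paintings echo this one, on average.
    influence = (sum(similarity(target, w) for w in later) / len(later)) if later else 0.0
    return originality + influence

# Toy corpus of (year, concept-tags); the dates and tags are invented.
corpus = [(1900, {"portrait", "muted"}),
          (1910, {"fragmented", "geometric"}),
          (1912, {"fragmented", "geometric", "collage"})]
target = {"fragmented", "geometric"}   # a Cubist-looking work

# Dated 1911 it looks derivative; "moved" back to 1905 it scores far higher,
# because the works it resembles now count as its descendants, not its sources.
score_1911 = creativity_score(target, 1911, corpus, jaccard)
score_1905 = creativity_score(target, 1905, corpus, jaccard)
```

The same two ingredients, novelty relative to the past and echo in the future, are what make the date shift matter at all: under this toy scoring, a painting loses originality points only to works that came before it.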
That type of algorithmic power could alter the role of art buyers, who flit from art fair to art fair, looking for promising new works to purchase. A lot of the decision on what to buy is based on tastes, on preferences, and, yes, human bias. But imagine a computer being able to do the same job. Once it knows the visual attributes that are in demand by the art market, the computer might be a lot more efficient in buying future artwork. Instead of using parameters of “novelty” and “influence,” the computer might aim for something a bit more bourgeois – “ability to sell for lots of dollars later.”
At an art auction, machines might be able to tell you exactly how much to bid on a painting by weighing a number of variables. Machines would digitize an image of the artwork being auctioned, quickly analyze it against a database of all existing art, and immediately tell you how it ranks in terms of overall creativity. Maybe then we wouldn’t get irrational bids such as $179.4 million for Picasso’s “Les femmes d’Alger (Version ‘O’)” — a computer would presumably know better than to overbid by $147.5 million.
Of course, there has always been skepticism about “creative computers.” The authors of the paper highlight three different problems with their work – the relatively small size of today’s digitized art databases (in addition to the database of 62,254 paintings, another database they used contained only 1,710 paintings); the inability to express all visual attributes of art in a way that a computer would understand; and the possible need for more parameters beyond originality and influence to understand art.
Even at the upcoming ICCC conference where the “Quantifying Creativity in Art Networks” paper will be presented, topics for discussion include “Breaking Down Skepticism about Creative Computers” and “Is Biologically Inspired Invention Different?” The basic premise, of course, is that there will always be something a bit “off” when it comes to computers and creativity.
That being said, computers appear to be at the cusp of transforming the conventional notion of creativity, making it much more about data mining, computational power, and network science. The Rutgers researchers, for example, came up with 2,559 different visual concepts (e.g. space, texture, form, shape, color) that might be used to describe a single 2D painting. As a result, they were able to approach creativity as a classic problem of network science, seeing which paintings were “connected” to which other paintings. Paintings, they suggested, can be thought of as just nodes in a vast “art network.” And some nodes are more important than others.
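The “some nodes are more important than others” idea can be sketched with a generic centrality measure. The snippet below uses a basic PageRank-style iteration over a tiny influence graph; this is an illustration of the network-science framing, not necessarily the importance measure the Rutgers paper actually uses, and the node names and edges are hypothetical.

```python
def pagerank(nodes, edges, damping=0.85, iters=50):
    """Basic PageRank over directed edges (influenced -> influencer), like citations."""
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [v for (u, v) in edges if u == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u] or nodes   # dangling node: spread its rank evenly
            share = damping * rank[u] / len(targets)
            for v in targets:
                new[v] += share
        rank = new
    return rank

# Hypothetical mini art network: each edge means "draws on".
nodes = ["Demoiselles", "CubistA", "CubistB", "Landscape"]
edges = [("CubistA", "Demoiselles"),
         ("CubistB", "Demoiselles"),
         ("CubistB", "CubistA")]
ranks = pagerank(nodes, edges)
# The node standing in for "Les Demoiselles d'Avignon" accumulates the most
# rank, mirroring the article's point that some nodes matter more than others.
```

Treating influence as incoming links is the same design move citation analysis makes: a painting becomes important not by pointing at others but by being pointed at.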
This thinking about creativity could eventually filter into fields of academic study such as art history. Art historians would still think in terms of various genres – Impressionism or Cubism, say – but would have a way to quantify what makes these works so intuitively interesting. Imagine the next generation of art history scholars attending art museums with their smartphones or laptops, busily crunching the data on paintings to see what makes them unique rather than relying on intuition, taste and a refined aesthetic.
In the most futuristic scenario, computers might be able to advise on the creation of new art works. IBM, for example, has started to experiment with ways to integrate cognitive computing with different artistic endeavors. At the recent World of Watson event in New York City, the Watson supercomputer advised a human artist (Stephen Holding) on color palette and color psychology to fine-tune the design aesthetic for a huge mural painting.
Purists, of course, will huff and puff and claim that a machine could never duplicate the intense creativity of someone such as Leonardo da Vinci, long considered one of humanity’s most creative and innovative talents. (The Rutgers computer algorithm, by the way, liked his “Madonna and Child With Pomegranate.”) But keep in mind that da Vinci was also a preeminent scientist and mathematician of his age. If he had been born during the era of big data and machine intelligence, it’s a safe bet that he’d be playing around with these algorithms when not painting the “Mona Lisa.”