
Will AI Change Art History Forever?

August 29, 2024


It’s not often that the work of authenticating art makes mainstream news, but that’s exactly what happened last year when a team of researchers in the UK determined that an anonymous, centuries-old painting known as the de Brécy Tondo was likely made by the Renaissance giant Raphael. It was a bold claim—one with potentially huge financial implications—but what really caught people’s attention was the technology the researchers used to get there: AI. 

The group, led by two science professors—Christopher Brooke from the University of Nottingham, and Hassan Ugail from the University of Bradford—developed a facial recognition model to compare the Madonna depicted in the de Brécy Tondo with other portraits of the same figure. After finding a 97 percent match with Raphael’s Sistine Madonna altarpiece, the researchers concluded, in Ugail’s words, that “identical models were used for both paintings and they are undoubtedly by the same artist.”
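
Neither team has published its code, but the basic mechanics of this kind of comparison (encode each painted face as a numerical feature vector, then measure how far apart the two vectors sit) can be sketched in a few lines of Python. The snippet below uses the open-source face_recognition library and hypothetical file names purely as an illustration; it is not the researchers' actual model.

```python
# Illustrative sketch only, not the Nottingham/Bradford model. It shows the
# general mechanics of facial-recognition comparison between two paintings.
import face_recognition  # open-source library; pip install face_recognition

# Hypothetical file names standing in for photographs of the two paintings.
tondo = face_recognition.load_image_file("de_brecy_tondo.jpg")
sistine = face_recognition.load_image_file("sistine_madonna.jpg")

# Detect each Madonna's face and reduce it to a 128-dimensional encoding.
tondo_face = face_recognition.face_encodings(tondo)[0]
sistine_face = face_recognition.face_encodings(sistine)[0]

# Smaller distance means more similar faces; a "97 percent match" is one way
# of reporting how close two encodings sit in that feature space.
distance = face_recognition.face_distance([sistine_face], tondo_face)[0]
print(f"Estimated similarity: {(1 - distance) * 100:.1f}%")
```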


But the novelty of this discovery did not last long. Some eight months after Brooke and Ugail’s announcement, a Swiss AI company called Art Recognition used its own model to determine, with 85 percent certainty, that the de Brécy Tondo was not made by the Renaissance master. In an op-ed, Art Recognition’s founder, Carina Popovici, defended her company’s findings, pointing to the many art historians on her staff and the sophistication of her model, which was trained on images of real and forged Raphael paintings. Notably, she stopped short of discrediting Brooke and Ugail. “The most straightforward explanation for the strong discrepancy between the two results is that the models are essentially addressing different questions,” Popovici wrote.

Dubbed the “battle of the AIs” by the Guardian, this brief controversy did little to convince skeptics of AI’s ability to interpret paintings. Instead, it served as a microcosm of the larger debates playing out in the institutional art world as this technology comes closer and closer to infiltrating its hallowed halls. AI is already curating museum shows and biennials. Could it change how we approach, study, or regard art of the past, too?

When it comes to AI and art history, it’s important to remember that, for many, this is not just a technological question, but an urgent ideological one too. In 2013, the scholar and theorist Johanna Drucker published a paper called “Is There a ‘Digital’ Art History?” The piece touches on wide-ranging and complex topics, from data mining to the evolution of critical methodologies. But ultimately, Drucker’s answer to the question posed by the title of her paper is surprisingly simple: No. Engineered to aggregate and extrapolate, contemporary computational technology had made art history infinitely more accessible and navigable, Drucker noted. But it had not, in her words, altered the field’s “fundamental approaches, tenets of belief, or methods.” 

Within the digital humanities—an increasingly prominent field of academic study that incorporates advanced computational techniques into the study of nonscientific disciplines like art and literature—Drucker’s paper proved to be divisive. So did a related essay published two years later, in which art historian and critic Claire Bishop argued that the limits of a so-called Digital Art History reflect a broader socioeconomic problem: neoliberalism’s relentless push to metricize and optimize.

“Digital art history, as the belated tail end of the digital humanities, signals a change in the character of knowledge and learning,” writes Bishop in her essay, “Against Digital Art History.” More and more, she argues, “research and knowledge are understood in terms of data and its exteriorization in computational analyses. This raises the question of whether there is a basic incompatibility between the humanities and computational metrics. Is it possible to enhance the theoretical interpretations characteristic of the humanities with positivist, empirical methods—or are they incommensurable?”

Bishop and Drucker’s papers are roughly a decade old now. In the booming world of AI development, that’s a significant amount of time, and much about the technology has changed in the interim. What hasn’t changed, though, is the broader context that informed these two important thinkers’ positions, which arrived in the middle of a worrying—and still ongoing—trend of lawmakers, grant-givers, and educational institutions opting to fund the fields of STEM at the expense of the humanities. On some simple, subtextual level, Bishop and Drucker were arguing for the importance of critical connoisseurship in a culture war that’s starting to tip the opposite way.


Are art historians becoming outmoded? Some experts do not seem worried. “I don’t think there’s any AI or machine learning technique that could ever replace an art historian,” says Amanda Wasielewski, a digital humanities professor at Uppsala University in Sweden. Hers is a reassuringly pragmatic take on this issue, due partly to the privilege of hindsight—though she does lean toward the thinking of Bishop and Drucker on some critical points.

Wasielewski is not against the integration of these tools into her field, nor is she necessarily convinced that their effects will be so radical. “There are already applications for machine learning and AI in everyday research uses for art historians, archivists, museums, and galleries,” she says, citing the technology’s long-established role in archival databases and collection management software. “These are practical applications that don’t replace humans. They just make our work slightly more streamlined.”

Ultimately, Wasielewski is less concerned that AI will introduce new ways of thinking than she is with the prospect that it may bring back old ones. Last year, the scholar published Computational Formalism, a book that charts the ways in which machine learning has reintroduced the kind of strict, “close reading” methodologies of art study that have not been fashionable among critics and historians for many years.

This approach, which emphasizes an artwork’s physical properties (composition, color, scale) over the external contexts of its making (the artist’s identity and intentions, say), was the dominant theoretical mode for much of the modernist era of the early 20th century. But by the 1960s and ‘70s, as new critical approaches—feminism, postcolonialism, structuralism—evolved in response to a cultural landscape shaped by political violence and burgeoning social movements, the formalists’ close-eyed approach seemed out of step. In a 1974 postmortem on the movement, literary scholar Gerald Graff offered a conclusion eerily similar to the one Bishop would come to write about Digital Art History four decades later. Formalism, Graff wrote, was just “one more symptom of the university’s capitulation to the capitalist-military industrial-technological complex.”

Now, Wasielewski fears that machine learning will bring this dogma back from the dead. With their hefty data diets and swift algorithmic operations, these computer vision systems are fine-tuned for formal analysis and pattern recognition—hence their early integration into the sectors of art authentication and archive management. But the more we turn to these systems for study, Wasielewski suggests, the more we ignore the many “critical frameworks” and “different methodological paradigms” that were developed before them. “When you think that somehow you’re going to draw out objective things from a formalist methodology,” she continued, “you don’t do any extra methodological work.”


Portrait of Don Diego Messía Felipe de Guzmán, Marqués de Leganés was attributed to Anthony van Dyck’s workshop by two experts following AI analysis.

The formalist thinking encouraged by computers does not necessarily equate to close reading. In the early 2000s, literary historian Franco Moretti proposed a concept called “distant reading”—now referred to as “distant viewing” in art—whereby vast amounts of formal data from a field are crunched to reveal broad patterns and trends across time, place, and style. A recent project by two digital humanities professors—Leonardo Impett from the University of Cambridge, and Fabian Offert from UC Santa Barbara—illustrates the upshot of this approach.

Last year, the researchers sought to reexamine the ideas in Drucker’s “Is There a ‘Digital’ Art History?” through Moretti’s methods and new transformer-based vision models, which can “learn” the relationship between different pieces of data. These systems “can tell you much more about a painting than we could ever hope to have achieved with more traditional computational methods,” Offert said. He pointed to Diego Velázquez’s 1656 painting Las Meninas—a masterpiece on the shortlist of the most studied artworks in history—which he and Impett analyzed through their own model, finding surprising compositional similarities to a pair of 20th-century photographs by Robert Doisneau and Joel Meyerowitz.
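
Impett and Offert built their own model, but the general workflow of this kind of "distant viewing" can be approximated with off-the-shelf tools: embed each image with a pretrained vision transformer, then rank a corpus by how close each embedding sits to the query painting. The sketch below uses OpenAI's publicly released CLIP model via the Hugging Face transformers library, with hypothetical file names; it stands in for, rather than reproduces, the researchers' method.

```python
# Illustrative sketch, not Impett and Offert's model: rank images by how close
# their CLIP embeddings are to a query painting's embedding.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor  # pip install transformers

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    """Return a unit-normalized CLIP image embedding for one image file."""
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)

# Hypothetical file names: a query painting and a small comparison corpus.
query = embed("las_meninas.jpg")
corpus = ["doisneau_photo.jpg", "meyerowitz_photo.jpg", "unrelated_work.jpg"]

# Higher cosine similarity means the model places two images closer together
# in its learned visual feature space, where composition, content, and style
# are all entangled.
scores = {name: (query @ embed(name).T).item() for name in corpus}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```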

The blunt name of Impett and Offert’s paper tips their intentions: “There Is a Digital Art History.” But the title comes with an asterisk too.

“With this generation of models, we’re getting closer to doing actual art historical work with machine learning, with the caveat that we buy into whatever it is that these models have been trained on,” Offert explained. In other words, these new machine learning models are only as good as what is fed into them, and what is fed into them is only what art historians, with all their human biases, have chosen to digitize. “We can have the benefits of these new models, but at the same time,” Offert continued, we have to “always critique [them] and figure out the limitations of their…weird machinic visual culture.”

“They’re not magical machines. They come from somewhere,” Wasielewski said of these models. “We need to question not just how these tools are applied, but where they have their origins, what kind of data they were trained on, what biases might be contained within [them].”

The brief row with Brooke and Ugail was not the only time that Popovici has had her company’s efforts questioned. Last fall, roughly a month after Art Recognition shared its findings on the de Brécy Tondo, a German art history professor named Nils Büttner published an essay that targeted the company’s work to build a broader argument about the limitations of AI trained on digital images.  

Popovici called Büttner’s essay “very aggressive,” and said its tone was symptomatic of the anxiety many old-school experts feel when confronted with AI. “They feel that the technology is going to push them out of the way and take their jobs.” But Popovici has come to view incidents like this as an opportunity for dialogue, not mudslinging. “We’ve really put a lot of effort into talking to them because it’s not true,” she said. “In order to train the AI, you need images, and the images come from catalogues raisonnés, [which] are made by experts” like Büttner. Their knowledge, Popovici added, “is absolutely paramount.”

Earlier this year, Popovici teamed up with Büttner on a research paper examining an old painting attributed to the 17th-century artist Anthony van Dyck. They went about the task before them in separate ways. Büttner, a by-the-book historian, picked over the painting in person, checked his observations against past research, then determined that the work was not made by the Baroque master, but by one of his workshop pupils. Popovici, an AI expert with commercial interests, assembled a collection of images related to the artist, fed them through her model, then concluded that there was a 79 percent chance that the painting was not made by the artist.

Two different methods, two similar conclusions: a traditionalist and a technologist reaching across the ideological aisle in the name of normal, productive dialogue. It was a simple gesture, but also a meaningful one. As Wasielewski reminds us, it’s people who design these systems, not the other way around, and the dialogue about how we use them has only just started. When it comes to the development of AI, she said, “We in the field of art and in the field of art history need to be involved in these conversations.”


