
The British Museum's AI illusion


Six hours. That was all the time the message was online. Yet it was enough to spark a minor digital uproar. In late January, the British Museum posted images on Instagram and Facebook of a young woman looking intently at objects from the collection. The message was innocuous: “Taking time to take a closer look is always worthwhile.” The images were not. They appeared to have been generated by AI.

After a flood of negative reactions, the posts disappeared as quickly as they had appeared. What remained were screenshots, criticism and an uncomfortable question that now resonates far beyond London: what happens when museums, institutions that derive their authority from authenticity, expertise and trust, use artificial images?

Without interpretation

The incident itself was straightforward. Archaeologists and heritage specialists reacted vehemently, not so much to the aesthetics of the images as to what they saw as the crossing of a principled boundary. AI images, the critics said, undermine the distinction between documentation and fiction, between representation and simulation. Especially when a museum deploys them without explicit contextualisation.

Remarkably, the images showed a model who ‘shifted’ culturally. In one image she wore East Asian clothing, in another Mexican-looking motifs while looking at an Aztec object from the collection. For critics, this exemplified a well-known AI problem: datasets that are culturally unevenly distributed and reproduce stereotypes. “As if all these cultures are the same,” one of them remarked.

AI generates

The British Museum's response was defensive but familiar. The images were “user-generated content”, the spokesperson said. The museum itself does not publish AI images, and it removed the post “recognising the potential sensitivity”. At the same time, the museum announced it was working on museum-wide guidelines for AI use.

In doing so, the incident touches on a broader movement. Museums are in the midst of a technological transition that is not substantially different from previous digital transformations. Then too, scepticism was voiced: over the digitisation of collections, over social media, over virtual reality. What sets AI apart is the nature of the technology. Where previous tools reproduced or distributed, AI generates. It creates images, texts and interpretations that look convincing but have no direct relationship to physical reality. For museums, this is not a trivial nuance.

Reliable

Institutions that for centuries built their legitimacy on preserving and contextualising authentic objects are suddenly entering a domain where the distinction between real and synthetic is blurred. Not only for visitors, but also for professionals. When an AI image is visually indistinguishable from a photographic image, the question shifts from “is it beautiful?” to “is it reliable?” That is precisely where the core of the current museum discussion lies.

International sector organisations now explicitly warn about these tensions. The Museums Association stresses transparency as a first principle: label AI use, make clear what is synthetic and what is not. Not out of technophobia, but out of institutional responsibility. After all, trust is a museum's primary capital.

Power

NEMO, the European network of museum organisations, also explicitly places AI in a governance and policy framework. AI is not a communication tool but a strategic development with implications for copyright, data use, the labour market and public legitimacy. Museums, the argument goes, operate within the public domain and therefore bear a heavier duty of care.

UNESCO adds a geopolitical dimension. Cultural institutions function as custodians of collective memory. When they deploy technology that is structurally dependent on commercial models and opaque datasets, it touches on questions of cultural representation and power relations.

Valuable

The British Museum incident shows how abstract policy discussions suddenly become concrete. An Instagram post becomes a test case for institutional credibility. Not because AI images are necessarily problematic, but because their context and framing are decisive.

An AI image can be educationally valuable. Think reconstructions, visualisations or interactive applications. But it requires explicit interpretation. Without that context, it creates exactly what critics fear: an aesthetically appealing but epistemologically unstable reality.

Liable

With this comes a second area of tension. AI touches directly on professional roles within museums. Curators, educators, researchers and designers derive their position from expertise and interpretation. When AI takes over tasks traditionally performed by humans, not only does the work shift, but so does the responsibility.

Who is liable for errors, bias or misrepresentation? That question remains largely unanswered, but it is gaining urgency.

Paradox

It is tempting to think of AI as an efficiency tool. Faster, cheaper, more scalable. But for museums, efficiency is rarely the primary criterion. Their social function revolves around reliability, diligence and cultural sensitivity. AI use without a clear normative framework therefore almost inevitably creates friction.

What remains is a paradox that many institutions now recognise. AI offers unprecedented opportunities for public outreach, research and accessibility. At the same time, the same technology can erode precisely what museums base their authority on.

Mirror

The British Museum removed the images. That ended the immediate controversy. The underlying discussion, by no means. On the contrary, the incident acts as a symptom of an industry-wide search for new ground rules. Because ultimately, the AI discussion in museums is not about algorithms. It is about trust. About what reality a museum presents, and under what conditions the public remains willing to accept that reality.

A six-hour Instagram post then suddenly turns out to be a mirror. Not of technology, but of the fragility of cultural institutions in an age when even images are losing their self-evident status.

