In this article on artificial intelligence, I use museums as an example. Museums share many of the characteristics of other cultural institutions, which makes them a useful benchmark.
At first glance, most museums may seem a long way from artificial intelligence. Centuries-old paintings hang in the halls, showcases display objects from other times and places, and visitors walk past stories carefully composed by curators. Yet AI is rapidly taking on a role in the day-to-day operations of cultural institutions.
It appears not only in experimental applications, but also in systems that have been part of museums' digital infrastructure for years: collection management systems, ticketing platforms, search engines and audience analysis. Technology is thus imperceptibly changing the way museums work.
This is precisely why there is a growing need for an explicit AI policy. Not as a fashionable addition to existing strategies, but as part of good governance. Those who use AI without clear frameworks run risks that go beyond technical errors: risks to reputation, copyright, collection management and public trust.
Authenticity in the age of synthetic images
One of the most fundamental questions goes to the heart of the museum itself: authenticity. Museums are built on the idea of preserving and presenting objects whose authenticity has been established as thoroughly as possible. Art historians, conservators and registrars have spent decades developing methods to recognise forgeries and reconstruct provenance.
Generative AI is changing that playing field. It is now possible to create images that are barely distinguishable, if at all, from existing works of art. Documents, for example about the provenance of an object, can also be convincingly generated. This creates a new type of forgery: digitally produced, but often intended to appear credible in the physical world.
- A classic example of how museums can be misled is the Amarna Princess case. In 2003, the Bolton Museum in the UK bought a statue presented as a rare Egyptian artefact. It later emerged that the work had been made by British forger Shaun Greenhalgh and his family. The statue had been in the museum for three years before it was exposed as a forgery. (Amarna Princess)
- This kind of fraud existed before AI, but experts warn that generative AI is exacerbating the problem. Criminals use AI tools to generate convincing provenance documents, certificates and sales invoices that lend a work of art a false history. AI Is a Godsend for Criminals Forging Fake Art
For museums, this means that traditional expertise is no longer enough. Curators must not only have art-historical knowledge, but also understand how synthetic images are created and how digital manipulation can be detected. AI policies here often start with something simple: procedures for verification. Just as museums have rules for the provenance of objects, they will also have to determine how digital authenticity is verified.
Technology and dependence on suppliers
A second issue is less obvious, but at least as important: dependence on technology suppliers. Much software used by museums now includes AI functionality. Sometimes this is clearly visible, for instance in automatic image recognition or visitor recommendation systems. In other cases, AI is hidden in the background of a system.
- An example of how institutions deal with this comes from the Amon Carter Museum of American Art. There, discussion about AI arose after employees realised that AI tools could potentially access internal information and collection data. In response, the museum began developing internal guidelines for how employees should use AI and what information should not be entered into AI systems. Cultivating AI: Developing AI Guidelines and Literacy Resources at the Carter
- Organisations such as Art Fund have now developed formal AI policies to determine how technology and suppliers should be assessed and what data may be used. Art Fund's AI policy: why, how and what
For board members, this means that procurement processes are changing. Where previously the focus was mainly on functionality and price, questions now have to be asked about data and algorithms. What data does a system use to learn? Is collection data used to train commercial models? And what happens to that data when a contract is terminated? And in a time of increasing cyber risks: how do we know that intruders cannot gain access via the software partner? Questions like these make it clear that AI policy is not just a technical issue. It also touches on legal agreements, data management and strategic autonomy.
The invisible bias in museum data
A third dimension of AI use lies in the content of museum collections themselves. Museums hold vast amounts of digital data: descriptions of objects, catalogues, metadata and archives. When AI systems work with this data, they also absorb the historical assumptions and inequalities embedded in it.

Many collections were built in a different era. Some artists or communities are underrepresented, descriptions contain outdated terms and certain perspectives are missing. AI systems that search, classify or make recommendations often reinforce these patterns: what has little presence in the dataset is also less likely to appear in the results.
- A concrete example is research at Harvard University Herbaria, where data analysis and AI were used to identify outdated or problematic descriptions in natural history collections. The project shows how AI can both reproduce bias and help make it visible. Improving the Search: Uncovering AI bias in digital collections
For museums, this means that AI policy is not only about technology, but also about responsibility for content. The question of how collections are described and interpreted thus becomes newly relevant.
AI and the future of museum work
Beyond content and technology, AI also touches the organisation itself. In the cultural sector, AI is often presented as a tool: a way to write texts faster, analyse images or interpret visitor data. That picture is partly accurate. Many applications will support work rather than replace it.

At the same time, however, the nature of certain tasks is changing. Routine work in research, communication or administration can increasingly be automated, while other work shifts towards interpretation and supervision. For museums, this means that human resources policy has to change along with it. Employees need new skills to work with AI tools, but also to understand their limitations.
- AI is also changing the nature of work in museums. Discussions on “practical AI for museums”, for example, highlight that AI can automate tasks such as collection analysis, text generation or audience analysis, but that organisations must at the same time invest in new skills and training for staff. AI is therefore not just a technology project, but also an HR and organisational issue: which tasks are changing, and which new competences do curators, educators or communication teams need? Practical AI for Museums: 20 Insights
The conversation about AI is therefore not only about efficiency, but also about trust. Employees need to know how technology is deployed and what role they have in it.
Governance: who is responsible?
Perhaps the most important question surrounding AI is ultimately a managerial one: who is responsible? In many organisations, employees already use generative AI tools for daily tasks. Texts are generated, images edited, analyses made, often without formal guidelines. This may seem harmless, but it carries risks.
Can an employee enter internal documents into a public AI tool? Who owns the rights to an image created using AI? And who checks whether generated information is correct before it is published?
An AI policy provides clarity here. Not by restricting innovation, but by creating frameworks. Guidelines on data, copyright and responsibilities help employees use technology safely.
More and more cultural organisations are therefore developing formal AI guidelines. MuseumWeek, for example, has developed a dedicated AI policy guide for museums that covers which tools employees may use, how transparency should be ensured and who is responsible for oversight. Such guidelines typically clarify questions such as:
- Which AI tools are allowed;
- Which data may be shared;
- How transparent institutions should be about AI-generated content.
AI & Museums: Empower Your Team with an Internal AI Policy Guide
From experiment to strategy
Many museums are currently in a transitional phase. AI is being explored, sometimes applied, but rarely governed in a structured way. This is understandable: technology is developing rapidly, and the cultural sector often has limited resources to keep up with every development. Yet there is a growing realisation that AI is no longer an experiment. It touches fundamental aspects of the museum: authenticity, knowledge production, audience relations and governance.
This is precisely why AI policy belongs at board level. Not as a technical manual, but as a strategic document that defines how an institution deals with new technology. Museums have faced technological change before. From photography to the digitisation of collections, the sector has each time had to determine how innovation fits within its public mission. Artificial intelligence is the next step in that process.
The challenge for museums and other cultural institutions is not to avoid AI, but to use it in a way that strengthens the core of their work: preserving, interpreting and sharing cultural heritage. That requires sound policies that can not only be implemented, but also monitored.




