
What governance and supervision need to know about software sovereignty


In an earlier article, I wrote about the myth of the digital lane: the persistent idea that cultural institutions, given enough political or moral will, can change digital course relatively easily. Less reliance on US technology, more use of European alternatives, more control over data and infrastructure. It sounds straightforward, almost reassuring. But a closer look reveals something else: digital infrastructure is not a lane you switch, but a layered stack of software, cloud, data, identity, security, archiving and now AI. It is precisely this interconnectedness that makes the subject relevant to governance.

The question of software sovereignty is therefore no longer a hobby of IT departments. It goes to the heart of governance and supervision. Because as soon as an institution no longer has a clear overview of which digital systems it depends on, its ability to set its own course also diminishes. Then suppliers, contract conditions, habit and time pressure make the actual decisions.

Not the flag on the facade, but the degree of direction

In the debate, software sovereignty is often reduced to origin: European is supposedly more secure or autonomous, American riskier. That intuition is understandable, especially at a time of geopolitical tension and growing suspicion of Big Tech. But for governance and supervision, it is too simplistic.

The relevant question is not just where a supplier comes from, but how much control the institution itself still has. Can it move its data? Does it understand which legal regimes apply? Does it know what happens if conditions change, prices rise or access to certain services comes under pressure?

That distinction became acutely visible in the case surrounding the European Commission and Microsoft 365. In March 2024, the European Data Protection Supervisor ruled that the Commission's use of Microsoft 365 violated applicable privacy rules in several respects, including around international data transfers and unauthorised disclosures. The Commission subsequently had to adjust its usage, which, according to the regulator, it brought into line with the rules in July 2025. The lesson is clear: the fact that data sits in Europe, or that a public institution works with a large organisation, does not automatically mean that governance is in place. Data location alone is not sovereignty.

For cultural institutions, this is not an abstract Brussels issue. They too work with personal data of visitors, staff, artists, donors and partners. So management and supervision have to ask not only where data are, but also under which legal and contractual regime they actually circulate.

The vulnerability is in the stacking

Moreover, the real dependency rarely stems from a single loose system. It lies in the stacking. An institution uses Microsoft 365 or Google Workspace for mail and documents, keeps data in a cloud environment, manages identities via linked systems, relies on security software from specialised vendors, uses external platforms for collaboration and, meanwhile, lets employees experiment with AI tools. All of this interlocks. As a result, software is no longer a collection of separate products, but an ecosystem.

The July 2024 global CrowdStrike outage painfully showed what that means. The incident was caused by a faulty content update for Windows hosts and was emphatically not a cyber attack, according to CrowdStrike. Yet one flaw in a dominant security link led to global disruption. That is precisely the managerial lesson. Dependency is not only a privacy or geopolitical issue, but also a continuity issue. When one supplier sits deep in the digital chain, a technical problem can translate into operational downtime at lightning speed.

For a museum, stage or archive, this is very concrete. It is then not about an abstract "incident", but about no access to documents, no planning, no internal communication, possibly no ticket sales and no clear view of what information has been hit. What starts as a technical problem ends up as an administrative one.

Lock-in often grows without anyone really deciding it

Add to this an uncomfortable truth. Many institutions did not consciously choose maximum dependence. They grew into it gradually. First an office environment was added, then a cloud layer, then extra security, then another AI functionality because it came as standard or because employees asked for it. Lock-in rarely arises from one big decision. It arises from a series of small, practically understandable choices that together create a high switching threshold.

Microsoft's pricing and packaging changes for Microsoft 365, announced in December 2025 and effective from 1 July 2026, illustrate that dynamic. With those changes, Microsoft is increasingly explicitly tying additional AI, security and management functionalities to existing suites. That in itself makes sense from a product development perspective, but managerially it also means something else: the more functions come together in one environment, the more difficult it becomes to assess or replace components separately. Technical integration then deepens dependencies.

This is precisely why management and supervision must learn to look at mission-critical dependencies. Not just: what software do we use? But above all: which digital functions can the institution genuinely not run without tomorrow?

AI makes the sovereignty question even more sensitive

That question becomes all the more urgent as AI is layered into existing software. The debate about AI is often about efficiency or innovation, but for cultural institutions it touches at least as much on rights, reputation and control. The commotion surrounding Adobe in June 2024 was illustrative. Following criticism from creators, Adobe publicly clarified that it does not train generative AI on customer content and that this commitment would be explicitly added to its terms and conditions. That such a clarification was necessary says it all. Once AI becomes intertwined with creative software, the discussion immediately shifts to governance: what is a vendor allowed to do with our content, under what conditions, and can we as an institution justify it?

For cultural institutions, this is even more acute. After all, they work with collections, images, texts, educational content and other information of cultural and legal value. So the question of which AI tools are used can never be separated from the question of which rights and data flows are involved.

At the same time, the market shows that sovereignty is not black and white. In February 2025, OpenAI announced data residency in Europe for ChatGPT Enterprise, ChatGPT Edu and the API, and in November 2025 it extended those capabilities to more enterprise customers. This shows that non-European vendors do move under pressure from European requirements. That is relevant for governance and supervision, because otherwise the debate becomes ideological too quickly. Not every non-European solution is necessarily unusable, just as not every European alternative automatically offers sufficient scale, functionality or security. So the question remains: what safeguards are really in place?

Sovereignty is not a gesture, but a change operation

This brings us to the point that is perhaps most underestimated in the debate: switching is not a moral statement, but a change operation. Data must be migrated, links rebuilt, processes modified and employees trained. This takes money, time and administrative attention - exactly the three things that are often scarce in the cultural sector.

The classic LiMux case in Munich shows how complicated such a path is. The city decided to migrate to open source back in 2004. The project took years, required redevelopment, customisation and organisational adjustment, and was later partially reversed. The lesson is not that sovereignty is impossible, but that it only has a chance of success if governance and organisation take transition costs, culture change and political persistence seriously.

For cultural institutions, this means that software sovereignty should not be understood as autarky. Nobody simply switches off global technology tomorrow. It is about something else: mature direction. Knowing where the dependencies are. Understanding which systems are mission-critical. Distinguishing which risks are legal, operational or strategic. And then making policies that fit the organisation's capability.

That is less heroic than a grand digital break. But it is governance. And probably exactly what the sector needs right now. Because the real unfreedom does not start when an institution depends on software. It starts when governance and supervision no longer know exactly what it depends on - and therefore can no longer properly decide how to proceed.
