The current system of distributing money in the cultural sector is too one-sided and by no means always meets the criteria of legitimacy, effectiveness and transparency. BIS institution, newcomer, established company, crucial infrastructure or experimental production house: all have to submit to a regime of planning and peer assessment. This can be done smarter, fairer and cheaper, by first carefully considering the purpose of the assessment and which criteria matter most.
Reason
'Yes, we got it!' A cry of relief can be heard from many a cultural organisation when the Council for Culture, the national cultural funds or the municipalities announce which organisations will receive a multi-year grant. At least as often you hear a stifled curse when the outcome is negative. Some can continue to build on their artistic ambitions for a few years. Others see a dream fade away and may end up in a situation where their very existence is in question.
The distribution of public funds in the cultural world has become a rat race of epic proportions. Every year, thousands of organisations, groups and individual artists compete for scarce resources. Triumph and disappointment alternate. The dominant method of assessment is a selection process based on peer or expert review, in which committees of peers assess the quality of applications. Although its application has become more professional over the years, criticism is growing, and questions arise as to whether peer review is still the most appropriate tool.
In this article, we argue that distributive mechanisms based on peer review are not always the most appropriate for distributing cultural subsidies. We examine what alternatives are possible and what their advantages and disadvantages are. In doing so, we look through different lenses: legality (was the verdict reached properly?), legitimacy (is the judgement supported?), efficiency (how much time and money are involved?) and transparency (is the process insightful?). We make it clear that peer review leads to a rather one-sided distribution procedure. Depending on purpose and context, other methods are richer and therefore more appropriate.
What is it all about?
Every four years, over a thousand cultural organisations submit four-year or two-year grant applications to the state, national cultural funds and municipalities. A positive assessment does not guarantee a grant, as there is not enough money to honour all applications. Besides multi-year grants, there is also a range of project and programme grants in each discipline that are largely distributed by national cultural funds.
In total, it amounts to some EUR 4 billion: municipalities account for the largest share at over EUR 2 billion, central government follows with EUR 1.5 billion, and the provinces contribute over EUR 300 million.
To distribute these grants, a tender procedure is almost always chosen, in which organisations are invited to apply via an open call. Applications are assessed by a committee of peers, after which the best proposals are considered for funding.
Peer review is a tried and tested method, deeply rooted in the sciences and culture. As early as the Renaissance, peers judged each other in French artists' and writers' salons. The first known systematic use of peer assessment was by the Royal Society of London, which from 1665 used this method to test the quality and reliability of publications in journals. In the Netherlands, peer review was increasingly used from the 19th century onwards as an assessment mechanism for scientific research. From the 1960s and 1970s, the method was used for the distribution of money by the Netherlands Organisation for Scientific Research (NWO), and from the 1990s also for the assessment and accreditation of research groups and (higher) education institutions.
In the cultural sector, peer review came into wider use for the distribution of cultural subsidies from the 1980s onwards: first by the funds, later by central government. Since the introduction of the Basic Cultural Infrastructure (BIS), this selection mechanism has become the standard. In recent years, the Council for Culture and the national cultural funds have considerably professionalised the use of peer review. Yet the shoe pinches: after each round of assessments, criticism of the system grows.
Dividing money on the scales
From a managerial and organisational science perspective, methods of distributing money must meet a number of basic requirements. These are the main criteria:
- Legality: the judgement has been carefully and properly justified on the basis of legal regulations and political decisions. In other words, the procedure is based on applicable rules and decisions.
- Legitimacy: the judgement is seen as authoritative and credible by those being judged (and by others). In other words, there is support for the procedure used.
- Efficiency: the commitment of money, resources and time needed to reach a judgement is proportionate to what is at stake. In other words, it is an appropriate, proportionate procedure.
- Transparency: All applicants should be given equal opportunities without hidden conditions. The different steps in the assessment and award process are accessible and public. In other words, the deliberations made during the procedure can be followed by all involved.
If we read the criticism voiced after recent assessment rounds, it largely relates to these basic requirements, or more precisely, to the failure to meet them.
Legality
The number of submitted and successful notices of objection or appeals is an indicator of the extent to which a judgement has been properly justified. The Performing Arts Fund received 66 notices of objection last round, compared with 28 four years ago.[1] Particularly at funds and municipalities, we see more and more rejected organisations exercising their right of objection, sometimes all the way to the courts. In Brabant, 99 grant applications to the Province for the period '25-'28 led to 53 grants. No fewer than 29 applicants lodged objections, with harsh words about the quality of the assessments and the procedure.[2] To date, objections and appeals rarely lead to adjustment of the judgement. So the legality of the procedure, whether at the state, the funds or the municipalities, seems to be in order. But in anticipation of these appeals, assessment committees are paring down their judgements legally. The assessment process is becoming juridified.
Legitimacy
The current way of distributing money is unsustainable. An assessment de facto has only two outcomes: in or out. Every four years, the survival of an organisation hangs in the balance again. It is a Solomonic judgement aimed at distributing money, not at improving an organisation or the cohesion of the sector as a whole. The constant (competitive) struggle for subsidies leads to overexploitation and destruction of capital. Support for the system is eroding.
The legitimacy of peer review is also under pressure because the independence and professionalism of committees are difficult to ensure. Peris-Ortiz et al.[3] show that the background of committee members strongly influences the outcome of proceedings and can easily cause institutional bias. In a small country like the Netherlands, professionals constantly encounter each other in varying capacities. The fellow expert assessing an application today is working on his own proposal tomorrow. Conflicts of interest and role confusion lurk.
Moreover, an assessment process must be conducted properly and professionally. Assessing is a profession in its own right and requires proper training and supervision of committee members; a step that is sometimes skipped.
Efficiency
Distributing money always costs money, but in the cultural sector the transaction costs are very high. On the side of governments and funds: the days spent on policy preparation, drawing up frameworks, setting up committees, consulting, advising. On the side of applicants: session after session, writing plans, making calculations. Add to that the consultants and copywriters hastily recruited by cultural organisations to write a plan. And then there are the appeal procedures that many rejected applicants file, which take a hefty toll. Express all those hours, days and weeks in euros, and a rough estimate comes to a range between €75 and €100 million.[4]
Transparency
The quote attributed to Otto von Bismarck that 'laws are like sausages; it is better not to know how they are made' also seems to apply to peer review. In the cultural sector, communication focuses mainly on the outcome. Applicants receive relatively brief explanations of the judgement, in some cases accompanied by point scores. How the judgement came about and the committees' internal deliberations remain behind closed doors. This lack of transparency is defensible: after all, committee members should be able to speak freely. But it comes at a price. Applicants experience capriciousness and arbitrariness, because some committees appear to judge more strictly or leniently than others.
Viewed along these yardsticks, using peer review as a method of distributing public funds has advantages as well as considerable disadvantages. Ellen Hardy skilfully dashes any hope that other methods do meet all the basic requirements. In the article 'The art of distribution' (Hardy, 2022),[5] she shows, drawing on objection procedures, that the requirements in distributing grants increasingly parallel those of procurement law. Distributing money is complex, increasingly juridified, and has no 'one best way'.
If there is no 'one best way', what alternatives are available and when is it best to apply them?
Alternatives
There are several variants for arriving at a distribution of subsidies. In higher education, for instance, courses are accredited through review procedures. When allocating funds for scientific research, blind reviews play a role. Governments and funds have been experimenting with alternative methods in recent years, mostly with the aim of being quick and lowering thresholds. We look at several variants from two perspectives: the purpose of the assessment and the criteria to which importance is attached.
When choosing an assessment method, the purpose of the assessment plays a decisive role. When the aim is to assess organisations that need to develop sustainably, different assessment methods come into the picture than for organisations with a short-term, dynamic profile. A basic infrastructure, the name says it all, assumes that stability matters more. These are essential functions in a system where assessment should focus on learning and development rather than 'in or out'. For dynamism, project applications or grants suit agile organisations that respond quickly to cultural and societal trends and developments.
Another perspective when choosing an assessment method is the set of criteria to which importance is attached. When these are objectifiable and unambiguously measurable, methods aimed at quantifiable outputs, such as financial parameters or audience numbers, are appropriate. This method of assessment is usually efficient and reproducible, with low transaction costs. When more importance is attached to less unambiguously measurable outcomes, such as artistic quality or social influence, specialised knowledge or a trained eye comes into play. Such assessments are often seen as less transparent ('black box'), but their legitimacy can be high when they involve authoritative experts.
Placing these two perspectives in a system of axes allows us to understand the characteristics and usefulness of different assessment methods.

Models under the microscope
Accreditation or visitation
This allocation principle is based on long-term agreements and assessment criteria that are more diffuse in nature: focused on outcome (social effect) rather than output. Cultural organisations receive funds for a longer period and agree on performances in outline. Periodically, the institutions are visited by committees of substantive experts, focusing on 'learning' and 'development'. The performance agreements form the basis for a longer-term accreditation (or concession). Governments can make customised agreements on the policy emphases they expect from these organisations, but as a rule the organisations work within a legal framework (and remit) relatively independently and autonomously from government. Think of universities or public broadcasting. This form of assessment is particularly suitable for cultural institutions with structural tasks at national or regional level, or that fall into the 'too big to fail' category.
Forms of peer review
As mentioned, this is the most common distribution mechanism in the cultural sector. After registration, organisations are assessed by committees of experts or peers against a variety of criteria. This method is particularly suitable when a substantive judgement on artistic quality or social relevance is desired, while keeping the cultural field sufficiently accessible to new entrants.
In some cases, such an assessment can be carried out by an intendant: a person whose judgement is relied on for a defined round or period. A special category in this part of the system is the citizens' initiative. Having citizens' initiatives (citizens as experts) allocate part of the available resources promotes dynamism in the field and increases both the legitimacy of the arts and citizen involvement.
Lottery
Drawing lots is controversial as a selection mechanism in the cultural sector because it is associated with arbitrariness and with a loss of quality in the offering. This is not entirely justified, as drawing lots is the fairest and most democratic way of allocating resources. In the public sector, lotteries are not unknown: they are used, for instance, in grant schemes with a so-called ceiling system.
Applied to the cultural sector, drawing lots can be an adequate tool to honour new applications and thus encourage innovation. Drawing lots levels the playing field for initiatives that lack grant-writing skills in-house; such skills are sometimes hired externally just to stand a chance of getting a grant. Before drawing lots, a number of basic criteria can be tested, such as the requirement to apply as a foundation, to apply sector codes and/or to provide references.
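As a minimal sketch (all applicant names, amounts and criteria below are invented for illustration), a ceiling-system lottery could work in two steps: screen applications against the basic criteria, then draw at random and grant until the subsidy ceiling is exhausted.

```python
import random

def lottery_allocation(applications, ceiling, meets_basic_criteria, seed=None):
    """Screen applications on basic criteria, then draw lots until the
    subsidy ceiling is exhausted. Returns the granted applications."""
    eligible = [a for a in applications if meets_basic_criteria(a)]
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible and auditable
    rng.shuffle(eligible)      # the random order is the lottery
    granted, remaining = [], ceiling
    for app in eligible:
        if app["amount"] <= remaining:  # grant only while the ceiling allows
            granted.append(app)
            remaining -= app["amount"]
    return granted

# Hypothetical applicants; the basic criterion here is "applies as a foundation"
apps = [
    {"name": "Neighbourhood festival", "amount": 20_000, "is_foundation": True},
    {"name": "Street art collective", "amount": 35_000, "is_foundation": True},
    {"name": "Informal pop-up", "amount": 10_000, "is_foundation": False},
]
granted = lottery_allocation(
    apps, ceiling=40_000,
    meets_basic_criteria=lambda a: a["is_foundation"],
    seed=42,
)
```

Note the design choice: publishing the seed (or drawing it publicly) lets applicants verify the draw afterwards, which addresses the transparency requirement discussed above.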
Monitoring
Like accreditation, monitoring fits better with organisations with which long-term grant agreements are in place. The difference with an accreditation process is that expert review is omitted in favour of indicators that are as objective as possible and are tested transparently: audience figures, development of own income, number of performances or productions, visibility of the institution, and so on. During monitoring, these predefined indicators are assessed.
Organisations are assured of continuity as long as the measurement dashboard remains predominantly green. Monitoring can be enriched by a conversation between grantmaker and grant recipient to interpret the quantitative data and agree on adjustments to the indicators.
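A "predominantly green dashboard" can be made concrete with a simple rule. The sketch below is purely illustrative: the indicator names, targets, the 90% tolerance and the 75% majority threshold are invented assumptions, not prescribed values.

```python
def dashboard(targets, actuals, tolerance=0.9):
    """Compare actual figures against agreed targets. An indicator is
    'green' when the actual reaches at least `tolerance` (here 90%)
    of its target, otherwise 'red'."""
    return {
        name: ("green" if actuals.get(name, 0) >= tolerance * target else "red")
        for name, target in targets.items()
    }

def continuity_assured(status, threshold=0.75):
    """Continuity is assured when a clear majority of indicators is green."""
    green = sum(1 for s in status.values() if s == "green")
    return green / len(status) >= threshold

# Hypothetical agreed targets and realised figures for one institution
targets = {"audience": 50_000, "own_income_pct": 30, "performances": 120, "press_mentions": 40}
actuals = {"audience": 48_000, "own_income_pct": 31, "performances": 100, "press_mentions": 20}
status = dashboard(targets, actuals)
```

In this invented example, two of the four indicators fall short, so the rule would trigger exactly the kind of interpretive conversation between grantmaker and recipient described above rather than an automatic cut.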
Conclusion
The current system of distributing money in the cultural sector is too one-sided and by no means always meets the criteria of legitimacy, effectiveness and transparency. BIS institution, newcomer, established company, crucial infrastructure or experimental production house: all have to submit to a regime of planning and peer assessment. This can be done smarter, fairer and cheaper, by first carefully considering the purpose of the assessment and which criteria matter most. The cultural sector becomes more vital if high-profile institutions are monitored from a development perspective and newcomers gain low-threshold access.
The search for better distribution methods is visible in a number of recent initiatives. The Fund for Cultural Participation launched a pilot in participative and trust-based financing,[6] in which decision-making power over frameworks and funding lies with the applicant. In its Arts Plan 2025-2028,[7] Amsterdam is shifting more resources to creators and initiatives that traditionally had less easy access to grants.
In the table below, we give examples of which assessment method we think is most appropriate for which type of organisation.
| Cultural organisation | Purpose assessment or character criteria | Selection or assessment mechanism |
|---|---|---|
| Supporting institutions, such as LKCA, Boekman Foundation | National system task, measurable outputs | Monitoring |
| Local initiatives, such as neighbourhood festivals | Dynamic, approachable | Lottery or citizens' initiative |
| Presenting organisations with a physical location, such as museums, theatres and music venues | Regional or national system task, continuity, learning & development | Accreditation |
| Development institutions | Dynamism and innovation, substantive judgement | Peer review |
| Art projects in public spaces | Dynamics, artistic-social | Intendant or citizens' initiative |
| Major producing organisations, such as National Opera and Ballet | System, too big to fail, quality | Accreditation |
Distributing money will always cost money. And making choices from a supply that exceeds available resources will always be painful. We argue that the view of the arsenal of distribution instruments should be broadened considerably. Each alternative has advantages and disadvantages, depending on the goals pursued and the criteria applied. An informed choice will increase support, reduce costs and curb objection procedures. And, perhaps most important of all, an informed choice will enrich the cultural landscape.
About the authors
Paul Adriaanse works at Utrecht University as Director of Education for Professionals for the Faculty of Law, Economics, Governance & Organisation and the Faculty of Humanities. He is also attached to UU's Department of Governance & Organisation (USBO), from where, among other things, he developed the Leadership in Culture (LinC) programme.
Jeroen Bartelse is general director of TivoliVredenburg and chairman of Kunsten '92, the umbrella trade association for the cultural and creative sector. He is an Academic Fellow at Utrecht University (USBO).
The authors thank Prof Dr Mirko Noordegraaf for his comments on an earlier version of this article.
Endnotes
- 1 https://www.theaterkrant.nl/nieuws/fonds-podiumkunsten-behandelt-66-bezwaarschriften-meerjarige-subsidies/
- 2 https://www.brabantcultureel.nl/2024/09/26/tientallen-bezwaarschriften-ingediend-bij-provincie-tegen-besluit-over-subsidies/
- 3 Peris-Ortiz, Gopez and Lopez-Sieben. Cultural and Creative Industries: an overview. 2018.
- 4 J.A. Bartelse and H. van Soelen. Cost estimation assessment round four-year policy. Utrecht, 2024.
- 5 E. Hardy. The art of distribution. 2022.
- 6 https://cultuurparticipatie.nl/actueel/232/pilot-participatief-en-trust-based-financieren
- 7 https://www.amsterdamsfondsvoordekunst.nl/over-afk/meer-informatie-over-het-fonds/nieuws/hoofdlijnen-kunstenplan-2025-2028-bekend/