Two panel discussions at the DLD Congress in Munich last week - one on the future of media in the age of AI, the other on the social and moral implications of that same technology - together expose an uncomfortable truth. Artificial intelligence is no longer an external technological issue, but a structural force that goes to the heart of public information, cultural meaning and human coexistence. It is not the technology itself that is central, but the choices we make about who controls it, who nurtures it and who corrects it.

Journalism as the last human infrastructure
The media panel is remarkably unanimous about a paradox: AI is both undermining journalism and making it more urgent than ever. Henry Blodget, founder of Business Insider and now an independent publisher, puts it sharply:
“As long as there are human beings, we will always want to know what is happening and what it means.”
Blodget stands here not as a nostalgist but as a realist: distribution, summarisation and repackaging of news are rapidly being automated. What remains is meaning-making - and this requires human presence, trust and context.
That point is strongly complemented by Dutchman Almar Latour, CEO of Dow Jones. From his role as publisher of The Wall Street Journal, he points to the structural stakes of the moment:
“Information has value. It cannot just be taken, processed, and redistributed without permission.”
Latour speaks here not just as a media executive, but as a defender of a public infrastructure. Journalism is data, yes - but data with social value. If that value is not recognised, the source itself disappears. The history of “information wants to be free” has, he warns, led to news deserts and democratic decline.
From economies of scale to economies of trust
While big players fight over intellectual property, smaller newsrooms face a different tension. Gordon Saft, editor-in-chief of Rest of World, calls AI an “accelerant” of existing crises: waning trust, dependence on platforms and the rise of “Google Zero”, in which search engines summarise news without sending readers on to the source. At the same time, he sees an unexpected opportunity:
“AI can level the playing field on the product and technology side.”
Smaller newsrooms can experiment faster, build new formats and serve niches that were previously out of reach. But this technological emancipation does not solve the core problem: how will citizens find what is reliable?
That question touches on what Latour calls a “flight to quality”. In a world of abundant, synthetically generated news, it is not who produces the most that wins, but who has built trust. Journalism is thus shifting from mass product to cultural institution.
AI as a social system, not a neutral tool
The second panel broadens this perspective. Here, AI is discussed not as a tool, but as a power structure. Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute, explicitly warns against the idea of inevitability:
“What we are talking about is technology derivative of a very particular business model.”
AI, she argues, is not a force of nature but the product of a concentrated ecosystem of data, capital and geopolitical power. When that infrastructure becomes invisible, society loses its capacity for counter-power.
Her warning about so-called “agentic systems” - AI systems that act autonomously within operating systems - touches directly on fundamental freedoms. When such systems gain default access to communication, behaviour and context, privacy is no longer a prerequisite but a marketing promise with no technical basis. This affects not only tech companies, but also cultural institutions, media houses and governments dependent on digital infrastructure.
What makes human actions valuable?
The deepest layer of the debate comes from Edward Harcourt, director of the Institute for Ethics in AI at Oxford. He shifts the question from “what can AI do” to “what do we lose”:
“The one thing a machine cannot do is commiserate - because it cannot suffer.”
According to Harcourt, human value lies not in output, but in shared vulnerability: mortality, empathy, reciprocity. As soon as information exchange is completely disconnected from human presence, its meaning impoverishes.
That observation is uncomfortably relevant to culture and media. As recommendations, summaries and even conversations become automated, the question remains: who is actually still sharing anything with whom?
Hope without naivety
Finally, investor Bradley Horowitz offers another vocabulary: not optimism, but hope. Hope, as an active attitude, calls for ethics, governance and self-restraint. Technology can contribute to human well-being, he argues, but only if it is consciously embedded in values and counterpower.
Conclusion: culture as a line of defence
What connects both conversations is the realisation that AI is not a technological detail, but a cultural stress test. For journalism, it means a reappraisal of origins, craftsmanship and trust. For society, it requires maturity: asking questions, acknowledging power and protecting human presence in systems that are increasingly autonomous.
For cultural institutions and intellectual elites, there is a clear task here. Not to reject technology, but to frame it. Not to celebrate speed, but to guard meaning. Because in an age where everything can be generated, that which cannot be simulated - human experience, responsibility and judgement - becomes the scarce commodity.




