AI and Technocracy: A Marriage of Convenience?
In what ways do artificial intelligence and technocracy intersect? And how does this affect democracy?
Last week, I presented a paper at the 4th International Conference on Ethics and Artificial Intelligence, hosted by the University of Porto. My talk explored how Artificial Intelligence (“AI”) systems relate to technocracy. As AI becomes increasingly embedded in key areas of public life, scholars have begun to question its democratic implications. Yet, despite growing interest in this intersection, the concept of “technocracy” is often invoked too casually, without a clear definition. As a result, we often learn very little about what, exactly, is supposed to worry us about the intersection of technocracy and AI.
What is “Technocracy”?
Across a range of academic disciplines – including criminology, public administration, archaeology, computer science, and philosophy – scholars are increasingly preoccupied with the ways in which AI and technocracy intersect. This literature, if I may homogenise the diversity of these scholarly contributions, engages with a wide array of AI applications: smart cities, autonomous weaponry, healthcare diagnostics, algorithmic governance in the public sector, and AI as such.
However, how “technocracy” is invoked is strikingly inconsistent. In some instances, “technocracy” is used synonymously with technology or used in relation to transhumanism. Interestingly, some theologians adopt such framings, particularly in response to or in dialogue with Pope Francis’ Laudato Si’.
Others use “technocratic” adjectivally to describe a system or function, without clearly specifying what qualifies it as such. I will return to these differing usages below.
The concept of “technocracy” can denote distinct configurations within political systems and public administration. It may refer to an ideal-type regime, fundamentally antithetical to democracy,1 or to technocratic governments – such as the Monti government in Italy (2011–2013) or the Papademos government in Greece (2011–2012). Even here, the question arises of how many technocrats it takes to make a government technocratic: where Monti’s cabinet was composed entirely of unelected experts, Papademos’ cabinet was merely headed by one (see McDonnell & Valbruzzi, 2014). Others invoke “technocracy” more loosely still, to describe decision-making tools such as scientific modelling.
In short, “technocracy” is a multifaceted term that requires clarification when deployed. In my doctoral thesis, which I hope to submit later this year, I propose four types of technocracy, each understood as a different mode of political influence exercised by scientific experts. But this brings us to a further conceptual difficulty: who counts as an expert? There are many types of experts. Most commonly, “technocracy” evokes technical experts, such as engineers – a usage that dates back to the origins of the term, when American engineer William Henry Smyth proposed a model of governance for industrial society in 1917, sparking the Technocracy Movement. Today, attention often shifts to the political influence of scientific experts. Yet in discussions of AI and technocracy, references to “private experts” (notably, large tech companies) are equally prominent, as are artificial or non-human “experts.”
How “Technocracy” is Invoked
Back to the literature about AI and technocracy. A minor pet peeve of mine is the tendency in this literature to invoke the term “technocracy” (or “technocratic”) without offering a definition.
To be clear, my goal here is not to criticise individual scholars or undermine the value of their work, but to highlight tendencies in the literature and to encourage more careful conceptual framing. I’m interested in how the terms “technocracy” and “technocratic” circulate in academic discourse without clear and consistent definitions. This observation is based on a range of studies I’ve encountered in recent months. Rather than cite specific studies, I describe several recurring tendencies below and illustrate each with fictional examples modelled on the studies I have collected. I don’t mean to suggest that these studies are flawed – only that the term “technocracy” often carries a lot of weight without being clearly defined. My hope is that, by highlighting these patterns, we can begin to sharpen how we use this term.
“Technocratic X”
Frequently, authors use “technocratic” as a qualifier (e.g., technocratic judgment, governance, attitudes, or AI systems) without specifying what renders these phenomena technocratic. In these cases, it functions more as an attribute than a clearly specified concept.
The data scraping tool not only enables the control and surveillance of public life, but facilitates technocratic control of the public sphere.
This formulation implies a particular (i.e., “technocratic”) mode of control, yet it remains unclear what “technocratic” specifically connotes in this context.
In some cases, authors elaborate on the technocratic quality – e.g., by linking it to efficiency – but such elaborations often remain partial. What we’re left with, essentially, is a comparative claim: X [an AI system, action, or noun] is like Y [technocracy] because it exhibits Z [a presumed technocratic feature]. While this may suggest something about technocracy’s function, it rarely clarifies the broader concept and therefore offers, at best, a partial glimpse of what “technocracy” means.
Single Use
Some papers mention “technocracy” or “technocratic” briefly (often in the abstract or introduction), only for the term to disappear entirely from the body of the text. The reader is left to infer the relevance or meaning of “technocracy” by connecting it to related concepts that are invoked throughout the text – e.g., expertise, efficiency, or control. But the relationship between these features and the notion of technocracy is often left implicit, requiring interpretative work on the part of the reader that could be avoided with a clearer definition.
Presumed Consensus
At times, authors write as if “technocracy” has a widely accepted definition.
This paper investigates the extent to which the rule of law is being diminished as AI is becoming entrenched within the technocratic society.
This reference to “the technocratic society” seems to presume conceptual clarity where, as I’ve shown above, considerable ambiguity remains.
Now, I realise that I might be doing these studies a disservice by invoking only small passages (even fictional ones) without the larger context in which such claims appear. I struggle with how to represent the studies I collected fairly, and in a larger piece I hope to offer more context and examine the specific invocations in greater depth. Nevertheless, there is often little context to offer: while some papers do provide explanation, many do not. And even when “technocracy” is mentioned more than once, the surrounding paragraphs often provide little clarity about what is meant.
That said, there are positive counterexamples. Some studies explicitly define their usage of technocracy, either through apposition:
Technocratic efficiency: relying on experts who employ an instrumental mindset to achieve Pareto optimality.
Or through more direct exposition:
Technocracy here refers to the system of governance wherein public decisions are made by scientific experts.
Such definitional signalling helps the reader understand how the term is being used and enables meaningful discussion about the implications.
Relationships between AI and Technocracy
Despite the often merely implied understanding of “technocracy,” I have attempted to categorise several ways in which the literature conceptualises the relationship between AI and technocracy. I outline two such patterns here, though I am continuing to refine and expand these categories.
First, in some accounts, AI systems are portrayed as replicating technocratic functions. This relationship is often invoked in discussions of automated decision-making, where decisions are made through purely technical procedures that omit or marginalise normative evaluation – e.g., in the ethics of autonomous vehicles or algorithmic governance.
Another formulation presents technocrats as deploying AI systems to further their administrative or political objectives. This is a common theme in the Smart City literature, where the design and governance of urban infrastructure through AI systems is said to be well suited to technocratic modes of governing (e.g., optimisation and control).
These two relationships differ in kind: the former implies that AI itself performs a technocratic role, while the latter focuses on how human technocrats integrate AI into their practices.
Crucially, this raises the question of what an “AI technocracy” might entail. One vision is that of an Artificial Superintelligence that governs a political society (or the world) simply because it “knows” how to make “the best” or “correct” political decisions. Another is that of human decision-makers, whether technocrats or democratically elected representatives, who rely on – and might even defer to – an AI’s output in specific policy domains.
The former scenario especially complicates our traditional understanding of “technocracy” as rule by human experts. If algorithmic systems come to exercise epistemic authority in decision-making, “technocracy” may no longer suffice as a descriptor. Instead, we might find ourselves in what A. Aneesh (in Virtual Migration: The Programming of Globalization, 2006) has called an “algocracy” – a governance system in which algorithms, rather than humans, exert rule-like control.
Democratic Worries
Finally, most authors note negative consequences for democracy arising from the ways in which AI and technocracy intersect. While these worries are diverse, two recur with particular frequency.
First, automation and algorithmic governance are often said to undermine participatory elements of democracy, including public deliberation and voting. When decisions are delegated to AI systems, there is a risk that participation will be displaced by technical rationality.
Second, black-box algorithms raise concerns about transparency. When AI systems are deployed, and when their operations remain inscrutable even to experts, transparency (and, consequently, democratic accountability) suffers. In addition, this worry is often invoked in relation to private corporations that use AI systems.
Only a few authors mention positive elements and consequences of “AI technocracy.” Henrik Skaug Sætra, for example, argues that the outputs of AI systems in public decision-making could be beneficial, though he emphasises democratic participation in the creation of those systems. Furthermore, Sætra argues not only that the “[l]ack of understandability is the price for better decisions,” but that the problem of transparency is not unique to AI: opacity is also a feature of relying on human expertise. I think he is right about the latter, and that tensions between democracy and AI often mirror the tensions between democracy and (deference to and trust in) expert authority in public decision-making.
Conclusion
Above, I have given a brief overview of the current state of the academic literature(s) concerning the ways in which technocracy and AI intersect.
Despite the importance and interdisciplinarity of this topic, many contributions lack conceptual clarity. To fully capture the relationship between technocracy and AI, scholars should first delineate – albeit briefly – what they mean by “technocracy.”
Much remains to be explored in this emerging area. This article draws on ideas from my doctoral research and forms part of a broader project in development. If you’re an academic and have suggestions for shaping the idea from this article into a postdoctoral research project, please reach out to me and share your tips. I’d love to hear from you.
Lucas Dijker
1. Similarly, “democracy” is yet another concept subject to many different interpretations (see e.g., Cappelen, 2025).