Leading in the Age of AI: How AI Discourse Shapes Responsibility and Power
Meike’s Reflections on Artificial Intelligence
This is the second of seven parts of MDI’s leadership architect Meike Hinnenberg’s new blog reflection series on AI. You can find the first part here! Stay tuned for more 🙂
Chapter II – Lines of Enunciation
By distinguishing Artificial Intelligence as an industrial apparatus from machine learning as a set of practices, Crawford performs a gesture of ethical resistance. She interrupts the smooth circulation of the term, exposing Artificial Intelligence not as a settled object but as a line of enunciation – and in doing so opens a different path through the field.
In Deleuze’s sense, lines of enunciation are neither utterances nor texts, neither speakers nor doctrines. They are conditions of sayability that circulate within a dispositif, delineating what can be named, thought, and acted upon.
Most often, lines of enunciation remain invisible precisely because they work so well. They do not appear as commands, norms, or ideologies; they slip into language as description, into grammar as agency, into names that seem to pre-exist the things they gather. They do not ask to be believed: one does not need to agree with a line of enunciation to use it.
How AI Discourse Shapes Reality and Responsibility
These lines are not primarily repressive; they are productive. They bring objects into being (AI), generate problems (alignment, bias), propose solutions (ethical AI), and sketch futures (AI will transform everything). A critique that treats them merely as false representations, therefore, misses the point. Their force lies not (only) in what they conceal, but also in the realities they help bring into existence.
Understanding this productivity – and, with it, understanding technology not simply as an instrument to be used wisely but as a mode of world-disclosure – is essential, especially with regard to the question of responsibility. We are not outside the dispositif. We are not independent of the social, technological, and linguistic structures through which the world becomes accessible to us. Our relation to ourselves and our access to reality are shaped within them.
Response-ability
What is therefore required is not the illusion of standing beyond these structures, but the effort to understand how the dispositif operates: what realities it brings into being, how we are positioned within it, and how we might relate to it, act within it, or even shift its lines. Not being independent of these conditions does not mean we are not responsible. Responsibility may instead take the form that Bernhard Waldenfels calls Antwortlichkeit (response-ability): a responsiveness to what addresses us before we fully understand it, a response that can never entirely catch up with what precedes it.
Let us follow this path a little further to see how it shapes the field. If we turn, for example, to the website of the OECD, we read:
“AI holds the potential to address complex challenges – from enhancing education and improving health care, to driving scientific innovation and climate action. However, AI systems also pose risks to privacy, safety, security, and human autonomy. Effective governance is essential to ensure AI development and deployment are safe, secure and trustworthy, with policies and regulation that foster innovation and competition.”
How Discourse Limits What Can Be Questioned
The OECD text speaks in a language in which Artificial Intelligence already acts: it drives, addresses, and enhances. Politics enters only later, as a moderating hand. In this grammar, Artificial Intelligence appears as an agent capable of benefit or harm, yet is never itself fundamentally put in question. Within this frame, one may debate safety, trust, and regulation, but more structural questions about extraction, power concentration, or the desirability of AI as such struggle to surface as relevant statements. The force of such enunciation lies not in persuading belief, but in pre-structuring the field of speech itself.
By distinguishing Artificial Intelligence as an industrial apparatus from machine learning as a set of practices, Crawford renders such a line of enunciation visible and thereby intervenes in the field of sayability. By questioning whether Artificial Intelligence is even artificial or intelligent, she shows that what appeared as an autonomous historical actor is in fact a constructed convergence: an industrial apparatus, a planetary infrastructure grounded in colonial continuities and distributed human labor.
What material and historical infrastructures make AI possible?
By shifting the question from “Is AI fair?” to “What material and historical infrastructures make AI possible?”, the unity of the term Artificial Intelligence fractures like the ice layer of a winter-frozen lake.
And another layer of the acoustic landscape begins to surface: the breathing of ventilation shafts, the murmur of moving earth, the metallic heartbeat of drills, the slow chewing of stone by machines, the deep-throated hum of engines, the churning of propellers folding the sea behind them, the wind threading through stacked containers, a quiet choreography of clicks and pauses labeling one image after another, bodies trying to keep time with logistics, repetition measured in beeps, the percussion of parcels in transit – a subdued sonority of work that must remain unnoticed, a human rhythm beneath the supposedly smooth surface of automation.

Meike Hinnenberg
Learning & Development Architect
Meike Hinnenberg is a trainer and Learning and Development Architect at MDI Management Development GmbH and specializes in communication, conflict management, diversity & inclusion, and lateral leadership.
