Richard Hill

Judgement for AI-mediated work


Executive Judgement in AI-Mediated Organisations: Why Leadership Value Hasn’t Moved to the Machine

Artificial intelligence is now embedded in the everyday cognitive environment of senior leaders. Executives operate amid algorithmically generated forecasts, recommendations, alerts, scenarios, summaries, and ranked options. Much of the commentary asks whether this makes decisions faster, smarter, or more consistent. That framing is understandable, but it risks missing the main event.

The central challenge for executives is not learning AI tools, becoming more “data-driven”, or expanding analytical capacity. It is preserving and governing sound judgement when cognition itself is increasingly distributed across humans and machines. AI does not remove uncertainty, responsibility, or value conflict. Instead, it changes the conditions under which judgement is exercised, defended, and institutionalised.

A useful way to see this is to start from a deliberately unfashionable claim: executive judgement is the primary unit of executive value in AI-mediated organisations. Tools can speed up analysis. Models can improve prediction. But no amount of optimisation can tell an organisation what it ought to do when aims conflict, when evidence is incomplete, when consequences are irreversible, or when legitimacy matters as much as performance. Those are the permanent features of executive work. AI reshapes how these features present themselves, but it does not eliminate them.

Bounded rationality has moved

Herbert Simon’s account of bounded rationality remains a solid foundation for understanding executive decision-making. Executives do not optimise across all alternatives with full information. They satisfice, settling for courses of action that are “good enough” under constraints of attention, time, uncertainty, and limited cognitive capacity. Contemporary AI discourse often treats these constraints as computational problems to be solved: more data, more processing, broader search, better predictions.

But bounded rationality is not just about computation. It is a structural condition of decision-making under uncertainty and responsibility. AI may expand analytical reach, yet it introduces new constraints and new risk surfaces. Executives now face governance demands that did not previously exist: interpreting probabilistic outputs; judging whether a model’s domain of competence matches the current situation; managing drift and contextual misalignment; and justifying machine-influenced decisions to boards, regulators, and stakeholders. The satisficing threshold becomes multi-dimensional. Decisions must be not only effective, but also defensible, explainable (to the extent possible), and accountable.

So AI doesn’t dissolve bounded rationality. It relocates it. The bottleneck shifts from “can we analyse enough?” to “can we govern what this analysis is doing to our decision system?”

Sensemaking is the real battleground, and AI participates in it

If Simon explains the limits of optimisation, Karl Weick explains something even more important for executive work: executives do not merely choose among options; they construct the situations to which they respond. Sensemaking is continuous, social, and oriented toward plausibility rather than objective completeness. People act on the story that feels coherent enough to coordinate action.

AI systems now shape this story-making process. They do not merely present information. They generate framings: summaries, narratives, ranked priorities, recommended actions, and “what this means” interpretations. They influence salience (what gets noticed), urgency (what feels pressing), and plausibility (what seems like the obvious conclusion). And they do this persistently, not episodically. The executive is not just consulting a tool at decision time; the organisation is living inside a stream of machine-shaped attention and interpretation.

This matters because sensemaking is upstream of “the decision”. If an AI system stabilises one framing too early, the organisation can converge prematurely. If it privileges what is easily measured, difficult-but-important considerations can be squeezed out. If it produces fluently written explanations, it can create an illusion of coherence that substitutes for scrutiny. When AI becomes an always-on participant in meaning construction, governing judgement becomes inseparable from governing the cognitive environment.

Bias doesn’t vanish. It migrates across the human–machine boundary.

A persistent misunderstanding is that AI reduces bias by replacing human judgement with statistical rigour. Research on heuristics and biases shows why this is naïve. Human judgement relies on heuristics that are adaptive under uncertainty, but they create systematic distortions. The tempting story is that AI corrects these distortions.

In practice, bias doesn’t disappear. It relocates and recombines. It enters through problem formulation, data selection, model objectives, defaults, interface design, prompt choices, and interpretation of outputs. It also interacts with organisational incentives and human cognitive habits in ways that create systemic failure modes rather than individual errors.

Two familiar patterns illustrate the problem. The first is automation bias: people defer to machine recommendations even when context suggests caution. The second is selective scepticism: people distrust the machine only when it conflicts with their prior beliefs, while embracing it as “objective” when it supports them. In both cases, the bias is no longer located neatly in a person’s head. It is distributed across a socio-technical system.

This is why “debiasing training” for executives is a weak intervention. Bias in AI-mediated contexts is often embedded in workflows, defaults, and organisational routines. Likewise, technical interventions like explainability features or fairness metrics help, but they only touch fragments of a broader epistemic governance problem: how an organisation decides what to believe, what to ignore, and what to act on.

Expert judgement isn’t option-comparison. It’s pattern recognition plus mental simulation.

Naturalistic Decision Making research reinforces another awkward reality: experienced decision-makers rarely choose by comparing options in a neat analytic spreadsheet. In high-stakes environments, experts recognise patterns, generate a plausible course of action, and mentally simulate consequences to test feasibility. Alternatives are considered mainly when the first approach fails the simulation.

This is not irrationality. It’s a sophisticated adaptation to time pressure and complexity. And it has significant implications for how AI should be used. If executives are fundamentally operating through recognition and simulation, then AI’s most natural value is not “make the decision” but “extend the simulation”.

Used well, AI can surface edge cases, generate counterfactuals, reveal second- and third-order consequences, or stress-test assumptions. It can act as a cognitive simulator that enriches the executive’s mental models. Used badly, it can swamp intuition with noise, or lock the organisation into historically dominant patterns by privileging what the training data makes salient.

The key is that this interaction is governance-dependent. The outcome hinges less on raw model performance and more on how leaders calibrate trust, integrate outputs into deliberation, and preserve the ability to say: “This situation is outside the model’s competence.”

Practical wisdom and responsibility are non-transferable

Even if AI became dramatically better at prediction and recommendation, executive accountability would not change. Executive judgement is not purely cognitive; it is normative. Decisions commit the organisation to action under uncertainty, with consequences for people, resources, and futures. That is responsibility-bearing work.

The classical language for this is practical wisdom (phronesis): deliberating well about what ought to be done in specific circumstances where rules are incomplete and values conflict. Technical systems can generate analysis. They cannot bear responsibility, justify decisions in moral terms, reconcile competing goods, or absorb blame when harm occurs. Institutions still hold humans accountable. More importantly, organisations still need humans to decide what kind of organisation they are trying to be.

This is where much “AI governance” talk becomes too thin. Compliance frameworks and technical safeguards matter, but they are not a substitute for judgement. They can reduce certain classes of harm while leaving the core executive task untouched: deciding what to do when the formal criteria are insufficient or conflicting.

The AI-Augmented Executive: a governance construct, not a tooling narrative

Taken together, these strands point toward a reframing: executive capability in AI-mediated environments is best understood as the governance of distributed cognition.

This is the heart of the “AI-Augmented Executive” construct. The AI-augmented executive is not simply someone who uses AI tools well, writes good prompts, or adopts analytics enthusiastically. It is an executive who retains responsibility for consequential decisions while deliberately governing how AI participates in organisational judgement.

Four responsibilities follow.

Governing cognitive boundaries. Leaders must decide where AI can be trusted epistemically and where it cannot, especially under novelty, regime shift, or moral ambiguity. This includes knowing when to treat AI outputs as tentative hypotheses rather than as actionable conclusions.

Governing sensemaking. Leaders must manage how machine-generated framings shape organisational narratives, salience, and closure. This includes protecting interpretive plurality, creating room for dissent, and preventing fluency from masquerading as truth.

Governing bias and failure modes. Leaders must treat bias as systemic, not personal. The task is to design processes that detect and counterbalance bias propagation across data, defaults, prompts, incentives, and interpretation.

Retaining accountability. Leaders remain answerable for outcomes and for the cognitive environment that produced them. In AI-saturated settings, this means being accountable not just for a decision, but for the socio-technical decision system.

None of this implies that AI is unhelpful. It implies the opposite: AI is powerful enough to change the shape of judgement work. The risk is not “bad AI” in the simplistic sense. The risk is organisational decision-making drifting into a state where commitments occur by accident, responsibility becomes untraceable, and machine-shaped narratives quietly replace deliberation.

A closing implication: stop treating AI as a decision upgrade

Many organisations treat AI adoption as a competence story: train people, buy tools, improve speed and output. That approach can deliver local efficiency and still degrade judgement quality. It can create faster decisions with weaker accountability, more confident narratives with less epistemic humility, and broader analytics with narrower sensemaking.

A more serious posture is to treat AI as a structural change in the organisation’s cognitive architecture. That demands governance, not just capability-building. It demands attention to decision rights, epistemic risk, interpretive discipline, and the design of workflows that keep responsibility visible.

In an age where analysis is cheap and fluent text is abundant, judgement becomes the scarce resource again. That is not a romantic claim about human exceptionalism. It is the boring, durable reality of leadership: deciding what to do when the world refuses to become tidy, and owning the consequences when things don’t go to plan.