
I’m Richard Hill, Professor of Agentic AI and a Chartered Engineer.
My work sits where AI meets real organisations: not the demos, but the operations.
I’m interested in a specific failure mode that keeps repeating as generative AI spreads through admin work: decisions quietly migrate into drafting.
Someone “just sends” a fluent email, a report, a policy note, a customer reply, and the organisation inherits a commitment without ever naming who owned it.
I focus on judgement and governance in AI-mediated work: decision rights, evidence standards, verification steps, audit trails, and the boundaries that keep accountability intact. In practice, that means:
- Make decision ownership explicit, by role and by decision type
- Separate drafting from deciding, in the workflow, not in people’s heads
- Treat evidence as an asset; define what “verified” means
- Use governance boundaries, not vague policies, to prevent predictable failures
- Design for exceptions, because that’s where accountability gets tested
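To make the principles concrete, here is a minimal sketch of what they can look like as a data model, in Python. Everything in it is illustrative: the names (`DecisionRecord`, `Evidence`, `EvidenceStatus`, `decide`) are hypothetical, not drawn from any particular system.

```python
# A hypothetical sketch of the principles above as a data model: every
# decision names an owning role, drafting and deciding are distinct
# events, and "verified" is a defined state rather than a vibe.
# All names are illustrative, not from any real system.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class EvidenceStatus(Enum):
    UNVERIFIED = "unverified"  # AI-drafted claim, not yet checked
    VERIFIED = "verified"      # checked against a named source by a named role


@dataclass
class Evidence:
    claim: str
    source: str
    status: EvidenceStatus = EvidenceStatus.UNVERIFIED
    verified_by: str | None = None  # a role, not just an individual


@dataclass
class DecisionRecord:
    decision_type: str              # e.g. "customer-commitment", "policy-change"
    owner_role: str                 # who is accountable, by role
    drafted_by: str                 # may be "AI-assisted"; drafting is not deciding
    evidence: list[Evidence] = field(default_factory=list)
    decided_by: str | None = None   # empty until a named role actually decides
    decided_at: datetime | None = None

    def decide(self, role: str) -> None:
        """Convert a draft into a commitment: only a named role may do it,
        and only once every evidence item is verified."""
        unverified = [e for e in self.evidence
                      if e.status is not EvidenceStatus.VERIFIED]
        if unverified:
            raise ValueError(f"{len(unverified)} evidence item(s) still unverified")
        self.decided_by = role
        self.decided_at = datetime.now(timezone.utc)
```

The shape matters more than the specifics: a fluent draft can sit in the system indefinitely, but it cannot become a commitment until a named role verifies the evidence and decides, and the record of who did both survives as an audit trail.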
Latest posts
- Drafting Is Not Deciding
- When Trustees Must Push Back on AI Leadership Narratives
- Judgement as an Operating Model Problem in the Age of AI
- Executive Judgement in AI-Mediated Organisations: Why Leadership Value Hasn’t Moved to the Machine
- Up tempo work
- Judgement ID: JL-16-12-2025-01
- AI Cannot Read Your Mind, Your Prompts Are the Problem
- A Comprehensive Guide to Governance for Small Businesses Considering Agentic AI
- Why Small Businesses Should Embrace AI Now
- Good Cybersecurity is the Best AI Strategy