Richard Hill

Judgement for AI-mediated work


Judgement ID: JL-16-12-2025-01

Status: Decided

Decision owner: Head of Sales

Audience: Sales lead, delivery manager, ops director

Sensitivity: Sanitised

Category: Risk

Judgement statement:

“We will implement a lightweight proposal sign-off and AI-use control workflow before any proposal is sent, because it reduces misrepresentation and GDPR exposure while keeping bid speed, despite adding friction during tight deadlines.”

Context:

A 25-person services consultancy uses AI to draft sales proposals. A sales rep has been copying client information into a free AI tool to improve wording. There is no formal sign-off workflow and bids are time-pressured. A proposal was sent containing an invented capability and an aggressive delivery date, and the client has asked for that commitment in writing.

Trigger event:

Client challenge on an invented capability and delivery date after proposal issuance, creating immediate reputational and delivery risk.

Decision question:

“Given tight bid deadlines, lack of a formal sign-off process, and GDPR and reputational risk constraints, should we continue sending proposals as-is or introduce a minimal control workflow, and why?”

Options considered

Option A: Ban AI use for proposals immediately and require manual drafting until a full process is designed.

Expected upside:

Stops client data being pasted into uncontrolled tools immediately. Reduces risk of AI-generated invention and uncontrolled tone.

Expected downside:

Slower proposal production, higher cost per bid, likely missed deadlines and lower win rate in the short term. Creates pressure for workarounds.

Reversibility: Medium

Second-order effects:

Sales may bypass controls informally. Delivery may still be surprised by commitments because the root issue is sign-off, not only AI.

Option B: Keep AI for drafting but introduce a lightweight “send gate” and data handling rules within 5 working days.

Expected upside:

Targets the failure mode: proposals being sent without verification and delivery sign-off. Maintains speed while adding checkpoints for claims, dates, and GDPR handling. Creates auditability.

Expected downside:

Adds friction and may miss some deadlines during transition. Requires training and enforcement.

Reversibility: Easy

Second-order effects:

Improves sales-delivery alignment. Normalises evidence-based claims and creates a habit of documenting assumptions.

Option C (optional): Centralise proposal sending to one role (Sales Operations) for all bids, with standard templates and mandatory checks.

Expected upside:

Consistent quality control and single point of accountability for what goes out. Easier to enforce data rules.

Expected downside:

Creates a bottleneck and single point of failure. Harder to scale with growth.

Reversibility: Hard

Second-order effects:

Sales may feel disempowered. Proposal cycle time may increase even for simple bids.

Evidence and Reasoning

Known facts (verified):

  • The business has 25 staff and is growing quickly.
  • AI is being used to draft proposals.
  • A sales rep used a free AI tool and included client information.
  • A proposal was sent with an invented capability and an aggressive delivery date.
  • The client has asked for the capability and date commitment in writing.
  • There is no formal proposal sign-off workflow.
  • Constraints include tight bid deadlines, GDPR risk, reputational risk, and pressure to hit monthly targets.

Assumptions (not yet proven):

  • The invented capability and aggressive date were influenced by AI drafting plus inadequate human verification, rather than deliberate mis-selling.
  • A lightweight gate can be implemented without materially harming win rate.
  • Delivery can respond quickly enough to support a fast sign-off on dates and scope.
  • The biggest risk is unverified claims and dates leaving the building, not the mere presence of AI.

Heuristics used (rules of thumb):

  • If it can be used as a contractual lever, treat it as a commitment even pre-contract.
  • Small teams need small controls early, not perfect controls late.
  • Bans create audit gaps when incentives still reward speed.

What we deliberately ignored (and why):

  • Comparing specific AI tools or models, because process control and data handling are the immediate issues.
  • A full sales operations redesign, because the decision is time-sensitive and the organisation is small.

Forecasts and Probabilities

Forecast F1: “95 percent of proposals sent will include completed ‘send gate’ checks by 31-01-2026.”

Probability P = 0.75

Measure: Percentage of sent proposals with a recorded checklist completion stored with the proposal and referenced in the deal record.

Baseline: 0 percent (no checklist currently).

Tripwire / threshold for review: If compliance is below 80 percent by 31-01-2026, review enforcement and checklist length.

Basis: Both

Uncertainty band: “P likely between 0.60 and 0.85”

Confidence: Medium

Forecast F2: “Zero proposals will be sent containing an unverified capability claim between 01-01-2026 and 28-02-2026.”

Probability P = 0.65

Measure: Weekly audit of 10 percent of proposals for capability claims mapped to approved sources (service catalogue or approved case studies).

Baseline: Unknown (no historical audit).

Tripwire / threshold for review: Any single occurrence triggers root cause review within 5 working days and retraining.

Basis: Inside view

Uncertainty band: “P likely between 0.45 and 0.75”

Confidence: Low

Forecast F3: “Median proposal cycle time (draft to send) will increase by no more than 10 percent by 15-02-2026 compared with the baseline measured in December 2025.”

Probability P = 0.70

Measure: Median hours from first draft timestamp to send timestamp for comparable bid types (a calculation sketch follows this forecast).

Baseline: Unknown, to be measured for proposals created 01-12-2025 to 15-12-2025.

Tripwire / threshold for review: If the increase exceeds 15 percent by 15-02-2026, simplify the gate or change sign-off routing.

Basis: Both

Uncertainty band: “P likely between 0.55 and 0.80”

Confidence: Medium
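
As an illustration of the F3 measure, the sketch below computes the median draft-to-send duration from timestamp pairs. The data source and field names ("draft_started", "sent") are assumptions for illustration; in practice the timestamps could come from the document system or the deal record.

```python
# Sketch of the F3 cycle-time measure, assuming each proposal records a first
# draft timestamp and a send timestamp. Field names and example data are
# illustrative assumptions, not the actual tracking schema.
from datetime import datetime
from statistics import median

def median_cycle_time_hours(proposals: list[dict]) -> float:
    """Median hours from first draft timestamp to send timestamp."""
    durations_hours = [
        (p["sent"] - p["draft_started"]).total_seconds() / 3600
        for p in proposals
        if p.get("sent") and p.get("draft_started")
    ]
    return median(durations_hours)

# Hypothetical December 2025 baseline from two comparable bids.
baseline = median_cycle_time_hours([
    {"draft_started": datetime(2025, 12, 2, 9, 0), "sent": datetime(2025, 12, 3, 17, 0)},
    {"draft_started": datetime(2025, 12, 9, 10, 0), "sent": datetime(2025, 12, 10, 12, 0)},
])
```

The 15 percent tripwire then reduces to checking whether the post-gate median exceeds 1.15 times the December baseline.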

Forecast F4: “Client escalations related to proposal accuracy will be zero between 01-01-2026 and 31-03-2026.”

Probability P = 0.60

Measure: Count of client complaints or escalations tagged “proposal accuracy” in the incident log.

Baseline: At least 1 incident (current trigger), prior rate unknown.

Tripwire / threshold for review: Any escalation triggers a review, plus enhanced review of the next 5 proposals over the following 30 days.

Basis: Base rate

Uncertainty band: “P likely between 0.40 and 0.70”

Confidence: Low

Calibration note (optional):

“Recent forecasts tend to be mixed.”

AI Involvement

Where AI was used: Drafting, Summarising

What AI was not allowed to do:

  • Generate or imply capabilities not already approved in the service catalogue or documented case studies.
  • Commit to delivery dates or resourcing without delivery sign-off.
  • Receive client personal data or identifiable client information in free, uncontrolled tools.
  • Produce final proposal text that is sent without a human verification pass and recorded sign-off.

Human checks applied:

  • Sales rep completes a “claims table” listing each capability claim and the approved source (a structural sketch follows this list).
  • Delivery manager approves delivery dates and explicit scope assumptions before sending.
  • Ops director approves any non-standard terms, unusual risk, or data processing statements.
  • Final read-through confirms that commitments and assumptions are clearly labelled.
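
To make the claims table and sign-off record concrete, here is a minimal sketch of the artefact stored with each proposal. The field names, roles, and completeness rule are illustrative assumptions; a spreadsheet row or short form would serve equally well.

```python
# Minimal sketch of a send gate record, assuming each proposal stores a claims
# table plus recorded sign-offs. Field names and roles are illustrative
# assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class CapabilityClaim:
    claim_text: str        # the capability claim as worded in the proposal
    approved_source: str   # service catalogue entry or approved case study reference

@dataclass
class SendGateRecord:
    proposal_id: str
    claims: List[CapabilityClaim] = field(default_factory=list)
    delivery_signoff_by: Optional[str] = None      # delivery manager, recorded before send
    delivery_signoff_at: Optional[datetime] = None
    ops_signoff_by: Optional[str] = None           # only for non-standard terms or data processing statements
    sent_at: Optional[datetime] = None

    def ready_to_send(self) -> bool:
        """Gate rule: every claim is evidence-linked and delivery has signed off."""
        return all(c.approved_source for c in self.claims) and self.delivery_signoff_by is not None
```

The point is the recorded evidence link and sign-off, not the tooling; the same structure works as a shared spreadsheet.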

Key risk introduced by AI: Hallucination

Constraints and Decision Rights

Non-negotiable constraints:

  • No client personal data in free, uncontrolled AI tools.
  • Capability claims must be evidence-linked to approved material.
  • Delivery dates require delivery sign-off.
  • Proposal send requires recorded sign-off.
  • Workflow must remain lightweight enough to meet most bid deadlines.

Decision rights: who decides, who is consulted, who can veto

  • Decides: Head of Sales
  • Consulted: Delivery manager, ops director
  • Can veto: Delivery manager on dates and scope feasibility, ops director on GDPR and reputational exposure

Accountability note: who carries the responsibility if this goes wrong (role)

Head of Sales is accountable for what is sent to clients; delivery manager is accountable for feasibility sign-off; ops director is accountable for data handling controls.

The Judgement and Rationale

Chosen option: B

Rationale:

The failure was a commitment leaving the organisation without the right checks, not simply the use of AI. A lightweight send gate makes capabilities and dates verifiable and makes data handling explicit, while preserving speed for a small team under deadline pressure.

Trade-offs accepted:

We accept added friction and some short-term slowdown to reduce misrepresentation risk, GDPR exposure, and delivery commitments that cannot be met.

Risks and Failure Modes

Top failure modes (ranked):

  1. Workarounds: staff bypass the gate under target pressure.
  2. Rubber-stamping: delivery sign-off becomes perfunctory.
  3. Checklist theatre: boxes ticked without evidence linkage.

Misuse risk (bad precedent / misinterpretation):

Sign-off is treated as blame transfer rather than shared responsibility.

Ethical or fairness concern:

Clients are misled by invented claims or unrealistic dates and make decisions based on misinformation.

Disconfirming Signals

Signals that trigger review:

  • Any repeat incident of invented capability or unapproved delivery dates.
  • Missing checklists or inconsistent artefacts that indicate the gate is being bypassed.
  • Cycle time increase above threshold without compensating quality gains.

Evidence that would change the decision:

  • Repeated audit failures despite training and enforcement, implying the risk cannot be controlled in practice.
  • Sustained material win-rate decline linked to the gate, requiring redesign or alternative control placement.

Tripwires and thresholds:

  • Any repeat misrepresentation incident triggers immediate escalation and temporary enhanced review for 30 days.
  • Compliance below 80 percent by 31-01-2026 triggers workflow redesign and manager accountability actions.
  • Cycle time increase above 15 percent by 15-02-2026 triggers simplification of checks.

Implementation Notes

Actions (owner role + due date):

  • Head of Sales, publish send gate checklist and enforce for all proposals, due 23-12-2025.
  • Ops director, publish approved tool guidance and prohibited data handling list, due 23-12-2025.
  • Delivery manager, define delivery date approval rule set and red lines, due 30-12-2025.
  • Sales lead, create approved capability library and pre-approved proposal text fragments, due 10-01-2026.
  • Operations coordinator, implement tracking and storage convention for checklists and sign-offs, due 30-12-2025 (an illustrative convention follows this list).
  • Head of Sales, run training session for sales staff, due 03-01-2026.
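
As a sketch of the tracking and storage convention, the helper below derives a predictable location for the completed checklist so it can be stored with the proposal and referenced in the deal record. The path pattern and identifiers are purely illustrative assumptions.

```python
# Illustrative storage convention for send gate artefacts. The folder pattern
# and file name are assumptions, not a mandated structure.
from pathlib import Path

def send_gate_path(client_code: str, proposal_id: str, base: str = "proposals") -> Path:
    """e.g. proposals/CLIENT01/P-2026-014/send_gate_checklist.pdf, stored next to the proposal."""
    return Path(base) / client_code / proposal_id / "send_gate_checklist.pdf"
```

A link to this location in the deal record is what makes the F1 compliance measure auditable.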

Comms plan (internal/external):

Internal briefing to sales, delivery, and ops on the new rules and enforcement. External response to the affected client correcting the record and clarifying what is actually feasible, without disclosing internal tools.

Documentation created or updated:

  • Proposal send gate checklist (v1)
  • Approved capability library and evidence sources
  • Delivery date approval rules
  • AI tool and data handling guidance
  • Incident record and corrective client communication note

Outcome Tracking

Metrics chosen:

  • Checklist compliance rate (a measurement sketch follows this list)
  • Median proposal cycle time
  • Audit pass rate for evidence-linked capability claims
  • Number of proposal accuracy escalations
  • Instances of prohibited data handling detected
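
A minimal sketch of how the first and third metrics above could be computed from a simple log of sent and audited proposals. The flag names ("checklist_complete", "claims_evidence_linked") are hypothetical.

```python
# Sketch of the compliance and audit-pass metrics, assuming a simple log of
# proposals. Flag names are illustrative assumptions.
def checklist_compliance_rate(sent_proposals: list[dict]) -> float:
    """Percentage of sent proposals with a recorded, completed send gate checklist (Forecast F1)."""
    if not sent_proposals:
        return 0.0
    complete = sum(1 for p in sent_proposals if p.get("checklist_complete"))
    return 100 * complete / len(sent_proposals)

def audit_pass_rate(audited_proposals: list[dict]) -> float:
    """Percentage of audited proposals whose capability claims all map to approved sources."""
    if not audited_proposals:
        return 0.0
    passed = sum(1 for p in audited_proposals if p.get("claims_evidence_linked"))
    return 100 * passed / len(audited_proposals)
```

The 80 percent tripwire in F1 is then a direct comparison against checklist_compliance_rate at the review date.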

Early results:

Unknown. Baselines for cycle time and historical error rate are not currently measured.

Unexpected effects:

Unknown. Possible displacement of effort from late-stage edits to earlier scope clarification.

Review and Learning

Review date: 15-02-2026

Result: Held

What we learned:

Unknown. The first priority is baseline measurement and auditability; otherwise, success claims will be speculative.

What we’d do differently next time:

Introduce a send gate earlier in the growth curve, before informal AI use becomes normal and invisible.

Public Redactions

What was removed:

Organisation name, client identity, sector specifics, tool names, proposal content, and incident details that could identify individuals.

Why: Privacy / commercial sensitivity

© 2026 Richard Hill