Richard Hill

Judgement for AI-mediated work

Category: Essay

  • Executive Judgement in AI-Mediated Organisations: Why Leadership Value Hasn’t Moved to the Machine

    Artificial intelligence is now embedded in the everyday cognitive environment of senior leaders. Executives operate amid algorithmically generated forecasts, recommendations, alerts, scenarios, summaries, and ranked options. Much of the commentary asks whether this makes decisions faster, smarter, or more consistent. That framing is understandable, but it risks missing the main event.

    The central challenge for executives is not learning AI tools, becoming more “data-driven”, or expanding analytical capacity. It is preserving and governing sound judgement when cognition itself is increasingly distributed across humans and machines. AI does not remove uncertainty, responsibility, or value conflict. Instead, it changes the conditions under which judgement is exercised, defended, and institutionalised.

    A useful way to see this is to start from a deliberately unfashionable claim: executive judgement is the primary unit of executive value in AI-mediated organisations. Tools can speed up analysis. Models can improve prediction. But no amount of optimisation can tell an organisation what it ought to do when aims conflict, when evidence is incomplete, when consequences are irreversible, or when legitimacy matters as much as performance. Those are the permanent features of executive work. AI reshapes how these features present themselves, but it does not eliminate them.

    Bounded rationality has moved

    Herbert Simon’s account of bounded rationality remains a solid foundation for understanding executive decision-making. Executives do not optimise across all alternatives with full information. They satisfice, using processes that are “good enough” under constraints of attention, time, uncertainty, and limited cognitive capacity. Contemporary AI discourse often treats these constraints as computational problems to be solved: more data, more processing, broader search, better predictions.

    But bounded rationality is not just about computation. It is a structural condition of decision-making under uncertainty and responsibility. AI may expand analytical reach, yet it introduces new constraints and new risk surfaces. Executives now face governance demands that did not previously exist: interpreting probabilistic outputs; judging whether a model’s domain of competence matches the current situation; managing drift and contextual misalignment; and justifying machine-influenced decisions to boards, regulators, and stakeholders. The satisficing threshold becomes multi-dimensional. Decisions must be not only effective, but also defensible, explainable (to the extent possible), and accountable.

    So AI doesn’t dissolve bounded rationality. It relocates it. The bottleneck shifts from “can we analyse enough?” to “can we govern what this analysis is doing to our decision system?”

    Sensemaking is the real battleground, and AI participates in it

    If Simon explains the limits of optimisation, Karl Weick explains something even more important for executive work: executives do not merely choose among options, they construct the situations to which they respond. Sensemaking is continuous, social, and oriented toward plausibility rather than objective completeness. People act on the story that feels coherent enough to coordinate action.

    AI systems now shape this story-making process. They do not merely present information. They generate framings: summaries, narratives, ranked priorities, recommended actions, and “what this means” interpretations. They influence salience (what gets noticed), urgency (what feels pressing), and plausibility (what seems like the obvious conclusion). And they do this persistently, not episodically. The executive is not just consulting a tool at decision time; the organisation is living inside a stream of machine-shaped attention and interpretation.

    This matters because sensemaking is upstream of “the decision”. If an AI system stabilises one framing too early, the organisation can converge prematurely. If it privileges what is easily measured, difficult-but-important considerations can be squeezed out. If it produces fluently written explanations, it can create an illusion of coherence that substitutes for scrutiny. When AI becomes an always-on participant in meaning construction, governing judgement becomes inseparable from governing the cognitive environment.

    Bias doesn’t vanish. It migrates across the human–machine boundary.

    A persistent misunderstanding is that AI reduces bias by replacing human judgement with statistical rigour. Research on heuristics and biases shows why this is naïve. Human judgement relies on heuristics that are adaptive under uncertainty, but they create systematic distortions. The tempting story is that AI corrects these distortions.

    In practice, bias doesn’t disappear. It relocates and recombines. It enters through problem formulation, data selection, model objectives, defaults, interface design, prompt choices, and interpretation of outputs. It also interacts with organisational incentives and human cognitive habits in ways that create systemic failure modes rather than individual errors.

    Two familiar patterns illustrate the problem. The first is automation bias: people defer to machine recommendations even when context suggests caution. The second is selective scepticism: people distrust the machine only when it conflicts with their prior beliefs, while embracing it as “objective” when it supports them. In both cases, the bias is no longer located neatly in a person’s head. It is distributed across a socio-technical system.

    This is why “debiasing training” for executives is a weak intervention. Bias in AI-mediated contexts is often embedded in workflows, defaults, and organisational routines. Likewise, technical interventions like explainability features or fairness metrics help, but they only touch fragments of a broader epistemic governance problem: how an organisation decides what to believe, what to ignore, and what to act on.

    Expert judgement isn’t option-comparison. It’s pattern recognition plus mental simulation.

    Naturalistic Decision Making research reinforces another awkward reality: experienced decision-makers rarely choose by comparing options in a neat analytic spreadsheet. In high-stakes environments, experts recognise patterns, generate a plausible course of action, and mentally simulate consequences to test feasibility. Alternatives are considered mainly when the first approach fails the simulation.

    This is not irrationality. It’s a sophisticated adaptation to time pressure and complexity. And it has big implications for AI. If executives are fundamentally operating through recognition and simulation, then AI’s most natural value is not “make the decision” but “extend the simulation”.

    Used well, AI can surface edge cases, generate counterfactuals, reveal second- and third-order consequences, or stress-test assumptions. It can act as a cognitive simulator that enriches the executive’s mental models. Used badly, it can swamp intuition with noise, or lock the organisation into historically dominant patterns by privileging what the training data makes salient.

    The key is that this interaction is governance-dependent. The outcome hinges less on raw model performance and more on how leaders calibrate trust, integrate outputs into deliberation, and preserve the ability to say: “This situation is outside the model’s competence.”

    Practical wisdom and responsibility are non-transferable

    Even if AI became dramatically better at prediction and recommendation, executive accountability would not change. Executive judgement is not purely cognitive; it is normative. Decisions commit the organisation to action under uncertainty, with consequences for people, resources, and futures. That is responsibility-bearing work.

    The classical language for this is practical wisdom (phronesis): deliberating well about what ought to be done in specific circumstances where rules are incomplete and values conflict. Technical systems can generate analysis. They cannot bear responsibility, justify decisions in moral terms, reconcile competing goods, or absorb blame when harm occurs. Institutions still hold humans accountable. More importantly, organisations still need humans to decide what kind of organisation they are trying to be.

    This is where much “AI governance” talk becomes too thin. Compliance frameworks and technical safeguards matter, but they are not a substitute for judgement. They can reduce certain classes of harm while leaving the core executive task untouched: deciding what to do when the formal criteria are insufficient or conflicting.

    The AI-Augmented Executive: a governance construct, not a tooling narrative

    Taken together, these strands point toward a reframing: executive capability in AI-mediated environments is best understood as the governance of distributed cognition.

    This is the heart of the “AI-Augmented Executive” construct. The AI-augmented executive is not simply someone who uses AI tools well, writes good prompts, or adopts analytics enthusiastically. It is an executive who retains responsibility for consequential decisions while deliberately governing how AI participates in organisational judgement.

    Four responsibilities follow.

    Governing cognitive boundaries. Leaders must decide where AI is epistemically valid and where it is not, especially under novelty, regime shift, or moral ambiguity. This includes knowing when to treat AI outputs as tentative hypotheses rather than as actionable conclusions.

    Governing sensemaking. Leaders must manage how machine-generated framings shape organisational narratives, salience, and closure. This includes protecting interpretive plurality, creating room for dissent, and preventing fluency from masquerading as truth.

    Governing bias and failure modes. Leaders must treat bias as systemic, not personal. The task is to design processes that detect and counterbalance bias propagation across data, defaults, prompts, incentives, and interpretation.

    Retaining accountability. Leaders remain answerable for outcomes and for the cognitive environment that produced them. In AI-saturated settings, this means being accountable not just for a decision, but for the socio-technical decision system.

    None of this implies that AI is unhelpful. It implies the opposite: AI is powerful enough to change the shape of judgement work. The risk is not “bad AI” in the simplistic sense. The risk is organisational decision-making drifting into a state where commitments occur by accident, responsibility becomes untraceable, and machine-shaped narratives quietly replace deliberation.

    A closing implication: stop treating AI as a decision upgrade

    Many organisations treat AI adoption as a competence story: train people, buy tools, improve speed and output. That approach can deliver local efficiency and still degrade judgement quality. It can create faster decisions with weaker accountability, more confident narratives with less epistemic humility, and broader analytics with narrower sensemaking.

    A more serious posture is to treat AI as a structural change in the organisation’s cognitive architecture. That demands governance, not just capability-building. It demands attention to decision rights, epistemic risk, interpretive discipline, and the design of workflows that keep responsibility visible.

    In an age where analysis is cheap and fluent text is abundant, judgement becomes the scarce resource again. That is not a romantic claim about human exceptionalism. It is the boring, durable reality of leadership: deciding what to do when the world refuses to become tidy, and owning the consequences when it doesn’t go to plan. 

  • Up tempo work

    Generative AI has quietly changed the tempo of work. Not in the headline places. In the boring places. Email, agendas, briefing notes, drafts of policies, draft replies to customers, draft performance notes. Stuff that used to take just enough effort to force a pause. 

    Now the pause is optional. That sounds like a productivity win. It is, sometimes. It’s also a governance problem wearing a productivity moustache.

    Because when drafting becomes effortless, organisations start committing to things without noticing. The thing that used to be “a draft” becomes “the decision”, because it reads cleanly and moves fast. 

    The claim

    The biggest leadership risk in the AI era is not that AI will make leaders obsolete. It’s that AI will make commitment too cheap, and organisations will confuse fluent drafting with actual decision-making.

    What would change my mind? Evidence that teams using AI heavily can consistently show (a) clear decision rights, (b) reliable escalation paths for exceptions, and (c) an audit trail that explains who owned what when it mattered, without slowing everything to a crawl. Not a policy. Actual practice.

    What I think is going on

    The current leadership narrative goes something like: AI can draft, but it can’t lead. Leaders must provide context, set guardrails, build trust, show judgement, and so on. 

    All true, in the abstract. But it misses the mechanics.

    AI doesn’t “replace leadership”. It changes the surface area of leadership. It pushes leadership into thousands of micro-moments, distributed across the organisation, where people are generating text and making commitments at speed. And those micro-moments are exactly where decision rights usually get fuzzy.

    So the right question isn’t “Can AI lead?” The question is: Where are decisions being made by accident, because text became cheap?

    The part people get wrong

    A lot of writing about AI leadership leans on “guardrails (clear values and decision rights)” as if saying it makes it real. 

    But guardrails are not values on a slide. Guardrails are a working control system:

    • which decisions exist (not “be responsible”, actual decisions)
    • who owns them
    • what counts as an exception
    • what must be escalated
    • what evidence is required before committing
    • how you find out when people bypass the route

    If you can’t answer those in plain English, the “guardrails” are vibes. Vibes do not survive contact with the inbox.

    A cleaner mental model

    McKinsey frames a shift from “command” to “context”.  I mostly agree, but I’d sharpen it:

    Leadership is moving from “deciding” to “designing decision conditions”.

    That means your job is to design the conditions under which other people, often using AI, can make decent calls under time pressure, without turning the organisation into a liability farm.

    Concrete example: customer support.

    AI helps a support agent draft a reply in 30 seconds. The model is good at sounding helpful. It will often over-promise because over-promising sounds helpful. A human who’s tired, new, or keen to close tickets can hit send.

    Now you’ve got an implied contract. Delivery teams inherit a mess. Finance gets dragged into refunds. Nobody can say whether this was an authorised exception or an accidental commitment.

    The fix isn’t “tell agents to be careful”. The fix is to explicitly separate:

    • drafting authority (anyone can draft),
    • commitment authority (only named roles can approve terms, money, timelines, exceptions),
    • release control (what must be checked before “send”, and who checks it).
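    As a thought experiment, that separation can be sketched in code. Everything below is illustrative: the role names, the trigger phrases, and the `release` gate are assumptions invented for this sketch, not a description of any real tool or vendor feature.

    ```python
    # Hypothetical sketch: separating drafting authority from commitment
    # authority in an outbound-message workflow. Roles and trigger phrases
    # are illustrative assumptions, not a real system's API.

    from dataclasses import dataclass

    COMMITMENT_ROLES = {"team_lead", "account_manager"}  # who may approve terms

    # Phrases assumed to turn a draft into a potential commitment
    COMMITMENT_TRIGGERS = ("refund", "discount", "deadline", "guarantee", "we will")

    @dataclass
    class Draft:
        author: str
        text: str
        approved_by: str | None = None

    def needs_commitment_approval(draft: Draft) -> bool:
        """Release control: does this draft create an obligation?"""
        lowered = draft.text.lower()
        return any(trigger in lowered for trigger in COMMITMENT_TRIGGERS)

    def release(draft: Draft, approver_role: str | None = None) -> bool:
        """Anyone can draft; only named roles can release a commitment."""
        if not needs_commitment_approval(draft):
            return True  # routine message, no commitment language detected
        if approver_role in COMMITMENT_ROLES:
            draft.approved_by = approver_role  # record who owned the call
            return True
        return False  # blocked: commitment language without commitment authority

    routine = Draft("agent-17", "Thanks for your patience, we're looking into it.")
    risky = Draft("agent-17", "We will refund you in full by Friday.")

    assert release(routine) is True
    assert release(risky) is False                      # blocked without sign-off
    assert release(risky, approver_role="team_lead") is True
    ```

    The point of the sketch is the shape, not the keyword list: drafting stays open to everyone, while anything that smells like a commitment routes through a named role and leaves a record of who approved it.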

    That’s governance. Not glamorous. Very effective.

    Judgement is not a personality trait

    McKinsey says leaders must demonstrate judgement, aligning choices to values, because AI is advisory not authoritative. 

    Yes. But “judgement” as a leadership trait is too squishy to operate at scale. I used to treat judgement as something you either have or you don’t. Now I think judgement is a system property as much as a personal one.

    Judgement shows up in:

    • what evidence is required before acting
    • whether uncertainty is made visible or papered over
    • how exceptions are handled
    • whether reversals are allowed without punishment
    • whether you can trace a decision back to a person, a rationale, and a timestamp

    If your operating environment rewards speed and punishes hesitation, you’ll get confident nonsense. AI just helps you generate it faster.

    Creativity, but make it accountable

    McKinsey argues leaders must design for nonlinear outcomes, not “20 percent better” but “10 times better”, and that humans must frame the problem, invite dissent, and hold the creative line. 

    Again, broadly right. Here’s the catch: AI makes it easy to produce ten options, which can create the illusion of creativity while reducing actual thinking. You get a pile of plausible outputs and nobody wants to be the boring person who asks, “What are we optimising for?”

    So I treat creativity work with AI like this:

    1. Write the brief like a contract. What is in scope, out of scope, what constraints are real, what success looks like, what failure looks like.
    2. Force one hard trade-off. Speed vs accuracy. Cost vs user harm. Personalisation vs privacy. Pick one. Make it explicit.
    3. Require a dissent paragraph. Not “risks”, a genuine counter-argument. If the best dissent you can write is weak, you probably don’t understand the space.
    4. Name the decision owner. The person who is on the hook when the shiny idea breaks.

    That’s how you get novelty without random motion.

    Where this breaks

    A few objections are fair.

    “This is too heavy for small teams.”

    If you try to build a full enterprise control framework, yes. But decision rights can be lightweight. A one-page “commitment map” is often enough to stop the worst mistakes.

    “We move too fast to add process.”

    You’re already paying for process. You’re just paying after the fact, in rework, customer fallout, HR pain, and fire drills. The question is where you want to spend your admin budget: before or after damage.

    “But leaders do need softer skills, trust, empathy, learning culture.”

    Agreed. The McKinsey piece makes a strong case for learning loops like premortems and after-action reviews. I'm not arguing against the human stuff. I'm arguing that the human stuff fails without mechanics. Trust doesn't scale by declaration. It scales when people can predict how decisions get made and how exceptions are handled.

    “AI tools can be configured to prevent this.”

    Sometimes. But configuration is still a governance choice. Who decides the rules? Who can override? What gets logged? Same problem, new wrapper.

    What I’d do if I were responsible

    • Map “commitment moments”. Where can someone, with a draft, create an obligation? Email, proposals, HR notes, customer replies, invoices, policy statements, procurement requests.
    • Define three decision classes.
      1. routine, can be auto-approved
      2. exception, needs named sign-off
      3. high-stakes, needs a second human and a record
    • Write a “draft vs decision” rule into workflows. Not training slides. Actual steps. If it matters, it gets reviewed.
    • Require minimal decision logs for exceptions. Two minutes, not a dissertation: what changed, why, who approved, what evidence, what you’ll check later.
    • Run one premortem per month on an AI-assisted process. “Assume this went wrong. How?” Then fix the top two failure modes. 
    • Protect leadership attention for inflection points. McKinsey cites the example of a CEO keeping 20 percent of the calendar empty. The principle is right: protect time for the moments where judgement actually sits.
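    For what it's worth, the three decision classes and the two-minute log are simple enough to sketch. The thresholds, field names, and classification policy below are invented for illustration; any real version would encode your own commitment map, not these numbers.

    ```python
    # Hypothetical sketch of the three decision classes and a minimal
    # decision log. Thresholds, class names, and fields are illustrative.

    import json
    from datetime import datetime, timezone

    ROUTINE, EXCEPTION, HIGH_STAKES = "routine", "exception", "high_stakes"

    def classify(amount: float, is_exception: bool) -> str:
        """Assumed policy: money at stake and exception status drive the class."""
        if amount >= 10_000:
            return HIGH_STAKES      # needs a second human and a record
        if is_exception or amount > 0:
            return EXCEPTION        # needs named sign-off
        return ROUTINE              # can be auto-approved

    def log_decision(what_changed: str, why: str, approved_by: str,
                     evidence: str, follow_up: str) -> str:
        """Two-minute decision log: who owned what, when, why, and with what evidence."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "what_changed": what_changed,
            "why": why,
            "approved_by": approved_by,
            "evidence": evidence,
            "follow_up": follow_up,
        }
        return json.dumps(record)

    assert classify(0, False) == ROUTINE
    assert classify(500, True) == EXCEPTION
    assert classify(25_000, True) == HIGH_STAKES
    ```

    The log is deliberately tiny: five fields and a timestamp is enough to answer "who owned what when it mattered" without turning every exception into a dissertation.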

    Close

    I’m watching one thing more than anything else: whether organisations can keep the speed benefits of AI while making commitments harder to do by accident.

    Because that’s the new baseline. Drafting is cheap. Accountability is not. If you don’t design for that, your “AI transformation” will mostly be an expensive way to manufacture confident errors faster.

  • Essay: Managing research and teaching

    Abstract

    This article explores the challenges of managing research and teaching in UK Higher Education by examining the variability of the boundaries that are drawn around such spaces. Changing policy in the UK is provoking Higher Education Institutions to respond in different ways to the emergence of quasi-, and ultimately, free-market conditions. In particular, we examine how differing management and leadership cultures, namely managerialist and collegial, can impose greater or fewer constraints upon the management of research and teaching, as both discrete and combined activities. Furthermore, the potential interplay between research and teaching is examined with a view to exploring a new model of university management that has departmental leadership as a core component of a more de-coupled strategy. Finally, we consider the implications of such thinking for institutional management and leadership, and conclude that the emerging complexity in the UK HE sector demands a more adept leadership culture, one that embraces emergence and develops a holistic understanding of research and teaching.

    1 Introduction

    This article considers the management of research and teaching in terms of the constraints that are often imposed upon each set of activities. This is a complex, challenging issue for university managers, and ultimately, institutional leaders. Firstly, a brief synopsis of relevant events in the development of Higher Education in the UK is discussed, to set the context for the rest of the discussion. Pertinent concepts are then described, before the limits upon the management of research and teaching are explored. Finally some implications for University management are described. We begin by considering how the United Kingdom (UK) Higher Education (HE) sector has been developing of late.

    2 The higher education context

    Universities have been considered to be collegial institutions, consisting of scholarly academics who create and disseminate new knowledge. That knowledge is imparted upon a community of students, who after a period of time, acquire a degree and move on within the wider economic community. The scholarly pursuit, perhaps as a means in itself, would be a key motivation in such an environment. Government policies that apportion funding to universities, would insulate a Higher Education Institution (HEI) from accounting for its activities, unlike private industry that needs to create financial profit now and in the future.

    Those who are employed in a UK HEI understand that this halcyon description lies some distance from the reality. For some time now, UK Government policies have steadily influenced HEIs by imposing different sets of conditions upon how a university might function. An emerging need to demonstrate that public funds are being spent wisely and appropriately has led to substantial effort being expended upon the quality of an HEI’s provision. The UK Quality Assurance Agency (QAA) has substantial influence over the way in which a university manages its processes, and when this is combined with strictly enforced guidelines from the Higher Education Funding Council for England (HEFCE), a university can find itself needing to react to these constraints.

    The objectives of HEI management thus become more defined than the traditional, nebulous pursuit of knowledge. Queries from funding bodies require managerial systems to provide the requisite information. Activities that were once undefined become scrutinised in terms of resource consumption, and in terms of whether the activity itself provides ‘value’. Indicators of performance become more overt, with league tables appearing that rate institutions on their relative results for teaching, research, employability and ‘student experience’. The recent trend towards the introduction of partial fees, and latterly whole fees (albeit capped at certain thresholds at the time of writing), has introduced a quasi-market environment in which UK HEIs function (Le Grand and Bartlett, 1993).

    The requirement to report upon performance sharply opposes the more collegial culture of HEIs. As institutions begin to focus upon the minutiae and install systems to manage performance, efficiencies in the way individual staff work are immediately called into question. Activities that were once regarded as part of the norm are now identified as wasteful or redundant when considered at the micro level. Staff find that, as a direct consequence of the systems being measured, their own performance is assessed and reported, leading to an implicit pressure to do more with less (Smyth, 1995; Cuthbert, 1996). In times when external funding is reduced, that implicit pressure becomes explicit, as academic managers direct and control the activities and working conditions of academic staff (Trowler, 1998).

    From a cultural perspective, there appear to be HEIs that are more ready to accept managerial practices than others. Pratt (1997) identifies a general polarisation between institutions that existed before 1992 and those that were formed after it. Universities that existed prior to 1992 had traits of a more collegial culture; a model of governance, rather than command-and-control management, was more evident in their daily operations. Conversely, as polytechnic institutions became able to use the title of university post 1992, the traditional bureaucracy associated with Local Authority management tended towards a more actively managed culture, though not to the extent of a private company. McNay (1995) observed that the generalised differences across this bipartite split in the sector have started to diminish in the light of changing funding policy. Specifically, both parts of the sector now operate under the same funding regimes, and are observed and reported upon by identical agencies such as the QAA.

    3 Managing operations

    As the HE quasi-market has developed, universities have undergone transformations in an attempt to adjust to the more explicit demands that are placed upon them. The increased desire to act rationally is one example of how internal decision making has been affected by economic pressures.

    University managers have used private sector management approaches as inspiration for their re-interpretation in the HE sector, which is often referred to as new managerialism (Reed and Anthony, 1993; Clarke and Newman, 1994; Deem, 1998). We now consider the two most significant spaces within HE, research and teaching, and explore the limits by which pertinent activities within those spaces can be managed. First of all, we shall consider the research space.

    3.1 Managing research spaces

    To understand the constraints of research requires some understanding of what research is, if only to clarify its distinction from teaching. For the purposes of this discussion we assume some basic definitions from Bushaway (2003) as follows:

    • Research. Using a systematic process of enquiry to undertake some original investigation, leading towards new knowledge or new understanding.
    • Research leadership. Understanding the research context, setting goals and enabling research to be directed.
    • Research management. The control and coordination of research activities to ensure their correct operation.
    • Research coordination. Managing resources in relation to research objectives, maintaining appropriate accountability within a university.
    • Research planning. The creation of a research strategy that is congruent with the aims of the university.
    • Research support. Creating and maintaining an environment in which research activities can flourish.

    Furthermore we assume that research is funded by an external source, and therefore other forms of research activity that a university will typically undertake, such as scholarship (Dearing, 1997), the application of knowledge, and the development of learning and teaching materials, will not be considered within the scope of this discussion.

    The management of research requires an appreciation of project and financial management, quality assurance, logistics, human resources, administration, marketing and networking (Bushaway, 2003). Since it is externally funded, key stakeholders demand that progress be reported and results evaluated. All of these tasks must also be auditable. Thus, the assessment of performance is an important activity in the management of research, and whilst research might be considered a creative discipline, there is much that must be managed if it is to be a sustainable income stream for a university.

    It is the creative part of research, however, that is most affected by the need to manage and account for research performance. As funding councils and bodies demand more tangible evidence of ‘impact’, whether social or economic, research is ultimately shaped by the thrust of evaluation. ‘Blue sky’, high-risk, high-reward research is becoming increasingly difficult to conduct, as funders become more prescriptive in their desire for evidence.

    As such, whilst funded research generally lends itself to managerial activity, since its measures are well-defined and apportioned to a finite budget, the very requirement to demonstrate tangible outcomes limits opportunities to take risks and conduct truly innovative investigation.

    To summarise, externally funded research is actively managed and sits comfortably in an environment that measures, monitors, reports and manages performance. Management of the creative aspect is somewhat different and thus presents a boundary beyond which management activity is less productive and may even harm outputs.

    3.2 Managing teaching spaces

    At first sight, the management of teaching spaces would seem to be determined by finite sets of resources: facilities, staff, programme timetables, specialist equipment, the length of a module or programme, and so on. Within this there is the knowledge capability of each staff member (what they can teach), and the interplay between different subjects upon a learner’s (and an academic’s) timetable. For example, an academic may teach two closely related modules while another teaches three disconnected subjects, with a clear difference in workload between the two situations. Another, discrete constraint is how much time staff can make available: teaching duties assume the inclusion of other activities that are distinct from teaching itself, such as administration, pastoral care, attendance at departmental meetings, marketing and open days.

    The management of these constraints often centres on a normative currency, typically time-related and expressed as a number of ‘contact’ hours. Contact refers to the number of hours a tutor spends with students face to face; immediately, this does not take account of electronic interactions and communication, which, as technology becomes ever more pervasive, form an increasing part of the academic’s working life. Using the currency of contact, systems emerge whereby other activities are converted into ‘contact hours’, so that they can be included as part of an overall assessment of an individual’s workload. The manifestation of all the teaching constraints may result in a delivery norm of, for example, a 1-hour lecture and 2-hour tutorial per week, per module.

    The interpretation of this varies with management style, as well as with the characteristics of the academics being measured. Such styles range from trust-based, laissez-faire approaches through to more prescriptive models that attempt to account for all activities. The reporting of teaching outcomes is challenging, since it is considered to be largely based upon qualitative data, yet there is often a demand to report it quantitatively in order to ‘benchmark’. The evaluation of teaching itself is a complex topic, especially when we consider the ethical constraints that are imposed upon studies of teaching practice. Dearlove (1997) argues that resource-constrained teaching activities can be managed effectively, but that the remainder can only be facilitated.

    Thus, the management of teaching (and teaching-related activities) is often interpreted as the management of performance and culture, in response to the conflicting demands of the external HE environment discussed earlier. In contrast to teaching delivery, scholarly activities are more nebulous to account for: there is a tendency either to assume that an academic makes a professional judgement as to the hours they invest, or to use a nominal block of contact hours (referred to as self-managed time, falling outside teaching periods) to represent the workload. There may of course be discrete activities, such as writing an academic article, authoring a book, writing a funding bid or conducting a scientific experiment, for which some attempt can be made to forecast the time required.

    In particular, a pedagogic experiment may be part of externally funded work, where constraints were imposed at the design and planning stages of the bid application. Such work may be assumed to be more clearly defined.

    As such, the complexity of the teaching role means that significant portions of the workload are variable in both scope and size, and challenging to account for. How does an academic manager assess the teaching quality of an academic? Candidate measures might be the number of complaints received, the average grade profile of the student cohort, or the overall student satisfaction reported in an end-of-module questionnaire.

    However, all of these measures are open to manipulation, and they can also be considerably influenced by external factors, meaning that they cease to be reliable. For instance, a Key Performance Indicator (KPI) for grade performance (the percentage of students who achieve 2:1 honours or above) does not take account of the ability of a particular cohort. In a climate where students are demanding more specialist programmes, smaller cohorts will exhibit more volatile performance statistics. This complexity, and the arguments within it, create an extremely challenging environment for the academic manager of teaching spaces. Academic staff understand only too well the relative difficulty of attaching measures to teaching quality, satisfaction, retention, progression and achievement. Such understanding leads to frustration and tension when the measures report adverse conditions that may be beyond the influence of the staff; staff may also respond to measurement by behaving strategically.
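    The point about cohort size can be made concrete with a small simulation (the 60% 'good degree' probability, cohort sizes and trial count are illustrative assumptions): two cohorts of identical underlying ability show very different year-to-year variation in the grade KPI purely because of sample size.

    ```python
    import random

    random.seed(0)

    def kpi_spread(cohort_size, p_good=0.6, trials=2000):
        """Simulate the '% achieving a 2:1 or above' KPI for many cohorts of
        the same size and underlying ability (each student independently has
        probability p_good of a good degree); return the standard deviation
        of the KPI across those simulated years."""
        kpis = []
        for _ in range(trials):
            good = sum(random.random() < p_good for _ in range(cohort_size))
            kpis.append(100.0 * good / cohort_size)
        mean = sum(kpis) / trials
        return (sum((k - mean) ** 2 for k in kpis) / trials) ** 0.5

    # The KPI swings far more year to year for a small specialist cohort than
    # for a large one, even though teaching quality is identical in both.
    print(round(kpi_spread(15), 1))   # roughly 12-13 percentage points
    print(round(kpi_spread(200), 1))  # roughly 3-4 percentage points
    ```

    A benchmark that penalises the small cohort's bad years is therefore measuring sampling noise, not teaching.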

    However, there is clearly a conflict between the limited ability to measure, monitor and control sizable aspects of the teaching role, and an environment that demands its effective management.

    4 Managing research and teaching spaces together

    Whilst each space has its own constraints, there are also limits imposed when the two activities are combined. Indeed, universities need to consider the two spaces not only as separate entities, but also as the fundamental constituent activities of an HEI. It follows that the complexities of managing the spaces separately are further compounded when they are brought together.

    The character or self-perception of an institution may impose constraints upon these activities. The simplest example is whether an institution regards itself as research-intensive or teaching-intensive. Since universities are typically large organisations composed of smaller units, the relative achievements of a particular unit may appear to be at odds with the overall perception of the institution.

    For instance, a small department with aspirations to improve its research outputs and reputation may decide to submit grant applications, and will therefore actively promote the inclusion of research in its strategic plan. In a teaching-intensive university there may be countless hurdles to overcome, since operations will tend to reflect the predominant activities, which may not be conducive to funded research.

    Dedicated research administration and support may not exist, for instance, or may have insufficient capacity for certain types of project. Academic staff time will not formally be available, since the HEFCE funding received is restricted to teaching duties and is not for the pursuit of research monies. As a consequence, staff may invest their own time to write bids until they achieve their first successful grant. This grant is then used to 'buy them out' of teaching, or in other words, to spend less time with students. This behaviour reinforces the divide between research and teaching, especially when teaching colleagues see research-active colleagues' careers progress at a greater pace.

    Loosely coupled departmental structures, together with collegial tendencies, might be ideal conditions for teaching excellence. They are, however, environmental conditions that are less than ideal for the monitoring of measurements such as costs. Additionally, they are tolerant of poor teaching quality, since it is difficult to directly challenge and manage performance that is below what is expected. Clearly, in an age where the control of costs is mandated by external free-market or quasi-market forces, a boundary must be placed upon laissez-faire cultures (at the potential expense of teaching and research quality). Conversely, managerialist practices can stifle creativity and engender educational approaches based on training models, rather than fostering learning through exploration and the creation of new knowledge.

    Many academic staff feel that a hard boundary exists between the research and teaching spaces, even though staff may be expected to contribute to both in pursuit of the university's mission. One reinforcement of this boundary is the differential in funding between the two activities.

    Resources for teaching have been steadily reduced and replaced with systems for Quality Assurance (QA). These systems are discrete from teaching activity and have served to considerably increase the administrative workload of academic staff (J.M. Consulting Ltd, 2000), whilst demonstrating no obvious support for research (Brown, 2002). QA systems are essentially managerial, causing tension when the activity to be observed does not lend itself to direct comparison with 'benchmarks'.

    There is an irony that, after successive years of research funding being awarded through the UK Research Assessment Exercise (RAE, now the Research Excellence Framework, REF), teaching-intensive universities, which have generally not achieved the research esteem of research-intensive universities, are now motivated to acquire esteem and compete for funding with the HE sector at large. The motivation for this change in behaviour has been, in part, the publishing of university league tables, such as that of the Guardian newspaper (http://www.guardian.co.uk/education/table/2011/may/17/university-league-table-2012), which attempt to indicate the relative performance of institutions against each other. Having a value greater than zero in the research column is one strategic way of propelling an institution further up the league table.

    However, this change in strategy means that tensions that may have existed amongst academics fighting to continue their own research against the backdrop of a full teaching workload must now become more exposed within departments, faculties and ultimately the institution itself.

    Clearly, institutions that decide to undergo a transformation have made a conscious choice to re-engineer their culture, and how that culture is managed. This has implications for the university management, who must recognise the limits of research and teaching management, both separately and together, with a view to pursuing a successful strategy.

    5 Implications for university management

    Within the quasi-market (Le Grand and Bartlett, 1993) of UK HE, external factors such as reduced funding, quality-assurance compliance and performance reported through league tables and student satisfaction (the National Student Survey) mean that HEIs need to manage and improve performance. As discussed earlier, the management of research and the management of teaching expose different limits.

    Research has a history of having to be accountable to external stakeholders, and its management has therefore developed on a more rational basis. Teaching, however, has been funded differently, in a way that has in the main insulated expenditure from free-market volatility. When cuts in teaching funding are announced, they are typically met with some objection. Within this funding model, certain activities relating to the consumption of resources are straightforward to manage.

    Some aspects, generally related to the quality of teaching and the 'learning experience', are more difficult. Performance management in university culture is a very challenging topic, and one with significant implications for HEI management.

    The first implication is that the university must have a clear understanding of its purpose. The categorisation of 'research-intensive' and 'teaching-intensive' will become less relevant as institutions attempt to performance-manage both research and teaching in response to external measurements.

    Indeed, institutions that still have collegiate approaches to managing teaching, alongside managerialist approaches towards research, may have a more challenging time in the emerging marketplace. Post-1992 institutions, with histories of bureaucratic teaching and QA management, may more readily adopt the disciplines of managed research activity.

    However, the managerialist approach is essentially 'top-down', and this presents a risk that the collegial, creative environment in which ideas emerge and flourish will be silenced by KPIs and committees.

    Shattock (2003) argues that the environmental conditions for change are more likely to exist in a university that fosters a holistic, emergent approach to strategic management. Since research and teaching are the two fundamental constituents of an HEI, university management must consider the institution's strategy in a holistic manner. This contrasts with institutions that have separate research and teaching strategies, with no obvious links between the two (Gibbs, 2002), managed by separate Pro Vice-Chancellors.

    Henkel (2000) advises that the identities of institutions have developed over a long period, and they may therefore offer considerable resistance if the future appears to be fundamentally different. Even if research and teaching are perceived as separate islands within an institution, the creation of explicit, positive links between the two is not easy to manage (J.M. Consulting Ltd (2000), referred to in Locke (2004)). Dearlove (1998) suggests that a close understanding of how the culture functions, especially its strengths, will be instrumental for university leadership during a period of transformation.

    From an institutional perspective there should be strategies for both research and teaching. The implementation of these strategies is less complex, however, if there are explicit links between them; separate PVCs with disconnected strategy documents only create difficulties for departmental management. There should therefore be explicit, appropriate links between the two strategy documents (if one unified strategy is a bridge too far), indicating their mutual contribution to the mission of the institution. For instance, 'teaching informed by research activity' is as important a statement as 'the processes of research informing the teaching'. Scholarship is too broad and contested a term to be the only documented nexus (Neumann, 1994) between teaching and research, and relying on it assumes that it is interpreted consistently across all functions.

    Therefore, the facilitation of an emergent environment, in which the holistic strategy is described by the university's executive and then interpreted and operationalised by departmental units, should be a key aim for an HEI. A university with a culture of flexibility, able to adapt to emerging trends, will be better placed to accommodate medium-term transformational objectives, such as engaging with funded research for the first time.

    Understanding the core purpose can then set the scene for departments to scrutinise their own means of achieving the institution's goals. To prevent departments from crudely interpreting the university mission, there is an implication for the institution's Human Resources function, which must address the historical disparity between the careers of research-active academics and teaching academics (Locke, 2004). A related matter is recruitment; institutions may choose to be more selective in the appointment of new staff, to align better with the emerging values (Locke, 2004).

    The adoption of an emergent approach means that leadership should not be confined to the senior management tiers. For departments to be able to interpret the institution's goals, and thus develop their own strategic response, leadership roles must also be cultivated at departmental level. These leaders will manage, support and facilitate (Middlehurst and Kennie (2003), referred to in Locke (2004)) the real agents of change – the academics – in order to develop responses to the tensions between research and teaching, the core components of university operations. This may inform the conversations around scholarship: what it is, and what it means in the context of the academic role.

    Whilst there may be a conceptual linkage, with scholarship acting as the nexus between research and teaching (Elton, 2005), it is for the actual practitioners to work this out in their own context.

    6 Conclusions

    The question of whether there are limits to the management of research and teaching is a pertinent one for UK HEIs at this time. New managerialism can be seen as a way of 'grasping the nettle', and undoubtedly some aspects of a university's mission, namely funded research and resource management for teaching, appear to be suitable candidates. Indeed, institutions are already demonstrating evidence that they have adopted this approach.

    However, the realisation that managerialist, top-down approaches may also have negative consequences for the other functions of a university – high-quality, inspirational teaching, scholarship and research creativity – has severe ramifications for the approach that university management should take.

    It would seem that a leadership model of trust should be adopted, whereby an open and honest discourse is held to understand the current identity of an institution, as well as a future identity to which the university might aspire. This would then be translated into a set of goals to be interpreted at departmental level, reflecting the cultural and subject-discipline norms and the capabilities of the staff, and indicating some of the uncertainties of the future. The university's Human Resources department must also prepare to facilitate the development of departmental leadership, fostering an environment where leadership talent is nurtured, whilst also developing and enforcing policies that make staff recruitment more agile and a better fit for the needs of departments.

    In conclusion, as HEIs operate in an 'age of supercomplexity' (Barnett, 2000), a suitably adaptable approach to management is required. Honouring collegiality will demand leadership at all levels of the institution, to manage effectively a shared understanding of what the core function of a particular university is. This understanding will be derived by considering the limits of research and teaching management as a holistic entity, without resorting to a corporate-management approach to performance measurement.

    References

    Barnett, R. (2000). Realising the university in an age of supercomplexity. Society for Research into Higher Education. Open University Press, Buckingham.

    Brown, R. (2002). Research and teaching: repairing the damage. Exchange, 3:29–30.

    Bushaway, R. (2003). Managing Research. Managing Universities and Colleges: Guides to good practice. Open University Press and McGraw-Hill Education, first edition.

    Clarke, J. and Newman, J. (1994). The managerialisation of public services. In A. Cochrane and E. McLaughlin, editors, Managing Social Policy, pages 13–31. Sage, London.

    Cuthbert, R., editor (1996). Working in Higher Education. Open University Press, Buckingham.

    Dearing, R. (1997). Higher education in the learning society. Technical report, The Stationery Office, London.

    Dearlove, J. (1997). The academic labour process: From collegiality and professionalism to managerialism and proletarianisation? Higher Education Review, 30(1):56–75.

    Dearlove, J. (1998). The deadly dull issue of university administration? Good governance, managerialism and organising academic work. Higher Education Policy, 11(1):59–79.

    Deem, R. (1998). New managerialism and higher education: the management of performances and cultures in universities in the United Kingdom. International Studies in Sociology of Education, 8:47–70.

    Elton, L. (2005). Scholarship and the research and teaching nexus. In R. Barnett, editor, Reshaping the University: New Relationships between Research, Scholarship and Teaching, Society for Research into Higher Education, chapter 8. Open University Press, Maidenhead, first edition.

    Gibbs, G. (2002). Institutional strategies for linking research and teaching. Exchange, 3:8–11.

    Henkel, M. (2000). Academic Identities and Policy Change in Higher Education. Jessica Kingsley, London.

    J.M. Consulting Ltd (2000). Interactions between research, teaching and other academic activities. Technical report, Higher Education Funding Council for England, Bristol.

    Le Grand, J. and Bartlett, W., editors (1993). Quasi-markets and Social Policy. Macmillan, London.

    Locke, W. (2004). Integrating research and teaching strategies: Implications for institutional management and leadership in the United Kingdom. Higher Education Management and Policy, 16(3):101–120.

    McNay, I. (1995). From the collegial academy to corporate enterprise: the changing cultures of universities. In T. Schuller, editor, The Changing University, pages 105–115. Open University Press, Buckingham.

    Middlehurst, R. and Kennie, T. (2003). Managing for performance today and tomorrow. In A. Hall, editor, Managing People, Society for Research into Higher Education. Open University Press, Buckingham.

    Neumann, R. (1994). The teaching-research nexus: applying a framework to university students' learning experiences. European Journal of Education, 29(3):323–339.

    Pratt, J. (1997). The Polytechnic Experiment, 1965-1992. Open University Press, Buckingham.

    Reed, M. and Anthony, P. (1993). Between an ideological rock and an organizational hard place. In T. Clarke and C. Pitelis, editors, The Political Economy of Privatization. Routledge, London.

    Shattock, M. (2003). Managing Successful Universities. Open University Press, Maidenhead.

    Smyth, J., editor (1995). Academic Work. Open University Press, Buckingham.

    Trowler, P. (1998). Academics, Work and Change. Open University Press, Buckingham.