Richard Hill

Judgement for AI-mediated work

Category: Computer Science

  • Agentic AI: Understanding LangChain and LangGraph for Intelligent Automation

    Artificial Intelligence (AI) is evolving, enabling systems to operate with increasing autonomy.

    Agentic AI refers to AI systems that can make decisions and take actions independently, much like humans.

    To understand how these systems function, we need to explore multi-agent systems, the Belief-Desire-Intention (BDI) model, and the role of Large Language Models (LLMs).

    We also need to examine two AI development tools, LangChain and LangGraph, and their respective strengths in building agentic AI applications.

    Large Language Models (LLMs) and Their Role in Agentic AI

    Large Language Models (LLMs) are a cornerstone of modern AI, trained on vast amounts of text data. They use deep learning techniques to generate and process human-like text.

    Examples of LLMs include GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), and PaLM (Pathways Language Model).

    These models perform natural language processing (NLP) tasks, such as translation, summarisation, and conversational interactions. They enable agentic AI systems to:

    1. Understand Context and User Intent: LLMs help AI interpret text-based inputs and generate meaningful responses.
    2. Automate Decision-Making: AI systems can use LLMs to determine actions based on contextual understanding.
    3. Personalise User Interactions: LLM-powered agents adapt to user preferences over time.
    4. Integrate with External Systems: AI agents retrieve and process information from APIs, databases, and web sources.

    For instance, an AI-powered assistant using LangChain or LangGraph can:

    • Identify a user’s request (e.g., “Find the cheapest flights for next month”).
    • Gather flight details from airline APIs.
    • Generate recommendations in natural language.
    • Automate the booking process or set reminders.
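
    The flight-search steps above can be sketched in plain Python. Everything here is a hypothetical stand-in: `identify_intent` replaces an LLM call, and the `FLIGHTS` list replaces an airline API, with made-up fares.

```python
from dataclasses import dataclass

# Hypothetical flight data; a real assistant would call an airline API here.
FLIGHTS = [
    {"airline": "AirA", "price": 120},
    {"airline": "AirB", "price": 95},
    {"airline": "AirC", "price": 140},
]

@dataclass
class AssistantStep:
    name: str
    result: object

def identify_intent(request: str) -> str:
    # Stand-in for an LLM call interpreting the user's request.
    return "find_cheapest_flight" if "cheapest" in request.lower() else "unknown"

def run_assistant(request: str) -> list:
    steps = [AssistantStep("intent", identify_intent(request))]
    if steps[0].result == "find_cheapest_flight":
        flights = FLIGHTS  # stand-in for the airline API call
        steps.append(AssistantStep("gather", flights))
        steps.append(AssistantStep("recommend", min(flights, key=lambda f: f["price"])))
    return steps

steps = run_assistant("Find the cheapest flights for next month")
print(steps[-1].result["airline"])  # AirB
```

    The point of the sketch is the shape of the loop — interpret intent, gather data, recommend — rather than any particular framework's API.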

    Multi-Agent Systems (MAS)

    Multi-Agent Systems (MAS) refer to AI architectures where multiple agents interact to solve complex problems. These systems are useful for real-world scenarios where tasks can be broken down and delegated among autonomous agents.

    For example, in an e-commerce platform, MAS could include agents responsible for:

    • Managing inventory.
    • Handling customer inquiries.
    • Processing transactions securely.
    • Coordinating logistics and deliveries.

    The integration of MAS with LLMs allows AI agents to collaborate effectively, making them more adaptable to dynamic environments.
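
    A minimal sketch of such a MAS in plain Python, assuming just two hypothetical agents (inventory and orders) with made-up stock levels, rather than any particular agent framework:

```python
class Agent:
    """Minimal agent base class; a sketch, not a production MAS framework."""
    def __init__(self, name):
        self.name = name

class InventoryAgent(Agent):
    def __init__(self):
        super().__init__("inventory")
        self.stock = {"widget": 3}  # invented stock level

    def reserve(self, item):
        # Decrement stock if available; report success to the caller.
        if self.stock.get(item, 0) > 0:
            self.stock[item] -= 1
            return True
        return False

class OrderAgent(Agent):
    def __init__(self, inventory):
        super().__init__("orders")
        self.inventory = inventory

    def place_order(self, item):
        # Delegate the stock check to the inventory agent.
        if self.inventory.reserve(item):
            return f"order confirmed: {item}"
        return f"out of stock: {item}"

inventory = InventoryAgent()
orders = OrderAgent(inventory)
print(orders.place_order("widget"))  # order confirmed: widget
```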

    The Belief-Desire-Intention (BDI) Model in Agentic AI

    The Belief-Desire-Intention (BDI) model is a framework used to design intelligent agents that make rational decisions. It consists of:

    • Belief: The information the agent has about the world.
    • Desire: The goals the agent wants to achieve.
    • Intention: The planned actions to achieve those goals.

    This model is widely applied in AI systems requiring decision-making capabilities, such as robotic automation, smart assistants, and self-driving cars.

    By combining the BDI model with LLMs and MAS, AI agents can function in highly dynamic environments with improved flexibility and reasoning capabilities.
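
    The three components can be illustrated with a toy BDI loop in Python; the battery-charging scenario and its thresholds are invented for illustration only:

```python
class BDIAgent:
    """Toy BDI loop: beliefs are facts, desires are goals, intentions are planned actions."""
    def __init__(self):
        self.beliefs = {"battery": 20}    # what the agent knows about the world
        self.desires = ["stay_charged"]   # what it wants to achieve
        self.intentions = []              # what it plans to do about it

    def deliberate(self):
        # Turn desires into intentions, given current beliefs.
        if "stay_charged" in self.desires and self.beliefs["battery"] < 30:
            self.intentions.append("recharge")

    def act(self):
        # Execute planned actions, updating beliefs to reflect the result.
        if "recharge" in self.intentions:
            self.beliefs["battery"] = 100
            self.intentions.remove("recharge")

agent = BDIAgent()
agent.deliberate()
agent.act()
print(agent.beliefs["battery"])  # 100
```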

    LangChain vs. LangGraph: A Comparative Analysis

    LangChain and LangGraph serve different roles in AI-driven applications.

    LangChain is primarily used for applications that rely on Large Language Models (LLMs) for text processing and decision-making. It is best suited for linear workflows, where tasks follow a sequential order with little variation. This makes LangChain an ideal choice for chatbots, text-based query systems, and automated content generation.

    On the other hand, LangGraph is designed for structured, multi-step workflows that require complex decision-making and branching logic. It is better suited for non-linear workflows where AI processes need to adapt dynamically based on multiple input variables. LangGraph is commonly used for workflow automation, business process management, and AI-driven decision trees, such as customer onboarding systems and troubleshooting guides.

    When choosing between the two, developers should consider the nature of their application.

    If the AI system involves straightforward, sequential processing, LangChain is probably the more appropriate choice.

    However, if the AI needs to handle multiple decision points and evolving conditions, LangGraph provides the necessary flexibility and control.
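
    The distinction can be sketched conceptually in plain Python, without either library's actual API: a chain applies steps in a fixed order, while a graph uses a routing function to choose the next node from the current state.

```python
# Linear "chain": each step feeds the next, in LangChain's style (conceptual only).
def run_chain(steps, value):
    for step in steps:
        value = step(value)
    return value

summary = run_chain([str.strip, str.lower, lambda s: s[:10]], "  Quarterly Report 2024  ")

# Branching "graph": named nodes plus a router that inspects state to pick the
# next node, in LangGraph's style (again conceptual, not the real StateGraph API).
def run_graph(nodes, router, state):
    node = "start"
    while node != "end":
        state = nodes[node](state)
        node = router(node, state)
    return state

nodes = {
    "start": lambda s: {**s, "score": len(s["text"])},
    "short": lambda s: {**s, "route": "short"},
    "long":  lambda s: {**s, "route": "long"},
}

def router(node, state):
    if node == "start":
        return "short" if state["score"] < 5 else "long"
    return "end"

result = run_graph(nodes, router, {"text": "hi"})
```

    The chain always runs the same three steps; the graph's path depends on the data it is given, which is the flexibility the comparison above describes.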

    Selecting the Right Tool for AI Development

    When to Use LangChain

    • Conversational AI and Chatbots: LangChain is ideal for building chatbots that need to engage in free-flowing conversations.
    • Text-Based Query Systems: AI-powered search engines and knowledge retrieval systems benefit from LangChain’s ability to interact with LLMs.
    • Content Generation: Applications requiring text summarisation, writing assistance, or automated report generation perform well with LangChain.
    • Linear Workflows: Best suited for processes that follow a step-by-step sequence with little variation.

    When to Use LangGraph

    • Complex, Multi-Step Workflows: LangGraph is better suited for workflows that require dynamic branching and decision trees.
    • Automated Business Processes: Applications like customer onboarding, logistics management, and financial automation benefit from LangGraph’s structured approach.
    • Decision Tree-Based AI: Troubleshooting systems and rule-based decision-making benefit from LangGraph’s non-linear processing.
    • Non-Linear Workflows: Ideal for cases where processes must adapt based on multiple input variables and dependencies.

    The Future of Agentic AI

    As AI continues to advance, the integration of LLMs, MAS, and workflow automation frameworks like LangChain and LangGraph will pave the way for more sophisticated agentic AI systems. Future developments may focus on:

    • Enhanced AI Reasoning: Improving LLMs’ ability to perform deeper reasoning and understand abstract concepts.
    • Greater Autonomy: AI agents becoming more self-reliant, requiring less human intervention.
    • Scalability and Efficiency: Optimising multi-agent collaboration for large-scale industrial applications.
    • Ethical Considerations: Ensuring AI aligns with ethical standards and is used responsibly.

    Conclusion

    Agentic AI represents a significant shift in artificial intelligence, enabling autonomous decision-making and action execution.

    LLMs serve as the foundation for modern AI applications, providing language processing and reasoning capabilities. However, to build robust AI agents, choosing the right framework is essential.

    • LangChain is best for linear workflows where AI-driven conversations, content generation, and information retrieval play a critical role.
    • LangGraph excels at non-linear workflows, where structured decision-making, workflow automation, and complex branching logic are required.

    By understanding these tools and their applications, developers can build more effective and intelligent AI applications that enhance efficiency, automation, and user experience.

    Keywords

    Agentic AI, LangChain, LangGraph, AI workflow automation, Large Language Models, LLM-powered AI, AI decision-making, multi-agent systems, BDI model, AI chatbots, AI content generation, structured AI workflows, non-linear AI workflows, AI business automation.

  • Cyber-Physical Systems challenges: iSCI2020 Invited Talk

    Here is my invited talk about Cyber-Physical Systems challenges, for iSCI2020 in Guangzhou, China. It was pre-recorded due to the COVID-19 travel ban.

    Research Challenges for the Industrial Adoption of Cyber-Physical Systems

    Abstract: Interest in the ‘digitalisation’ of industry, specifically manufacturing, is driving the development of innovative technologies that make the exchange of data, and the inference of knowledge increasingly accessible. Large organisations are able to rapidly acquire and evaluate Cyber-Physical Systems technology, enabling new business models to be created. However, significant challenges exist with regard to the design, operation and evaluation of industrial CPSs in terms of their accuracy, calibration, robustness and ability to fail safely. Traditionally, such systems would have been designed using formal approaches, but the scale of CPS adoption is such that there is less reliance on the established methods of validation and verification. This talk explores some significant challenges for the CPS and associated software development research communities.

  • Modelling robots and Cyber Physical Systems

    Webots (https://cyberbotics.com) is a simulation environment for the design and modelling of robotic systems. Since robotics invariably results in some physical actuation, Webots can also be used to model Cyber-Physical Systems, and, being open source, the software is free to use and experiment with.

    Webots is particularly suited to newcomers to robotic systems, though it can still be used more formally in industrial scenarios, via links to the Robot Operating System (ROS – https://www.ros.org).

    If you need to develop a robot, or physical control of a process, or if you are just curious, Webots is a good place to start.

    I use Webots for teaching both robotic systems and cyber-physical systems, usually in the context of digital manufacturing/Industry 4.0. You can model entire systems at an abstract level, or focus on the detail of sensors interacting with each other.

    Some more reasons to use Webots can be found here: https://cyberbotics.com/#webots

    To get a working installation of Webots, you should visit the excellent documentation that is located at: https://cyberbotics.com/doc/guide/installation-procedure#installation-on-macos

  • Some initial considerations for simulating systems

    There are a few approaches to creating simulations that affect how we construct our models.

    The first approach is known as Discrete-Event Simulation (DES). With this model, we look at our system and think about the sequence of what happens in it. By “what happens” we mean particular events where a change in state is observed. An example of this is a machine tool starting a job; from a system modelling perspective we need to recognise that it has changed from an idle state to an operational state. Similarly, when the machining is finished, the machine returns to an idle state.

    If we imagine that our model can represent such states, we shall also want a clock running so that we can record the times at which these events occur. This will help us understand the performance of the system. Each advance of the clock is commonly referred to as a “tick” in the simulation, just like the sound of a mechanical clock as the pendulum swings from side to side (or a metronome, even).

    So, we have a clock running, and for each “tick” the model records the states of all of the entities within the model. The model is our factory, and the entities are all the things that have states.

    However, even in a complex model, a change of state does not occur on every tick of the simulation. So we are, in effect, recording “dead time” where nothing of any importance is happening.

    An alternative to this is to still have the clock running, but only to record the changes of state when they occur. This is referred to as “next-event time progression”, and this can drastically reduce the time taken for the simulation to run.

    We want the simulation to be as fast as possible, so that we can experiment more; we don’t want a replication of our real system if it takes the same amount of time to run, as this would be of limited help to us.
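
    A next-event simulation can be sketched with a priority queue of events; the machine and event times below are illustrative:

```python
import heapq

# Minimal next-event time progression: a machine alternates between idle and
# operational. The clock jumps straight to the next event rather than
# advancing tick by tick, so no "dead time" is recorded.
events = [(0.0, "start_job"), (4.5, "finish_job"), (6.0, "start_job"), (9.25, "finish_job")]
heapq.heapify(events)

state, log = "idle", []
while events:
    clock, event = heapq.heappop(events)  # jump to the next event in time order
    state = "operational" if event == "start_job" else "idle"
    log.append((clock, state))            # record state changes only

print(log[-1])  # (9.25, 'idle')
```

    The run takes four loop iterations regardless of how far apart the events are in simulated time, which is exactly why next-event progression is fast.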

    An alternative way of simulating systems is to model them as a set of continuously changing variables. Such simulations are described by a set of differential equations, which immediately puts many people off, as they regard such work as being too theoretical or mathematical.

    The most accurate simulations of complex scenarios can combine both discrete-event simulation with next-event time progression and continuous simulation.

    One example might be a machine tool that has been set up for a new job. Once the set-up is complete, the cutting tool has been renewed and therefore has no wear. The machine tool changes state and goes into operation. While the job is being processed the tool wears continuously, and when the wear reaches a certain threshold it triggers a change in state for the machine, which must be stopped to renew the tool.
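
    The tool-wear scenario can be sketched as a hybrid simulation, with a continuously integrated wear variable triggering a discrete renewal event; the wear rate, threshold, and run length are made-up figures:

```python
# Hybrid sketch: discrete events (tool renewal) driven by a continuously
# integrated variable (tool wear). All figures are illustrative.
WEAR_RATE = 0.02   # wear per minute of cutting
THRESHOLD = 1.0    # wear level that forces a tool change
DT = 0.5           # integration step, in minutes

clock, wear, renewals = 0.0, 0.0, 0
while clock < 120.0:           # simulate two hours of machining
    wear += WEAR_RATE * DT     # continuous part: integrate wear over time
    clock += DT
    if wear >= THRESHOLD:      # discrete part: state-change event fires
        renewals += 1
        wear = 0.0             # fresh tool, no wear

print(renewals)  # 2
```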

    In many cases though, we don’t have to simulate to such a degree of detail and it is good practice to start with the simplest model possible (usually a DES) and then iterate the development of that model, adding detail and complexity as driven by the new questions that we want to ask of the simulation.

  • Simulation as experimentation

    Resources are generally more constrained in an SME manufacturer than for a large corporate organisation; there isn’t the luxury of a research and development department, or an intelligence unit that deals with reporting, analysis and planning.

    This situation becomes much clearer for those who have experience of working within an SME. It is not that the staff cannot think innovatively, or solve problems for themselves. It just isn’t feasible to commit any time to experimentation as the actual cost to daily production is too high.

    SMEs are often preoccupied by the orders that need delivering now, and cannot halt production to ‘have a go’ at a potentially interesting idea.

    For owner-managers, there is a tension between the requirement to plan for the future and the pressing need to deliver the next order. Some SME manufacturers don’t plan too extensively and are at the mercy of the prevailing market conditions. They rely on an ability to be agile, to ‘duck-and-dive’ in response to external factors.

    Those manufacturers that do forecast generally apply it to sales, and then use historical experience to translate this into approximations for stock-holding and subsequently the demands that might be placed upon the factory.

    Theory tells us that having access to more data improves the quality of our decisions. Well, this is true up to a point. Too much data, especially raw, granular data, requires too much effort (and know-how) to get it into a condition where it can be useful.

    IIoT technologies have data production and sharing as a functional priority, and while this might give the production supervisor some new insights about how the plant operates, the volume of data that is accumulated will soon become too much to comprehend.

    As such, the ‘experience’ model of production management cannot scale to accommodate the tidal wave of data that a digital transformation can produce.

    Indeed, many of my discussions with SME staff are about how they can make better use of the data that they already have. The discussion starts with the company wanting to see how they can embrace Industry 4.0, and then we end up exploring how they are using the data that is being produced by their existing plant.

    For instance, how are the log files from Machine X being used for planning? In the majority of cases, the data is being saved, and that is the end of it.

    Some of the operations data is waiting to be tapped as a valuable source of information, such as the electrical power signatures that all plant produce. Hidden in those signatures is a wealth of behavioural, condition, and performance data that only requires current sensors to detect.

    But, aside from taking the plunge and actually deploying some IIoT equipment, there is a general reluctance to disrupt the manufacturing schedule as it would appear to create too much of a financial risk.

    So, while the owner-manager realises that they need to strategise for the future, they might only restrict their planning activity to an accounting view. Management accounts provide an abstract means of modelling a business, but for a manufacturer this might not be enough if they are attempting to optimise to a finer degree.

    Essentially, the accounting method of modelling is a high-level simulation of how the business might react to external stimuli. What is needed, therefore, is a less abstract view of the factory that (a) permits lower-level decisions to be taken about important processes, and (b) translates the high-level accounting view into a more realistic set of reactions for the manufacturing system itself.

    Simulation is a topic that can be a big turn-off for busy people. It sounds academic, it will probably involve complex mathematics, and it will take too long to learn.

    However, while these statements may be true in some circumstances, there is a much lower barrier to simulation than a lot of people realise.

    Considerable insight can be quickly gained with a spreadsheet, and this is something that is on everybody’s desktop computer these days. Simulation libraries such as SimPy or ManPy can open up even more simulation power, though many people find that a spreadsheet is sufficient for their needs.
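
    A spreadsheet-level “what if?” can be posed just as easily in a few lines of Python; the machine counts, rates, and utilisation below are illustrative, not real plant data:

```python
# Spreadsheet-level "what if?": how does adding one machine change weekly output?
# All figures are invented for illustration.
def weekly_output(machines, jobs_per_hour, hours_per_week, utilisation):
    return machines * jobs_per_hour * hours_per_week * utilisation

current = weekly_output(machines=3, jobs_per_hour=4, hours_per_week=40, utilisation=0.8)
proposed = weekly_output(machines=4, jobs_per_hour=4, hours_per_week=40, utilisation=0.8)

print(current, proposed)  # 384.0 512.0
```

    Even a toy model like this lets an owner-manager compare options on paper before committing any production time or capital.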

    Simulation improves the depth of your “what if?” questions, and also answers some of them, which in turn, enhances your understanding of the manufacturing operation as a whole.

    With a little practice, simulation can become a tool for evaluating various options, before you make a decision. This can really empower SMEs to ‘experiment’ with the introduction of IIoT, before they spend a penny!

  • Lean + IIoT = skills gap?

    Central to lean manufacturing is the empowerment of people. Lean can’t work unless there is a pervasive culture that will experiment, implement and evaluate. The lean methods enable employees to focus on fault diagnosis and to propose innovative solutions that minimise the resources required to maximise throughput.

    A lean implementation will help build a good standard of information literacy amongst staff who might previously have had very little experience of data analysis. And this could be useful if a factory embarks upon enhancing its data capture with IIoT equipment.

    There is an added complication with IIoT/Industry 4.0/digital manufacturing data capture though; these initiatives assume the use of analysis techniques that are beyond the traditions of Statistical Process Control (SPC).

    Machine Learning (ML) is being touted by many software vendors as a silver bullet (which it isn’t), but it does have a place when it comes to developing insight from data streams that are both fast and large. ML models can help us understand some of the complexities of processes by providing different perspectives on the data; such models will probably include a number of inter-related processes, rather than the single operation that is represented by a lean Statistical Process Control (SPC) chart.

    SPC charts can often be challenging to comprehend until you have seen a few, or better still, plotted them yourself. They are an excellent way of using a data visualisation technique to communicate deeper understanding of a process.
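
    For illustration, the control limits behind an individuals SPC chart can be computed in a few lines; the samples are invented, and the sample standard deviation stands in for the moving-range estimate that classic SPC charts use:

```python
import statistics

# Individuals control chart limits (mean ± 3 sigma). The sample standard
# deviation is used here for simplicity; textbook SPC estimates sigma from
# the average moving range instead.
samples = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 9.7, 10.1]  # invented measurements
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

# Any point outside the limits signals the process may be out of control.
out_of_control = [x for x in samples if not (lcl <= x <= ucl)]
print(len(out_of_control))  # 0
```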

    But using a Fast Fourier transform (FFT) to make sense of streaming data requires a different set of skills, as does choosing a random forest over a support vector machine (SVM) for classification. Hence the demand for the ‘data scientist’ who can take such decisions and help translate the complexity for us mortals.
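
    As a sketch of the kind of analysis involved, a direct DFT can pick the dominant frequency out of a sampled signal; here a synthetic 5 Hz tone stands in for a real power signature, and in practice numpy.fft would be the sensible choice rather than this self-contained version:

```python
import cmath
import math

# Find the dominant frequency in a sampled signal with a direct DFT.
N, FS = 64, 64  # number of samples and sampling rate in Hz (illustrative)
signal = [math.sin(2 * math.pi * 5 * n / FS) for n in range(N)]  # synthetic 5 Hz tone

def dft_magnitudes(x):
    n_samples = len(x)
    # Magnitude of each frequency bin, keeping only the non-redundant half.
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(n_samples // 2)]

mags = dft_magnitudes(signal)
dominant_hz = mags.index(max(mags)) * FS / N  # bin index converted to Hz
print(dominant_hz)  # 5.0
```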

    We are heading towards an era where there is a fundamental need for more sophisticated data analysis skills. Technologies such as IIoT are providing better, cheaper access to data. We can look beyond the confines of the individual manufacturing process and analyse operations at scale.

    However, realising this potential means that we need to address a skills gap that is rapidly emerging. A lot of Computer Science degree courses do not teach these skills, some of which are more often found in electrical/electronic engineering courses. Some Data Analytics courses are starting to appear, and these will only increase as businesses see the need to make better use of the data that their IIoT devices are churning out.

    So, lean might be a good way of introducing digital transformation technologies to a business. But this may also expose a need to develop advanced data literacy skills rather rapidly.