Richard Hill

Judgement for AI-mediated work

Category: Industry 4.0

  • Some challenges for Cyber Physical Systems

    Cyber Physical Systems are one step in our quest to build smart systems. Smartness implies convenience, and there are many compelling visions of the technology improving the quality of our lives.

    Smartphones are ubiquitous now, and for some demographics a smartphone is the only means of access to the internet. Smart home technology is essentially automation: inexpensive devices, coupled to a pervasive network of broadband and 4G cellular access, are driving greater uptake of lighting and heating control, for instance. Smart cities are perhaps the most visionary application of technology, as they integrate technologies that are already available with technologies that are developing rapidly, such as autonomous vehicles.

    Research to date has been roughly partitioned into Wireless Sensor Networks, Internet of Things and Cyber Physical Systems, with clear areas of overlap. It is clear, though, that there is a move towards convergence as it becomes more difficult to separate the technologies: at one level, any integration of physical sensing and actuation through a communication network is a cyber physical system.

    What is interesting is that we appear to have a hierarchy of systems developing, with the realisation that a CPS is not necessarily limited to a discrete set of components and capabilities that are brought together to solve a particular problem; a CPS can emerge because it has been enabled by a pre-existing network infrastructure for instance.

    So, if we have systems-of-systems, how do we:

    • verify that a solution is correct prior to execution, and
    • resolve any conflicts during execution?

    CPSs need to scale in an uncertain environment, as the density of data and interactions increases, along with the availability of computation through embedded “smart” devices.

    We also have hardware and humans in the loop, which mandates that we model behaviour and emotions if we are to explore realistic representations of the eventual systems.

    We know that a CPS acts upon its environment, and that the environment is continually changing. How do we cater for secondary, unforeseen, unintended events?

    This poses challenges for human safety and for how we provide the necessary controls that can respond in real-time. We also know from experience that services are not all created equal: different design approaches and standards can lead to different results.

    Real-time control is a concern for the CPS research community, as the effects when a system does not respond instantaneously could be fatal. Such scenarios may arise in the future and, depending upon the environment, may cause exposure to hazards such as chemicals, radiation or unknown phenomena.

    The effects may be different for a changing demographic of user, and might not appear as we would expect.

    One approach to the arrangement of CPSs could be to consider a watchdog architecture, where the components and services are partitioned into:

    • verification: we have methods in place to verify and validate design models prior to implementation. Formal methods can assist with this.
    • conflict detection: we proactively include functionality within the design to seek out and detect behaviour that would cause conflict with either the system itself or with other interacting CPSs.
    • conflict resolution: the solution has the capability to be rational and to optimise its decisions based upon experience (a knowledge base) and the desires of the system, whilst recognising any altruistic, shared goals of a given community.
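
    To make the partition concrete, here is a minimal sketch of how the three watchdog components might cooperate. All names, actions and experience scores are invented for illustration; real verification would draw on formal models rather than a simple lookup.

```python
def verify(action, permitted):
    """Verification: is this action in the validated design model?"""
    return action["name"] in permitted

def detect_conflicts(action, active):
    """Conflict detection: actions contending for the same resource conflict."""
    return [a for a in active if a["resource"] == action["resource"]]

def resolve(action, conflicts, experience):
    """Conflict resolution: pick the candidate with the best recorded outcome."""
    return max([action] + conflicts, key=lambda a: experience.get(a["name"], 0.0))

# Invented example data: a proposed action contending for a shared valve.
permitted = {"open_valve", "close_valve"}
experience = {"open_valve": 0.9, "close_valve": 0.4}
active = [{"name": "close_valve", "resource": "valve_1"}]
proposed = {"name": "open_valve", "resource": "valve_1"}

assert verify(proposed, permitted)
conflicts = detect_conflicts(proposed, active)
winner = resolve(proposed, conflicts, experience)
print(winner["name"])  # open_valve wins on its higher experience score
```

    In a real system, resolution would also weigh the shared goals of the community of CPSs, not just individual experience.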

    Scaling and density

    Such is the potential scale of deployment of CPSs that there are challenges to be addressed relating both to the scale of a CPS’s influence and to the density of CPS sub-components and their communications infrastructure.

    For instance, how many individual devices might represent an agent or actor in an environment? If we assemble systems from sub-systems, who owns what in a service-based environment? This is particularly relevant to the stewardship of personal data. A related challenge is knowing when to share data to exploit an opportunity that will advance an agent’s goals.

    The need to be flexible and adapt to changing circumstances is important for an effective CPS. How do we reconfigure during execution?

    All of this functionality will place a greater demand on messaging for sensing and actuation, and introduces the possibility of greater interference and cross-talk within the networks. Since these are increasingly likely to have an impact on critical functions, there will be safety implications to consider.

    Runtime complexity

    Earlier, we considered a watchdog architecture as one potential approach to governing a CPS. The implication of managing this in real-time is additional complexity. Not only does the design have to take account of the inherent relationships between conflict detection, resolution and verification, but during execution, conflict detection, safety analysis and re-validation need to occur.

    This emphasises the dynamic nature of a CPS; it must react to environmental stimuli, whilst considering its internal goals. It has an obligation to not do harm, and must therefore continually re-assess its actions.

    Software engineering approaches include testing as a stage, or as an iterative part of the development process. In the case of a CPS, testing needs to be a continuous function if the underlying design model is to retain its key principles. For instance, sensor readings can be emulated to test a particular function. If a set of readings produces test results that are outside of scope, then the CPS has to decide whether to adapt or to retain its current configuration. One significant challenge is how we can produce models that include an element of “conscience” for a CPS, to steer its reasoning when faced with dilemmas.
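
    As a toy illustration of this continuous-testing loop, a CPS might compare emulated readings against its expected operating envelope and decide whether to adapt. The readings, the temperature band and the decision rule are all invented here.

```python
# Acceptable operating envelope (degrees C) -- an invented threshold.
EXPECTED_RANGE = (15.0, 35.0)

def emulate_readings():
    """Stand-in for a stream of sensor values produced during a test run."""
    return [22.1, 23.4, 24.0, 41.7, 23.9]  # one reading is out of scope

def out_of_scope(readings, lo, hi):
    """Return every reading that falls outside the expected envelope."""
    return [r for r in readings if not lo <= r <= hi]

def decide(readings):
    """Adapt if any emulated test reading is outside the envelope."""
    bad = out_of_scope(readings, *EXPECTED_RANGE)
    return "adapt" if bad else "retain"

print(decide(emulate_readings()))  # adapt
```

    The hard part, as noted above, is not the mechanics of the check but deciding what "adapt" should mean when the system faces a genuine dilemma.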

    Data challenges

    Big Data has brought the characteristics of volume, velocity, veracity and value to the fore. A CPS needs to decide what data is required for a particular purpose. An interface with humans is likely to include the capability to process natural language, to extract concepts from streamed inputs, and to perform information searches, all in real-time. In addition, the CPS will also need to be able to distinguish between information that is trusted and that which is not.

    Complexity

    Normal human behaviour (whatever that is) is complex. Humans can cope with datasets that are incomplete by using past experience, reasoning, or even guessing. How can we develop such capabilities in software? This is a key challenge for the control theory communities.

  • Systems that reason

    It is commonplace for autonomous systems to be referred to as smart systems. These are characterised by features such as high availability, the ability to repair themselves and possessing intelligence to be able to manage their abilities in response to different scenarios.

    A Cyber Physical System (CPS) may comprise a number of different operating systems, and will be able to interact with an internet communication network, which might include switching hardware and/or software. There will also be some ability to monitor processes in critical applications.

    The “smartness” assumes that a CPS can also interact with, or relate to, its own Knowledge-Based System (KBS): an evolving data structure that represents relational information of relevance to the CPS.

    The KBS might be an expert system, a recommendation engine, a decision support system, a broader knowledge and information management system, or a combination of these.

    For the CPS to be truly adaptive, it should support the update of its KBS in relation to any experience that it gains. This includes modifications to any pre-existing goals, or the adding or replacement of goals. Similarly, a different decision making strategy might suit a certain set of circumstances and a CPS needs to be able to accommodate this.

    As such, research into cognitive, robotic and agent systems is pertinent to the design of a CPS.

    In essence, the CPS (or smart system) may indeed become a system of varying complexity, but it must be able to take the initiative and demonstrate agency. To do this it must have the capacity to reason about a situation, and formulate its own strategy to achieve a goal.

    Practical reasoning is the process of figuring out what needs to be done, so it is reasoning directed towards actions. It is about evaluating the outcomes of different, competing options, and then deciding which is the best course of action to satisfy goals, desires and beliefs.

    In contrast, theoretical reasoning is directed specifically towards beliefs only.

    As humans we tend to undertake two activities when reasoning. First, we deliberate; we decide upon the state of affairs that we want to achieve.

    Second we decide about how to achieve the state of affairs. We refer to this as means-end reasoning. The outputs of deliberation are intentions.

    Intentions in practical reasoning

    Intentions pose problems for agents, who need to determine ways of achieving them.

    If I have an intention to achieve X, you would expect me to devote resources to deciding how to bring about X.

    Intentions provide a “filter” for adopting other intentions, which must not conflict.

    If I have an intention to achieve Y, you would not expect me to adopt an intention Z such that Y and Z are mutually exclusive.

    Agents track the success of their intentions, and are inclined to try again if their attempts fail.

    If an agent’s first attempt to achieve X fails, then all other things being equal, it will try an alternative plan to achieve X.

    Agents believe their intentions are possible.

    That is, they believe there is at least some way that the intentions could be brought about.

    Agents do not believe they will not bring about their intentions.

    It would not be rational of me to adopt an intention to X if I believed X was not possible.

    Under certain circumstances, agents believe they will bring about their intentions.

    It would not normally be rational of me to believe that I would never bring my intentions about; intentions can fail. Moreover, it does not make sense that if I believe X is inevitable I would adopt it as an intention.

    Agents need not intend all the expected side effects of their intentions.

    If I believe X implies Y and I intend X, I do not necessarily intend Y also. (Intentions are not closed under implication.) This last problem is known as the side effect or package deal problem. We may believe that going to the dentist involves pain, and we may also intend to go to the dentist, but this does not imply that we intend to suffer pain.

    Notice that intentions are much stronger than mere desires:

    “My desire to play basketball this afternoon is merely a potential influencer of my conduct this afternoon. It must vie with my other relevant desires [. . . ] before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the pros and cons. When the afternoon arrives, I will normally just proceed to execute my intentions.” (Bratman, 1990)

    Planning

    Since the early 1970s, the AI planning community has been closely concerned with the design of artificial agents.

    The basic idea is to give an agent:

    • a representation of the goal/intention to achieve;
    • a representation of the actions it can perform;
    • a representation of the environment,

    and have it generate a plan to achieve the goal.

    Planning is essentially automatic programming: the design of a course of action that will achieve some desired goal.

    Within the symbolic AI community, it has long been assumed that some form of AI planning system will be a central component of any artificial agent.

    Many planning algorithms have been proposed, building largely on the early work of Fikes and Nilsson, and the theory of planning is well developed.
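
    The three inputs listed earlier (goal, actions, environment) can be made concrete with a toy STRIPS-style planner: states are sets of facts, actions have preconditions, add-lists and delete-lists, and breadth-first search produces the plan. The robot-and-door domain below is invented purely for illustration.

```python
from collections import deque

# Actions: (name, preconditions, add-list, delete-list).
ACTIONS = [
    ("open_door", {"at_door", "door_closed"}, {"door_open"}, {"door_closed"}),
    ("go_to_door", {"in_room"}, {"at_door"}, set()),
    ("exit", {"at_door", "door_open"}, {"outside"}, {"in_room"}),
]

def plan(start, goal):
    """Breadth-first search over states; returns a list of action names."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:          # goal facts all hold in this state
            return path
        for name, pre, add, delete in ACTIONS:
            if pre <= state:       # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                    # no plan exists

print(plan({"in_room", "door_closed"}, {"outside"}))
# ['go_to_door', 'open_door', 'exit']
```

    Real planners use far better search and richer action languages, but the shape of the problem is exactly this: goal, actions and environment in; a course of action out.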

    Dilemmas

    It makes sense that during the design of a CPS, we would like to model and explore how it might handle dilemmas. These are situations where some practical reasoning ability is tested.

    The prisoner’s dilemma is a popular example that illustrates the different outcomes that can be achieved through reasoning about what an agent believes.

    The situation is as follows:

    • Two persons have committed a crime; they are held in separate rooms.
    • If both confess, they will each serve two years in jail.
    • If only one confesses, that person will go free and the other will serve double the time (four years) in jail.
    • If both deny, they will each be held for one year.

    We can represent the various combinations of response in a payoff matrix (entries are years in jail for A and B respectively; lower is better):

                       B denies       B confesses
        A denies       A: 1, B: 1     A: 4, B: 0
        A confesses    A: 0, B: 4     A: 2, B: 2

    Confess is a dominant strategy for both players, yet if both deny they would each be better off. This is the dilemma.

    This can describe many situations, e.g. business cartels, nuclear armaments and climate change policies.

    Each game can be played repeatedly by the same players. Players then have the opportunity to establish a reputation for cooperation and thereby encourage other players to do the same.

    Thus, the prisoner’s dilemma may disappear.

    For egoists to play cooperatively, the game has to be played an indefinite number of times; there must be no known last round of the game.

    If there is a last round in the game, Confess is a dominant strategy in that round. This will also be true in the next to the last round, and so on.

    Players cooperate because they hope that cooperation will induce further cooperation in the future, but this requires a possibility of future play.

    The implications are very different if we see it from the one-shot game perspective or the repeated game perspective.

    If the game is repeated, the players have an incentive to cooperate so as not to be punished by their opponents in the following rounds.

    Rational agents will not be deterred by free-riders; they continue to go about their business and devise sanctions for those agents who do not cooperate.
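
    The repeated game is easy to simulate. In the sketch below, payoffs are years in jail (lower is better), taken from the scenario above; the two strategies shown, tit-for-tat and always-confess, are standard illustrations rather than anything prescribed here.

```python
# Payoffs as (years for A, years for B), keyed by (A's move, B's move).
YEARS = {
    ("deny", "deny"): (1, 1),
    ("deny", "confess"): (4, 0),
    ("confess", "deny"): (0, 4),
    ("confess", "confess"): (2, 2),
}

def tit_for_tat(opponent_history):
    """Deny (cooperate) first, then copy the opponent's previous move."""
    return "deny" if not opponent_history else opponent_history[-1]

def always_confess(opponent_history):
    """Defect every round, regardless of history."""
    return "confess"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each player's record of the opponent's moves
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        ya, yb = YEARS[(a, b)]
        total_a, total_b = total_a + ya, total_b + yb
        hist_a.append(b)
        hist_b.append(a)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))      # (10, 10): mutual denial every round
print(play(tit_for_tat, always_confess))   # (22, 18)
```

    Two cooperators do far better (one year per round each) than the pairing with a persistent defector, which is the intuition behind cooperation emerging in repeated play.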

  • Cyber Physical Systems and Agency

    One of the attractions of Cyber Physical Systems thinking is that it has the potential to not only automate tasks that humans don’t/can’t do, but also there is the possibility of task delegation.

    From the human manager’s perspective, we can describe goals to staff (the “what”) and expect them to achieve the goal without having to describe the “how”. The assumption in this case is that the staff member being delegated to has the capability to achieve the goal and knows how to do it.

    In effect, when we manage humans we rely on, and often exploit the agency that individuals have. This agency allows them to take decisions in response to a situation, without the delegator becoming involved.

    If we think of a human organisation as a domain for a CPS, there are a set of challenges that people deal with on a day-to-day basis, such as:

    • an organisation typically has more than one employee, and there are numerous other stakeholders (suppliers, customers, regulators, etc.) that interact with the business. A human agent needs to be able to deal with decision-making that might be both collective and distributed;
    • the collection of stakeholders, who themselves have their own goals, agendas and capabilities, leads to a system that has complexities; there may be many ways to achieve something, and depending upon the stakeholders involved new possibilities may emerge as a result of new interactions;
    • successful delegation assumes trust, both in the ability to achieve a goal and in achieving it in the most efficient way for a business whose resources are constrained;
    • systems that involve humans must suit the needs of humans. Humans interact in a variety of ways, including speech and visually, which means that the operational methods need to harness this to be successful.

    These issues make the design and specification of a CPS quite a challenge.

    How do we even approach the planning of such a system? How do we model an existing CPS, or a system that we want to convert to a CPS?

    How do we know that the result will behave as we expect it to?

    Multiagent systems

    We are, in effect, describing a multiagent system (MAS). A MAS is a collection of agents that interact with each other to achieve system or individual goals.

    A community of people is a MAS, such as a team, a department, a manufacturing plant, or even a supply chain.

    At the micro level, we are interested in how an individual agent goes about its business.

    At the macro level, we are concerned with how agents interact to achieve their desires, conscious that, as each agent has agency, it is feasible that they do not always share the same set of goals.

    In contrast with traditional enterprise application design, a MAS does not have formal control encoded into the DNA of its agents; a MAS relies upon communication between agents as the enabler of action and achievement.

    What is an agent?

    The simplest explanation is that an agent is an entity that can sense and act upon its environment. Sensing and actuation are of course fundamental components of a CPS. The link between sensing and actuation is some form of intelligence.

    From Wooldridge and Jennings (1995):

    “An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives”.

    So, intelligent agents:

    • have autonomy
    • act with a specific purpose;
    • are situated in an environment.

    Autonomy is an essential characteristic if we want to delegate complex tasks to intelligent agents. We want our agents to act in response to unforeseen and unpredictable events, though we still want to retain ultimate control over an agent if things go wrong.

    An intelligent agent has the following properties:

    • reactive: it provides a timely response to the sensed environment;
    • proactive: the agent’s behaviour is directed towards a goal;
    • autonomous: the agent will take the initiative when required;
    • social: an agent will cooperate and coordinate with other agents;
    • intelligent: the agent will be able to reason between its senses and its own knowledge base.

    A rational agent is said to be able to balance reactive and proactive behaviours. We don’t want our agent to react to every situation, and conversely we don’t want an agent that procrastinates by constantly planning, though some human managers may recognise both of these behaviours in human staff!
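
    As a toy sketch of this balance (the percept structure, the urgency flag and the action names are all invented), an agent might react to urgent percepts and otherwise pursue its current plan:

```python
def step(percept, plan):
    """Return the next action: react if the percept is urgent,
    otherwise take the next step of the current goal-directed plan."""
    if percept.get("urgent"):
        return "handle:" + percept["event"]   # reactive behaviour
    if plan:
        return plan.pop(0)                    # proactive behaviour
    return "idle"                             # nothing to do

plan = ["move_to_station", "pick_part"]
print(step({"urgent": True, "event": "obstacle"}, plan))  # handle:obstacle
print(step({}, plan))                                     # move_to_station
```

    The open research question is not this dispatch logic but how to tune the threshold between "react now" and "stick to the plan".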

    Agency as a design metaphor

    Agents and agency help us understand human societies at all levels, as they assist us in abstracting ourselves away from the complex detail.

    Agents are perhaps the ultimate “black box” approach to encapsulation that is a feature of Object Oriented software development.

    In the same way that a capable employee can be re-deployed onto a different task, a software agent is reusable in different settings.

    Indeed, the new environment may not have been present when the agent was designed originally, and it is this level of adaptation that is required by a future CPS.

    MAS also assist in the modelling of interactions within a system, and game theoretic models of interaction can be explored. Such modelling can allow us to observe the effects of more sophisticated emergent behaviours between agents, such as coalition forming, bargaining and negotiation.

    In a sense, we take such behaviours for granted in human systems, but these are exactly the behaviours that a CPS must possess if we are to delegate any meaningful work to one.

    Agent Oriented Software Engineering (AOSE) is an approach to the modelling and design of systems using the properties of agency. One example of a situation that involves the interaction between two agent actors is illustrated in the Contract Net protocol below:

    Sequence diagram of the Contract Net protocol

    This protocol defines the communicative acts that need to take place between the two actors (agents), and in itself creates a specification for the behaviours that each of the agents require to function correctly when negotiating a transaction.
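
    A minimal sketch of the negotiation the protocol describes: a manager announces a task, contractors propose bids (or refuse), and the manager awards the task to the best proposal. The contractor names and the lowest-cost award rule are illustrative assumptions, not part of the protocol specification itself.

```python
def contract_net(task, contractors):
    """Announce -> collect proposals -> award to the lowest-cost bidder."""
    bids = []
    for name, bid_fn in contractors.items():
        cost = bid_fn(task)          # call-for-proposals / propose
        if cost is not None:         # None models a refusal
            bids.append((cost, name))
    if not bids:
        return None                  # no contractor accepted the task
    cost, winner = min(bids)         # accept-proposal to the winner
    return winner

# Invented contractors: one only bids on drilling, one bids on anything.
contractors = {
    "machine_a": lambda task: 5.0 if task == "drill" else None,
    "machine_b": lambda task: 3.5,
}
print(contract_net("drill", contractors))  # machine_b (lower cost)
```

    The real protocol also specifies result reporting and failure messages; the point here is that the interaction pattern itself is a specification of agent behaviour.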

    Isn’t agency just Artificial Intelligence?

    There is a view that AI is limited to the study of the atomic elements that contribute to intelligence: reasoning, planning, learning and perceiving. Agency includes communication as a means by which the atomic components can be combined, and as such a MAS is actually distributed AI.

  • Why cybersecurity is not secure enough for IIoT

    Industry is in a constant state of flux as technologies are being developed, evaluated and deployed in order to create competitive advantage, increase productivity and efficiency, and to work towards a sustainable future.

    Ever-cheaper embedded devices and transducers, together with pervasive networking infrastructure, have resulted in the rapid uptake of equipment that is now being implemented at massive scale. Consumers are becoming more familiar with the term Internet of Things (IoT); this technology is also driving a revolution within the commercial environment, where it is known as the Industrial Internet of Things (IIoT).

    Since IIoT devices are being implemented across industry, the sheer increase in computational nodes that are inter-connected via a network inevitably increases the potential points of system vulnerability: more and more “back doors” are opened into previously secure (albeit unconnected) infrastructure. The value of sharing data may be arriving at some cost.

    Security is therefore a pertinent issue for industry. A data breach from an organisation may leak valuable Intellectual Property (IP) to a competitor, with potentially disastrous consequences. A leak may expose confidential customer data, at the risk of jeopardising an organisation’s reputation. In the case of Cyber Physical Systems, there could be human lives lost.

    It is the development and adoption of new technologies and business models that is at the heart of these new vulnerabilities. Cloud computing has transformed the infrastructure of many organisations by enabling processing and storage to be outsourced to shared computing facilities in data centres, enabling computing to be an on-demand, elastic utility.

    Wireless communications enable data to be shared between devices where cables are either difficult to lay or their installation cost is prohibitive.

    Both of these developments require organisations to increase their awareness of security control measures; some organisations get it wrong and suffer the consequences.

    Wireless devices can be disabled remotely, or perhaps more worryingly, can be used to “listen in” to the data that is being sensed. CPS can be taken over, and physical actuation compromised.

    Security systems to date have primarily relied upon authentication mechanisms that use a central authority to establish relationships of trust between known components. As the explosion of IIoT devices continues, such authentication systems cannot scale sufficiently and new methods – such as multiparty authentication – are viewed as one possible way of addressing this challenge.

    Machine-to-Machine (M2M) communication is a key factor within digital manufacturing and the Industry 4.0 movement. This enables more data to be collected at the source of a manufacturing process so that tighter integration and coordination can be exploited between collections of manufacturing plant. The Internet means that the physical location of plant does not affect its ability to be included within a system, and thus much more macro-level system optimisations are possible.

    The issue is that where once the recognised risk was a rogue operator or factory worker leaking process data to a competitor for personal profit, we now have the possibility that more detailed process data, which may describe an entire function of an industry rather than just one piece of plant, can be accessed remotely and silently. Thus, security is becoming a major concern for IIoT adopters.

    As such, cybersecurity from an information security perspective is somewhat limited in its effectiveness for IIoT, as it is concerned with the protection of data. IIoT’s inclusion of physical actuation as part of a control system means that security mechanisms have to take account of control mechanisms as well; it is feasible that an adversary may hack an IIoT system not to steal the data, but to “mess up” a process.

  • The importance of analytics

    Everybody is talking about analytics. Together with Artificial Intelligence (AI), it will apparently solve all of our business problems.

    Analytics sounds like analysis, so it is natural to make the comparison to try and understand what is different between the two.

    Analysis is defined as “the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it”. In common parlance, analysis often means the act of using quantitative statistics to explain or discover something of interest.

    Analytics is explained as “the discovery, interpretation, and communication of meaningful patterns in data. It also entails applying data patterns towards effective decision making”.

    At first glance, there isn’t much of a difference between these two statements, which I am sure does not help people understand any distinction between the two terms.

    For me, analysis remains the core activity of reducing the complexity of data so that it can be comprehended. Analytics is much broader than this, as not only does it include the methods and tools required to create a platform for analysis to take place, it also includes the context in which the data to be analysed resides. Analytics is the scientific thinking and processes behind analysis, the whys and wherefores, and therefore analysis is a component of analytics.

    In the manufacturing domain at least, predictive analytics is very topical, and in the broader business domain in general, decision analytics is popular with enterprise software vendors.

    Predictive analytics is essentially forecasting, which itself is a mature statistical subject. It is a human trait to want to understand and plan for the future, and considerable research and experience has developed knowledge in this field.

    Being able to predict behaviour using a set of input variables enables transport operators to replace service items more economically than traditional planned maintenance schemes allow. Similarly, while two different items of plant may both utilise the same bearing type, the difference in work loading on each machine may result in different wear patterns and therefore different service lives.

    It is therefore more prudent to replace each bearing when it is predicted to be approaching failure, rather than on a pre-determined date. Such a scenario is described as predictive maintenance, and often includes the topic of condition monitoring.
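
    As an illustration of the underlying forecasting idea (the readings, the alarm threshold and the linear-wear assumption are all invented), one can fit a trend to condition-monitoring data and estimate when it will cross an alarm level:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented vibration readings (mm/s RMS) sampled weekly.
days = [0, 7, 14, 21, 28]
vibration = [1.0, 1.2, 1.4, 1.6, 1.8]
THRESHOLD = 3.0  # invented alarm level

slope, intercept = linear_fit(days, vibration)
days_to_threshold = (THRESHOLD - intercept) / slope
print(round(days_to_threshold))  # 70: schedule replacement around day 70
```

    Real condition monitoring uses far richer models, but even a simple trend beats replacing a healthy bearing on a fixed calendar date.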

    Businesses often want to identify segments in their customer base, in order to develop ideas for innovative products and services that might appeal to those customer types. It requires analysis that can take a collection of data and identify the characteristics that enable that data to be classified into discrete groups. This is referred to as decision analytics.
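
    As a toy illustration of such segmentation (the spend figures are invented, and a simple one-dimensional k-means stands in for real clustering methods):

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster 1-D values around k centres by repeated reassignment."""
    centres = [min(values), max(values)]      # crude initialisation, k=2
    groups = [values, []]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centres[c]))
            groups[i].append(v)               # assign to nearest centre
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]  # recompute centres
    return centres, groups

# Invented annual spend for six customers: two clear segments.
spend = [120, 150, 130, 900, 950, 880]
centres, groups = kmeans_1d(spend)
print(sorted(round(c) for c in centres))  # [133, 910]
```

    The two centres expose a low-spend and a high-spend segment, which is precisely the kind of discrete grouping the analysis above is after.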

    Taken together, both predictive analytics and decision analytics are inherent parts of digital manufacturing, and are thus commonly referred to when discussing Industry 4.0.

    Inexpensive hardware is assisting the adoption of Industrial Internet of Things, and this is creating a deluge of data that needs to be analysed and visualised in ways that aid its comprehension.

    Analytics thus helps us not only to understand the insight that lies within data; it also assists us in coping with increased data volume by providing the tools, platforms and methods to manage and analyse that data.

  • Simple sensing is great value

    In a previous life I was a production manager. One of the frustrations of such a role is the feeling that more control and coordination could be exerted “if only I had the data for…”. Modern manufacturing plant often provides either local instrumentation, or the remote logging of its operational data, but if we want to think about integrating plant, and therefore think about optimising operations of the whole system, there is usually some basic information that is missing.

    In the late 1990s, transducers for sensing were expensive, and the computational resource required to deal with the sensing data was similarly difficult to justify. This situation was also compounded by the general lack of network infrastructure, which was essentially pre-WiFi. Radio links were available, but they were a) costly and b) prone to interference, or had poor transmission range.

    Fast forward to recent times, where:

    • transducers are cheap;
    • microprocessors are cheap, more than capable for signal conditioning and limited data storage, and easy to network;
    • network availability is pervasive via wired, WiFi, Bluetooth, 4G, etc.

    What does this mean for the production/operations manager who still has the same answer of “if only I had …” when they are looking for ways to increase productivity?

    It means that we are in an age where it is cheap to experiment with sensing, and it is cheap to integrate the sensing that may already exist, but which is not being used for holistic decision making.

    There still exists the situation where manufacturing plant does not produce data while it operates, and it is incumbent upon operators to count items to record data about operations.

    Let’s say that we want to gather some data from a production line. Products are produced and transported to a destination via a conveyor belt. We’ll assume that the products are identical, and that they all follow each other in single file. The production manager wants some indication of what is happening in real-time, plus a set of alerts when a significant event has occurred.

    So, we fasten a light source and a photocell, or some sort of proximity detector to the side of the conveyor belt. What do we record?

    In terms of data, we record a time and date stamp every time an object is detected. We assume that the conveyor belt moves only in one direction (it doesn’t reverse in some situations for instance), and that the sensor does not produce false positives (when it says that there is an object present, we can trust the statement).

    As the production line operates, objects move along the conveyor and the sensor produces data in the form of a stream of time and date stamps, which might look like this:

    03-09-2019 10:57:23

    03-09-2019 10:57:29

    03-09-2019 10:57:35

    03-09-2019 10:57:41

    Let’s assume that we have connected our sensor to a small microcontroller board such as an Arduino, or even a Raspberry Pi. That board will enable the incoming signal from the sensor to be augmented with a timestamp and written to a file, or sent via a network connection to a PC where the data is recorded.

    What can we do with that data? With one sensor we can:

    • count the total number of objects produced;
    • measure the rate at which objects are produced;
    • identify events that might interrupt the flow (e.g. a system breakdown, a changeover between product types, etc.)
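    All three of these can be derived from the timestamp stream alone. The sketch below is a minimal illustration; the 30-second gap threshold and the shape of the alerts are assumptions for the example, not anything prescribed by the scenario.

```python
from datetime import datetime, timedelta

FMT = "%d-%m-%Y %H:%M:%S"

def analyse(stamps, gap_threshold=timedelta(seconds=30)):
    """Derive a count, an hourly rate, and flow-interruption alerts
    from a stream of detection timestamps."""
    times = [datetime.strptime(s, FMT) for s in stamps]
    count = len(times)
    rate = None
    alerts = []
    if count >= 2:
        elapsed = (times[-1] - times[0]).total_seconds()
        rate = (count - 1) / elapsed * 3600.0  # objects per hour
        for earlier, later in zip(times, times[1:]):
            if later - earlier > gap_threshold:
                alerts.append((earlier, later))  # suspiciously long gap
    return count, rate, alerts
```

    Applied to the four stamps above, this reports 4 objects at 600 objects per hour, with no alerts; a missing object would show up as a gap between consecutive stamps.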

    If we augmented the system with an additional sensor, placed above the first, we could also distinguish between two product types if each has a different height. But we shall keep to the simpler example of uniform products.

    Now that we have the data, we can produce very simple reports of production output and rate, while also creating alerts for when objects appear to have stopped arriving.

    Furthermore, the data is now being captured and stored in a way that can now be synthesised with other such systems – adding more sensing to other parts of the plant will now enable a more holistic view of the operations to be created.

    These first, tentative steps towards data capture are an important introduction to the modernisation of industry through digital manufacturing (or Industry 4.0). Whilst the latest examples of integrated manufacturing plants illustrate the possibilities of Cyber Physical Systems to coordinate, control and actuate physical systems on our behalf, there is still a fundamental reliance on the generation of data via sensing, its collection, processing, and subsequent reporting in a manner that humans can comprehend.

  • What is model checking?

    As software systems become more complex, and we use rapidly developing technologies to connect systems together to achieve new objectives, the scope for the introduction of errors increases. Such errors may cost an organisation money or they may cause harm.

    We can reason about the range of possible outcomes of a system only up to a certain point; beyond that we are reliant upon the rigour of a device’s design, as well as the information that exists to describe how the system is intended to operate.

    As such, verification of systems is an important topic for the emerging Cyber Physical Systems that constitute much of the thinking around Industry 4.0 and digital manufacturing. Formal methods can both simplify the process of verification during the design stage and enable the automation of testing, which is useful when a system grows to a size that is beyond what we can comprehend.

    The aim of a formal approach to design is to use mathematical techniques and rigour to verify that a system is specified correctly and will operate correctly. Research and industrial practice have led to the development of tools that help manage the verification process, including the automation of the many permutations of behaviour that a system is required to demonstrate a response to.

    Model checking is an approach where every system state is explored and tested in an exhaustive manner. This exploration then presents results that document how a model will operate in relation to its original specification.

    The use of abstraction within models allows very large software specifications to be verified, assisting the discovery of inconsistencies between the model and its specification, prior to system realisation and deployment.

    Another feature of model checking is that the use of mathematics to specify the model means that any queries upon that model can reveal ambiguous properties or characteristics, which can then be re-specified in much greater detail.
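    The core of the approach – exhaustive exploration of every reachable state, testing a property at each one – can be sketched in a few lines. The bounded-counter model below is a deliberately tiny illustration, not output from a real model checking tool.

```python
from collections import deque

def check(initial, successors, invariant):
    """Breadth-first exploration of every reachable state, testing an
    invariant at each one. Returns (True, None) if the invariant holds
    everywhere, otherwise (False, counterexample_trace)."""
    parent = {initial: None}  # doubles as the visited set
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            # walk back to the initial state to build a counterexample
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return False, list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

# Toy model: a counter guarded so it never exceeds its capacity of 3
ok, _ = check(0, lambda c: [c + 1] if c < 3 else [], lambda c: 0 <= c <= 3)

# A "buggy" variant with no guard (bounded at 5 so exploration terminates)
ok2, trace = check(0,
                   lambda c: [min(c + 1, 5), max(c - 1, 0)],
                   lambda c: c <= 3)
# ok is True; ok2 is False, with trace [0, 1, 2, 3, 4] as the counterexample
```

    The counterexample trace is the valuable artefact here: it shows exactly how the system can reach a bad state, which is what a designer needs in order to repair the specification.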

    If we consider the adoption of IoT devices in a manufacturing company, there will be a requirement to manage the production of many concurrent data sources. Attaching cheap hardware to manufacturing plant is much easier than crafting the software required to integrate that component into an existing system.

    This leads to the possibility that data sources and data processing may be added to existing manufacturing systems that cannot fully realise the potential value of the extra data. In fact, the addition of the data may cause confusion and lead to an obfuscation of existing insight, to the detriment of the business.

    Model checking is thus a powerful approach to the design of systems, one that protects against the enthusiastic augmentation of software that might actually prevent the desired objective of greater insight from being achieved. Whilst model checking would ideally be applied to all software design from the outset, it can be effectively utilised to verify the design of components, which means that pre-existing systems can be developed further with sub-systems that have been formally tested and shown to be rigorous.

  • Making manufacturing formal

    If we take the ‘buzz’ around Industry 4.0 and the resurgence of interest in Artificial Intelligence (AI), and add a sprinkling of imagination, we end up with some fantastic possibilities for the manufacturing systems of the future.

    Industry leaders have long known of the need to combine competitive advantage with minimal costs. The ideal would be to somehow develop the ‘best’ product whilst controlling the creation and retention of intellectual property, and have someone else make the goods. Of course, this is a difficult balancing act that demands a finer-grained, more detailed understanding of manufacturing processes, environmental economics and human behaviours than most manufacturing companies possess.

    A great deal of manufacturing control exists as tacit knowledge, gained by machine operators and their supervisors over a period of time.

    The introduction of computers, and lately, the widespread availability of very cheap computation through embedded and IoT devices, has enabled considerable strides to be made in pursuit of resource optimisation.

    But, as any economist knows, there is considerable value to be had if manufacturing organisations can collaborate at the process level, rather than through board-level business agreements. If we can use technology to realise and re-engineer the concept of the ‘supply chain’, then there will be opportunities for waste minimisation, enhanced quality, increased efficiency and, ultimately, profit maximisation.

    So, what are the challenges that so far have prevented the manufacturing domain from realising this hitherto hidden value?

    Manufacturing systems have three inherent characteristics that need to be dealt with. First, there is the distribution of activity, much of which occurs in parallel, and this is difficult to comprehend at the macro level. Second, manufacturing systems must be able to adapt to changes in economic demand, which means that a large degree of flexibility needs to be accommodated. Third, such systems also have to accommodate uncertainty in many guises: economic downturns and booms; material shortages and abundance; employee skills currency, and so on.

    Taken together, a manufacturing system is a highly complex entity and is thus an interesting and potentially rewarding area to study.

    If we return to the shop floor of a manufacturing company though, how do the staff cope with uncertainty in material flows, erratic sales forecasts, and staff shortages, to name just a few of the daily issues?

    Organisations are now expected to offer and deliver enhanced customer service levels. This demand originated in swifter lead-times and is being compounded by mass-customisation, which accommodates tailored customer needs at massive scale. As a result, it is becoming impossible for staff to comprehend the necessary variables and make sensible business decisions in the time available. They need assistance, whether it be visualisation to aid comprehension of complex data, simulation to understand the potential impact of different strategies, or guided automation to reduce the number of options from thousands to perhaps three or four choices.

    A conversation with an experienced production supervisor or production manager will quickly reveal the experience that is called upon daily to deal with systemic problems in product design, manufacturing processes, organisation, scheduling and so on – all additional factors that must be considered when managing manufacturing operations.

    Many manufacturing companies are working with computer systems that at one level enable transactions to be completed, but which also hinder the organisation’s ability to be flexible and adaptive to uncertainties. In effect, there are restrictive characteristics embedded within the manufacturing systems that staff learn to cope with.

    Thus, if we start to consider the collaboration possibilities of Industry 4.0, enabled by accessible, cheap computational devices, there is also the potential for chaos if a collaborator cannot contribute positively to the shared objectives. In fact, there are likely to be casualties among those companies that cannot adapt quickly enough.

    Since the computational hardware that enables data processing and sharing via networking is increasingly embedded in manufacturing plant, it is inevitable that the solution to enhanced collaboration will be, at least in part, software. We ignore the human factors of such changes at our peril, but the underlying connectivity of processes, data, knowledge exchange and visualisation for comprehension are all facilitated by tools that abstract us away from the detail, while ensuring that the detail informs decisions appropriately.

    In effect, we need to go back to the basics of software engineering; our systems need to be fit for purpose, meaning that they deliver what we need from them in a predictable way. They also need to be able to tolerate changes and be flexible. And they should be documented in a way that enables future developments to be understood in the context of the existing system capabilities. As we move towards adaptive systems, that perhaps use statistical learning or genetic algorithms to generate new functionality through system use, the requirement for self-describing systems becomes much more relevant.

    If a system design is documented, it can be tested even before it has been written as program code. If the functionality is tested against a known set of cases, it can be wrapped up as an abstraction that simplifies the design of other systems that utilise that functionality.

    Formal methods are approaches that can support the development of systems that operate in complex environments, where there are consequences if uncertainties are not catered for. Such methods use mathematics to precisely define concepts and system states, whilst also enabling logical reasoning to be used to test a specification against what is required of the eventual system. Perhaps losing a network connection between two items of manufacturing plant is something that a factory can tolerate and overcome. But what would the impact be if a number of machines, across several sites (or maybe cutting across more than one industry), relied upon the optimisation of a system that ensures a mission-critical order is delivered complete?
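    One practical way to exercise a specification before any production code exists is to write its pre- and postconditions as executable predicates and check a candidate design exhaustively over a small, finite domain. The stock-allocation example below, including its names and bounds, is hypothetical and purely illustrative.

```python
def satisfies_spec(allocate, limit=30):
    """Exhaustively test a candidate allocation design against the written
    specification, over a small finite domain, before any production code
    exists. allocate(stock, order) -> quantity to ship."""
    for stock in range(limit):
        for order in range(limit):
            shipped = allocate(stock, order)
            if not 0 <= shipped <= min(stock, order):  # never ship what we lack
                return False
            if stock >= order and shipped != order:    # complete when possible
                return False
    return True

# The candidate design min(stock, order) meets the specification;
# naively shipping the full order does not, because it ships stock we lack.
```

    The exhaustive check is only over a small domain, but it catches designs that violate the specification long before they are embedded in plant software.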

    It is the informal that we deal with on a daily basis, and this is something that needs to be addressed as part of system design, as well as system integration. Cyber Physical Systems rely on communication and collaboration to manifest enhanced performance. As such, trusting these systems requires more formality in their make-up.

    Manufacturing systems are complex, but the next significant development opportunity of Industry 4.0 will only increase the complexity of design, validation and implementation of such systems. We need to replace informality with formality in our response.

  • Managing software complexity

    The construction of software for an application is a complicated process. We employ Software Engineers to develop software in a way that helps us arrive at robust solutions, and it is the training and experience of such engineers that we rely on.

    One of the many effects of the adoption of Internet of Things technologies is the realisation that the interconnectedness of physical objects with other objects, and with human beings, is creating systems that are inherently complex.

    If we consider the scenario where a machine-tool is controlled by a microprocessor, usually referred to as an embedded system, the range of outcomes that the system must govern, whilst numerous, is conceivable to the software engineer and can be accounted for in the resultant program code for that application.

    If, however, we consider a situation where collections of machine tools, all of different types, are networked such that they can exchange information for the purposes of enhanced control, optimising resources and reducing wastage, the complexity of such a system is more challenging to fathom. If we then augment such a system with inputs from functions within a manufacturing supply chain, the scope of complexity becomes more difficult still to comprehend.

    The combination of localised sensing, data processing and analytics, data exchange, data fusion and aggregation, data storage and visualisation is what constitutes a system (perhaps a Cyber Physical System) that can offer considerable benefits for an organisation that seeks competitive advantage. But how do our software engineers deal with such a challenge?

    There has been a tradition of developing the craft of software engineering, whereby the use of methods and frameworks, when combined with real-world experience of software creation, has culminated in the development of the skills and knowledge that we recognise as befitting the role of a “software engineer”.

    A significant proportion of the software engineering role is the ability to deal with complexity in system design, but also to handle the effects of complexity after a system has been implemented, through updates that might be required as a result of new requirements, unforeseen requirements, and system design deficiencies, otherwise known as “bugs”.

    As we start to comprehend the potential impact of the IoT era, there is an emerging awareness of the need to be able to design and test – more exhaustively – software before it is deployed.

    The development of formal approaches to software development is something that has been an active research topic in academic arenas for many years, with its industrial application being generally limited to “safety-critical” systems such as nuclear power plants, aircraft control systems, etc.

    But we are now in the midst of a period where CPS are increasingly accessible and as a consequence they are being introduced into application areas by individuals who a) are ignorant of formal approaches to software development and b) are not software engineers and are therefore lacking even the “craft” of software engineering.

    What do we mean by formal methods?

    Formal methods are an approach where the ability to analyse software design is an inherent part of the design process. This facilitates not only the construction of models that replicate the system to be developed (and thus use abstraction to manage the complexity), but it means that a system model can be tested and evaluated before a line of code is even written.

    The use of mathematical notation means that the specification of a system can be expressed precisely, but that it can also be formally reasoned against to test for inconsistencies.
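    As a small illustration of the style (the names here are hypothetical, not drawn from any particular notation), a bounded buffer might be specified with an invariant and pre/postconditions, where primed names denote the state after an operation:

```latex
% Invariant: must hold in every reachable state of the buffer
\mathit{Inv} \;\equiv\; 0 \le \mathit{count} \le \mathit{capacity}

% Operation put: add one item to the buffer
\mathbf{pre}(put):  \quad \mathit{count} < \mathit{capacity}
\mathbf{post}(put): \quad \mathit{count}' = \mathit{count} + 1
```

    Because the notation is precise, a reasoning step such as “if count < capacity before put, then count′ ≤ capacity after it” can be checked mechanically, and any property that cannot be established signals an ambiguity to be re-specified.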

    This formality offers considerable advantages for software development such as:

    • software specifications are more explicit and rigorous. This supports the goal of requirements engineering to ensure that the proposed system delivers what is needed;
    • program code is more rigorous as there is a formal underpinning for each aspect of the software being developed. The software engineer knows that functions within the model have been tested against the specification, and as such the program functionality has already been verified as correct. There is the additional benefit that the formal declaration of requirements, together with logical reasoning, means that some degree of testing can actually be automated;
    • system maintenance and future modification will be simpler, partly because such a system has been more thoroughly designed, but also because the underlying documentation is explicit and includes the reasoning for the inclusion of all functionality.

    The above are compelling arguments for a return to the thinking around formal methods, and how these can help us develop the next generation of IoT-inspired systems.