Systems that reason

It is commonplace for autonomous systems to be referred to as smart systems. These are characterised by features such as high availability, the ability to repair themselves, and the intelligence to manage their own capabilities in response to different scenarios.

A Cyber Physical System (CPS) may comprise a number of different operating systems, and will be able to interact with an internet communication network, which might include switching hardware and/or software. It will also have some ability to monitor processes in critical applications.

The “smartness” assumes that a CPS can also interact with, or relate to, its own Knowledge-Based System (KBS): an evolving data structure that represents relational information of relevance to the CPS.

The KBS might be an expert system, a recommendation engine, a decision support system, a broader knowledge and information management system, or even a combination of these.

For the CPS to be truly adaptive, it should support updating its KBS in response to any experience that it gains. This includes modifying pre-existing goals, or adding and replacing goals. Similarly, a different decision-making strategy might suit a particular set of circumstances, and a CPS needs to be able to accommodate this.
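As a rough sketch only, the snippet below shows one way a KBS of this kind might expose such updates. The class and method names (KnowledgeBase, record_experience, add_goal, replace_goal) and the priority field are invented for illustration rather than taken from any particular framework.

```python
from dataclasses import dataclass, field


@dataclass
class Goal:
    """An illustrative goal record: a name plus a priority used when goals compete."""
    name: str
    priority: int = 0


@dataclass
class KnowledgeBase:
    """A toy knowledge-based system: facts the CPS currently believes, and goals it maintains."""
    facts: set = field(default_factory=set)
    goals: dict = field(default_factory=dict)

    def record_experience(self, observation: str) -> None:
        # Fold new experience into the belief set.
        self.facts.add(observation)

    def add_goal(self, goal: Goal) -> None:
        self.goals[goal.name] = goal

    def replace_goal(self, old_name: str, new_goal: Goal) -> None:
        # Drop a goal that experience has made obsolete and adopt its replacement.
        self.goals.pop(old_name, None)
        self.goals[new_goal.name] = new_goal


kb = KnowledgeBase()
kb.record_experience("pump_3 vibration above threshold")
kb.add_goal(Goal("maintain_flow_rate", priority=1))
kb.replace_goal("maintain_flow_rate", Goal("schedule_pump_maintenance", priority=2))
```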

As such, research into cognitive, robotic and agent systems is pertinent to the design of a CPS.

In essence, the CPS (or smart system) may be a system of considerable and varying complexity, but it must be able to take the initiative and demonstrate agency. To do this it must have the capacity to reason about a situation and formulate its own strategy to achieve a goal.

Practical reasoning is the process of figuring out what needs to be done, so it is reasoning directed towards actions. It is about evaluating the outcomes of different, competing options, and then deciding which is the best course of action to satisfy goals, desires and beliefs.

In contrast, theoretical reasoning is directed specifically towards beliefs only.

As humans, we tend to undertake two activities when reasoning practically. First, we deliberate: we decide upon the state of affairs that we want to achieve.

Second, we decide how to achieve that state of affairs; we refer to this as means-end reasoning. The outputs of deliberation are intentions, and the output of means-end reasoning is a plan.
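The split between the two activities can be made concrete with a small sketch. The thermostat-style agent below is purely illustrative: the beliefs, intentions and plans are hard-coded stand-ins, and none of the names come from a real agent framework.

```python
# A toy agent that separates deliberation from means-end reasoning.

def perceive():
    # Stand-in for sensing; in a real CPS this would read from the environment.
    return {"temperature": 16, "target": 21}

def deliberate(beliefs):
    # Deliberation: decide WHAT state of affairs to achieve (the intention).
    if beliefs["temperature"] < beliefs["target"]:
        return "raise_temperature"
    if beliefs["temperature"] > beliefs["target"]:
        return "lower_temperature"
    return "hold_temperature"

def means_end_reasoning(intention):
    # Means-end reasoning: decide HOW to achieve the intention (the plan).
    plans = {
        "raise_temperature": ["switch_heater_on", "recheck_in_five_minutes"],
        "lower_temperature": ["switch_heater_off", "open_vent"],
        "hold_temperature": [],
    }
    return plans[intention]

beliefs = perceive()
intention = deliberate(beliefs)          # the output of deliberation is an intention
plan = means_end_reasoning(intention)    # the output of means-end reasoning is a plan
print(intention, plan)                   # raise_temperature ['switch_heater_on', ...]
```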

Intentions in practical reasoning

Intentions pose problems for agents, who need to determine ways of achieving them.

If I have an intention to achieve X, you would expect me to devote resources to deciding how to bring about X.

Intentions provide a “filter” for adopting other intentions, which must not conflict.

If I have an intention to achieve X, you would not expect me to adopt an intention Y such that X and Y are mutually exclusive.

Agents track the success of their intentions, and are inclined to try again if their attempts fail.

If an agent’s first attempt to achieve X fails, then all other things being equal, it will try an alternative plan to achieve X.

Agents believe their intentions are possible.

That is, they believe there is at least some way that the intentions could be brought about.

Agents do not believe they will not bring about their intentions.

It would not be rational of me to adopt an intention to achieve X if I believed X was not possible.

Under certain circumstances, agents believe they will bring about their intentions.

It would not always be rational of me to believe that I will bring my intentions about, since intentions can fail. Moreover, it would make little sense to adopt X as an intention if I believed X was inevitable anyway.

Agents need not intend all the expected side effects of their intentions.

If I believe X implies Y and I intend X, I do not necessarily intend Y as well. (Intentions are not closed under implication.)

This last property is known as the side effect or package deal problem. We may believe that going to the dentist involves pain, and we may also intend to go to the dentist, but this does not imply that we intend to suffer pain.

Notice that intentions are much stronger than mere desires:

“My desire to play basketball this afternoon is merely a potential influencer of my conduct this afternoon. It must vie with my other relevant desires [. . . ] before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the pros and cons. When the afternoon arrives, I will normally just proceed to execute my intentions.” (Bratman, 1990)

Planning

Since the early 1970s, the AI planning community has been closely concerned with the design of artificial agents.

The basic idea is to give an agent:

  • a representation of the goal or intention to achieve;
  • a representation of the actions it can perform;
  • a representation of the environment;

and have it generate a plan to achieve the goal.

Planning is essentially automatic programming: the design of a course of action that will achieve some desired goal.

Within the symbolic AI community, it has long been assumed that some form of AI planning system will be a central component of any artificial agent.

Building largely on the early work of Fikes and Nilsson, many planning algorithms have been proposed, and the theory of planning has been well-developed.
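A minimal sketch of that style of planner, under the usual STRIPS-like assumptions: states and goals are sets of facts, actions have preconditions plus add and delete effects, and a naive breadth-first search assembles the plan. The manufacturing-flavoured domain and every name in it are made up for illustration.

```python
from collections import deque

# Actions in a STRIPS-like style: preconditions, facts added, facts removed.
ACTIONS = {
    "pick_up_part":     {"pre": {"arm_free", "part_on_belt"},
                         "add": {"holding_part"},
                         "del": {"arm_free", "part_on_belt"}},
    "place_in_fixture": {"pre": {"holding_part"},
                         "add": {"part_in_fixture", "arm_free"},
                         "del": {"holding_part"}},
    "weld_part":        {"pre": {"part_in_fixture"},
                         "add": {"part_welded"},
                         "del": set()},
}

def plan(initial, goal):
    """Breadth-first search over states (sets of facts) for an action sequence
    that makes every goal fact true."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, action in ACTIONS.items():
            if action["pre"] <= state:         # action is applicable
                nxt = (state - action["del"]) | action["add"]
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # no plan found

print(plan({"arm_free", "part_on_belt"}, {"part_welded"}))
# -> ['pick_up_part', 'place_in_fixture', 'weld_part']
```

The point of the representation is exactly the one made above: given descriptions of the goal, the available actions and the environment, the agent generates the course of action itself.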

Dilemmas

It makes sense that, when designing a CPS, we would like to model and explore how it might handle dilemmas: situations in which its practical reasoning ability is tested.

The prisoner’s dilemma is a popular example that illustrates the different outcomes that can be reached when an agent reasons about what it believes the other party will do.

The situation is as follows:

  • Two people have committed a crime and are held in separate rooms.
  • If both confess, they will each serve two years in jail.
  • If only one confesses, the confessor will go free and the other will serve double the time (four years) in jail.
  • If both deny the crime, they will each be held for one year.

We can represent the various combinations of response in a payoff matrix, as below:

Prisoner’s dilemma payoff matrix (years in jail, A / B):

                   B denies      B confesses
  A denies         1 / 1         4 / 0
  A confesses      0 / 4         2 / 2

Confess is a dominant strategy for both players. Yet if both were to Deny, they would each be better off. This is the dilemma.
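The dominance claim is easy to check mechanically. The short snippet below is only a sketch; it simply transcribes the jail terms above (lower is better) and computes each player's best response.

```python
# Years in jail for (row player, column player); lower is better.
YEARS = {
    ("deny", "deny"):       (1, 1),
    ("deny", "confess"):    (4, 0),
    ("confess", "deny"):    (0, 4),
    ("confess", "confess"): (2, 2),
}

def best_response(opponent_move):
    # The row player's best reply to a fixed opponent move (fewest years in jail).
    return min(["deny", "confess"], key=lambda move: YEARS[(move, opponent_move)][0])

# Confess is best whatever the opponent does, so it is a dominant strategy...
print(best_response("deny"), best_response("confess"))          # confess confess
# ...yet mutual denial beats mutual confession: the dilemma.
print(YEARS[("deny", "deny")], YEARS[("confess", "confess")])   # (1, 1) (2, 2)
```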

This structure describes many real situations: business cartels, nuclear armament, and climate change policy, for example.

The game can also be played repeatedly by the same players. Players then have the opportunity to establish a reputation for cooperation and thereby encourage other players to do the same.

Thus, the prisoner’s dilemma may disappear.

For egoists to play cooperatively, the game has to be repeated indefinitely; there must not be a known last round of the game.

If there is a known last round, Confess is a dominant strategy in that round. By backward induction, the same is true in the next-to-last round, and so on.

Players cooperate because they hope that cooperation will induce further cooperation in the future, but this requires a possibility of future play.

The implications are therefore very different depending on whether we take the one-shot game perspective or the repeated game perspective.

If the game is repeated, the players have an incentive to cooperate so as not to be punished by their opponents in the following rounds.
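As a rough illustration of that incentive, the simulation below pits the textbook tit-for-tat strategy against an unconditional defector over a fixed number of rounds. The payoffs reuse the jail terms above (lower is better); the strategies and the round count are illustrative choices rather than anything prescribed by the dilemma itself.

```python
# Years in jail for (player A, player B); lower is better.
YEARS = {("deny", "deny"): (1, 1), ("deny", "confess"): (4, 0),
         ("confess", "deny"): (0, 4), ("confess", "confess"): (2, 2)}

def tit_for_tat(opponent_history):
    # Cooperate (deny) first, then copy whatever the opponent did last round.
    return "deny" if not opponent_history else opponent_history[-1]

def always_confess(opponent_history):
    return "confess"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)       # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        years_a, years_b = YEARS[(move_a, move_b)]
        total_a += years_a
        total_b += years_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))       # (10, 10): sustained cooperation
print(play(tit_for_tat, always_confess))    # (22, 18): both far worse than (10, 10)
```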

Rational agents will not be deterred by free-riders; they will continue to go about their business and devise sanctions for those agents who do not cooperate.
