Computational Logic has been developed in Artificial Intelligence over the past 50 years or so, in an attempt to program computers to display human levels of intelligence. It is based on Symbolic Logic, in which sentences are represented by symbols and reasoning is performed by manipulating symbols, like solving equations in algebra. However, attempts to use Symbolic Logic to solve practical problems by means of computers have led to many simplifications and enhancements. The resulting Computational Logic is not only more powerful for use by computers, but also more useful for the original purpose of logic, to improve human thinking.
Traditional Logic, Symbolic Logic and Computational Logic are all concerned with the abstract form of sentences and how their form affects the correctness of arguments. Although Traditional Logic goes back to Aristotle in the fourth century BC, Symbolic Logic began primarily in the nineteenth century, with the mathematical forms of logic developed by George Boole and Gottlob Frege. It was enhanced considerably in the twentieth century by the work of Bertrand Russell, Alfred North Whitehead, Kurt Gödel and many others on its application to the Foundations of Mathematics. Computational Logic emerged in the latter half of the twentieth century, starting with attempts to mechanise the generation of proofs in mathematics, and was extended both to represent more general kinds of knowledge and to perform more general kinds of problem solving. The variety of Computational Logic presented in this book owes much to the contributions of John McCarthy and John Alan Robinson.
The mere possibility of Artificial Intelligence (AI) – of machines that can think and act as intelligently as humans – can generate strong emotions. While some enthusiasts are excited by the thought that one day machines may become more intelligent than people, many of its critics view such a prospect with horror.
Partly because these controversies attract so much attention, one of the most important accomplishments of AI has gone largely unnoticed: the fact that many of its advances can also be used directly by people, to improve their own human intelligence. Chief among these advances is Computational Logic.
What do the passenger on the London Underground, the fox, the wood louse, the Mars explorer and even the heating thermostat have in common? It certainly isn’t the way they dress, the company they keep, or their table manners. It is the way that they are all embedded in a constantly changing world, which sometimes threatens their survival, but at other times provides them with opportunities to thrive and prosper.
To survive and prosper in such an environment, an agent needs to be aware of the changes taking place in the world around it, and to perform actions that change the world to suit its own purposes. No matter whether it is a human, wood louse, robot or heating thermostat, an agent’s life is an endless cycle, in which it must:
repeatedly (or concurrently)
observe the world,
think,
decide what actions to perform, and
act.
We can picture this relationship as a cycle linking the mind of an agent with the world around it. The observation–thought–decision–action cycle is common to all agents, no matter how primitive or how sophisticated. For some agents, thinking might involve little more than firing a collection of stimulus–response associations, without any representation of the world. For other agents, thinking might be a form of symbol processing, in which symbols in the mind represent objects and relationships in the world. For such symbol manipulating agents, the world is a semantic structure, which gives meaning to the agent’s thoughts.
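The agent cycle described above can be sketched as a simple program loop. The following Python sketch uses the heating thermostat as its agent; the temperatures, target value and action names are illustrative assumptions, not taken from the text.

```python
# A minimal sketch of the observe-think-decide-act agent cycle,
# using a thermostat as the agent. All names here are illustrative.

def run_agent(temperatures, target=20):
    """Run one cycle per observation and collect the actions taken."""
    actions = []
    for temp in temperatures:          # observe the world
        if temp < target:              # think: a stimulus-response association
            decision = "turn heating on"
        elif temp > target:
            decision = "turn heating off"
        else:
            decision = "do nothing"
        actions.append(decision)       # act (here, just record the action)
    return actions

print(run_agent([18, 20, 23]))
# ['turn heating on', 'do nothing', 'turn heating off']
```

A thermostat sits at the stimulus–response end of the spectrum: its "thinking" is a fixed association between observations and actions, with no symbolic model of the world.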
Suppose, in your desperation to get rich as quickly as possible, you consider the various alternatives, infer their likely consequences and decide that the best alternative is to rob the local bank. You recruit your best friend, John, well known for his meticulous attention to detail, to help you plan and carry out the crime. Thanks to your joint efforts, you succeed in breaking into the bank in the middle of the night, opening the safe and making your getaway with a cool million pounds (approximately 1.65 million dollars – and falling – at the time of writing) in the boot (trunk) of your car.
Unfortunately, years of poverty and neglect have left your car in a state of general disrepair, and you are stopped by the police for driving at night with only one headlight. In the course of a routine investigation, they discover the suitcase with the cool million pounds in the boot. You plead ignorance of any wrongdoing, but they arrest you both anyway on suspicion of robbery.
In Chapter 2, we saw that psychological studies of the selection task have been used to attack the view that human thinking involves logical reasoning, and to support the claim that thinking uses specialised algorithms instead. I argued that these attacks fail to appreciate the relationship between logic and algorithms, as expressed by the equation:
algorithm = logic + control.
Specialised knowledge can be expressed in logical form, and general-purpose reasoning can be understood largely in terms of forward and backward reasoning embedded in an observe–think–decide–act agent cycle.
I also argued that many of the studies that are critical of the value of logic in human thinking fail to distinguish between the problem of understanding natural-language sentences and the problem of reasoning with logical forms. This distinction and the relationship between them can also be expressed by an equation:
natural language understanding =
translation into logical form + logical reasoning.
We saw that even natural-language sentences already in seemingly logical form need to be interpreted, in order to determine, for example, whether they are missing any conditions, or whether they might be the converse of their intended meaning. Because of the need to perform this interpretation, readers typically use their own background goals and beliefs, to help them identify the intended logical form of the natural-language problem statement.
In this chapter we return to the topic of Chapters 1 and 2: the relationship between logic, natural language and the language of thought. We will look at the law regulating British Citizenship, which is the British Nationality Act 1981 (BNA), and see that its English style resembles the conditional style of Computational Logic (CL) (Sergot et al., 1986).
The BNA is similar to the London Underground Emergency Notice in its purpose of regulating human behaviour. But whereas the Emergency Notice relies on the common sense of its readers to achieve its desired effect, the BNA has the power of authority to enforce its provisions. The BNA differs from the Emergency Notice also in its greater complexity and the more specialised nature of its content.
What is the difference between the fox and the crow, on the one hand, and the cheese, on the other? Of course, the fox and the crow are animate, and the cheese is inanimate. Animate things include agents, which observe changes in the world and perform their own changes on the world. Inanimate things are entirely passive.
But if you were an Extreme Behaviourist, you might think differently. You might think that the fox, the crow and the cheese are all simply objects, distinguishable from one another only by their different input–output behaviours:
if the fox sees the crow and the crow has food in its mouth,
then the fox praises the crow.
if the fox praises the crow,
then the crow sings.
if the crow has food in its beak and the crow sings,
then the food falls to the ground.
if the food is next to the fox,
then the fox picks up the food.
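The four input–output rules above can be simulated by forward chaining: starting from some initial facts, each rule fires whenever its conditions hold, until nothing new can be added. In this Python sketch, facts are plain strings, and "food on ground" is taken, as a simplification, to mean the food is next to the fox.

```python
# Forward-chaining simulation of the four fox-and-crow behaviour rules.
# Facts are strings; each rule adds a new fact when its conditions hold.

def step(facts):
    new = set(facts)
    if "fox sees crow" in facts and "crow has food" in facts:
        new.add("fox praises crow")
    if "fox praises crow" in facts:
        new.add("crow sings")
    if "crow has food" in facts and "crow sings" in facts:
        new.add("food on ground")          # the food falls to the ground
    if "food on ground" in facts:          # simplification: it lands next to the fox
        new.add("fox picks up food")
    return new

facts = {"fox sees crow", "crow has food"}
while True:                                # fire rules until a fixed point
    updated = step(facts)
    if updated == facts:
        break
    facts = updated

print("fox picks up food" in facts)  # True
```

From a behaviourist standpoint, nothing more than these input–output associations is needed to describe the episode; no fact in the program represents what the fox or crow believes or wants.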
Extreme Behaviourism was all the rage in Psychology in the mid-twentieth century. A more moderate form of behaviourism has been all the rage in Computing for approximately the past 30 years, in the form of Object-Orientation.
Do you want to get ahead in the world, improve yourself, and be more intelligent than you already are? If so, then meta-logic is what you need.
Meta-logic is a special case of meta-language. A meta-language is a language used to represent and reason about another language, called the object language. If the object language is a form of logic, then the meta-language is also called meta-logic. Therefore, this book is an example of the use of meta-logic to study the object language of Computational Logic.
It’s bad enough to be a Mars explorer and not to know that your purpose in life is to find life on Mars. But it’s a lot worse to be a wood louse and have nothing more important to do with your life than just follow the meaningless rules:
In fact, it’s even worse than meaningless. Without food the louse will die, and without children the louse’s genes will disappear. What is the point of just wandering around if the louse doesn’t bother to eat and make babies?
Part of the problem is that the louse’s body isn’t giving it the right signals – not making it hungry when it is running out of energy, and not making it desire a mate when it should be having children. It also needs to be able to recognise food and eat, and to recognise potential mates and propagate.
We have already looked informally at forward and backward reasoning with conditionals without negation (definite clauses). This additional chapter defines the two inference rules more precisely and examines their semantics.
Arguably, forward reasoning is more fundamental than backward reasoning, because, as shown in Chapter A2, it is the way that minimal models are generated. However, the two inference rules can both be understood as determining whether definite goal clauses are true in all models of a definite clause program, or equivalently whether the definite goal clauses are true in the minimal model.
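The generation of a minimal model by forward reasoning can be sketched as a fixed-point computation: each definite clause is a pair of a head atom and a list of body atoms, and the head is added to the model whenever the whole body is already there. The two-clause program about Socrates is an illustrative assumption, not an example from the text.

```python
# A sketch of forward reasoning generating the minimal model of a
# definite clause program. Each clause is (head, [body atoms]);
# a fact is a clause with an empty body.

program = [
    ("mortal(socrates)", ["human(socrates)"]),
    ("human(socrates)", []),                    # a fact
]

def minimal_model(clauses):
    model = set()
    changed = True
    while changed:                              # iterate to a fixed point
        changed = False
        for head, body in clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)                 # forward reasoning step
                changed = True
    return model

print(sorted(minimal_model(program)))
# ['human(socrates)', 'mortal(socrates)']
```

A definite goal clause is then true in all models of the program exactly when its atoms all belong to this minimal model.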
This additional chapter provides the technical support for abductive logic programming (ALP), which is the basis of the Computational Logic used in this book. ALP uses abduction, not only to explain observations, but to generate plans of action.
ALP extends ordinary logic programming by combining the closed predicates of logic programming, which are defined by clauses, with open predicates, which are constrained directly or indirectly by integrity constraints represented in a variant of classical logic. Integrity constraints in ALP include as special cases the functionalities of condition–action rules, maintenance goals and constraints.
As we saw in Chapter 5, negation as failure has a natural meta-logical (or autoepistemic) semantics, which interprets the phrase cannot be shown literally, as an expression in the meta-language or in autoepistemic logic. But historically the first and arguably the simplest semantics is the completion semantics (Clark, 1978), which treats conditionals as biconditionals in disguise.
Both the meta-logical and the completion semantics treat an agent’s beliefs as specifying the only conditions under which a conclusion holds. But whereas the meta-logical semantics interprets the term only in the meta-language, biconditionals in the completion semantics interpret the same term, only, in the object language.
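Negation as failure can be sketched as backward reasoning in which "not p" succeeds precisely when every attempt to show p fails. The Tweety program below is a standard illustrative example, not one taken from the text.

```python
# A sketch of negation as failure: 'not p' succeeds when every attempt
# to show p fails. Clauses map each head atom to its alternative bodies;
# a body literal ("not", p) is a negative condition.

clauses = {
    "fly(tweety)": [["bird(tweety)", ("not", "penguin(tweety)")]],
    "bird(tweety)": [[]],
    # no clause for penguin(tweety): it cannot be shown, so it fails
}

def show(goal):
    if isinstance(goal, tuple) and goal[0] == "not":
        return not show(goal[1])            # negation as failure
    for body in clauses.get(goal, []):      # backward reasoning
        if all(show(literal) for literal in body):
            return True
    return False

print(show("fly(tweety)"))  # True: penguin(tweety) cannot be shown
```

Under the completion semantics, the same conclusion is justified by reading the clauses as biconditionals: since there is no clause for penguin(tweety), its completion is simply that penguin(tweety) is false.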
In mathematics, semantic structures are static and truth is eternal. But for an intelligent agent embedded in the real world, semantic structures are dynamic and the only constant is change.
Perhaps the simplest way to understand change is to view actions and other events as causing a change of state from one static world structure to the next. This view of change is formalised in the possible world semantics of modal logic. In modal logic, sentences are given a truth value relative to a static possible world embedded in a collection of possible worlds linked with one another by an accessibility relation.
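The view of actions as causing a change of state from one static world structure to the next can be sketched as a state-transition function. The states and the single action below are illustrative assumptions, loosely based on the fox-and-crow story.

```python
# A sketch of change as state transitions: performing an action maps
# one static state of the world to the next. States are sets of facts.

def perform(state, action):
    """Return the new state produced by performing an action."""
    new = set(state)
    if action == "crow drops food":
        new.discard("crow has food")    # this fact ceases to hold
        new.add("food on ground")       # this fact starts to hold
    return new

s0 = {"crow has food", "fox below tree"}
s1 = perform(s0, "crow drops food")
print(sorted(s1))  # ['food on ground', 'fox below tree']
```

Each state here plays the role of a static possible world, and the transition function plays the role of the accessibility relation linking one world to the next.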
The language of Computational Logic used in this book is an informal and simplified form of Symbolic Logic. Until now, it has also been somewhat vague and imprecise. This additional chapter is intended to specify the language more precisely. It does not affect the mainstream of the book, and the reader can either leave it out altogether, or come back to it later.
In all varieties of logic, the basic building block is the atomic formula, or atom for short. In the same way that an atom in physics can be viewed as a collection of electrons held together by a nucleus, atoms in logic are collections of terms, like “train”, “driver” and “station”, held together by predicate symbols, like “in” or “stop”. Predicate symbols are like verbs in English, and terms are like nouns or noun phrases.
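The analogy can be made concrete by representing an atom as a predicate symbol applied to a tuple of terms. The following Python sketch uses the predicates and terms mentioned above; the representation itself is an illustrative assumption, not the book's notation.

```python
# Representing atomic formulae as a predicate symbol (the "nucleus")
# holding together a tuple of terms (the "electrons").

from collections import namedtuple

Atom = namedtuple("Atom", ["predicate", "terms"])

a1 = Atom("in", ("driver", "train"))        # the driver is in the train
a2 = Atom("stop", ("train", "station"))     # the train stops at the station

print(a1.predicate, a1.terms)   # in ('driver', 'train')
```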
Most changes in the world pass us by without notice. Our sensory organs and perceptual apparatus filter them out, so they do not clutter our thoughts with irrelevancies. Other changes enter our minds as observations. We reason forward from them to deduce their consequences, and we react to them if necessary. Most of these observations are routine, and our reactions are spontaneous. Many of them do not even make it into our conscious thoughts.
But some observations are not routine: the loud bang in the middle of the night, the pool of blood on the kitchen floor, the blackbird feathers in the pie. They demand explanation. They could have been caused by unobserved events, which might have other, perhaps more serious consequences. The loud bang could be the firing of a gun. The pool of blood could have come from the victim of the shooting. The blackbird feathers in the pie could be an inept attempt to hide the evidence.
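This kind of explanation is abduction: reasoning backwards from an observation to unobserved events that could have caused it. The causal rules in this sketch are illustrative assumptions, prompted by the examples above.

```python
# A sketch of abduction: chaining backwards from an observation to
# candidate root causes. Each (effect, cause) pair says the cause
# can explain the effect.

rules = [
    ("loud bang", "gun fired"),
    ("pool of blood", "victim was shot"),
    ("victim was shot", "gun fired"),
]

def abduce(observation, rules):
    """Collect candidate root causes of an observation."""
    direct = [cause for effect, cause in rules if effect == observation]
    result = []
    for cause in direct:
        deeper = abduce(cause, rules)       # explain the explanation
        result.extend(deeper if deeper else [cause])
    return result

print(abduce("pool of blood", rules))  # ['gun fired']
```

Having abduced such a hypothesis, an agent can then reason forward from it to predict other consequences, which might prompt further observations to confirm or refute the explanation.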