What is the difference between the fox and the crow, on the one hand, and the cheese, on the other? Of course, the fox and the crow are animate, and the cheese is inanimate. Animate things include agents, which observe changes in the world and perform their own changes on the world. Inanimate things are entirely passive.
But if you were an Extreme Behaviourist, you might think differently. You might think that the fox, the crow and the cheese are all simply objects, distinguishable from one another only by their different input–output behaviours:
if the fox sees the crow and the crow has food in its beak,
then the fox praises the crow.
if the fox praises the crow,
then the crow sings.
if the crow has food in its beak and the crow sings,
then the food falls to the ground.
if the food is next to the fox,
then the fox picks up the food.
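The four rules above can be run as a simple simulation of pure input–output behaviour. The sketch below is illustrative: the state is a set of English facts, and each step fires the first rule whose condition holds (the fact names and the step function are assumptions for this example, not part of the story's formal treatment).

```python
# States are sets of facts; each condition-action rule maps one state to the next.
def step(state):
    # Rule 1: fox sees crow with food -> fox praises crow
    if {"fox sees crow", "crow has food"} <= state and "fox praises crow" not in state:
        return state | {"fox praises crow"}
    # Rule 2: fox praises crow -> crow sings
    if "fox praises crow" in state and "crow sings" not in state:
        return state | {"crow sings"}
    # Rule 3: crow has food and sings -> the food falls to the ground
    if {"crow has food", "crow sings"} <= state:
        return (state - {"crow has food"}) | {"food next to fox"}
    # Rule 4: food next to fox -> fox picks it up
    if "food next to fox" in state:
        return (state - {"food next to fox"}) | {"fox has food"}
    return state  # no rule fires: behaviour stops

def run(state):
    """Apply rules until no rule changes the state."""
    while True:
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt

final = run({"fox sees crow", "crow has food"})
```

Running the simulation from the initial state ends with the fox, not the crow, holding the food, exactly as the chained input–output rules dictate.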
Extreme Behaviourism was all the rage in Psychology in the mid-twentieth century. A more moderate form of behaviourism has been the rage in Computing for approximately the past 30 years, in the form of Object-Orientation.
Do you want to get ahead in the world, improve yourself, and be more intelligent than you already are? If so, then meta-logic is what you need.
Meta-logic is a special case of meta-language. A meta-language is a language used to represent and reason about another language, called the object language. If the object language is a form of logic, then the meta-language is also called meta-logic. Therefore, this book is an example of the use of meta-logic to study the object language of Computational Logic.
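The relationship between meta-language and object language can be illustrated with a vanilla meta-interpreter: object-language sentences are represented as meta-level data, and a meta-level predicate determines what the object language proves. The sketch below is a hedged Python rendering (the book itself uses logic-programming notation); the predicate names and the `demo` relation are illustrative assumptions.

```python
# Object language: propositional definite clauses, represented at the
# meta-level as a dict mapping each conclusion to its alternative bodies.
program = {
    "mortal": [["human"]],
    "human": [["greek"]],
    "greek": [[]],          # a fact: a clause with an empty body
}

def demo(program, goal):
    """Meta-level predicate: True if the object-language program proves goal."""
    return any(all(demo(program, subgoal) for subgoal in body)
               for body in program.get(goal, []))
```

Here `demo` is a sentence of the meta-language whose subject matter is the object-language program: `demo(program, "mortal")` holds, while `demo(program, "unicorn")` does not.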
It’s bad enough to be a Mars explorer and not to know that your purpose in life is to find life on Mars. But it’s a lot worse to be a wood louse and have nothing more important to do with your life than just follow meaningless rules.
In fact, it’s even worse than meaningless. Without food the louse will die, and without children the louse’s genes will disappear. What is the point of just wandering around if the louse doesn’t bother to eat and make babies?
Part of the problem is that the louse’s body isn’t giving it the right signals – not making it hungry when it is running out of energy, and not making it desire a mate when it should be having children. It also needs to be able to recognise food and eat, and to recognise potential mates and propagate.
We have already looked informally at forward and backward reasoning with conditionals without negation (definite clauses). This additional chapter defines the two inference rules more precisely and examines their semantics.
Arguably, forward reasoning is more fundamental than backward reasoning, because, as shown in Chapter A2, it is the way that minimal models are generated. However, the two inference rules can both be understood as determining whether definite goal clauses are true in all models of a definite clause program, or equivalently whether the definite goal clauses are true in the minimal model.
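Forward reasoning with definite clauses can be sketched as a fixed-point computation: fire every clause whose conditions already hold, and repeat until nothing new is derivable. The resulting set of atoms is the minimal model. The propositional encoding below (a list of body–head pairs) is an illustrative assumption, not the book's notation.

```python
# Propositional definite clauses as (body, head) pairs; facts have empty bodies.
clauses = [
    ([], "rains"),
    (["rains"], "wet"),
    (["wet", "cold"], "icy"),   # never fires: "cold" is not derivable
]

def minimal_model(clauses):
    """Generate the minimal model by exhaustive forward reasoning."""
    model = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in model and all(atom in model for atom in body):
                model.add(head)
                changed = True
    return model
```

A definite goal clause then holds in all models of the program exactly when each of its atoms belongs to the set this computation produces.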
This additional chapter provides the technical support for abductive logic programming (ALP), which is the basis of the Computational Logic used in this book. ALP uses abduction not only to explain observations, but also to generate plans of action.
ALP extends ordinary logic programming by combining the closed predicates of logic programming, which are defined by clauses, with open predicates, which are constrained directly or indirectly by integrity constraints represented in a variant of classical logic. Integrity constraints in ALP include as special cases the functionalities of condition–action rules, maintenance goals and constraints.
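The division between closed and open predicates can be sketched as backward reasoning that, on reaching an open predicate, assumes it rather than failing. The propositional encoding and all predicate names below are illustrative assumptions, not the chapter's formal definitions.

```python
# Closed predicates are defined by clauses; open predicates may be assumed.
clauses = {
    "grass_is_wet": [["rained"], ["sprinkler_was_on"]],
}
open_predicates = {"rained", "sprinkler_was_on"}

def abduce(goal, assumptions=frozenset()):
    """Yield sets of open-predicate assumptions that would explain the goal."""
    if goal in open_predicates:
        yield assumptions | {goal}
        return
    for body in clauses.get(goal, []):
        candidate_sets = [assumptions]
        for subgoal in body:
            candidate_sets = [extended
                              for s in candidate_sets
                              for extended in abduce(subgoal, s)]
        yield from candidate_sets
```

Given the observation `grass_is_wet`, the sketch yields two alternative explanations, `{rained}` and `{sprinkler_was_on}`; in full ALP, integrity constraints would then eliminate explanations that violate them.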
As we saw in Chapter 5, negation as failure has a natural meta-logical (or autoepistemic) semantics, which interprets the phrase ‘cannot be shown’ literally, as an expression in the meta-language or in autoepistemic logic. But historically the first and arguably the simplest semantics is the completion semantics (Clark, 1978), which treats conditionals as biconditionals in disguise.
Both the meta-logical and the completion semantics treat an agent’s beliefs as specifying the only conditions under which a conclusion holds. But whereas the meta-logical semantics interprets the term ‘only’ in the meta-language, biconditionals in the completion semantics interpret the same term in the object language.
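As a hedged illustration (the predicate names here are invented, not taken from the text), the completion of a program with two clauses for the same conclusion replaces the implicit “if” by an explicit “if and only if”:

```latex
% Program clauses (object language):
%   wet \leftarrow rained
%   wet \leftarrow sprinkler
% Clark completion collects the alternatives into one biconditional:
wet \;\leftrightarrow\; (\mathit{rained} \lor \mathit{sprinkler})
```

Under the completion, negation as failure on `wet` succeeds exactly when both `rained` and `sprinkler` fail, which is what “these are the only conditions under which the conclusion holds” means in the object language.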
In mathematics, semantic structures are static and truth is eternal. But for an intelligent agent embedded in the real world, semantic structures are dynamic and the only constant is change.
Perhaps the simplest way to understand change is to view actions and other events as causing a change of state from one static world structure to the next. This view of change is formalised in the possible world semantics of modal logic. In modal logic, sentences are given a truth value relative to a static possible world embedded in a collection of possible worlds linked with one another by an accessibility relation.
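The view of events as causing transitions between static states can be sketched directly: each state is a frozen set of facts, and an event maps one state to the next by deleting some facts and adding others. All fact and event names below are illustrative assumptions.

```python
# Each state is a static world structure; events produce the next state.
def apply_event(state, deletions, additions):
    """An event deletes some facts and adds others, yielding a new state."""
    return (state - deletions) | additions

s0 = frozenset({"door is closed", "light is off"})
s1 = apply_event(s0, {"door is closed"}, {"door is open"})
s2 = apply_event(s1, {"light is off"}, {"light is on"})
```

Truth is then relative to a state: "door is open" is false in `s0` but true in `s1`. The sequence of states plays the role of possible worlds, and `apply_event` induces the accessibility relation linking them.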
The language of Computational Logic used in this book is an informal and simplified form of Symbolic Logic. Until now, it has also been somewhat vague and imprecise. This additional chapter is intended to specify the language more precisely. It does not affect the mainstream of the book, and the reader can either leave it out altogether, or come back to it later.
In all varieties of logic, the basic building block is the atomic formula, or atom for short. In the same way that an atom in physics can be viewed as a collection of electrons held together by a nucleus, atoms in logic are collections of terms, like “train”, “driver” and “station”, held together by predicate symbols, like “in” or “stop”. Predicate symbols are like verbs in English, and terms are like nouns or noun phrases.
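The analogy can be made concrete by representing an atom as a predicate symbol holding a tuple of terms together. This is a sketch of the idea only, not the book's formal syntax; the example sentences are assumptions for illustration.

```python
from collections import namedtuple

# An atom is a predicate symbol applied to a tuple of terms.
Atom = namedtuple("Atom", ["predicate", "terms"])

# "the driver is in the station" and "the driver stops the train"
a1 = Atom("in", ("driver", "station"))
a2 = Atom("stop", ("driver", "train"))
```

Here the predicate symbols `in` and `stop` play the role of verbs, and the terms `driver`, `station` and `train` play the role of nouns.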
Most changes in the world pass us by without notice. Our sensory organs and perceptual apparatus filter them out, so they do not clutter our thoughts with irrelevancies. Other changes enter our minds as observations. We reason forward from them to deduce their consequences, and we react to them if necessary. Most of these observations are routine, and our reactions are spontaneous. Many of them do not even make it into our conscious thoughts.
But some observations are not routine: the loud bang in the middle of the night, the pool of blood on the kitchen floor, the blackbird feathers in the pie. They demand explanation. They could have been caused by unobserved events, which might have other, perhaps more serious consequences. The loud bang could be the firing of a gun. The pool of blood could have come from the victim of the shooting. The blackbird feathers in the pie could be an inept attempt to hide the evidence.
If some form of Computational Logic is the language of human thought, then the best place to look for it would seem to be inside our heads. But if we simply look at the structure and activity of our brains, it would be like looking at the hardware of a computer when we want to learn about its software. Or it would be like trying to do sociology by studying the movement of atomic particles instead of studying human interactions. Better, it might seem, just to use common sense and rely on introspection.
But introspection is notoriously unreliable. Wishful thinking can trick us into seeing what we want to see, instead of seeing what is actually there. The behavioural psychologists of the first half of the twentieth century were so suspicious of introspection that they banned it altogether.
Automatic determination of synonyms and/or semantically related words has various applications in Natural Language Processing. Two mainstream paradigms to date, lexicon-based and distributional approaches, both exhibit pros and cons with regard to coverage, complexity, and quality. In this paper, we propose three novel methods—two rule-based methods and one machine learning approach—to identify synonyms from definition texts in a machine-readable dictionary. Extracted synonyms are evaluated in two extrinsic experiments and one intrinsic experiment. Evaluation results show that our pattern-based approach achieves the best performance in one of the experiments and satisfactory results in the other, comparable to corpus-based state-of-the-art results.
Authorship attribution methods aim to determine the author of a document by using information gathered from a set of documents with known authors. One method of performing this task is to create profiles containing distinctive features known to be used by each author. In this paper, a new method of creating an author or document profile is presented that detects features considered distinctive compared to normal language usage. This recentring approach creates more accurate profiles than previous methods, as demonstrated empirically using a known corpus of authorship problems. The method, named recentred local profiles, determines authorship more accurately than other methods in the literature, using a simple ‘best matching author’ approach to classification. The proposed method is also shown to be more stable than related methods as parameter values change. Using a weighted voting scheme, recentred local profiles is shown to outperform other methods in authorship attribution, with an overall accuracy of 69.9% on the ad-hoc authorship attribution competition corpus, representing a significant improvement over related methods.
The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. Developing ethics for machines, in contrast to developing ethics for human beings who use machines, is by its nature an interdisciplinary endeavor. The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ethical dimension to machines that function autonomously, what is required in order to add this dimension, philosophical and practical challenges to the machine ethics project, various approaches that could be considered in attempting to add an ethical dimension to machines, work that has been done to date in implementing these approaches, and visions of the future of machine ethics research.
Graph theory and the fields of natural language processing and information retrieval are well-studied disciplines. Traditionally, these areas have been perceived as distinct, with different algorithms, different applications and different potential end-users. However, recent research has shown that these disciplines are intimately connected, with a large variety of natural language processing and information retrieval applications finding efficient solutions within graph-theoretical frameworks. This book extensively covers the use of graph-based algorithms for natural language processing and information retrieval. It brings together topics as diverse as lexical semantics, text summarization, text mining, ontology construction, text classification and information retrieval, which are connected by the common underlying theme of the use of graph-theoretical methods for text and information processing tasks. Readers will come away with a firm understanding of the major methods and applications in natural language processing and information retrieval that rely on graph-based representations and algorithms.