This paper presents a method for the composition of at-location with other semantic relations. The method is based on inference axioms that combine two semantic relations, yielding another relation that is not otherwise expressed. An experimental study conducted on PropBank, WordNet, and eXtended WordNet shows that the inferences have high accuracy. The method is applicable to the combination of other semantic relations and is beneficial to many semantically intensive applications.
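To make the idea of composing relations concrete, here is a minimal Python sketch that combines at-location with a part-whole relation; the specific inference axiom, the triple encoding and the facts are illustrative assumptions, not the axioms or data used in the paper.

    # A minimal sketch (not the paper's actual axioms): composing AT-LOCATION with
    # PART-WHOLE to infer an AT-LOCATION relation that is not stated explicitly.
    from itertools import product

    # Hypothetical knowledge base of (relation, argument1, argument2) triples.
    facts = {
        ("AT-LOCATION", "conference", "Prague"),
        ("PART-WHOLE", "Prague", "Czech Republic"),
    }

    def compose_at_location(facts):
        """Apply one illustrative inference axiom:
        AT-LOCATION(x, y) and PART-WHOLE(y, z) imply AT-LOCATION(x, z)."""
        derived = set()
        for (r1, x, y1), (r2, y2, z) in product(facts, facts):
            if r1 == "AT-LOCATION" and r2 == "PART-WHOLE" and y1 == y2:
                derived.add(("AT-LOCATION", x, z))
        return derived - facts

    print(compose_at_location(facts))
    # {('AT-LOCATION', 'conference', 'Czech Republic')}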
In this study, we investigate unsupervised generative learning methods for subjectivity detection across different domains. We create an initial training set using simple lexicon information and then evaluate two iterative learning methods with a naive Bayes base classifier to learn from unannotated data. The first method is self-training, which adds instances with high confidence to the training set in each iteration. The second is a calibrated EM (expectation-maximization) method, in which we calibrate the posterior probabilities from EM so that the class distribution is similar to that in the real data. We evaluate both approaches on three different domains: movie data, news resources, and meeting dialogues, and we find that in some cases the unsupervised learning methods can achieve performance close to the fully supervised setup. We perform a thorough analysis of factors such as the self-labeling accuracy of the initial training set in unsupervised learning, the accuracy of the examples added in self-training, and the size of the initial training set in the different methods. Our experiments and analysis reveal inherent differences across the domains and identify the factors that explain the models' behavior.
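The following Python sketch illustrates the first of these methods, the self-training loop, using scikit-learn's MultinomialNB as the base classifier; the confidence threshold, the dense feature matrices and the lexicon-labelled seed set are illustrative assumptions rather than the paper's exact setup.

    # A minimal sketch of self-training with a naive Bayes base classifier.
    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    def self_train(X_seed, y_seed, X_unlabelled, threshold=0.9, max_iters=10):
        """Iteratively add high-confidence predictions on unlabelled data
        to the training set, then refit the classifier."""
        X_train, y_train = X_seed, y_seed
        remaining = X_unlabelled
        clf = MultinomialNB().fit(X_train, y_train)
        for _ in range(max_iters):
            if remaining.shape[0] == 0:
                break
            probs = clf.predict_proba(remaining)
            confident = probs.max(axis=1) >= threshold
            if not confident.any():
                break
            pseudo_labels = clf.classes_[probs[confident].argmax(axis=1)]
            X_train = np.vstack([X_train, remaining[confident]])
            y_train = np.concatenate([y_train, pseudo_labels])
            remaining = remaining[~confident]
            clf = MultinomialNB().fit(X_train, y_train)
        return clf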
In the Prisoner’s Dilemma, the need to choose between different actions is generated by the need to solve an achievement goal, obtained as the result of a request from the police to turn witness against your friend. The achievement goal, triggered by the external event, is the motivation of the action you eventually choose.
But in classical decision theory, the motivation of actions is unspecified. Moreover, you are expected to evaluate the alternatives by considering only their likely consequences.
This additional chapter explores the semantics of classical logic and conditional logic. In classical logic, the semantics of a set of sentences S is determined by the set of all the interpretations (or semantic structures), called models, that make all the sentences in S true. The main concern of classical logic is with the notion of a sentence C being a logical consequence of S, which holds when C is true in all models of S.
Semantic structures in classical logic are arbitrary sets of individuals and relationships, which constitute the denotations of the symbols of the language in which sentences are expressed. In this chapter, I argue the case for restricting the specification of semantic structures to sets of atomic sentences, called Herbrand interpretations.
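For the propositional case, the following Python sketch illustrates these notions with interpretations restricted to sets of atomic sentences, in the spirit of Herbrand interpretations; the sentences, written as conditionals of the form "conclusion if conditions", are illustrative and not drawn from the chapter.

    # A minimal sketch: interpretations are sets of atoms, models are interpretations
    # that make every sentence true, and C is a logical consequence of S if C is true
    # in all models of S.
    from itertools import chain, combinations

    atoms = {"rain", "wet"}
    S = [("wet", {"rain"}),    # wet if rain
         ("rain", set())]      # rain (unconditionally)

    def holds(clause, interpretation):
        conclusion, conditions = clause
        return conclusion in interpretation or not conditions <= interpretation

    def models(S, atoms):
        subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
        return [set(i) for i in subsets if all(holds(c, set(i)) for c in S)]

    def consequence(atom, S, atoms):
        return all(atom in m for m in models(S, atoms))

    print(consequence("wet", S, atoms))   # True: wet is true in all models of S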
In this chapter we revisit the ancient Greek fable of the fox and the crow, to show how the proactive thinking of the fox outwits the reactive thinking of the crow. In later chapters, we will see how reactive and proactive thinking can be combined.
The fox and the crow are a metaphor for different kinds of people. Some people are proactive, like the fox in the story. They like to plan ahead, foresee obstacles, and lead an orderly life. Other people are reactive, like the crow. They like to be open to what is happening around them, take advantage of new opportunities, and be spontaneous. Most people are both proactive and reactive, at different times and to varying degrees.
I have made a case for a comprehensive, logic-based theory of human intelligence, drawing upon and reconciling a number of otherwise competing paradigms in Artificial Intelligence and other fields. The most important of these paradigms are production systems, logic programming, classical logic and decision theory.
The production system cycle, suitably extended, provides the bare bones of the theory: the observe–think–decide–act agent cycle. It also provides some of the motivation for identifying an agent’s maintenance goals as the driving force of the agent’s life.
It is a common view in some fields that logic has little to do with search. For example, Paul Thagard (2005) in Mind: Introduction to Cognitive Science states on page 45: “In logic-based systems, the fundamental operation of thinking is logical deduction, but from the perspective of rule-based systems, the fundamental operation of thinking is search.”
Similarly, Jonathan Baron (2008) in his textbook Thinking and Deciding writes on page 6: “Thinking about actions, beliefs and personal goals can all be described in terms of a common framework, which asserts that thinking consists of search and inference. We search for certain objects and then make inferences from and about the objects we have found.” On page 97, Baron states that formal logic is not a complete theory of thinking because it “covers only inference”.
This additional chapter shows that both forward and backward reasoning are special cases of the resolution rule of inference. Resolution also includes compiling two clauses, like:
In the propositional case, given two clauses of the form:

A ∨ C if B
E if A ∧ D

where B and D are conjunctions of atoms including the atom true, and C and E are disjunctions of atoms including the atom false, resolution derives the resolvent:

C ∨ E if B ∧ D

The two clauses from which the resolvent is derived are called the parents of the resolvent, and the atom A is called the atom resolved upon.
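Here is a minimal Python sketch of this propositional resolution step, representing a clause as a pair (conclusions, conditions) of sets of atoms; the example clauses are illustrative, and the bookkeeping atoms true and false are omitted for simplicity.

    def resolve(parent1, parent2):
        """Return all resolvents obtained by resolving on an atom A that occurs
        in the conclusions of parent1 and in the conditions of parent2."""
        (concl1, cond1), (concl2, cond2) = parent1, parent2
        resolvents = []
        for a in concl1 & cond2:                       # candidate atoms to resolve upon
            resolvents.append(((concl1 - {a}) | concl2,  # C or E
                               cond1 | (cond2 - {a})))   # B and D
        return resolvents

    # Example: from "a or c if b" and "e if a and d" derive "c or e if b and d".
    p1 = (frozenset({"a", "c"}), frozenset({"b"}))
    p2 = (frozenset({"e"}), frozenset({"a", "d"}))
    print(resolve(p1, p2))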
To a first approximation, the negation as failure rule of inference is straightforward. Its name says it all:
to show that the negation of a sentence holds
try to show the sentence holds, and
if the attempt fails, then the negation holds.
But what does it mean to fail? Does it include infinite or only finite failure? To answer these questions, we need a better understanding of the semantics.
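Assuming that failure means finite failure, the rule can be sketched in Python for a propositional logic program; the program and its encoding as a dictionary are illustrative, and the sketch does not terminate on programs in which an atom depends recursively on its own negation.

    # A minimal sketch of negation as (finite) failure. A program maps each atom
    # to a list of alternative bodies; a body is a list of literals, and "not p"
    # marks a negative literal.
    program = {
        "innocent": [["not guilty"]],   # innocent if not guilty
        "guilty": [],                   # no way to show guilty
    }

    def solve(goal, program):
        """Try to show goal by backward reasoning; a negative literal succeeds
        when the attempt to show the corresponding positive atom fails."""
        if goal.startswith("not "):
            return not solve(goal[4:], program)
        return any(all(solve(g, program) for g in body)
                   for body in program.get(goal, []))

    print(solve("innocent", program))   # True: guilty cannot be shown, so not guilty holds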
Consider, for example, the English sentence:
bob will go if no one goes
Ignore the fact that, if Bob were more normal, it would be more likely that bob will go if no one else goes. Focus instead on the problem of representing the sentence more formally as a logical conditional.
It’s easy to take negation for granted, and not give it a second thought. Either it will rain or it won’t rain. But definitely it won’t rain and not rain at the same time and in the same place. Looking at it like that, you can take your pick. Raining and not raining are on a par, like heads and tails. You can have one or the other, but not both.
So it may seem at first glance. But on closer inspection, the reality is different. The world is a positive, not a negative place, and human ways of organising our thoughts about the world are mainly positive too. We directly observe only positive facts, like this coin is showing heads, or it is raining. We have to derive the negation of a positive fact from the absence of the positive fact. The fact that this coin is showing heads implies that it is not showing tails, and the fact that it is sunny implies, everything else being equal, that it is not raining at the same place and the same time.
In this chapter, I will discuss two psychological experiments that challenge the view that people have an inbuilt ability to perform abstract logical reasoning. The first of these experiments, the “selection task”, has been widely interpreted as showing that, instead of logic, people use specialised procedures for dealing with problems that occur commonly in their environment. The second, the “suppression task”, has been interpreted as showing that people do not reason using rules of inference, like forward and backward reasoning, but instead construct a model of the problem and inspect the model for interesting properties. I will respond to some of the issues raised by these experiments in this chapter, but deal with them in greater detail in Chapter 16, after presenting the necessary background material.
Logical Extremism, which views life as all thought and no action, has given logic a bad name. It has overshadowed its near relation, Logical Moderation, which recognises that logic is only one way of thinking, and that thinking isn’t everything.
The antithesis of Logical Extremism is Extreme Behaviourism, which denies any “life of the mind” and views Life instead entirely in behavioural terms. Behaviourism, in turn, is easily confused with the condition–action rule model of thinking.
Computational Logic has been developed in Artificial Intelligence over the past 50 years or so, in an attempt to program computers to display human levels of intelligence. It is based on Symbolic Logic, in which sentences are represented by symbols and reasoning is performed by manipulating symbols, like solving equations in algebra. However, attempts to use Symbolic Logic to solve practical problems by means of computers have led to many simplifications and enhancements. The resulting Computational Logic is not only more powerful for use by computers, but also more useful for the original purpose of logic, to improve human thinking.
Traditional Logic, Symbolic Logic and Computational Logic are all concerned with the abstract form of sentences and how their form affects the correctness of arguments. Although Traditional Logic goes back to Aristotle in the fourth century b.c., Symbolic Logic began primarily in the nineteenth century, with the mathematical forms of logic developed by George Boole and Gottlob Frege. It was enhanced considerably in the twentieth century by the work of Bertrand Russell, Alfred North Whitehead, Kurt Gödel and many others on its application to the Foundations of Mathematics. Computational Logic emerged in the latter half of the twentieth century, starting with attempts to mechanise the generation of proofs in mathematics, and was extended both to represent more general kinds of knowledge and to perform more general kinds of problem solving. The variety of Computational Logic presented in this book owes much to the contributions of John McCarthy and John Alan Robinson.
The mere possibility of Artificial Intelligence (AI) – of machines that can think and act as intelligently as humans – can generate strong emotions. While some enthusiasts are excited by the thought that one day machines may become more intelligent than people, many of its critics view such a prospect with horror.
Partly because these controversies attract so much attention, one of the most important accomplishments of AI has gone largely unnoticed: the fact that many of its advances can also be used directly by people, to improve their own human intelligence. Chief among these advances is Computational Logic.
What do the passenger on the London Underground, the fox, the wood louse, the Mars explorer and even the heating thermostat have in common? It certainly isn’t the way they dress, the company they keep, or their table manners. It is the way that they are all embedded in a constantly changing world, which sometimes threatens their survival, but at other times provides them with opportunities to thrive and prosper.
To survive and prosper in such an environment, an agent needs to be aware of the changes taking place in the world around it, and to perform actions that change the world to suit its own purposes. No matter whether it is a human, wood louse, robot or heating thermostat, an agent’s life is an endless cycle, in which it must:
repeatedly (or concurrently)
observe the world,
think,
decide what actions to perform, and
act.
We can picture this relationship between the mind of an agent and the world like this:

[Figure: the agent's mind, linked to the world by observations and actions, in an observe–think–decide–act cycle.]

The observation–thought–decision–action cycle is common to all agents, no matter how primitive or how sophisticated. For some agents, thinking might involve little more than firing a collection of stimulus–response associations, without any representation of the world. For other agents, thinking might be a form of symbol processing, in which symbols in the mind represent objects and relationships in the world. For such symbol manipulating agents, the world is a semantic structure, which gives meaning to the agent’s thoughts.
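A minimal Python sketch of such an agent cycle is given below, with placeholder functions standing in for observation, thinking, deciding and acting; the beliefs, candidate actions and decision rule are purely illustrative.

    def observe_world():
        return {"raining": False}              # illustrative observation

    def think(beliefs, observations):
        beliefs.update(observations)           # revise beliefs in the light of observations
        return ["stay inside", "go outside"]   # candidate actions (illustrative)

    def decide(beliefs, candidates):
        return candidates[0] if beliefs.get("raining") else candidates[-1]

    def act(action):
        print("doing:", action)

    def agent_cycle(cycles=3):
        beliefs = {}
        for _ in range(cycles):                         # repeatedly (or concurrently)
            observations = observe_world()              # observe the world
            candidates = think(beliefs, observations)   # think
            action = decide(beliefs, candidates)        # decide what actions to perform
            act(action)                                 # act

    agent_cycle()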
Suppose, in your desperation to get rich as quickly as possible, you consider the various alternatives, infer their likely consequences and decide that the best alternative is to rob the local bank. You recruit your best friend, John, well known for his meticulous attention to detail, to help you plan and carry out the crime. Thanks to your joint efforts, you succeed in breaking into the bank in the middle of the night, opening the safe and making your get-away with a cool million pounds (approximately 1.65 million dollars – and falling – at the time of writing) in the boot (trunk) of your car.
Unfortunately, years of poverty and neglect have left your car in a state of general disrepair, and you are stopped by the police for driving at night with only one headlight. In the course of a routine investigation, they discover the suitcase with the cool million pounds in the boot. You plead ignorance of any wrongdoing, but they arrest you both anyway on suspicion of robbery.
In Chapter 2, we saw that psychological studies of the selection task have been used to attack the view that human thinking involves logical reasoning, and to support the claim that thinking uses specialised algorithms instead. I argued that these attacks fail to appreciate the relationship between logic and algorithms, as expressed by the equation:

algorithm =
logic + control.
Specialised knowledge can be expressed in logical form, and general-purpose reasoning can be understood largely in terms of forward and backward reasoning embedded in an observe–think–decide–act agent cycle.
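The following Python sketch illustrates the point: the same conditionals (the logic) can be run under different control, forward from observations or backward from a goal; the rules and facts are illustrative.

    # Rules map a conclusion to its conditions; facts are observed atoms.
    rules = {"wet grass": ["rain"], "slippery path": ["wet grass"]}
    facts = {"rain"}

    def forward(facts, rules):
        """Forward reasoning: derive new conclusions from observations until nothing changes."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conclusion, conditions in rules.items():
                if conclusion not in derived and all(c in derived for c in conditions):
                    derived.add(conclusion)
                    changed = True
        return derived

    def backward(goal, facts, rules):
        """Backward reasoning: reduce a goal to subgoals until only observed facts remain."""
        if goal in facts:
            return True
        return any(all(backward(c, facts, rules) for c in conditions)
                   for conclusion, conditions in rules.items() if conclusion == goal)

    print(forward(facts, rules))                     # {'rain', 'wet grass', 'slippery path'}
    print(backward("slippery path", facts, rules))   # True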
I also argued that many of the studies that are critical of the value of logic in human thinking fail to distinguish between the problem of understanding natural-language sentences and the problem of reasoning with logical forms. This distinction and the relationship between them can also be expressed by an equation:
natural language understanding =
translation into logical form + logical reasoning.
We saw that even natural-language sentences already in seemingly logical form need to be interpreted, in order to determine, for example, whether they are missing any conditions, or whether they might be the converse of their intended meaning. Because of the need to perform this interpretation, readers typically use their own background goals and beliefs, to help them identify the intended logical form of the natural-language problem statement.
In this chapter we return to the topic of Chapters 1 and 2: the relationship between logic, natural language and the language of thought. We will look at the law regulating British Citizenship, which is the British Nationality Act 1981 (BNA), and see that its English style resembles the conditional style of Computational Logic (CL) (Sergot et al., 1986).
The BNA is similar to the London Underground Emergency Notice in its purpose of regulating human behaviour. But whereas the Emergency Notice relies on the common sense of its readers to achieve its desired effect, the BNA has the power of authority to enforce its provisions. The BNA differs from the Emergency Notice also in its greater complexity and the more specialised nature of its content.
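As an illustration of this conditional style, here is a Python paraphrase (not a quotation) of section 1.(1) of the BNA, roughly in the spirit of the formalisation by Sergot et al. (1986); the function names and the data in the example are assumptions made for this sketch.

    def acquires_citizenship_by_section_1_1(x, born_in_uk_after_commencement,
                                            parents, citizen_or_settled_at_birth):
        """x acquires British citizenship by section 1.(1)
           if x was born in the United Kingdom after commencement,
           and a parent of x was a British citizen, or settled in the
           United Kingdom, at the time of x's birth."""
        return (born_in_uk_after_commencement(x)
                and any(citizen_or_settled_at_birth(p, x) for p in parents(x)))

    # Illustrative data: Mary was born in the UK after commencement; her father
    # Peter was a British citizen at the time of her birth.
    print(acquires_citizenship_by_section_1_1(
        "Mary",
        born_in_uk_after_commencement=lambda x: x == "Mary",
        parents=lambda x: ["Peter"] if x == "Mary" else [],
        citizen_or_settled_at_birth=lambda p, x: p == "Peter"))   # True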