The aim of this chapter is to present an ethical landscape for humans and autonomous robots in the future of a physicalistic world. It touches mainly on a framework for robot ethics rather than on the concrete ethical problems possibly caused by recent robot technologies. It might be difficult to find sufficient answers to such ethical problems as those raised by future military robots unless we understand exactly what the autonomy of autonomous robots implies for robot ethics. This chapter presupposes that this “autonomy” should be understood as “being able to make intentional decisions from an internal state, and to doubt and reject any rule,” a definition which requires robots to have at least a minimal folk psychology in terms of desire and belief. And if any agent has a minimal folk psychology, we would have to say that it potentially has the same “rights and duties” as we humans with a fully fledged folk psychology, because our ethics would cover any agent insofar as it is regarded as having a folk psychology – even if only in the sense of Daniel C. Dennett’s intentional stance (Dennett, 1987). The lack of autonomy in this sense can be seen in Asimov’s famous laws (Asimov, 2000), cited by Bekey et al. in Chapter 14 of this volume, which could be interpreted as stating the rules that any autonomous robots of the future would have to obey (see Section 14.3).
The analysis of particular telencephalic systems has led to the derivation of algorithmic statements of their operation, which have grown to include communicating systems from sensory to motor and back. Like the brain circuits from which they are derived, these algorithms (e.g. Granger, 2006) perform and learn from experience. Their perception and action capabilities are often initially tested in simulated environments, which are more controllable and repeatable than robot tests, but it is widely recognized that even the most carefully devised simulated environments typically fail to transfer well to real-world settings.
Robot testing raises the specter of engineering requirements and programming minutiae, as well as sheer cost and the lack of standardization of robot platforms. For brain-derived learning systems, the primary desideratum of a robot is not that it have advanced pinpoint motor control, nor extensive scripted or preprogrammed behaviors. Rather, if the goal is to study how a robot can acquire new knowledge via actions, sensing the results of actions, and incremental learning over time, as children do, then relatively simple motor capabilities will suffice when combined with high-acuity sensors (sight, sound, touch) and powerful onboard processors.
This chapter discusses how cognitive developmental robotics (CDR) can bring about a paradigm shift in science and technology. A synthetic approach is revisited as a candidate for this paradigm shift, and CDR is reviewed from this viewpoint. A transdisciplinary approach appears to be a necessary condition, and how to represent and design “subjectivity” seems to be an essential issue.
It is no wonder that new scientific findings depend on the most advanced technologies. A typical example is brain-imaging technologies such as fMRI, PET, EEG, and NIRS, which have been developed to expand the observation of neural activity from static, local images to ones that show dynamic and global behavior, and which have therefore been revealing new mysteries of brain function. Such advanced technologies are presumed to be mere supporting tools for biological analysis, but is there any possibility that they could become a means of inventing new science?
From hardware and software to kernels and envelopes
At the beginning of robotics research, robots were seen as physical platforms on which different behavioral programs could be run, much like the hardware and software parts of a computer. However, recent advances in developmental robotics have allowed us to consider a reversed paradigm in which a single piece of software, called a kernel, is capable of exploring and controlling many different sensorimotor spaces, called envelopes. In this chapter, we review studies we have previously published about kernels and envelopes to retrace the history of this conceptual shift and discuss its consequences for robot design, and also for developmental psychology and the brain sciences.
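The following minimal sketch (the class and method names and the toy learning rule are my own assumptions, not taken from the chapter) shows the shape of the reversed paradigm: one body-agnostic kernel that can explore whatever envelope it is attached to.

```python
import random

class Envelope:
    """Illustrative sensorimotor space: a body or device the kernel can drive."""
    def __init__(self, actions):
        self.actions = actions            # motor commands available in this body

    def act(self, action):
        """Apply a motor command and return a sensory reading (stubbed here)."""
        return random.random()            # placeholder for real sensor feedback

class Kernel:
    """Illustrative body-agnostic learner: the same code drives any envelope."""
    def __init__(self):
        self.value = {}                   # rough running estimate per action

    def explore(self, envelope, steps=100):
        for _ in range(steps):
            action = random.choice(envelope.actions)
            feedback = envelope.act(action)
            old = self.value.get(action, 0.0)
            self.value[action] = old + 0.1 * (feedback - old)   # incremental update

# the same kernel can be attached to very different "bodies"
kernel = Kernel()
kernel.explore(Envelope(actions=["reach", "grasp"]))
kernel.explore(Envelope(actions=["turn_left", "turn_right", "forward"]))
```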
This paper presents a method for the composition of at-location with other semantic relations. The method is based on inference axioms that combine two semantic relations to yield another relation that is not otherwise expressed. An experimental study conducted on PropBank, WordNet, and eXtended WordNet shows that the inferences have high accuracy. The method is applicable to combining other semantic relations, and it is beneficial to many semantically intensive applications.
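As a rough illustration of how such an axiom operates (the relation names, the specific axiom, and the facts below are hypothetical, not taken from the paper), composing an at-location relation with a part–whole relation can yield an at-location relation that is never stated explicitly in the text:

```python
# Hypothetical illustration of composing two semantic relations via an inference axiom.
# Facts extracted from text, as (relation, argument1, argument2) triples.
facts = {
    ("AT-LOCATION", "statue", "plaza"),
    ("PART-WHOLE", "plaza", "city_centre"),
}

def compose_at_location(facts):
    """Illustrative axiom: AT-LOCATION(x, y) & PART-WHOLE(y, z) => AT-LOCATION(x, z)."""
    derived = set()
    for rel1, x, y1 in facts:
        for rel2, y2, z in facts:
            if rel1 == "AT-LOCATION" and rel2 == "PART-WHOLE" and y1 == y2:
                derived.add(("AT-LOCATION", x, z))
    return derived

print(compose_at_location(facts))   # {('AT-LOCATION', 'statue', 'city_centre')}
```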
In this study, we investigate using unsupervised generative learning methods for subjectivity detection across different domains. We create an initial training set using simple lexicon information and then evaluate two iterative learning methods with a base naive Bayes classifier to learn from unannotated data. The first method is self-training, which adds instances labeled with high confidence to the training set in each iteration. The second is a calibrated EM (expectation-maximization) method, in which we calibrate the posterior probabilities from EM so that the class distribution is similar to that in the real data. We evaluate both approaches on three different domains: movie data, news resources, and meeting dialogues, and we find that in some cases the unsupervised learning methods can achieve performance close to that of the fully supervised setup. We perform a thorough analysis of factors such as the self-labeling accuracy of the initial training set in unsupervised learning, the accuracy of the examples added during self-training, and the size of the initial training set in the different methods. Our experiments and analysis reveal inherent differences across the domains and identify the factors that explain the models’ behavior.
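As a rough sketch of the first of these methods (the feature matrices, threshold, and scikit-learn classifier below are illustrative assumptions, not the authors’ setup), self-training with a naive Bayes base classifier can be written as a short loop:

```python
# Illustrative self-training sketch, not the authors' code.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def self_train(X_seed, y_seed, X_unlabeled, iterations=10, threshold=0.9):
    """Grow the training set with confidently self-labeled examples from the pool."""
    X_train, y_train, pool = X_seed.copy(), y_seed.copy(), X_unlabeled.copy()
    for _ in range(iterations):
        if len(pool) == 0:
            break
        clf = MultinomialNB().fit(X_train, y_train)
        proba = clf.predict_proba(pool)
        predicted = clf.classes_[proba.argmax(axis=1)]
        confident = proba.max(axis=1) >= threshold          # high-confidence instances
        if not confident.any():
            break
        X_train = np.vstack([X_train, pool[confident]])     # add them to the training set
        y_train = np.concatenate([y_train, predicted[confident]])
        pool = pool[~confident]
    return MultinomialNB().fit(X_train, y_train)
```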
In the Prisoner’s Dilemma, the need to choose between different actions is generated by the need to solve an achievement goal, obtained as the result of a request from the police to turn witness against your friend. The achievement goal, triggered by the external event, is the motivation of the action you eventually choose.
But in classical decision theory, the motivation of actions is unspecified. Moreover, you are expected to evaluate the alternatives by considering only their likely consequences.
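As a minimal sketch of that style of evaluation (the probabilities and payoffs below are invented purely for illustration), each available action is scored only by the expected value of its likely consequences:

```python
# Illustrative expected-consequence evaluation for the Prisoner's Dilemma.
# Payoffs (negative years in prison) and the assumed probability that the
# other prisoner betrays you are made up for the example.
payoff = {
    ("stay_silent", "silent"): -1, ("stay_silent", "betray"): -10,
    ("betray", "silent"): 0, ("betray", "betray"): -5,
}
p_other_betrays = 0.5

def expected_value(my_action):
    return (payoff[(my_action, "betray")] * p_other_betrays +
            payoff[(my_action, "silent")] * (1 - p_other_betrays))

best = max(["stay_silent", "betray"], key=expected_value)
print(best, expected_value(best))   # consequences alone favour "betray" here
```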
This additional chapter explores the semantics of classical logic and conditional logic. In classical logic, the semantics of a set of sentences S is determined by the set of all the interpretations (or semantic structures), called models, that make all the sentences in S true. The main concern of classical logic is with the notion of a sentence C being a logical consequence of S, which holds when C is true in all models of S.
Semantic structures in classical logic are arbitrary sets of individuals and relationships, which constitute the denotations of the symbols of the language in which sentences are expressed. In this chapter, I argue the case for restricting the specification of semantic structures to sets of atomic sentences, called Herbrand interpretations.
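Stated slightly more formally (a standard textbook rendering restated for reference, with a toy vocabulary of my own choosing for the Herbrand case):

```latex
% Logical consequence: C follows from S iff every model of S is also a model of C.
S \models C \;\iff\; \text{for every interpretation } M:\; M \models S \;\Rightarrow\; M \models C

% Herbrand interpretations are sets of atomic sentences. For a vocabulary with a
% single constant, bob, and a single predicate, will-go, there are exactly two:
\emptyset \qquad \text{and} \qquad \{\mathit{will\text{-}go}(\mathit{bob})\}
```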
In this chapter we revisit the ancient Greek fable of the fox and the crow, to show how the proactive thinking of the fox outwits the reactive thinking of the crow. In later chapters, we will see how reactive and proactive thinking can be combined.
The fox and the crow are a metaphor for different kinds of people. Some people are proactive, like the fox in the story. They like to plan ahead, foresee obstacles, and lead an orderly life. Other people are reactive, like the crow. They like to be open to what is happening around them, take advantage of new opportunities, and be spontaneous. Most people are both proactive and reactive, at different times and to varying degrees.
I have made a case for a comprehensive, logic-based theory of human intelligence, drawing upon and reconciling a number of otherwise competing paradigms in Artificial Intelligence and other fields. The most important of these paradigms are production systems, logic programming, classical logic and decision theory.
The production system cycle, suitably extended, provides the bare bones of the theory: the observe–think–decide–act agent cycle. It also provides some of the motivation for identifying an agent’s maintenance goals as the driving force of the agent’s life.
It is a common view in some fields that logic has little to do with search. For example, Paul Thagard (2005) in Mind: Introduction to Cognitive Science states on page 45: “In logic-based systems, the fundamental operation of thinking is logical deduction, but from the perspective of rule-based systems, the fundamental operation of thinking is search.”
Similarly, Jonathan Baron (2008) in his textbook Thinking and Deciding writes on page 6: “Thinking about actions, beliefs and personal goals can all be described in terms of a common framework, which asserts that thinking consists of search and inference. We search for certain objects and then make inferences from and about the objects we have found.” On page 97, Baron states that formal logic is not a complete theory of thinking because it “covers only inference”.
This additional chapter shows that both forward and backward reasoning are special cases of the resolution rule of inference. Resolution also includes more general ways of combining two clauses, as in the following case.
In the propositional case, given two clauses of the form:

A or C if B
E if A and D

where B and D are conjunctions of atoms including the atom true, and C and E are disjunctions of atoms including the atom false, resolution derives the resolvent:

C or E if B and D

The two clauses from which the resolvent is derived are called the parents of the resolvent, and the atom A is called the atom resolved upon.
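As a concrete illustration (a toy representation of my own, not from the text), the step can be written as a small function on clauses represented as a pair of sets, the conclusion atoms and the condition atoms:

```python
# Toy propositional resolution: a clause (conclusions, conditions) stands for
# "the disjunction of `conclusions` if the conjunction of `conditions`".
def resolve(clause1, clause2, atom):
    """Resolve on `atom`, assumed to occur in the conclusions of clause1
    and in the conditions of clause2."""
    concl1, cond1 = clause1
    concl2, cond2 = clause2
    assert atom in concl1 and atom in cond2
    resolvent_conclusions = (concl1 - {atom}) | concl2    # C or E
    resolvent_conditions = cond1 | (cond2 - {atom})       # B and D
    return resolvent_conclusions, resolvent_conditions

# "a or c if b" and "e if a and d" resolve on "a", giving "c or e if b and d"
parent1 = ({"a", "c"}, {"b"})
parent2 = ({"e"}, {"a", "d"})
print(resolve(parent1, parent2, "a"))   # ({'c', 'e'}, {'b', 'd'})
```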
To a first approximation, the negation as failure rule of inference is straightforward. Its name says it all:
to show that the negation of a sentence holds
try to show the sentence holds, and
if the attempt fails, then the negation holds.
But what does it mean to fail? Does it include infinite or only finite failure? To answer these questions, we need a better understanding of the semantics.
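A minimal sketch of the rule for a propositional rule base (the representation and the depth bound are my own assumptions) makes the issue concrete: an interpreter like this detects only finite failure, simply giving up when its depth bound runs out.

```python
# Tiny illustrative backward-reasoning prover with negation as failure.
# rules maps a conclusion to a list of alternative condition lists; a condition
# written as ("not", p) succeeds only if the attempt to show p fails.
rules = {
    "dry": [[("not", "raining")]],
    "raining": [],                            # there is no way to show "raining"
}

def show(goal, depth=20):
    if depth == 0:
        return False                          # give up: only finite failure is detected
    if isinstance(goal, tuple) and goal[0] == "not":
        return not show(goal[1], depth - 1)   # negation as failure
    for conditions in rules.get(goal, []):
        if all(show(c, depth - 1) for c in conditions):
            return True
    return False

print(show("dry"))   # True: "raining" cannot be shown, so "not raining" holds
```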
Consider, for example, the English sentence:
bob will go if no one goes
Ignore the fact that, if Bob were more normal, it would be more likely that bob will go if no one else goes. Focus instead on the problem of representing the sentence more formally as a logical conditional.
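One candidate rendering (my own illustration, not necessarily the one the chapter adopts) reads “no one goes” as negation as failure applied to an existential:

```latex
% One possible formalisation (illustrative):
\mathit{will\text{-}go}(\mathit{bob}) \;\leftarrow\; \mathrm{not}\;\exists X\, \mathit{will\text{-}go}(X)
```

If the condition succeeds and Bob therefore goes, then someone goes after all, which is why such sentences put pressure on the semantics of negation.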
It’s easy to take negation for granted, and not give it a second thought. Either it will rain or it won’t rain. But definitely it won’t rain and not rain at the same time and in the same place. Looking at it like that, you can take your pick. Raining and not raining are on a par, like heads and tails. You can have one or the other, but not both.
So it may seem at first glance. But on closer inspection, the reality is different. The world is a positive, not a negative place, and human ways of organising our thoughts about the world are mainly positive too. We directly observe only positive facts, like this coin is showing heads, or it is raining. We have to derive the negation of a positive fact from the absence of the positive fact. The fact that this coin is showing heads implies that it is not showing tails, and the fact that it is sunny implies, everything else being equal, that it is not raining at the same place and the same time.
In this chapter, I will discuss two psychological experiments that challenge the view that people have an inbuilt ability to perform abstract logical reasoning. The first of these experiments, the “selection task”, has been widely interpreted as showing that, instead of logic, people use specialised procedures for dealing with problems that occur commonly in their environment. The second, the “suppression task”, has been interpreted as showing that people do not reason using rules of inference, like forward and backward reasoning, but instead construct a model of the problem and inspect the model for interesting properties. I will respond to some of the issues raised by these experiments in this chapter, but deal with them in greater detail in Chapter 16, after presenting the necessary background material.
Logical Extremism, which views life as all thought and no action, has given logic a bad name. It has overshadowed its near relation, Logical Moderation, which recognises that logic is only one way of thinking, and that thinking isn’t everything.
The antithesis of Logical Extremism is Extreme Behaviourism, which denies any “life of the mind” and views Life instead entirely in behavioural terms. Behaviourism, in turn, is easily confused with the condition–action rule model of thinking.
Computational Logic has been developed in Artificial Intelligence over the past 50 years or so, in an attempt to program computers to display human levels of intelligence. It is based on Symbolic Logic, in which sentences are represented by symbols and reasoning is performed by manipulating symbols, like solving equations in algebra. However, attempts to use Symbolic Logic to solve practical problems by means of computers have led to many simplifications and enhancements. The resulting Computational Logic is not only more powerful for use by computers, but also more useful for the original purpose of logic, to improve human thinking.
Traditional Logic, Symbolic Logic and Computational Logic are all concerned with the abstract form of sentences and how their form affects the correctness of arguments. Although Traditional Logic goes back to Aristotle in the fourth century b.c., Symbolic Logic began primarily in the nineteenth century, with the mathematical forms of logic developed by George Boole and Gottlob Frege. It was enhanced considerably in the twentieth century by the work of Bertrand Russell, Alfred North Whitehead, Kurt Gödel and many others on its application to the Foundations of Mathematics. Computational Logic emerged in the latter half of the twentieth century, starting with attempts to mechanise the generation of proofs in mathematics, and was extended both to represent more general kinds of knowledge and to perform more general kinds of problem solving. The variety of Computational Logic presented in this book owes much to the contributions of John McCarthy and John Alan Robinson.
The mere possibility of Artificial Intelligence (AI) – of machines that can think and act as intelligently as humans – can generate strong emotions. While some enthusiasts are excited by the thought that one day machines may become more intelligent than people, many of its critics view such a prospect with horror.
Partly because these controversies attract so much attention, one of the most important accomplishments of AI has gone largely unnoticed: the fact that many of its advances can also be used directly by people, to improve their own human intelligence. Chief among these advances is Computational Logic.
What do the passenger on the London Underground, the fox, the wood louse, the Mars explorer and even the heating thermostat have in common? It certainly isn’t the way they dress, the company they keep, or their table manners. It is the way that they are all embedded in a constantly changing world, which sometimes threatens their survival, but at other times provides them with opportunities to thrive and prosper.
To survive and prosper in such an environment, an agent needs to be aware of the changes taking place in the world around it, and to perform actions that change the world to suit its own purposes. No matter whether it is a human, wood louse, robot or heating thermostat, an agent’s life is an endless cycle, in which it must:
repeatedly (or concurrently)
observe the world,
think,
decide what actions to perform, and
act.
We can picture this relationship between the mind of an agent and the world as a cycle that links them. The observation–thought–decision–action cycle is common to all agents, no matter how primitive or how sophisticated. For some agents, thinking might involve little more than firing a collection of stimulus–response associations, without any representation of the world. For other agents, thinking might be a form of symbol processing, in which symbols in the mind represent objects and relationships in the world. For such symbol-manipulating agents, the world is a semantic structure, which gives meaning to the agent’s thoughts.
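A minimal sketch of the cycle (the class, the thermostat-style details, and the method names are illustrative, not from the book):

```python
class Agent:
    """Illustrative observe-think-decide-act agent; the world here is a simple stub."""
    def observe(self, world):
        return world.get("temperature")           # e.g. a thermostat's single sensor

    def think(self, observation):
        # anything from a stimulus-response lookup to full symbolic reasoning
        return {"too_cold": observation is not None and observation < 18}

    def decide(self, beliefs):
        return "heating_on" if beliefs["too_cold"] else "heating_off"

    def act(self, world, action):
        world["heater"] = (action == "heating_on")

world = {"temperature": 15, "heater": False}
agent = Agent()
for _ in range(3):                                # an endless cycle, truncated here
    observation = agent.observe(world)
    beliefs = agent.think(observation)
    action = agent.decide(beliefs)
    agent.act(world, action)
print(world)                                      # {'temperature': 15, 'heater': True}
```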