
Chapter 1 - Reasoning

from Part I - Cognition

Published online by Cambridge University Press:  17 August 2023

Stephen K. Reed
Affiliation:
San Diego State University

Summary

Chapter 1 begins with the distinction between reasoning from associations and reasoning from rules – a distinction that will resurface in subsequent chapters on creativity and innovation. The associative system is reproductive, automatic, and emphasizes similarity. The rule-based system is productive, deliberative, and emphasizes verification. Daniel Kahneman’s (2011) best-selling book Thinking Fast and Slow introduced readers to how associative and rule-based reasoning influence the speed of responses. The third section on biases in reasoning describes Kahneman’s classic research with Amos Tversky on how the use of heuristics such as availability and representativeness influences frequency estimates. The final section discusses monitoring reasoning, in which people use knowledge to improve their thinking skills. Monitoring reasoning is a metacognitive skill that controls the selection, evaluation, revision, and abandonment of cognitive tasks, goals, and strategies.

Encouraging Innovation: Cognition, Education, and Implementation, pp. 3–15
Publisher: Cambridge University Press
Print publication year: 2023

I am a cognitive psychologist, so it should not be surprising that the first part of the book is about cognition. The first three chapters on reasoning, problem solving, and creativity are fundamental cognitive skills that contribute to our ability to innovate as will become evident in the second part of the book on teaching these skills and the third part of the book on applying these skills. Chapters 4 and 5 are on group decision making and collaborative problem solving because innovation typically requires teamwork.

We begin with reasoning because it is required for all of the more complex skills discussed throughout the book. We also rely on reasoning throughout the day, and at times wish we had spent more time on reflection. Here is a simple example. Each day I used a bottle whose black cap I placed on a black countertop, and I later had trouble locating the cap because the two colors were identical. A month later, it occurred to me that the interior of the cap might be a different color. I can now easily locate the cap by turning it over to expose its yellow interior before placing it on the black surface. This is only one of the many ‘I wish I had thought of it sooner’ occurrences that I (and I hope others) have experienced.

This chapter begins with the distinction between reasoning from associations and reasoning from rules – a distinction that will resurface in subsequent chapters and in the second section of this chapter on fast versus slow responses. Daniel Kahneman’s (2011) best-selling book Thinking Fast and Slow introduced many readers to this topic. The third section on biases in reasoning describes Kahneman’s classic research with Amos Tversky, which earned Kahneman the 2002 Nobel Prize in Economic Sciences following Tversky’s premature death in 1996. The final section of this chapter describes monitoring reasoning, in which people use knowledge to improve their thinking skills.

Associations versus Rules

Steven Sloman (1996) at Brown University initially elaborated on the distinction between reasoning based on associations and reasoning based on rules. Table 1.1 lists the characteristics of the two forms of reasoning. Associative reasoning depends on similarity and associative relations such as classifying sharks as fish. Similarity relations, however, can occasionally be misleading. Whales appear similar to fish, but consulting rules avoids a misclassification. Fish lay eggs and can breathe under water. Whales cannot and therefore are not fish.

Table 1.1 Associative versus rule-based reasoning. Based on Sloman (1996)

Characteristics           Associative system      Rule-based system
Principle of operation    Similarity              Symbol manipulation
Relations                 Associations            Causal and logical
Nature of processing      Reproductive            Productive
                          Automatic               Strategic
Functions                 Intuition               Deliberation
                          Creativity              Formal analysis
                          Imagination             Verification

Rules depend on causal relations and the abstraction of relevant features. Examples include lists of instructions, recipes, laws, and logic. Rules help us manipulate symbols such as words by transforming an active sentence (the dog chased a ball) into a passive sentence (the ball was chased by a dog). They help us perform calculations when the symbols are numbers.

Some reasoning requires a combination of associations and rules. Children learn a multiplication table in school (6 × 6 = 36) and then rules for using these associations to solve multiplication problems (36 × 4). Rules and associations support each other in this case, but they can also conflict (Sloman, 1996). Consumer choices may be guided either by associations based on effective advertising or by a rule to save money by selecting a less costly, but equally effective, product that lacks a prominent brand name.

Sloman (1996) reports that the distinction between associations and rules is important to educational practice in two ways. Students must learn rules to provide productivity and a method for verifying conclusions but must also develop useful associations for flexible and less effortful reasoning. Useful associations guide the learner in the right direction, while rules provide a method for checking and correcting performance. A second effect on educational practice is that the distinction between associations and rules should help teachers predict which concepts learners will find difficult. Concepts should be difficult to learn when the rules are inconsistent with students’ natural associations.

The distinction between automatic and strategic processing in Table 1.1 has practical applications for selecting nudges or boosts to influence behavior. In their book Nudge: Improving Decisions about Health, Wealth, and Happiness (2008), Richard Thaler at the University of Chicago and Cass Sunstein at Harvard University advocated that, although people should be free to make their own choices, they should be nudged in directions that will improve their lives. Nudges try to direct people toward making good decisions, as in the many government campaigns that urged people to be vaccinated against COVID-19. In 2017 Richard Thaler received the Nobel Prize in Economic Sciences for demonstrating the many beneficial effects of nudges.

An alternative to nudges is boosting. Boosting requires making an informed decision, such as deciding whether to be vaccinated after studying the pros and cons of the vaccination. Ralph Hertwig at Berlin’s Max Planck Institute for Human Development and Till Grüne-Yanoff at Stockholm’s Royal Institute of Technology (2017) classify nudging as associative processing because nudges do not require critical thinking. They classify boosting as rule-based processing because boosting creates new procedures and mental tools to help people make better decisions. The goal of boosting is to create competencies through enhancing skills, knowledge, and decision strategies. Boosts require active cooperation and an investment of time, effort, and motivation (Hertwig & Grüne-Yanoff, 2017).

In their sequel, Nudge: The Final Edition, Thaler and Sunstein (2021) state that they are not opposed to boosting and that there is no need to select one over the other. Choices based on education are admirable, even when nudges push people in one direction. However, receiving sufficient education to make intelligent choices is unrealistic when the choices are very difficult. A nudge can then be helpful.

The cognitive functions of associations, listed at the bottom of Table 1.1, are particularly relevant to innovation. Robert and Michele Root-Bernstein emphasize the contribution of intuition, creativity, and imagination to innovative thinking (Root-Bernstein & Root-Bernstein, 2003). They list 13 pre-verbal, pre-logical skills for creative thinking that have been identified from hundreds of autobiographical sources, interviews, and psychological studies. The skills range from observing to synthesizing, in which emotions, feelings, sensations, knowledge, and experience combine in a unified sense of comprehension. They acknowledge that education correctly emphasizes the analytical, logical, technical, objective, and descriptive aspects of each field. But they advocate in addition that the subjective, emotional, intuitive, synthetic, and sensual aspects of creativity deserve equal recognition.

One of the skills in their list – playing – occurs before formal education. Playing stimulates our minds, bodies, knowledge, and skills for the pure emotional joy of using them. It has no serious goal but is helpful for opening new areas of discovery (Root-Bernstein & Root-Bernstein, 2003). The authors refer to Alexander Calder as an example. Calder had a lifelong interest in designing toys for children before designing his innovative kinetic sculptures and free-floating mobiles. Sandra Russ (1993) documented the helpful role of play in creativity in her book Affect and Creativity. The book attracted early attention to this topic by discussing artistic versus scientific creativity, adjustments in the creative process, the role of computers in learning about creativity, gender differences, and enhancing creativity in home, school, and work settings.

Fast versus Slow Responses

Daniel Kahneman’s (2011) book Thinking Fast and Slow describes two forms of reasoning that he refers to as System 1 and System 2. System 1 is fast and intuitive. It aligns with the associative system in Table 1.1. System 2 is slow and analytical. It aligns with the rule-based system. Kahneman’s book reveals that reasoning results in errors when people respond too quickly by relying too much on System 1.

One piece of support for this claim is performance on the Cognitive Reflection Test designed by Shane Frederick when he was an assistant professor of management science at the Massachusetts Institute of Technology. The test consists of the three questions listed in Table 1.2. Try answering the questions before reading about the findings.

Table 1.2 The Cognitive Reflection Test

1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? ___ cents
2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes
3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days
From Frederick, S. (2005). [Copyright American Economic Association; reproduced with permission of the Journal of Economic Perspectives].

The purpose of the Cognitive Reflection Test is to distinguish between spontaneous and reflective responses. The incorrect spontaneous responses are 10 cents for the first question, 100 minutes for the second question, and 24 days for the third question. Reflection typically results in the correct answers. If the ball costs 5 cents, then the bat costs $1.05 and the total cost is $1.10. If 5 machines can make 5 widgets in 5 minutes, then 100 machines can make 100 widgets in 5 minutes. If lilies double in size every day, then the lake will be covered one day after it is half-filled on day 47. Check if the correct answers make sense to you after reflecting on these questions.
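The reflective answers take only a few lines of arithmetic to verify. This short Python sketch (my illustration, not part of the original test) checks each one:

```python
# Check the reflective answers to the three Cognitive Reflection Test items.

# 1. Bat and ball: in cents, ball + (ball + 100) = 110, so ball = 5.
ball = (110 - 100) // 2
bat = ball + 100
assert ball + bat == 110 and bat - ball == 100  # ball is 5 cents, not 10

# 2. Widgets: one machine makes one widget in 5 minutes, so 100 machines
#    working in parallel make 100 widgets in the same 5 minutes.
minutes = 5                     # time per widget per machine, from "5-5-5"
assert minutes == 5             # answer is 5 minutes, not 100

# 3. Lily pads: the patch doubles daily, so the lake is half covered one
#    day before it is fully covered on day 48.
half_covered = 48 - 1
assert half_covered == 47       # answer is 47 days, not 24

print(ball, minutes, half_covered)
```

Each assertion restates the constraint in the corresponding question, so the script fails if a spontaneous answer is substituted for the reflective one.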

If you made a mistake on any of these questions, you have lots of company. Frederick (2005) found that fewer than half of the students at such elite universities as Harvard, MIT, and Princeton correctly answered all three questions. A perfect score of 3 was obtained by only 48% of the students at MIT, 26% of the students at Princeton, and 20% of the students at Harvard. These percentages were lower at less elite universities. Kahneman (2011) finds the failure to check these spontaneous responses remarkable because checking takes only a few seconds. People apparently place too much faith in their intuitions and avoid the cognitive effort required to check them.

These findings should not imply that all fast responses are error-prone. Keith Stanovich (2018) at the University of Toronto proposed that answering these questions involves interaction among three stages: (1) activating incorrect knowledge, (2) detecting errors from spontaneous System 1 processing, and (3) activating correct knowledge. The three stages show how the transition from low to moderate to high knowledge influences reasoning. A key aspect of the model is whether a person has the relevant knowledge to provide a correct answer.

Let’s apply the model to a hypothetical middle-school student who has been studying linear growth in class, in which growth can be plotted as a straight line. She then reads the problem in Table 1.2 about when the lake will be half covered by lily pads. The student responds, ‘24 days’, which would be the correct answer for linear growth. This error does not conflict with the correct response because the student lacks the knowledge to answer correctly (Stage 1). Later in the year the student learns about exponential growth, which is required for a correct answer. Knowledge about both linear and exponential growth can create conflict as to which applies to a problem (Stage 2). The conflict can result in a correct response if the student overrides her initially incorrect response based on linear growth. Alternatively, the student may fail to notice the conflict and so continues to misclassify the problem as linear growth. Stage 3 occurs when the student becomes an expert at identifying exponential growth. In this case, her spontaneous (System 1) response regarding exponential growth is correct. An advantage of Stanovich’s model is that it specifies how reasoning changes with the accumulation of knowledge.

A question raised by the model is whether successful reasoning occurs by initially generating a correct response or by overriding an incorrect response. A team of investigators in France and Canada designed a clever experiment to answer this question (Raoelison, Thompson, & De Neys, 2020). Their two-response method elicits both an initial intuitive response and a final, deliberative one. The instructions indicated that participants should initially respond quickly with the first answer that came to mind. The problem was then presented again with instructions to actively reflect on it before responding.

One hundred online participants took two standard reasoning tests to measure whether their reasoning ability on these conflict problems could be better predicted by their initial intuitive response or by their second, deliberative response. Both intuitive and deliberative responses predicted performance on the two reasoning tests, but the initial intuitive responses made better predictions. The investigators caution that reasoning research should not overestimate the importance of deliberative correction in explaining successful reasoning: the initial intuitive answers may already be correct. As indicated by Stanovich (2018), the source of correct responses depends on the level of knowledge.

Biases

The distinction between spontaneous and reflective reasoning has been one of the most important topics in the study of reasoning. Another very important topic has been the identification of various strategies people use to make numerical judgments. Amos Tversky and Daniel Kahneman referred to these strategies as heuristics – strategies that are often successful but can occasionally result in systematic biases, as described in Michael Lewis’s (2016) book The Undoing Project: A Friendship That Changed Our Minds.

One of their initial investigations studied how people judge the frequency of events. Their availability heuristic proposes that we estimate frequency by judging the ease with which relevant instances come to mind (Tversky & Kahneman, 1973). For example, we may estimate the divorce rate in a community by recalling divorces among our acquaintances. When availability is highly correlated with actual frequency, estimates are accurate.

Some instances, however, might be difficult to retrieve from memory even though they occur frequently. The availability hypothesis predicts that frequency should be underestimated in this case. Suppose you sample a four-letter word at random from an English text. Is it more likely that the word starts with a K or that K is its third letter? The availability hypothesis proposes that people try to answer this question by judging how easy it is to think of examples in each category. Because it is easier to think of words that begin with a certain letter, people should be biased toward responding that more words start with the letter K than have a K in the third position. For each of five letters tested, the median estimate was that twice as many words had that letter in the first position as in the third – even though all five letters actually occur more often in the third position.
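The underlying frequency question is easy to probe computationally: given any word list, count how often a letter appears first versus third. The list below is a made-up toy sample for illustration, not the corpus Tversky and Kahneman analyzed:

```python
# Count first- vs third-position occurrences of a letter in a word list.
def position_counts(words, letter):
    first = sum(1 for w in words if len(w) >= 3 and w[0] == letter)
    third = sum(1 for w in words if len(w) >= 3 and w[2] == letter)
    return first, third

# Toy sample; a real test requires a large sample of English text.
sample = ["kite", "know", "keep", "lake", "make", "take", "bake", "acknowledge"]
print(position_counts(sample, "k"))  # (3, 5): more k's in third position
```

Even in this tiny hand-picked list the third position wins, yet the first-position words are the ones that come to mind easily – which is exactly the retrieval asymmetry the availability hypothesis exploits.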

Several years later Slovic, Fischhoff, and Lichtenstein (1976) used the availability hypothesis to account for how people estimated the relative probability of 41 causes of death, including diseases, accidents, homicide, suicide, and natural hazards. A large sample of college students judged which member of a pair was the more likely cause of death. Table 1.3 shows how often they were correct for some of these pairs. The frequencies of accidents, cancer, and tornadoes – all of which receive heavy media coverage – were greatly overestimated. Asthma and diabetes, which receive less media coverage, were underestimated. For instance, the majority of students judged that tornadoes were the more likely cause of death even though deaths from asthma were almost 21 times more frequent. Examination of the events most seriously misjudged provided indirect support for the hypothesis that availability, particularly as influenced by the media, biases probability estimates.

Table 1.3 Judgments of relative frequency of causes of death. Based on Slovic, Fischhoff, & Lichtenstein (1976)

Less likely       More likely        True ratio    Percentage of correct discrimination
Asthma            Firearm accident    1.20         80
Breast cancer     Diabetes            1.25         23
Lung cancer       Stomach cancer      1.25         25
All accidents     Stroke              1.85         20
Drowning          Suicide             9.60         70
Diabetes          Heart disease      18.90         97
Tornado           Asthma             20.90         42

Another heuristic that causes biases is the representativeness heuristic (Kahneman & Tversky, 1972). Questions about probabilities typically have the general form: (1) What is the probability that object A belongs to class B? or (2) What is the probability that process B will generate event A? People frequently answer such questions by evaluating the degree to which A is representative of B – that is, the degree to which A resembles B. When A is very similar to B, the probability that A originates from B is judged to be high. When A is not very similar to B, the probability that A originated from B is judged to be low.

One problem with basing decisions solely on representativeness is that the decisions ignore other relevant information such as sample size. For example, finding 600 boys in a sample of 1,000 babies was judged as likely as finding 60 boys in a sample of 100 babies, even though the latter event is much more likely. Because the similarity between the obtained proportion (0.6) and the expected proportion (0.5) is the same in both cases, people did not see any difference between them. However, statisticians tell us that it is easier to obtain a discrepancy for small samples than for large samples. The sample would, of course, be representative of the population if a researcher could sample the entire population, but populations are typically too large to make this practical.
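The statisticians’ point can be made concrete with a binomial calculation (a sketch of my own, not from the chapter): assuming each birth is equally likely to be a boy or a girl, the chance of observing at least 60% boys is dramatically higher in the small sample.

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

small = prob_at_least(60, 100)    # about 0.03: unusual but quite possible
large = prob_at_least(600, 1000)  # on the order of 1e-10: effectively never
print(small > 1_000_000 * large)  # True: small samples stray far more easily
```

The two proportions look identical (0.6 versus 0.5 in both cases), which is why representativeness treats the samples as equivalent; the exact probabilities show they differ by many orders of magnitude.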

Consider the case of McDonald’s. In the mid-1990s, McDonald’s did extensive group testing of the Arch Deluxe – an improved, but more expensive, version of the Big Mac. People in the test sample liked the new hamburger, but the Arch Deluxe turned out to be a failure. Those who volunteered to be included in the initial testing were likely big fans of McDonald’s or hamburgers or both. But the average person goes to McDonald’s for a Big Mac, not a fancy variation. The test sample did not represent the larger population of McDonald’s customers, so the Arch Deluxe survived its initial, but not final, test (List, 2021).

The availability and representativeness heuristics are supplemented by other sources of bias that are discussed in Chapter 8 of Risk: A User’s Guide by Stanley McChrystal and Anna Butrico (2021). Common biases are:

  • Information sampling bias that results in spending more time and energy on information that everyone already knows.

  • Confirmation bias that results in searching for information that supports existing beliefs.

  • Halo effect that results in viewing people favorably regardless of their actions.

  • Status quo bias that results in believing that the current state of affairs is preferable.

  • Hindsight bias that results in believing one could have predicted the outcome after observing the outcome.

  • Plan continuation bias that results in not changing a course of action when the situation changes.

  • Ingroup bias that results in thinking those within a group are superior to those outside the group.

An example of a person who initially benefited from these biases is Bernie Madoff, who was sentenced to 150 years in prison for running a multibillion-dollar Ponzi scheme (McChrystal & Butrico, 2021). His fictitious investments fooled even sophisticated investors, including corporate leaders such as those at JPMorgan. The Securities and Exchange Commission also assumed that Madoff – an experienced investor who had advised the Commission – was acting responsibly. Risk: A User’s Guide contains many other case studies in which the failure to recognize biases increases risk. It also contains exercises for readers to apply these ideas.

Monitoring Reasoning

Let’s conclude this chapter with some thoughts about monitoring reasoning. Hopefully, some of the information already presented in this chapter may help you monitor your own reasoning. You may now be more reflective before answering questions that can trick you into giving a quick but incorrect response. You may consider whether media coverage and other sources of availability bias your judgments of frequency. You may evaluate sample size as a variable that can influence the outcome of surveys and experiments.

Monitoring reasoning was a relatively unexplored topic when John Flavell introduced it in a highly cited 1979 article in American Psychologist (Flavell, 1979). Previous research with preschool and elementary school children by Flavell and others had demonstrated that younger children had difficulty judging when they had learned a list of items well enough to recall them. They also believed that they had understood verbal instructions that intentionally included omissions and obscurities. These and other findings suggested that young children are quite limited in their knowledge about monitoring comprehension, memory, and other types of cognition.

Flavell (1979) referred to understanding cognitive processing as ‘metacognitive knowledge’. Cognitive strategies are invoked to make progress; metacognitive strategies are needed to understand and monitor them. Metacognition controls the selection, evaluation, revision, and abandonment of cognitive tasks, goals, and strategies. Flavell believed that metacognition had an essential role in determining the success of many activities performed by both children and adults:

My present guess is that metacognitive experiences are especially likely to occur in situations that stimulate a lot of careful, highly conscious thinking: in a job or school task that expressly demands that kind of thinking; in novel roles or situations, where every major step you take requires planning beforehand and evaluation afterwards; where decisions and actions are at once weighty and risky; where high affective arousal or other inhibitors of reflective thinking are absent.

(Flavell, 1979, p. 908)

Flavell’s beliefs proved to be correct. Over four decades after he introduced the theoretical construct of metacognition, it has been applied across an ever-widening range of contexts in educational, developmental, cognitive, and social psychology (Kuhn, 2022). The challenge therefore is to identify whether there is any core identity that exists across these many applications. A leading expert in the study of reasoning, Deanna Kuhn at Columbia University, emphasized inhibitory control as the central characteristic of metacognition. She reviewed its importance for both declarative (knowing what) and procedural (knowing how) learning.

Inhibitory control is necessary for declarative learning to replace old, incorrect knowledge with new knowledge, as specified in Stella Vosniadou’s framework theory. The theory proposes that children possess a relatively coherent, but occasionally faulty, conceptual system that they use to explain and predict everyday phenomena (Vosniadou, 2013; Vosniadou & Skopeliti, 2014). For instance, they may conclude from observation that the sun rotates around the earth because they can see it move across the sky between morning and dusk. Conflicts arise, however, when they learn about science. Science instruction reclassifies the earth as a spherical planet rotating in space rather than a flat, stable physical object with the sky and solar objects above it. Their older perspective must be inhibited, however, so it does not continue to compete with the new perspective.

Inhibition during procedural learning is illustrated by Robert Siegler’s (2005) overlapping waves model of strategy learning, in which some strategies become less frequent and others become more frequent with experience and development. Children often alternate between a less efficient and a more efficient strategy before settling on the latter. Second graders initially added and then subtracted the same number in arithmetic problems of the form a + b – b such as 5 + 4 – 4 and 6 + 2 – 2. During a transition period, they alternated between this strategy and the more efficient strategy of simply giving the first number as the answer. The two strategies competed for usage before children were able to inhibit the use of the less efficient strategy.
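The two competing procedures can be written out explicitly. This sketch (my illustration, not Siegler’s notation) shows why the shortcut always agrees with the laborious route on a + b – b problems:

```python
# Less efficient strategy: actually add b, then subtract b.
def add_then_subtract(a, b):
    return a + b - b

# More efficient strategy: recognize that b - b cancels and answer a.
def shortcut(a, b):
    return a

# The strategies always agree, so the slower one can safely be inhibited.
for a, b in [(5, 4), (6, 2), (9, 7)]:
    assert add_then_subtract(a, b) == shortcut(a, b)
print("strategies agree")
```

The equivalence holds for every a and b, which is what a child must eventually recognize before inhibiting the computation-heavy strategy.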

Figure 1.1 illustrates a comprehensive model that integrates reasoning, metacognitive monitoring, and metacognitive control (Ackerman & Thompson, 2017). The initial reasoning components distinguish between an initial rapid response and analytic processing – the distinction made earlier in the chapter. The metacognitive components apply both to this chapter and to the next chapter on problem solving.

Figure 1.1 Components of reasoning, metacognitive monitoring, and metacognitive control.

From Ackerman & Thompson (2017).

An initial metacognitive step for problem solving is to determine whether the problem can be solved. The year following my retirement, I attended a lecture by a prominent statistician who was working with a group of other prominent statisticians on extremely difficult problems. He explained that the initial step in their discussions was attempting to judge whether they could solve the problem. I was surprised because I spent an entire career investigating problem solving without ever considering that question. I had always assumed as a student that instructors would give me solvable problems and, as an instructor and researcher, I always gave students solvable problems. However, we may encounter many unsolvable problems in our lives, and it would be helpful to initially identify them.

Ackerman and Thompson included several examples of how their model applies to our lives. The first example discussed how a visitor to Paris might use a Metro map to plan a trip from the hotel to the Eiffel Tower. I found the example very relevant because it activated memories of my successes and failures in using a Metro map to travel from my hotel to various sites in Tokyo. After studying the map on my initial excursion, I decided I needed to exit at the fourth stop and it worked, increasing my intermediate confidence. However, I had a feeling of error when the Metro came above ground on my return trip and I saw I was in a residential neighborhood. I discovered I was on an express train and therefore should have exited at the second stop. I also changed strategies after I discovered that the map of the underground stations showed numbered exits. I could then head to the correct exit rather than the nearest one.

The authors use their theoretical framework to organize research on metacognition. They also identify many questions that will continue to occupy researchers:

  • How are reasoning and problem-solving processes that extend over a period of time monitored?

  • How do individuals differ in their ability to assess their performance?

  • What determines whether to continue, switch strategies, or terminate thinking about a problem?

  • Can reasoning be improved by insights from metacognitive research?

Ackerman and Thompson (2017) conclude their review by emphasizing the importance of understanding metacognition in studying reasoning, particularly in understanding why reasoning is either terminated prematurely or unnecessarily extended.

Summary

This chapter began with the distinction between reasoning from associations and reasoning from rules – a distinction that will resurface in subsequent chapters on creativity and innovation. The associative system is reproductive, automatic, and emphasizes similarity. The rule-based system is productive, deliberative, and emphasizes verification. Daniel Kahneman’s (2011) best-selling book Thinking Fast and Slow introduced readers to how associative and rule-based reasoning influence the speed of responses. The third section on biases in reasoning describes Kahneman’s classic research with Amos Tversky on how the use of heuristics such as availability and representativeness influences frequency estimates. The final section discusses monitoring reasoning, in which people use knowledge to improve their thinking skills. Monitoring reasoning is a metacognitive skill that controls the selection, evaluation, revision, and abandonment of cognitive tasks, goals, and strategies.


  • Reasoning
  • Stephen K. Reed, San Diego State University
  • Book: Encouraging Innovation
  • Online publication: 17 August 2023
  • Chapter DOI: https://doi.org/10.1017/9781009390408.003