Oaksford & Chater (O&C) subscribe to the view that a conditional expresses a high conditional probability of the consequent, given the antecedent, but they model conditionals as expressing a dependency between antecedent and consequent. Therefore, their model is inconsistent with their theoretical commitment. The model is also inconsistent with some findings on how people interpret conditionals and how they reason from them.
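The tension described above can be made concrete with a small sketch. This is an illustrative toy example, not O&C's actual model: it contrasts the "high conditional probability" reading of "if p then q" (acceptability tracks P(q|p)) with the "dependency" reading (p must raise the probability of q), and shows a case where the two come apart.

```python
# Toy contrast between two probabilistic readings of "if p then q".
# All numbers are illustrative assumptions, not values from the article.

def high_conditional_probability(p_q_given_p, threshold=0.9):
    """Reading 1: the conditional is acceptable iff P(q|p) is high."""
    return p_q_given_p >= threshold

def dependency(p_q_given_p, p_q):
    """Reading 2: the conditional requires a positive dependency,
    i.e. p must raise the probability of q: P(q|p) > P(q)."""
    return p_q_given_p > p_q

# A case where the readings diverge: q is very probable regardless of p.
p_q, p_q_given_p = 0.95, 0.95
print(high_conditional_probability(p_q_given_p))  # True: P(q|p) is high
print(dependency(p_q_given_p, p_q))               # False: p adds nothing
```

On the first reading the conditional is fully acceptable; on the dependency reading it is not, since learning p changes nothing about q.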
Oaksford & Chater (O&C) have rejected logic in favor of probability theory for reasons that are irrelevant to mental-logic theory, because mental-logic theory differs from standard logic in significant ways. Similar to O&C, mental-logic theory rejects the use of the material conditional and deals with the completeness problem by limiting the scope of its procedures to local sets of propositions.
Recent work in rational probabilistic modeling suggests that a kind of propositional reasoning is ubiquitous in cognition and especially in cognitive development. However, there is no reason to believe that this type of computation is necessarily conscious or resource-intensive.
A dual-code model of number processing needs to take into account the difference between a number symbol and its meaning. The transition from automatic, non-abstract number representations to intentional, abstract representations can be conceptualized as a translation of perceptual, asemantic representations of numerals into semantic representations of the associated magnitude information. The controversy about the nature of number representations should thus be related to theories on the embodied grounding of symbols.
Evans & Levinson (E&L) focus on differences between languages at a superficial level, rather than examining common processes. Their emphasis on trivial details conceals uniform design features and universally shared strategies. Lexical category distinctions between nouns and verbs are probably universal. Non-local dependencies are a general property of languages, not merely non-configurational languages. Even the latter class exhibits constituency.
It is argued that the cognitive revolution provided general support for the view that associative learning requires cognitive processing, but only limited support for the view that it requires conscious processing. The point is illustrated by two studies of associative learning that played an important role in the development of the cognitive revolution, but which are surprisingly neglected by Mitchell et al. in the target article.
The idea of a biologically evolved, universal grammar with linguistic content is a myth, perpetuated by three spurious explanatory strategies of generative linguists. To make progress in understanding human linguistic competence, cognitive scientists must abandon the idea of an innate universal grammar and instead try to build theories that explain both linguistic universals and diversity and how they emerge.
Single-neuron recordings may help resolve the issue of abstract number representation in the parietal lobes. Two manipulations in particular – reversible inactivation and adaptation of apparent numerosity – could provide important insights into the causal influence of “numeron” activity. Taken together, these tests can significantly advance our understanding of number processing in the brain.
Oaksford & Chater (O&C) begin in the halfway Bayesian house of assuming that minor premises in conditional inferences are certain. We demonstrate that this assumption is a serious limitation. They additionally suggest that appealing to Jeffrey's rule could make their approach more general. We present evidence that this rule is not limited enough to account for actual probability judgements.
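Jeffrey's rule, which the commentary turns on, can be stated in a few lines. The sketch below uses made-up numbers for illustration: the posterior probability of the conclusion q is a mixture of P(q|p) and P(q|not-p), weighted by the updated (possibly uncertain) probability of the minor premise p; with a certain premise it reduces to ordinary conditioning.

```python
# Hedged sketch of Jeffrey conditionalization for an uncertain minor
# premise p. Assumes the conditional probabilities stay fixed under the
# update (the "rigidity" condition); numbers are illustrative only.

def jeffrey_update(p_q_given_p, p_q_given_not_p, new_p_p):
    """P_new(q) = P(q|p) * P_new(p) + P(q|~p) * (1 - P_new(p))."""
    return p_q_given_p * new_p_p + p_q_given_not_p * (1.0 - new_p_p)

# Certain minor premise (P_new(p) = 1): reduces to P(q|p).
print(jeffrey_update(0.9, 0.3, 1.0))  # 0.9
# Uncertain minor premise (P_new(p) = 0.7): a weighted mixture.
print(jeffrey_update(0.9, 0.3, 0.7))  # 0.72
```

The worry in the commentary is precisely that rigidity, and hence this mixture form, is too permissive to match people's actual probability judgements.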
Abstraction is instrumental for our understanding of how numbers are cognitively represented. We propose that the notion of abstraction becomes testable from within the framework of simulated cognition. We describe mental simulation as embodied, grounded, and situated cognition, and report evidence for number representation at each of these levels of abstraction.
We discuss Oaksford & Chater's (O&C's) probabilistic approach from a probability logical point of view. Specifically, we comment on subjective probability, the indispensability of logic, the Ramsey test, the consequence relation, human nonmonotonic reasoning, intervals, generalized quantifiers, and rational analysis.
Converging findings from English, Mandarin, and other languages suggest that observed “universals” may be algorithmic. First, computational principles behind recently developed algorithms that acquire productive constructions from raw texts or transcribed child-directed speech impose family resemblance on learnable languages. Second, child-directed speech is particularly rich in statistical (and social) cues that facilitate learning of certain types of structures.
Studies of conditioning in simple systems are best interpreted in terms of the formation of excitatory links. The mechanisms responsible for such conditioning contribute to the associative learning effects shown by more complex systems. If a dual-system approach is to be avoided, the best hope lies in developing standard associative theory to deal with phenomena said to show propositional learning.
The neural realization of number in abstract form is implausible, but from this it doesn't follow that numbers are not abstract. Clear definitions of abstraction are needed so they can be applied homogeneously to numerical and non-numerical cognition. To achieve a better understanding of the neural substrate of abstraction, productive cognition – not just comprehension and perception – must be investigated.
Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.
Can the phenomena of associative learning be replaced wholesale by a propositional reasoning system? Mitchell et al. make a strong case against an automatic, unconscious, and encapsulated associative system. However, their propositional account fails to distinguish inferences based on actions from those based on observation. Causal Bayes networks remedy this shortcoming, and also provide an overarching framework for both learning and reasoning. On this account, causal representations are primary, but associative learning processes are not excluded a priori.
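The observation/intervention distinction that causal Bayes networks capture can be illustrated with a minimal hand-built network. The structure and numbers below are hypothetical: a common cause A with two effects B and C. Observing B is evidence about A and hence about C, while setting B by an action (do(B=1)) severs the arrow into B and leaves the prediction for C at its prior value.

```python
# Minimal hypothetical causal Bayes net: A -> B and A -> C.
# All probabilities are illustrative assumptions.

P_A = 0.5                          # P(A = 1)
P_B_GIVEN_A = {1: 0.9, 0: 0.1}     # P(B = 1 | A)
P_C_GIVEN_A = {1: 0.8, 0: 0.2}     # P(C = 1 | A)

def p_c_given_observe_b():
    """P(C=1 | B=1): infer A from the observation of B, then predict C."""
    joint = {a: (P_A if a else 1 - P_A) * P_B_GIVEN_A[a] for a in (0, 1)}
    z = joint[0] + joint[1]
    return sum(joint[a] / z * P_C_GIVEN_A[a] for a in (0, 1))

def p_c_given_do_b():
    """P(C=1 | do(B=1)): the action cuts A -> B, so B carries no
    evidence about A and C keeps its prior prediction."""
    return P_A * P_C_GIVEN_A[1] + (1 - P_A) * P_C_GIVEN_A[0]

print(round(p_c_given_observe_b(), 3))  # 0.74: observing B raises P(C)
print(round(p_c_given_do_b(), 3))       # 0.5: intervening on B does not
```

A purely associative or purely propositional account that conditions on B the same way in both cases would miss exactly this asymmetry.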
Severity of Test (SoT) is an alternative to Popper's logical falsification that solves a number of problems of the logical view. It was presented by Popper himself in 1963. SoT is a less sophisticated probabilistic model of hypothesis testing than Oaksford & Chater's (O&C's) information gain model, but it has a number of striking similarities. Moreover, it captures the intuition of everyday hypothesis testing.
Cohen Kadosh & Walsh (CK&W) discuss the limitations of the behavioral, imaging, and single-cell studies related to number representation in human parietal cortex. The limitations of the imaging studies are grossly underestimated, particularly those using adaptation paradigms, and the problem of establishing a link between single-cell studies and imaging is not even addressed. Monkey functional magnetic resonance imaging (fMRI), however, provides a solution to these problems.
Although we endorse the primacy of uncertainty in reasoning, we argue that a probabilistic framework cannot model the fundamental skill of proof administration. Furthermore, we are skeptical about the assumption that standard probability calculus is the appropriate formalism to represent human uncertainty. There are other models up to this task, so let us not repeat the excesses of the past.