This article can be viewed as an attempt to explore the consequences of two propositions.

(1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality.

(2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.

These two propositions have the following consequences:

(3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2.

(4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1.

(5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.
“Could a machine think?” On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.