
Discovering the unknown unknowns of research cartography with high-throughput natural description

Published online by Cambridge University Press: 05 February 2024

Tanay Katiyar
Affiliation:
Institut Jean Nicod, Département d'études cognitives, École normale supérieure (ENS-PSL), Paris, France tanay.katiyar20@gmail.com
Jean-François Bonnefon
Affiliation:
Toulouse School of Economics, Centre National de la Recherche Scientifique (TSM-R), Toulouse, France jean-francois.bonnefon@tse-fr.eu; https://jfbonnefon.github.io
Samuel A. Mehr*
Affiliation:
School of Psychology, University of Auckland, Auckland, New Zealand; Yale Child Study Center, Yale University, New Haven, CT, USA. sam@yale.edu; https://mehr.nz/
Manvir Singh
Affiliation:
Department of Anthropology, University of California-Davis, Davis, CA, USA manvir.manvir@gmail.com; https://manvir.org
*Corresponding author.

Abstract

We posit that, to succeed, research cartography will require high-throughput natural description to identify the unknown unknowns of a given design space. High-throughput natural description, the systematic collection and annotation of representative corpora of real-world stimuli, faces logistical challenges, but these can be overcome by solutions already deployed in the later stages of integrative experiment design.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

