
Intermodal perception of affect in persons with autism or Down syndrome

Published online by Cambridge University Press:  04 March 2009

Katherine A. Loveland*, Belgin Tunali-Kotoski, Richard Chen, Kristin A. Brelsford, Juliana Ortegon, and Deborah A. Pearson
Affiliation: Center for Human Development Research, University of Texas Medical School, Houston, University of Texas-Houston Health Science Center
* Correspondence: Katherine A. Loveland, Ph.D., Center for Human Development Research, Department of Psychiatry and Behavioral Sciences, University of Texas Medical School-Houston, UTMSI-1300 Moursund Street, Houston, Texas 77030.

Abstract

Persons with autism (n = 28) or Down syndrome (n = 30) took part in a study of the ability to detect intermodal correspondence between facial and vocal/linguistic information for affect. Participants viewed 24 split-screen images of an individual talking and displaying a different affect on each side of the display (happy, sad, angry, surprised, or neutral). The vocal track, matching one affect (i.e., one side of the split-screen) but not the other, was played from a central speaker, and subjects were asked to point to the side matching it. The vocal track was desynchronized with both sides, so that rhythmic synchrony was greatly reduced and subjects had to use affect to make their choices. In the first control condition, rhythmic synchrony information was restored. In a second control condition, inanimate objects and their sounds were presented. In the experimental condition, when verbal mental age and IQ were taken into account, the autism group performed more poorly than the Down syndrome group in detecting intermodal correspondence of face and voice. When rhythmic synchrony information was available, both groups' performance improved, with the Down syndrome group performing slightly better than the group with autism. There were no group differences in the condition using inanimate objects. Results suggest that persons with autism may have difficulty detecting intermodal correspondence of facial and vocal/linguistic affect.

Type: Articles
Copyright © Cambridge University Press 1995
