Remotely monitored in-home IADLs can discriminate between normal cognition and mild cognitive impairment
- Destiny J Weaver, Chao-Yi Wu, Zachary Beattie, Samuel Lee, Catherine H Ju, Kayla Chan, John Ferguson, Hiroko Dodge, Adriana Hughes
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, pp. 381-382
-
Objective:
Approximately 6.5 million Americans ages 65 and older have Alzheimer’s disease and related dementias, a prevalence projected to triple by 2060. While subtle impairment in cognition and instrumental activities of daily living (IADLs) arises in the mild cognitive impairment (MCI) phase, early detection of these insidious changes is difficult given the limitations of conventional assessment. Traditional IADL assessments, administered infrequently, are less sensitive to early MCI and not conducive to tracking the subtle changes that precede significant declines. Continuous passive monitoring of IADLs using sensors and software in home environments is a promising alternative. The purpose of this study was to determine which remotely monitored IADLs best distinguish between MCI and normal cognition.
Participants and Methods:
Participants were 65 years or older, independently community-dwelling, and had at least one daily medication and home internet access. Clinical assessments were performed at baseline. Electronic pillboxes (MedTracker) and computer software (Worktime) measured daily medication and computer habits using the Oregon Center for Aging and Technology (ORCATECH) platform. The Survey for Memory, Attention, and Reaction Time (SMART; Trail A, Trail B, and Stroop Tests) is a self-administered digital cognitive assessment that was deployed monthly. IADL data were aggregated for each participant at baseline (first 90 days) in each domain, and various features were developed for each. The receiver operating characteristic area under the curve (ROC-AUC) was calculated for each feature.
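The per-feature ROC-AUC described above can be illustrated with a minimal sketch (not the authors' code). It uses the rank-based (Mann-Whitney) definition of AUC: the probability that a randomly chosen MCI participant has a higher feature value than a randomly chosen cognitively normal participant, counting ties as half. The feature values below are hypothetical, for illustration only.

```python
def roc_auc(positives, negatives):
    """AUC = P(pos > neg) + 0.5 * P(pos == neg), computed by exhaustive
    pairwise comparison (adequate for small samples like these)."""
    wins = ties = 0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(positives) * len(negatives))

# Hypothetical feature values (e.g., a pillbox-use discrepancy score)
mci_scores    = [3.1, 2.8, 4.0, 3.5]   # MCI group
normal_scores = [1.9, 2.8, 2.2, 3.0]   # cognitively normal group

print(f"AUC = {roc_auc(mci_scores, normal_scores):.3f}")  # prints "AUC = 0.906"
```

An AUC of 0.5 indicates chance-level discrimination and 1.0 perfect separation; in practice a library routine such as scikit-learn's `roc_auc_score` would be used rather than this explicit pairwise loop.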
Results:
Traditional IADL questionnaires.
At baseline, 103 participants (normal n = 59, Mage = 73.6±5.5; MCI n = 44, Mage = 76.0±6.1) completed three functional questionnaires (Functional Activities Questionnaire; Measurement of Everyday Cognition (ECog), both self-report and informant). The Informant ECog demonstrated the highest AUC (72%, p < .001).
Remotely monitored in-home IADLs and self-administered brief online cognitive test performance.
Eighty-four participants had medication data (normal n = 48, Mage = 73.2±5.4; MCI n = 36, Mage = 75.6±6.9). Four features related to pillbox-use frequency (73% AUC) and four features related to pillbox-use time (62% AUC) were developed. The discrepancy between self-reported and actual frequency of use was the most discriminating single feature (67% AUC, p = .03).
Sixty-six participants had computer data (normal n = 38, Mage = 73.6±6.1; MCI n = 28, Mage = 76.6±6.8). Average usage time showed 64% AUC (p = .048) and usage variability showed 60% AUC (p = .18).
One hundred and two participants completed the SMART (normal n = 59, Mage = 73.6±5.5; MCI n = 43, Mage = 75.9±6.2). Eleven features related to survey completion time demonstrated 80% AUC in discriminating between normal cognition and MCI. Eleven features related to the number of clicks during the survey demonstrated 70% AUC. Lastly, seven mouse movement features demonstrated 71% AUC.
Conclusions:
Combined pillbox-use frequency features and combined features from the self-administered brief online cognitive test (e.g., completion times, mouse cursor movements) have acceptable to excellent ability to discriminate between normal cognition and MCI, and are broadly comparable to informant-rated IADL questionnaires. General computer usage habits demonstrated lower discriminatory ability. Our approach has practical implications for detecting and tracking older adults’ declining cognition and function in real-world contexts.
The Qualitative Transparency Deliberations: Insights and Implications
- Alan M. Jacobs, Tim Büthe, Ana Arjona, Leonardo R. Arriola, Eva Bellin, Andrew Bennett, Lisa Björkman, Erik Bleich, Zachary Elkins, Tasha Fairfield, Nikhar Gaikwad, Sheena Chestnut Greitens, Mary Hawkesworth, Veronica Herrera, Yoshiko M. Herrera, Kimberley S. Johnson, Ekrem Karakoç, Kendra Koivu, Marcus Kreuzer, Milli Lake, Timothy W. Luke, Lauren M. MacLean, Samantha Majic, Rahsaan Maxwell, Zachariah Mampilly, Robert Mickey, Kimberly J. Morgan, Sarah E. Parkinson, Craig Parsons, Wendy Pearlman, Mark A. Pollack, Elliot Posner, Rachel Beatty Riedl, Edward Schatz, Carsten Q. Schneider, Jillian Schwedler, Anastasia Shesterinina, Erica S. Simmons, Diane Singerman, Hillel David Soifer, Nicholas Rush Smith, Scott Spitzer, Jonas Tallberg, Susan Thomson, Antonio Y. Vázquez-Arroyo, Barbara Vis, Lisa Wedeen, Juliet A. Williams, Elisabeth Jean Wood, Deborah J. Yashar
-
- Journal:
- Perspectives on Politics / Volume 19 / Issue 1 / March 2021
- Published online by Cambridge University Press:
- 06 January 2021, pp. 171-208
- Print publication:
- March 2021
-
In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups’ final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency’s promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.