The Stricker Learning Span (SLS) is a computer-adaptive word list memory test specifically designed for remote assessment and self-administration on a web-based multi-device platform (Mayo Test Drive). Given recent evidence suggesting the prominence of learning impairment in preclinical Alzheimer’s disease (AD), the SLS places greater emphasis on learning than on delayed memory compared to traditional word list memory tests (see Stricker et al., Neuropsychology, in press, for review and test details). The primary study aim was to establish criterion validity of the SLS by comparing the ability of the remotely administered SLS and the in-person administered Rey Auditory Verbal Learning Test (AVLT) to differentiate biomarker-defined groups in cognitively unimpaired (CU) individuals on the Alzheimer’s continuum.
Participants and Methods:
Mayo Clinic Study of Aging CU participants (N=319; mean age=71, SD=11; mean education=16, SD=2; 47% female) completed a brief remote cognitive assessment (∼0.5 months from the in-person visit). Brain amyloid and brain tau PET scans were available within 3 years. Overlapping groups were formed for 1) those on the Alzheimer’s disease (AD) continuum (A+, n=110) or not (A-, n=209), and for 2) those with biological AD (A+T+, n=43) vs no evidence of AD pathology (A-T-, n=181). Primary neuropsychological outcome variables were sum of trials for both the SLS and AVLT. Secondary outcome variables examined the comparability of learning (trials 1-5 total) and delay performances. Linear model ANOVAs were used to investigate biomarker subgroup differences, and Hedges’ g effect sizes were derived with and without adjustment for demographic variables (age, education, sex).
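The analytic approach described above can be summarized with a brief sketch. The Python snippet below is an illustration only, not the authors’ code: the data file and column names (e.g., sls_sum_of_trials, amyloid) are hypothetical. It shows an unadjusted Hedges’ g for a two-group biomarker comparison and a demographically adjusted linear model of the same contrast.

# Sketch of an unadjusted effect size plus a demographically adjusted linear model.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def hedges_g(a, b):
    # Cohen's d from the pooled SD, multiplied by the small-sample bias correction.
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    d = (np.mean(a) - np.mean(b)) / pooled_sd
    return d * (1 - 3 / (4 * (na + nb) - 9))

df = pd.read_csv("sls_biomarker_data.csv")  # hypothetical file
a_pos = df.loc[df["amyloid"] == "A+", "sls_sum_of_trials"]
a_neg = df.loc[df["amyloid"] == "A-", "sls_sum_of_trials"]
print("Unadjusted Hedges' g (A+ vs A-):", hedges_g(a_pos, a_neg))

# Demographically adjusted comparison (linear model with age, education, sex as covariates).
adjusted = smf.ols("sls_sum_of_trials ~ C(amyloid) + age + education + C(sex)", data=df).fit()
print(adjusted.summary())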
Results:
Both SLS and AVLT performances were worse in the biomarker-positive relative to the biomarker-negative groups (unadjusted p’s<.05). Because the biomarker-positive groups were significantly older than the biomarker-negative groups, group differences were attenuated after adjusting for demographic variables, but the SLS remained significant for the A+ vs A- and A+T+ vs A-T- comparisons (adjusted p’s<.05), and the AVLT approached significance (p’s .05-.10). Effect sizes for the SLS were slightly better (qualitatively; no statistical comparison) than for the AVLT in separating biomarker-defined CU groups. For the A+ vs A- and A+T+ vs A-T- comparisons, unadjusted effect sizes for the SLS were -0.53 and -0.81 and for the AVLT were -0.47 and -0.61, respectively; adjusted effect sizes for the SLS were -0.25 and -0.42 and for the AVLT were -0.19 and -0.26, respectively. In secondary analyses, learning and delay variables were similar in their ability to separate biomarker groups. For example, for the A+T+ vs A-T- comparison, the unadjusted effect size for SLS learning (-0.80) was similar to SLS delay (-0.76), and AVLT learning (-0.58) was similar to AVLT 30-minute delay (-0.55).
Conclusions:
The remotely administered SLS performed similarly to the in-person administered AVLT in its ability to separate biomarker-defined groups in CU individuals, providing evidence of criterion validity. The SLS showed significantly worse performance in the A+ and A+T+ groups (relative to the A- and A-T- groups) in this CU sample after demographic adjustment, suggesting potential sensitivity for detecting transitional cognitive decline in preclinical AD. Measures emphasizing learning should be given consideration equal to that given measures of delayed memory in AD-focused studies, particularly in the preclinical phase.
Mayo Test Drive (MTD): Test Development through Rapid Iteration, Validation and Expansion is a web-based multi-device (smartphone, tablet, personal computer) platform optimized for remote self-administered cognitive assessment that includes a computer-adaptive word list memory test (Stricker Learning Span; SLS; Stricker et al., 2022; Stricker et al., in press) and a measure of processing speed (Symbols Test; Wilks et al., 2021). The study aim was to determine criterion validity of MTD by comparing the ability of the MTD raw composite and in-person administered cognitive measures to differentiate biomarker-defined groups in cognitively unimpaired (CU) individuals on the Alzheimer’s continuum.
Participants and Methods:
Mayo Clinic Study of Aging CU participants (N=319; mean age=71, SD=11, range=37-94; mean education=16, SD=2, range=6-20; 47% female) completed a brief remote cognitive assessment (∼0.5 months from the in-person visit). Brain amyloid and brain tau PET scans were available within 3 years. Overlapping groups were formed for 1) those on the Alzheimer’s disease (AD) continuum (A+, n=110) or not (A-, n=209), and for 2) those with biological AD (A+T+, n=43) or with no evidence of AD pathology (A-T-, n=181). Primary outcome variables were the MTD raw composite (SLS sum of trials + an accuracy-weighted Symbols response time measure), a Global-z (average of 9 in-person neuropsychological measures), and an in-person screening measure (Kokmen Short Test of Mental Status [STMS], a screening measure similar to the MMSE). Linear model ANOVAs were used to investigate biomarker subgroup differences, and Hedges’ g effect sizes were derived with and without adjustment for demographic variables (age, education, sex).
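To illustrate how the composite outcome variables named above can be formed, the sketch below shows a raw-score composite alongside traditional z-score averaging. It is not the published scoring code, and the column names (including an already-computed accuracy-weighted Symbols response time score) are assumptions.

# Illustrative composite scoring; column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("mtd_composite_data.csv")  # hypothetical file

# Raw composite: SLS sum of trials plus an accuracy-weighted Symbols response time score.
df["mtd_raw_composite"] = df["sls_sum_of_trials"] + df["symbols_weighted_rt"]

# Traditional z-score averaging: standardize each component against the sample, then average.
def zscore(col):
    return (col - col.mean()) / col.std(ddof=1)

df["mtd_z_composite"] = (zscore(df["sls_sum_of_trials"]) + zscore(df["symbols_weighted_rt"])) / 2

# A Global-z can be formed the same way from the nine in-person neuropsychological measures.
neuropsych_cols = [f"test_{i}" for i in range(1, 10)]  # placeholder names
df["global_z"] = df[neuropsych_cols].apply(zscore).mean(axis=1)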
Results:
The remotely administered MTD raw composite showed comparable to slightly larger effect sizes relative to the Global-z. Unadjusted effect sizes for the MTD raw composite for differentiating the A+ vs A- and A+T+ vs A-T- groups, respectively, were -0.57 and -0.84, and effect sizes for the Global-z were -0.54 and -0.73 (all p’s<.05). Because the biomarker-positive groups were significantly older than the biomarker-negative groups, group differences were attenuated after adjusting for demographic variables, but the MTD raw composite remained significant for A+T+ vs A-T- (adjusted effect size -0.35, p=.007); the Global-z did not reach significance for A+T+ vs A-T- (adjusted effect size -0.19, p=.08). Neither composite reached significance in adjusted analyses for the A+ vs A- comparison (MTD raw composite adjusted effect size=-0.22, p=.06; Global-z adjusted effect size=-0.08, p=.47). Results were the same for an alternative MTD composite using traditional z-score averaging methods, but the raw score method is preferred for comparability to other screening measures. The STMS screening measure did not differentiate biomarker groups in any analyses (unadjusted and adjusted p’s>.05; d’s -0.23 to 0.05).
Conclusions:
The remotely administered MTD raw composite shows at least similar ability to separate biomarker-defined groups in CU individuals as a Global-z derived from person-administered measures within a neuropsychological battery, providing evidence of criterion validity. Both the MTD raw composite and the Global-z showed greater ability to separate biomarker-positive from biomarker-negative CU groups compared to a typical screening measure (STMS), which was unable to differentiate these groups. MTD may be useful as a screening measure to aid early detection of Alzheimer’s pathological changes.
The Stricker Learning Span (SLS) is a computer-adaptive digital word list memory test specifically designed for remote assessment and self-administration on a web-based multi-device platform (Mayo Test Drive). We aimed to establish criterion validity of the SLS by comparing its ability to differentiate biomarker-defined groups with that of the in-person administered Rey Auditory Verbal Learning Test (AVLT).
Method:
Participants (N = 353; mean age = 71, SD = 11; 93% cognitively unimpaired [CU]) completed the AVLT during an in-person visit, completed the SLS remotely (within 3 months), and had brain amyloid and tau PET scans available (within 3 years). Overlapping groups were formed for 1) those on the Alzheimer’s disease (AD) continuum (amyloid PET positive, A+, n = 125) or not (A-, n = 228), and 2) those with biological AD (amyloid and tau PET positive, A+T+, n = 55) vs no evidence of AD pathology (A-T-, n = 195). Analyses were repeated among CU participants only.
Results:
The SLS and AVLT showed similar ability to differentiate biomarker-defined groups when comparing AUROCs (p’s > .05). In logistic regression models, SLS contributed significantly to predicting biomarker group beyond age, education, and sex, including when limited to CU participants. Medium (A− vs A+) to large (A−T− vs A+T+) unadjusted effect sizes were observed for both SLS and AVLT. Learning and delay variables were similar in terms of ability to separate biomarker groups.
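The kind of adjusted model summarized above can be sketched as follows. This is a hedged illustration rather than the study code: variable names are hypothetical, and the formal AUROC comparison between SLS and AVLT models is not reproduced here.

# Logistic regression predicting biomarker group from SLS beyond demographics, plus AUROC.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

df = pd.read_csv("sls_avlt_biomarker_data.csv")  # hypothetical file
df["a_positive"] = (df["amyloid"] == "A+").astype(int)

# Covariates-only model vs covariates + SLS; a likelihood-ratio test gauges the added value of SLS.
base = smf.logit("a_positive ~ age + education + C(sex)", data=df).fit(disp=0)
full = smf.logit("a_positive ~ age + education + C(sex) + sls_sum_of_trials", data=df).fit(disp=0)
lr_stat = 2 * (full.llf - base.llf)
print("LR test for adding SLS: chi2 =", lr_stat, "p =", chi2.sf(lr_stat, df=1))

# AUROC of the fitted model's predicted probabilities for biomarker status.
print("AUROC (covariates + SLS):", roc_auc_score(df["a_positive"], full.predict(df)))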
Conclusions:
Remotely administered SLS performed similarly to in-person-administered AVLT in its ability to separate biomarker-defined groups, providing evidence of criterion validity. Results suggest the SLS may be sensitive to detecting subtle objective cognitive decline in preclinical AD.
Recording the police can do a lot, as this book explores. But no technological magic bullet can solve a fundamental challenge of policing—the pushing of some of our most severe addiction, mental health, poverty, and other challenges into the criminal justice system and onto the workloads of police officers. Officers are expected to be social workers, mental health professionals, relationship counselors, and more. Officers also are expected to do more with fewer resources in an era when many departments have not recovered from budgetary cutbacks and also face officer recruiting challenges. The book concludes that, as a society, we must confront the question of how we wish to allocate our limited budgetary dollars. Technology can be tempting, offering the allure of a panacea less cumbersome and costly than dealing with messy human issues. We are a nation deeply divided about law enforcement, social services cutbacks, addiction, race, and many other polarizing issues. We turn to cameras and technology because they have the potential to bring people and groups from very different perspectives together in agreement. But the ultimate transformation people seek comes from long-term investment in tackling the severe social challenges currently straining policing and criminal justice.
This chapter is about the privacy and public disclosure challenges posed by police recordings. Officers enter our most private places and intervene at some of the worst moments of our lives. The spread of body cameras small enough to go everywhere an officer goes—and of community members ready to record and post videos on social media—poses potentially grave privacy risks. Communities also are wrestling with how to balance privacy protection with public disclosure. Also known as freedom of information or sunshine laws, public disclosure laws give people the right to demand access to records held by the government to facilitate democratic governance, transparency, anticorruption efforts, and public trust. Without sufficient safeguards, a requester could obtain police videos and post people's sensitive and potentially painful and humiliating moments on venues like YouTube. To address the problem, some states exempt body camera footage from disclosure altogether. Others protect certain categories of sensitive footage. Others require redaction. Police department policies also detail when body cameras should or may be shut off to protect privacy. This chapter delves into the costs and benefits of the different approaches to balancing privacy protection and public disclosure.
This chapter is about the pitfalls of video evidence, which can be misleading. Video evidence can supply important additional context and present other sides of the story. But it is no silver bullet to cure interpretive conflicts. The chapter illustrates the partiality of perceptions through several stories of how different sides interpret video evidence—and how cameras from different angles can tell very different stories. The increasing use of audiovisual evidence requires greater public understanding of the biases that can affect our interpretation of the images we see. The chapter discusses persuasion effects and potential distortions caused by angle, framing, timing, perspective, and the filter of one’s own preconceived notions. Cameras also may tempt us to ignore the relevant standard on legal questions where the proper yardstick in constitutional criminal procedure is what an officer perceived in the field, often under stress, not what a machine can capture. Moreover, footage often is captured from just one perspective—the officer's, in the case of police-worn body cameras. The resulting pooled dataset may be skewed, leading to biased algorithms and findings from advanced analytics. The chapter discusses solutions for these important limitations of video evidence.
Bystanders and organized copwatch groups are increasingly wielding cell phone cameras to document and protest police activities, raising important questions about the right to record the police. People who record the police have risked retaliation and argued that they were arrested in violation of their First Amendment right to record. The risk of arrest for recording the police is particularly acute in states with "two-party consent" wiretap laws that require the consent of all parties to a conversation before recording. Other people who record the police have faced arrest and prosecution under broad, amorphous legal provisions such as obstruction or disorderly conduct. The chapter's analysis is illustrated by stories of cases in which people have been arrested and even prosecuted for recording the police. Despite these risks, there is growing recognition by the courts of a First Amendment right to record the police. Moreover, a new generation of officers is growing accustomed to doing the job on camera in an era saturated with cell phone cameras, Facebook Live, and YouTube videos. As people increasingly record officers, departments are under pressure to release their own videos from the officer's perspective in the race to manage public perceptions.
This chapter is about how to deal with failures to record by officers wearing body cameras and the overarching issue of whether to use police videos for officer monitoring and discipline. There are understandable reasons why officers may not activate their body cameras in the heat and stress of the moment or due to mistake or malfunction. But there also are challenges with resistance, subversion and selective recording. If the problem is left unchecked, rather than being a tool of police accountability, body camera recordings could amplify the problems of a perceived gross imbalance in power and public mistrust. Video recordings could just become yet another way to offer more powerful evidence to speed up a plea bargain or conviction. From a public safety perspective, there also is emerging evidence that uses of force actually increase among officers who wear body cameras but do not follow recording rules. Police departments vary in approaches on whether there are sanctions for failures to record and whether body camera videos can be a basis for officer monitoring, evaluation, and discipline. Police unions also can exert a powerful influence on these policies. The chapter explores how to address the missing police video problem.
This introduction offers an overview of how the rise of police-worn body cameras and community members recording the police has important implications for the future of proof, privacy, civil rights protection, and violence prevention. The book begins by taking readers onto the streets of West Baltimore, where body cameras now are widely deployed throughout the force, and where community members who deeply distrust the police have cell phone cameras ready to record the cops. The search for a missing shooter— and a bleeding victim—in a community that burned with fires and protests after the death of Freddie Gray captures the daunting needs and mistrust that the recording revolutions aim to address. Even people with strong fear and loathing of the police need someone to call in their hour of need. The hope is that the cops and the public will behave better if they know they are recorded, protecting civil liberties, safeguarding both sides from false claims, and reducing the risk of escalation to violence. This book explores the scientific findings behind these hopes and the evidentiary, privacy, public disclosure and civil rights implications of recording almost all that cops see and do.
The rise of two movements in recording the police—body cameras and copwatching—responds to protests over the disparate risk of death and injury in law enforcement encounters and power imbalances between cop and suspect. The chapter opens in the Anacostia neighborhood of Washington, DC, where Terry stops and frisks of several young black men do not turn up a weapon. In a heated encounter, both sides have cameras aimed at the other, making a record from different perspectives. Recording can radically change the opacity of police discretion by creating records where none existed before, documenting everyday encounters that may never lead to contraband nor make it into a report, yet greatly impact community-police relations and the experience of civil liberties. The chapter also discusses the widespread hope that recording will reduce the risk of injury and death in police encounters and resolve disputes over uses of force. Though incomplete, the existing data reveal stark racial disparities in the risk of being shot by an officer. Findings are mixed on whether recording the police actually reduces complaints against the police and the need to resort to force. What is clear is the allure of technological solutions to long-burning problems.