Editorial
A difference that matters: comparisons of structured and semi-structured psychiatric diagnostic interviews in the general population
- T. S. BRUGHA, P. E. BEBBINGTON, R. JENKINS
- Published online by Cambridge University Press: 01 September 1999, pp. 1013-1020
Psychiatric case-identification in general populations allows us to study both individuals with functional psychiatric disorders and the populations from which they come. The individual level of analysis permits disorders to be related to factors of potential aetiological significance and the study of attributes of the disorders that need to be assessed in non-referred populations (an initially scientific endeavour). At the population level valid case identification can be used to evaluate needs for treatment and the utilization of service resources (a public health project). Thus, prevalence is of interest both to scientists and to those responsible for commissioning and planning services (Brugha et al. 1997; Regier et al. 1998). The quality of case identification techniques and of estimates of prevalence is thus of general concern (Bartlett & Coles, 1998).
Structured diagnostic interviews were introduced into general population surveys in the 1970s as a method ‘to enable interviewers to obtain psychiatric diagnoses comparable to those a psychiatrist would obtain’ (Robins et al. 1981). The need to develop reliable standardized measures was partly driven by an earlier generation of prevalence surveys showing rates ranging widely from 10·9% (Pasamanick et al. 1956) to 55% (Leighton et al. 1963) in urban and rural North American communities respectively. If the success of large scale psychiatric epidemiological enquiries using structured diagnostic interviews and standardized classifications is measured in terms of citation rates, it would seem difficult to question. But the development of standardized interviews of functional psychiatric disorders has not solved this problem of variability: the current generation of large scale surveys, using structured diagnostic interviews and applying strictly defined classification rules, has generated, for example, 12-month prevalence rates of major depression in the US of 4·2% (Robins & Regier, 1991) and 10·1% (Kessler et al. 1994). This calls into question the validity of the assessments, such that we must reopen the question of what they should be measuring and how they should do it.
Diagnosing mental disorders in the community. A difference that matters?
- H.-U. WITTCHEN, T. B. ÜSTÜN, R. C. KESSLER
- Published online by Cambridge University Press: 01 September 1999, pp. 1021-1027
Brugha and his colleagues in this issue raise important questions about the validity of standardized diagnostic interviews of mental disorders, such as the Composite International Diagnostic Interview (CIDI) (WHO, 1990). Although their concerns refer predominantly to the use of such instruments in epidemiological research, the authors' conclusions also have significant implications for diagnostic assessments in clinical practice and research. We agree with Brugha et al. that the inflexible approach to questioning used in standardized interviews can lead to an increased risk of invalidity with regard to some diagnoses. We also agree that the use of more semi-structured clinical questions has the potential to address this problem. However, we disagree with Brugha et al. in several other respects.
First, we disagree with the authors' initial exclusive emphasis on diagnosis with regard to need assessment and consequences for the allocation of service resources. It is becoming increasingly clear that knowledge about diagnosis, whether obtained by clinical or non-clinical diagnostic interviews, does not in itself provide the information needed for policy purposes and the determination of societal costs, or to judge clinical management guidelines and treatment needs (Regier et al. 1998). Additional, preferably dimensional, data on associated disabilities and distress, as well as a focused evaluation of need for the psychosocial, psychological and drug interventions that characterize modern treatment strategies, are also important. It has also become evident that a great many people in the general population carry more than one diagnosis. This ‘co-morbidity’ further complicates any simple equation of diagnostic prevalence with need assessment and policy decisions. Secondly, we disagree with the conclusion of Brugha et al. that the use of a semi-structured clinical interview, such as the most current version of the Schedules for Clinical Assessment in Neuropsychiatry (SCAN), whether in the hands of clinical or non-clinical interviewers, most closely approximates the ‘clinical gold standard’ and is the most feasible way to correct the problem of disagreement between semi-structured clinical diagnostic interviews and standardized diagnostic interviews. We believe that the practical reliability and validity problems associated with using such a clinical interviewing approach, especially in large-scale community surveys and in cross-national research, more than cancel out any theoretical advantage this approach might have in clarifying meaning. Thirdly, we disagree with the suggestion of Brugha et al. that the problem of validity is inherent in standardized non-clinician interviews.
Indeed, as detailed below, there is no evidence that, across all diagnoses, semi-structured clinical interviews reveal more promising psychometric properties than standardized interviews. Moreover, methodological research shows quite clearly that a substantial number of potential validity problems in standardized interviews can be overcome.
Research Article
Cross validation of a general population survey diagnostic interview: a comparison of CIS-R with SCAN ICD-10 diagnostic categories
- T. S. BRUGHA, P. E. BEBBINGTON, R. JENKINS, H. MELTZER, N. A. TAUB, M. JANAS, J. VERNON
- Published online by Cambridge University Press: 01 September 1999, pp. 1029-1042
Background. Comparisons of structured diagnostic interviews with clinical assessments in general population samples show marked discrepancies. In order to validate the CIS-R, a fully structured diagnostic interview used for the National Survey of Psychiatric Morbidity in Great Britain, it was compared with SCAN, a standard, semi-structured, clinical assessment.
Methods. A random sample of 1882 Leicestershire addresses from the Postcode Address File yielded 1157 eligible adults: of these, 860 completed the CIS-R; 387 adults scored ≥ 8 on the CIS-R and 205 of these completed a SCAN reference examination. Enquiries covered neurotic symptoms in the previous week and month only. Concordance was estimated for ICD-10 neurotic and depressive disorders, F32 to F42, and for depression symptom score.
Results. Sociodemographic characteristics closely resembled National Survey and 1991 census profiles. Concordance was poor for any ICD-10 neurotic disorder (kappa = 0·25 (95% CI, 0·1–0·4)) and for depressive disorder (kappa = 0·23 (95% CI, 0–0·46)). Sensitivity to the SCAN reference classification was also poor. Specificity ranged from 0·8 to 0·9. Rank order correlation for total depression symptoms was 0·43 (Kendall's tau b; P<0·001; N=205).
Discussion. High specificity indicates that the CIS-R and SCAN agree that prevalence rates for specific disorders are low compared with estimates in some community surveys. We have revealed substantial discrepancies in case finding. Therefore, published data on service utilization designed to estimate unmet need in populations require re-interpretation. The value of large-scale CIS-R survey data can be enhanced considerably by the incorporation of concurrent semi-structured clinical assessments.
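Kappa, as reported in the abstract above, corrects raw inter-rater agreement for agreement expected by chance. A minimal sketch of the computation from a 2×2 agreement table follows; the cell counts are purely illustrative, not the study's data, and the paper's exact confidence-interval method may differ:

```python
import math

def cohen_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]],
    with an approximate large-sample 95% CI."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                        # observed agreement
    p_case = ((a + b) / n) * ((a + c) / n)  # chance both rate "case"
    p_non = ((c + d) / n) * ((b + d) / n)   # chance both rate "non-case"
    pe = p_case + p_non                     # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    return kappa, (kappa - 1.96 * se, kappa + 1.96 * se)

# Hypothetical counts for 205 paired ratings:
k, ci = cohen_kappa([[20, 30], [25, 130]])
```

With these invented counts raw agreement is 150/205 ≈ 0·73 yet kappa is only ≈ 0·25, which illustrates how high specificity can coexist with poor chance-corrected concordance.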
Pubertal changes in hormone levels and depression in girls
- A. ANGOLD, E. J. COSTELLO, A. ERKANLI, C. M. WORTHMAN
- Published online by Cambridge University Press: 01 September 1999, pp. 1043-1053
Background. Throughout their reproductive years, women suffer from a higher prevalence of depression than men. Before puberty, however, this is not the case. In an earlier study, we found that reaching Tanner Stage III of puberty was associated with increased levels of depression in girls. This paper examines whether the morphological changes associated with puberty (as measured by Tanner stage) or the hormonal changes underlying them are more strongly associated with increased rates of depression in adolescent girls.
Methods. Data from three annual waves of interviews with 9 to 15-year-olds from the Great Smoky Mountains study were analysed.
Results. Models including the effects of testosterone and oestradiol eliminated the apparent effect of Tanner stage. The effect of testosterone was non-linear. FSH and LH had no effects on the probability of being depressed.
Conclusions. These findings argue against theories that explain the emergence of the female excess of depression in adulthood in terms of changes in body morphology and their resultant psychosocial effects on social interactions and self-perception. They suggest that causal explanations of the increase in depression in females need to focus on factors associated with changes in androgen and oestrogen levels rather than the morphological changes of puberty.
Early risk factors and adult person–environment relationships in affective disorder
- JIM VAN OS, PETER B. JONES
- Published online by Cambridge University Press: 01 September 1999, pp. 1055-1067
Background. Lower cognitive ability, higher neuroticism and symptoms of anxiety and depression in childhood predict non-psychotic disorder in adulthood. This study examined whether these early risk factors act by modifying relationships with life events close to disease onset in adulthood.
Methods. Childhood measures of neuroticism (N) (including maternal N), cognitive ability (CA) and symptoms of anxiety and depression were measured in a national British birth cohort of 5362 individuals born in the week 3–9 March, 1946. At ages 36 and 43 years, mental state examinations were carried out by trained interviewers, and subjects were asked about the occurrence of stressful life events in the previous year (SLE).
Results. The effect of aggregated SLEs on mental health was greater in women, in individuals with higher childhood N and poorer childhood mental health. Higher maternal N was also associated with greater sensitivity to SLEs, independent of subject's N, suggesting possible familial transmission of vulnerability. In addition, higher childhood N predicted, independent of later mental health, greater likelihood of reported exposure to SLEs. In general, individuals with higher childhood CA also reported more SLEs.
Conclusions. The results suggest that early risk factors for affective disorder exert effects by modifying person–environment relationships close to onset of adult symptoms. Sensitivity to life events may be transmitted from parents to offspring; psychopathological continuity over the life-span may be explained in part by continuity of altered stress sensitivity.
Genetic differences in alcohol sensitivity and the inheritance of alcoholism risk
- A. C. HEATH, P. A. F. MADDEN, K. K. BUCHOLZ, S. H. DINWIDDIE, W. S. SLUTSKE, L. J. BIERUT, J. W. ROHRBAUGH, D. J. STATHAM, M. P. DUNNE, J. B. WHITFIELD, N. G. MARTIN
- Published online by Cambridge University Press: 01 September 1999, pp. 1069-1081
Background. Substantial evidence exists for an important genetic contribution to alcohol dependence risk in women and men. It has been suggested that genetically determined differences in alcohol sensitivity may represent one pathway by which an increase in alcohol dependence risk occurs.
Methods. Telephone interview follow-up data were obtained on twins from male, female and unlike-sex twin pairs who had participated in an alcohol challenge study in 1979–81, as well as other pairs from the same Australian twin panel surveyed by mail in 1980–82.
Results. At follow-up, alcohol challenge men did not differ from other male twins from the same age cohort on measures of lifetime psychopathology or drinking habits; but alcohol challenge women were on average heavier drinkers than other women. A composite alcohol sensitivity measure, combining subjective intoxication and increase in body-sway after alcohol challenge in 1979–81, exhibited high heritability (60%). Parental alcoholism history was weakly associated with decreased alcohol sensitivity in women, but not after adjustment for baseline drinking history, or in men. High alcohol sensitivity in men was associated with substantially reduced alcohol dependence risk (OR=0·05, 95% CI 0·01–0·39). Furthermore, significantly decreased (i.e. low) alcohol sensitivity was observed in non-alcoholic males whose MZ co-twin had a history of alcohol dependence, compared to other non-alcoholics. These associations remained significant in conservative analyses that controlled for respondents' alcohol consumption levels and alcohol problems in 1979–81.
Conclusions. Men (but not women) at increased genetic risk of alcohol dependence (assessed by MZ co-twin's history of alcohol dependence) exhibited reduced alcohol sensitivity. Associations with parental alcoholism were inconsistent.
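The odds ratio quoted above (OR=0·05, 95% CI 0·01–0·39) summarizes a 2×2 exposure-by-outcome table. A minimal sketch using a Woolf (log-based) confidence interval; the counts in the usage line are invented for illustration, and the authors' actual modelling was more elaborate:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases) with a Woolf 95% CI."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts only:
or_, ci = odds_ratio(10, 20, 30, 15)
```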
Stimulation of the noradrenergic system enhances and blockade reduces memory for emotional material in man
- R. E. O'CARROLL, E. DRYSDALE, L. CAHILL, P. SHAJAHAN, K. P. EBMEIER
- Published online by Cambridge University Press: 01 September 1999, pp. 1083-1088
Background. It is clearly established that emotional events tend to be remembered particularly vividly. The neurobiological substrates of this phenomenon are poorly understood. Recently, the noradrenergic system has been implicated in that beta blockade has been shown to reduce significantly the delayed recall of emotional material with matched neutral material being unaffected.
Methods. In the present study, 36 healthy young adults were randomly allocated to receive either yohimbine, which stimulates central noradrenergic activity, metoprolol, which blocks noradrenergic activity, or matched placebo. The three groups were well matched. All capsules were taken orally, prior to viewing a narrated 11-slide show describing a boy being involved in an accident.
Results. Yohimbine significantly elevated, and metoprolol reduced mean heart rate during the slide show relative to placebo, thus confirming the efficacy of the pharmacological manipulation. One week later, in a ‘surprise’ test, memory for the slide show was tested. As predicted, yohimbine-treated subjects recalled significantly more and metoprolol subjects fewer slides relative to placebo. This result was confirmed via analysis of multiple-choice recognition memory scores.
Conclusions. We conclude that stimulation of the noradrenergic system results in enhancement, and blockade in reduction, of recall and recognition of emotional material in man.
The Thought Control Questionnaire – psychometric properties in a clinical sample, and relationships with PTSD and depression
- MARTINA REYNOLDS, ADRIAN WELLS
- Published online by Cambridge University Press: 01 September 1999, pp. 1089-1099
Background. Recent developments in research suggest that particular attempts to control thoughts may contribute to the problem of intrusion. An instrument capable of identifying strategies for dealing with unwanted intrusions in clinical populations may be used for differentiating between thought control strategies that may or may not be helpful.
Methods. The Thought Control Questionnaire (TCQ) (Wells & Davies, 1994) developed and validated on a normal sample, was administered to a clinical sample in order to investigate the consistency of the original factor structure and its psychometric properties. The sensitivity of the scale to change associated with recovery was also examined. Relationships between individual differences in thought control strategies and psychiatric symptoms in patients with DSM-IV major depression, and PTSD with or without major depression were investigated.
Results. The Scree Test suggested a six-factor solution, which was rotated. This solution split the original distraction subscale into separate behavioural and cognitive distraction factors; otherwise the subscales were almost identical to those obtained in non-clinical subjects. As this split has been shown to be unreliable, further analyses in this study were based on the five-factor version of the TCQ obtained by Wells & Davies (1994). Predictors of recovery and of symptoms in PTSD and depression were explored.
Conclusions. Correlations between the TCQ subscales and other measures suggest that particular thought control strategies may be associated with the symptoms of PTSD and depression. The TCQ scales appear to be sensitive to changes associated with recovery. Significant differences emerged in thought control strategies between depressed and PTSD patients. Hierarchical regression analysis showed distraction, punishment and reappraisal control strategies predicted depression scores in depressed patients while use of distraction predicted intrusions in PTSD.
Neuroticism and self-esteem as indices of the vulnerability to major depression in women
- SETH B. ROBERTS, KENNETH S. KENDLER
- Published online by Cambridge University Press: 01 September 1999, pp. 1101-1109
Background. Neuroticism and self-esteem, two commonly used personality constructs, are thought to reflect a person's underlying vulnerability to major depression. The relative strength of these predictors is not known.
Method. Information was gathered on 2163 individual women from an epidemiological sample of female–female twin pairs. Neuroticism was assessed by the Eysenck Personality Questionnaire and global self-esteem by the Rosenberg Self-Esteem Scale. Major depression (DSM-III-R criteria) and stressful life events were also assessed. The personality constructs were studied in relation to major depression by logistic regression and structural equation modelling.
Results. Both cross-sectionally and prospectively, examined individually, neuroticism was a stronger predictor of risk for major depression than was self-esteem. When examined together, the predictive power of neuroticism remained substantial, while that of self-esteem largely disappeared. The same pattern of findings was obtained when a subset of subjects who had recently experienced stressful life events was analysed. By trivariate twin modelling, we found that the covariation of self-esteem, neuroticism and major depression was due largely to genetic factors. When self-esteem was the ‘upstream’ variable, a substantial genetic correlation remained between neuroticism and major depression. By contrast, when neuroticism was the ‘upstream’ variable, the genetic correlation between self-esteem and major depression disappeared.
Conclusions. The personality construct of neuroticism is a substantially better index of a woman's underlying vulnerability to major depression than is self-esteem. These findings suggest that overall emotionality or emotional reactivity to the environment reflects risk for depression better than does global self-concept.
The impact of widowhood on depression: findings from a prospective survey
- K. B. CARNELLEY, C. B. WORTMAN, R. C. KESSLER
- Published online by Cambridge University Press: 01 September 1999, pp. 1111-1123
Background. We investigated the impact of widowhood on depression and how resources and contextual factors that define the meaning of loss modified this effect.
Method. In a prospective, nationally representative sample of women in the US aged 54 or older we compared 64 women who were widowed in the 3 years between data collection waves with 431 women who were stably married over the time interval.
Results. Those who became widowed reported more depression than controls for 2 years following the loss. However, this effect was confined to respondents whose husbands were not ill at baseline. Widowed women whose husbands were ill at baseline already had elevated depression in the baseline interview and did not become significantly more depressed after the death. Consistent with this result, women who were not depressed pre-bereavement were most vulnerable to depression following the loss of an ill spouse during the first year of widowhood.
Conclusions. Results suggest that a spouse's illness may forewarn wives of their impending loss, and these women may begin to grieve before the death. Those forewarned women who are not depressed pre-bereavement may experience the most post-bereavement depression. Findings are discussed in light of previous, more methodologically limited studies.
Attempted suicide in west London, I. Rates across ethnic communities
- D. BHUGRA, M. DESAI, D. S. BALDWIN
- Published online by Cambridge University Press: 01 September 1999, pp. 1125-1130
Background. Two previous studies from the United Kingdom have suggested that rates of attempted suicide in Asian women are higher than in the native population.
Method. Over a 1-year period we identified 434 patients presenting from one catchment area to four hospitals, after episodes of self-harm. These patients were assessed using the GHQ, CIS-R, and Life Events Inventory, and by collecting details of the attempt itself.
Results. Asian women had the highest overall rates: 1·6 times those in White women and 2·5 times the rate among Asian men. The rates were lowest among older women. Among younger Asian women (less than 30 years) the rates were 2·5 times those of White women and seven times those of Asian men. The rates among black groups were lower than expected. Self-poisoning was the commonest method of self-harm.
Conclusions. Younger Asian women are vulnerable to increased rates of attempted self-harm and deserve to be studied further.
Attempted suicide in west London, II. Inter-group comparisons
- D. BHUGRA, D. S. BALDWIN, M. DESAI, K. S. JACOB
- Published online by Cambridge University Press: 01 September 1999, pp. 1131-1139
Background. Previous studies of attempted suicide have suggested that cultural and social factors play a significant role in the causation of deliberate self-harm.
Method. In order to measure elements of culture conflict, two inter-group comparisons were undertaken. In the first, 27 Asian women who had presented to hospital services following attempted suicide (Asian group) were matched with a group of Asian women of similar age attending GP surgeries for other reasons (Asian GP attenders group). The second comparison was between the Asian attempters and 46 White attempters.
Results. On comparing the Asian attempters with the Asian GP attenders group, the former were more likely to have a history of previous suicidal behaviour, to have a psychiatric diagnosis, and to be unemployed. Their parents were more likely to have arrived in the United Kingdom at an older age. In addition, those who attempted suicide were more likely to have been in an inter-racial relationship and to have changed religions. In the second inter-group comparison, the characteristics of Asian and White suicide attempt patients were examined. White attempters were more likely to have mental illness, and were more likely to use alcohol as part of the method of attempted suicide. By contrast, Asian attempters had experienced life events pertaining to relationships, took fewer tablets and yet expressed greater regret at not succeeding in the attempt.
Conclusions. Although numbers are small, social stress and other cultural factors play an important role in the act of deliberate self-harm.
Suicide and undetermined death in south east Scotland. A case–control study using the psychological autopsy method
- J. T. O. CAVANAGH, D. G. C. OWENS, E. C. JOHNSTONE
- Published online by Cambridge University Press: 01 September 1999, pp. 1141-1149
Background. Mental disorders are major risk factors for suicide. Not all those who suffer from mental disorders kill themselves. Additional information is required to differentiate higher and lesser risk patients.
Methods. Retrospective case–control comparison was made of cases of suicide/undetermined death with living controls using psychological autopsy in South East Scotland. Cases and controls were matched for age, sex and mental disorder. Informants were those closest to cases and controls. The subjects were 45 cases of suicide/undetermined death and 40 living controls.
Results. Cases and controls did not differ significantly in severity of mental disorder. The main factors independently associated with undetermined death or suicide were: a history of deliberate self-harm (adjusted OR 4·1); physical ill health (adjusted OR 7·8); and engagement by mental health services (adjusted OR 0·01). Other antecedents associated with increased risk (criminal record, police involvement, financial problems and failure to vote) and those associated with decreased risk (contact with a doctor and in-patient care) did not exert effects after controlling for confounding.
Conclusions. Controls were receiving more care of whatever kind. Treatment of mental disorder co-morbid with physical illness and a history of deliberate self-harm may be especially important. Factors that separate those with mental disorder at high risk from those at lesser risk relate to care levels provided, which may be a function of engagement by and with health services. The role of mental health professionals is beneficial in suicide prevention. The focusing of that role towards engaging alienated or ‘difficult’ patients should be addressed.
The prevalence of Gilles de la Tourette syndrome in children and adolescents with autism: a large scale study
- S. BARON-COHEN, V. L. SCAHILL, J. IZAGUIRRE, H. HORNSEY, M. M. ROBERTSON
- Published online by Cambridge University Press: 01 September 1999, pp. 1151-1159
Background. An earlier small-scale study of children with autism revealed that 8·1% of such patients were co-morbid for Gilles de la Tourette syndrome (GTS). The present study is a large scale test of whether this result replicates.
Method. Four hundred and forty-seven pupils from nine schools for children and adolescents with autism were screened for the presence of motor and vocal tics.
Results. Subsequent family interviews confirmed the co-morbid diagnosis of definite GTS in 19 children, giving a prevalence rate of 4·3%. A further 10 children were diagnosed with probable GTS (2·2%).
Conclusions. These results indicate that the rate of GTS in autism exceeds that expected by chance, and the combined rate (6·5%) is similar to the rates found in the smaller-scale study. Methodological considerations and alternative explanations for an increased prevalence are discussed.
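The prevalence figures above are simple proportions (19/447 ≈ 4·3%). A minimal sketch of how such a rate and a 95% interval might be computed, using the Wilson score interval for illustration (not necessarily the method the authors used):

```python
import math

def prevalence_wilson(x, n, z=1.96):
    """Observed prevalence x/n with a Wilson score 95% confidence interval."""
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, (centre - half, centre + half)

# 19 definite GTS cases among 447 screened pupils:
p, ci = prevalence_wilson(19, 447)
```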
Neuropsychological assessment of young people at high genetic risk for developing schizophrenia compared with controls: preliminary findings of the Edinburgh High Risk Study (EHRS)
- M. BYRNE, A. HODGES, E. GRANT, D. C. OWENS, E. C. JOHNSTONE
- Published online by Cambridge University Press: 01 September 1999, pp. 1161-1173
Background. Finding risk indicators for schizophrenia among groups of individuals at high genetic risk for the disorder has been the driving force of the high risk paradigm. The current study describes the preliminary results of a neuropsychological assessment battery conducted on the first 50% of subjects from the Edinburgh High Risk Study.
Methods. One hundred and four high risk subjects and 33 normal controls, age and sex matched, were given a neuropsychological assessment battery. The areas of function assessed and reported here include intellectual function, executive function, perceptual motor speed, mental control/encoding, verbal ability and language, learning and memory measures, and handedness.
Results. The high risk subjects performed significantly more poorly than the control subjects in the following domains of neuropsychological function: intellectual function, executive function, mental control/encoding and learning, and memory. Controlling for IQ, high risk subjects made significantly more errors on the Hayling Sentence Completion Test (HSCT), took longer to complete section A of the HSCT, had lower scores on the delayed recall condition of the visual reproductions subtest of the Wechsler Memory Scale-Revised, and had significantly poorer Rivermead Behavioural Memory Test (RBMT) standardized scores. The presence of significant group by IQ interactions for the RBMT and time to complete section A of the HSCT suggested that differences among the groups were more marked in the lower IQ range. Performance on the HSCT was found to be related to the degree of family history of schizophrenia.
Conclusions. High risk subjects performed more poorly than controls on all tests of intellectual function and on aspects of executive function and memory.
Different psychopathological models and quantified EEG in schizophrenia
- A. W. F. HARRIS, L. WILLIAMS, E. GORDON, H. BAHRAMALI, S. SLEWA-YOUNAN
- Published online by Cambridge University Press: 01 September 1999, pp. 1175-1181
Background. This study compared the ability of two different models of psychopathology in schizophrenia to account for findings in the quantified electroencephalogram (qEEG) recorded from midline sites in a group of 40 subjects with schizophrenia. The first model was based on the positive and negative syndrome dichotomy, the second was a tripartite model that resembled Liddle's syndromes of psychomotor poverty, disorganization and reality distortion (Liddle, 1987a).
Methods. A group of 40 subjects with predominantly chronic schizophrenia was assessed with the Positive and Negative Syndrome Scale (PANSS) prior to the acquisition of their quantified electroencephalogram. The relationship between EEG data and symptomatology was explored, initially with the PANSS positive and negative subscales and then with a tripartite model derived from a principal component analysis of the 14 positive and negative subscale items.
Results. The tripartite syndrome model showed a greater concordance with the qEEG of the subjects than the dichotomous model. ‘Psychomotor poverty’ was significantly positively correlated with both delta and beta power and ‘reality distortion’ was significantly positively correlated with alpha-2 power. No significant correlations between the positive and negative syndrome dichotomy and the qEEG were observed.
Conclusions. This study lends support to the factor analysis of psychopathology, and specifically the tripartite syndrome model of schizophrenia, as a step in explicating the biological dimensions of the disorder.
To what extent does symptomatic improvement result in better outcome in psychotic illness?
- J. VAN OS, C. GILVARRY, R. BALE, E. VAN HORN, T. TATTAN, I. WHITE, R. MURRAY
- Published online by Cambridge University Press: 01 September 1999, pp. 1183-1195
Background. The effectiveness of therapeutic interventions in psychosis is increasingly reported in terms of reductions in different symptom dimensions. It remains unclear, however, to what degree such symptomatic changes are accompanied by improvement in other measures such as service use, quality of life, and needs for care.
Methods. A sample of 708 patients with chronic psychotic illness was assessed on three occasions over 2 years (baseline, year 1 and year 2). A multilevel analysis was conducted to examine to what degree reduction in psychopathological scores derived from factor analysis of the Comprehensive Psychopathological Rating Scale (CPRS) was associated with improvement in service use, disability, subjective outcomes and measures of self-harm.
Results. Reductions in positive, negative, depressive and manic symptoms over the study period were all independently associated with lessening of social disability. Reduction in negative symptoms, and to a lesser extent in positive and manic symptoms, was associated with less time in hospital and more time living independently, whereas reductions in positive and manic symptoms were associated with fewer admissions. Subjective outcomes such as improvement in quality of life, perceived needs for care and dissatisfaction with services showed the strongest associations with reduction in depressive symptoms. Reduction in positive symptoms was associated with decreased likelihood of parasuicide. Results did not differ according to diagnostic category.
Conclusion. The findings suggest that changes in distinct psychopathological dimensions independently and differentially influence outcome. Therapeutic interventions aimed at reducing symptoms of more than one dimension are likely to have more widespread effects.
Urbanization and risk for schizophrenia: does the effect operate before or around the time of illness onset?
- M. MARCELIS, N. TAKEI, J. VAN OS
-
- Published online by Cambridge University Press:
- 01 September 1999, pp. 1197-1203
Background. Higher levels of urbanicity of place of birth and of place of residence at the time of illness onset have been shown to increase the risk for adult schizophrenia. However, because urban birth and urban residence are strongly correlated, no conclusions can be drawn about the timing of the risk-increasing effect. The current study sought to distinguish between effects of urbanization operating before and around the time of illness onset.
Methods. All individuals born between 1972 and 1978 were followed up through the Dutch National Psychiatric Case Register for first admission for schizophrenia until 1995 (maximum age 23 years). Exposure status was defined by a combination of place of birth and place of residence at the time of illness onset in the three most densely populated provinces of the Netherlands (the ‘Randstad’, exposed) or in all other areas (the ‘non-Randstad’, non-exposed). The risk for schizophrenia was examined in four different exposure groups: non-exposed born and non-exposed resident (NbNr, reference category), non-exposed born and exposed resident (NbEr), exposed born and non-exposed resident (EbNr) and exposed born and exposed resident (EbEr).
Results. The greatest risk for schizophrenia was found in the EbNr group, without evidence for any additive effect of urban residence (rate ratio (RR) for narrow schizophrenia in the EbNr group, 2·05 (95% CI 1·18–3·57); in the EbEr group, 1·96 (95% CI 1·55–2·46)). Individuals who were not exposed at birth, but became so later in life, were not at increased risk of developing schizophrenia (RR for narrow schizophrenia in the NbEr group, 0·79 (95% CI 0·46–1·36)).
Conclusion. The results suggest that environmental factors associated with urbanization increase the risk for schizophrenia before rather than around the time of illness onset.
Frontal variant of frontotemporal dementia: a cross-sectional and longitudinal study of neuropsychiatric features
- CAROL A. GREGORY
-
- Published online by Cambridge University Press:
- 01 September 1999, pp. 1205-1217
Background. The term frontotemporal dementia (FTD) covers both the temporal and frontal presentations of this condition. The frontal variant (fv) presents with insidious changes in personality and behaviour, with neuropsychological evidence of disproportionate frontal dysfunction. Although psychiatric features are well recognized, there are few systematic data examining the mental state with standardized assessment instruments, and no reported studies of longitudinal progression.
Methods. Fifteen patients with a diagnosis of FTD(fv) were assessed using the Comprehensive Psychopathological Rating Scale (CPRS). A subgroup of five patients was reassessed annually with the same instrument, generating data over a 3-year period.
Results. At initial assessment, a third of the 15 patients reported no psychiatric symptoms. Three patients reported symptoms of sadness, but only one met criteria for a DSM-IV major depressive episode. One patient experienced symptoms of elation but did not meet criteria for a manic episode, while two patients had hypochondriacal complaints but did not meet DSM-IV criteria for hypochondriasis. One of these patients also experienced a compulsion to count but did not meet criteria for obsessive–compulsive disorder. The objective mental state was, on the whole, not congruent with the reported symptoms. The five patients assessed over the 3-year period showed no progression of their subjectively reported symptoms.
Conclusions. Psychiatric symptoms, although often present, were characterized by their shallowness, lack of elaboration and non-development over time.
Cognitive reserve and mortality in dementia: the role of cognition, functional ability and depression
- M. I. GEERLINGS, D. J. H. DEEG, B. W. J. H. PENNINX, B. SCHMAND, C. JONKER, L. M. BOUTER, W. VAN TILBURG
-
- Published online by Cambridge University Press:
- 01 September 1999, pp. 1219-1226
Objective. This study examined whether dementia patients with greater cognitive reserve had increased mortality rates, and whether this association was different across strata of cognition, functional ability and depression.
Methods. In the community-based Amsterdam Study of the Elderly, 261 non-institutionalized dementia patients, identified using the Geriatric Mental State Schedule (GMS), were followed for an average of 55·5 months, after which mortality data were obtained. Cognitive reserve was indicated by years of education and pre-morbid intelligence (measured using the Dutch Adult Reading Test). Cognition, functional ability and depression were indicated by Mini-Mental State scores, ADL and IADL measurements and GMS depressive syndrome, respectively.
Results. During the follow-up, 146 persons (55·9%) died. Cox regression analyses showed that more highly educated dementia patients had higher mortality rates only if they had low MMSE scores or a concurrent depression. Pre-morbid intelligence was associated with a higher mortality rate independent of cognition, but this association was much stronger among patients with depression. The positive association between education or intelligence and mortality was not modified by functional disabilities.
Conclusions. The results suggest that dementia patients with greater cognitive reserve have increased mortality rates only if the disease has progressed to such an extent that clinical symptoms are more severe. In this respect, the reserve hypothesis needs modification. Depression in dementia patients with greater cognitive reserve may reflect a subgroup of patients with poor prognosis.