The Art of Child and Adolescent Psychiatry is an engaging and authoritative account of the essential skills required to practice child and adolescent psychiatry, written for all those working in children's mental health, from trainees to experienced professionals in paediatrics, psychiatry, psychology, and psychotherapy. The practical tasks of meeting the child and family, planning treatments, and working with colleagues are all covered, building on existing texts that mainly focus on diagnostic criteria, protocols, and laws. This book respects the evidence base, while also pointing out its limitations, and suggests ways in which to deal with these. Psychiatry is placed within broader frameworks including strategy, learning, management, philosophy, ethics, and interpersonal relations. With over 200 educational vignettes drawn from the authors' vast experience in the field, the book is also highly illustrated. The Art of Child and Adolescent Psychiatry is an indispensable guide to thoughtful practice in children's mental health.
The making of mistakes by organisms and other living systems is a theoretically and empirically unifying feature of biological investigation. Mistake theory is a rigorous and experimentally productive way of understanding this widespread phenomenon. It does, however, run up against the long-standing ‘functions’ debate in philosophy of biology. Against the objection that mistakes are just a kind of malfunction, and that without a position on functions there can be no theory of mistakes, we reply that this is to misunderstand the theory. In this paper we set out the basic concepts of mistake theory and then argue that mistakes are a distinctive phenomenon in their own right, not just a kind of malfunction.
Hospitals play an important role in funding high-cost medicines so that patients can access the treatments they need. High-cost medicines are often specialty medicines, which account for a significant and increasing portion of the hospital budget. It is imperative that these expensive medicines are governed and managed with a fair, standardized, evidence-based process. We aim to provide a framework for Drugs and Therapeutics Committees (DTCs).
Methods
During 2021, Guiding Principles were developed following a literature review and survey of current practices by DTCs in Australia. An Expert Advisory Group (EAG) was convened, comprising individuals with expertise in quality use of medicines, evidence-based medicine and medicines governance. The guiding principles were drafted by the EAG, in consultation with a range of stakeholders and relevant external organizations. All feedback was collated, reviewed and discussed to refine the content of the final Guiding Principles released in January 2022.
Results
Seven overarching principles provide key recommendations for the governance of high-cost medicines:
(i) A definition of high‑cost medicines should be determined and clearly articulated for use by each medicines governance committee.
(ii) Review of high-cost medicines requires members with relevant expertise to facilitate good and effective decision-making.
(iii) The committee should engage directly with the applicant prior to review to ensure a full understanding of the rationale for the request.
(iv) A consistent, robust and transparent procedure for the assessment of high-cost medicine applications should be defined and implemented for use by each medicines governance committee to ensure a fair process.
(v) Ethical considerations fundamentally underpin deliberations around high-cost medicines.
(vi) The decisions and outcomes of decision-making should be transparent and appropriately communicated to the relevant audiences.
(vii) The high-quality assessment of high-cost medicines requires appropriate training and resourcing.
Conclusions
These national Guiding Principles promote consistent, evidence-based use of high-cost medicines and provide a framework for DTCs to assess and achieve effective governance for the quality use of high‑cost medicines.
In contrast to high-volume medicines prescribed by general practitioners, low-volume, highly specialized medicines have not been supported by national quality use of medicines (QUM) programs in Australia. The first area addressed focuses on optimizing the use of biological disease-modifying antirheumatic drugs (bDMARDs).
Methods
The program was designed, developed and implemented in partnership with nine consortium member organizations and four affiliate organizations representing consumer and clinical audiences, program development expertise and implementation capability. The common agenda for the collective impact approach was to achieve better health outcomes for people with inflammatory arthritis, inflammatory bowel disease and plaque psoriasis. Multidisciplinary expert working groups reviewed formative QUM research and agreed on objectives, audiences, messages and interventions. Interventions were selected based on identified barriers, enablers and behavioral drivers, informed by the Theoretical Domains Framework. Interventions were co-designed and tested with end-users. Marketing and promotion activity supported implementation of all interventions through consortium channels and networks. Evaluation includes process, impact and outcome measures, and a realist evaluation of the academic detailing.
Results
Program objectives were to optimize: (i) first-line therapy before bDMARD use; (ii) first-choice bDMARDs; (iii) biosimilar prescribing and dispensing; (iv) bDMARD dosage; (v) glucocorticoid and analgesic use. Over 60 interventions supporting key messages for each objective were developed for audiences: consumers; rheumatologists, gastroenterologists, dermatologists; pharmacists; drug and therapeutic committees. Interventions implemented between September 2020 and September 2022 included: consumer decision aids, action plans, fact sheets, lived experience videos; living guidelines and evidence summaries; guidance/position statements for hospitals, podcasts, webinars, online learning; prescribing feedback reports; and academic detailing. Uptake of interventions has largely met targets and surveys have demonstrated shifts in specialist and consumer knowledge and behavior in line with key messages and objectives. Realist and outcome evaluation is ongoing.
Conclusions
Our experience demonstrates the value of a consortium of stakeholder organizations, with different expertise and interests but agreed goals and roles, working together to progress the quality use of highly specialized drugs.
We assess emerging relationships between production decisions and market channel selection among a small sample of hemp growers (n = 22) in Colorado and Kentucky using qualitative interviews. We found that producers differed by market channel, product and state. For instance, producers who relied on intermediated marketing strategies cultivated more acres on average and used fewer distinct market channels and strategies than those relying on direct markets. Product differences were found regarding processing, storage and perishability. Respondents identified four factors critical to their choice of market channels for their hemp products: research, profitability, trust and knowledge. The findings can help inform public and private decision-making regarding best hemp marketing practices and future needs of the hemp industry.
Compositionalists hold that God the Son became human by acquiring all the parts that ordinarily compose a human being (his ‘human nature’). To be orthodox, though, they must deny that Christ's human nature is a person, even though it has all the parts that human persons ordinarily have. One way to do this is by appealing to the principle that no member of a natural kind can have another member of the same kind as a proper part. Since Christ is a person, he cannot have another person as a part, so if his human nature is a part of him, it cannot be a person. This principle is defended on the grounds that it can resolve metaphysical problems involving apparently multiple individuals of the same natural kind that share the same space. I argue that this is a weak strategy. First, it leaves unanswered key questions about how and why the principle applies to the incarnation. Second, counter-examples to the principle exist, suggesting that it is not true. Third, there is a better solution to the kinds of metaphysical paradoxes for which this principle is usually invoked, but this solution cannot be applied to the case of Christ. Consequently, compositionalists should not rely on this principle as a means of avoiding Nestorianism.
Diet is a modifiable risk factor for chronic disease and a potential modulator of telomere length (TL). The study aim was to investigate associations between diet quality and TL in Australian adults after a 12-week dietary intervention with an almond-enriched diet (AED). Participants (overweight/obese, 50–80 years) were randomised to an AED (n 62) or isoenergetic nut-free diet (NFD, n 62) for 12 weeks. Diet quality was assessed using a Dietary Guideline Index (DGI), applied to weighed food records, that consists of ten components reflecting adequacy, variety and quality of core food components and discretionary choices within the diet. TL was measured by quantitative PCR in samples of lymphocytes, neutrophils, and whole blood. There were no significant associations between DGI scores and TL at baseline. Diet quality improved with AED and decreased with NFD after 12 weeks (change from baseline AED + 9·8 %, NFD − 14·3 %; P < 0·001). TL increased in neutrophils (+9·6 bp, P = 0·009) and decreased in whole blood, to a trivial extent (–12·1 bp, P = 0·001), and was unchanged in lymphocytes. Changes did not differ between intervention groups. There were no significant relationships between changes in diet quality scores and changes in lymphocyte, neutrophil or whole blood TL. The inclusion of almonds in the diet improved diet quality scores but had no impact on TL in mid-age to older Australian adults. Future studies should investigate the impact of more substantial dietary changes over longer periods of time.
Longitudinal studies report that regular nut consumption is associated with reduced risk of coronary heart disease and better cognitive function. Thus, nuts may improve cardiovascular and neurocognitive health – especially in ‘at-risk’ populations (e.g. overweight/obese). This study examined the effect of supplementing habitual diets of overweight/obese individuals for 12 weeks with either almonds or carbohydrate-rich (CHO-rich) snack foods on biomarkers of cardiovascular and metabolic health, mood and cognitive performance. Participants (n = 151; overweight/obese, 50–80 years) were randomised to replace 15% energy intake with either almonds (Almond) or isocaloric CHO-rich snack foods (Comparator). Body weight, blood lipids, glucose, insulin, blood pressure (BP), arterial stiffness, mood and cognitive performance (memory, executive function, speed of processing) were measured at baseline and 12 weeks. One hundred and twenty-eight participants (78F:50M, n = 63 almond, n = 65 control) completed the intervention (M ± SD: age 64 ± 8 years, BMI 30.3 ± 3.6 kg/m2, stable medication use: 32% BP and 19% lipid lowering). Compared with the CHO-rich comparator foods, there were a number of significant improvements associated with almond consumption. These included reduced serum triglycerides (M ± SEM, almond: 1.30 ± 0.062 to 1.16 ± 0.062 mmol/L vs CHO-rich 1.13 ± 0.061 to 1.17 ± 0.061 mmol/L, p = 0.005 whole population). In those with cholesterol below the ATPIII cut point of < 6.2 mmol/L (84% of sample), almond consumption reduced total cholesterol (almond: 5.12 ± 0.13 to 4.93 ± 0.12 mmol/L vs CHO-rich 5.24 ± 0.13 to 5.21 ± 0.11 mmol/L, p < 0.001) and LDL-cholesterol (almond: 3.03 ± 0.11 to 2.87 ± 0.11 mmol/L vs CHO-rich 2.98 ± 0.10 to 3.07 ± 0.10 mmol/L, p = 0.002). Additionally, in a non-BP-medicated subgroup only (n = 87, 68% of sample), there was a reduction in systolic BP (almond: 130.73 ± 2.19 to 127.02 ± 2.19 mmHg vs control: 128.63 ± 2.32 to 129.48 ± 2.32 mmHg, p = 0.035) and improved self-rated alertness (almond 54.73 ± 2.32 to 58.64 ± 2.32 vs CHO-rich 61.75 ± 2.28 to 61.13 ± 2.28, p = 0.048). The inclusion of almonds in the diet can improve aspects of cardiometabolic health and mood in overweight/obese adults. The lack of change in cognitive performance may reflect the fact that the study population was cognitively high performing at baseline. Future research should be directed at examining the effects of this relatively simple, cost-effective nutritional intervention in populations at greater risk of cardiometabolic and cognitive decline.
We present a new robust bootstrap method for a test when there is a nuisance parameter under the alternative, and some parameters are possibly weakly or nonidentified. We focus on a Bierens (1990, Econometrica 58, 1443–1458)-type conditional moment test of omitted nonlinearity for convenience. Existing methods include the supremum p-value which promotes a conservative test that is generally not consistent, and test statistic transforms like the supremum and average for which bootstrap methods are not valid under weak identification. We propose a new wild bootstrap method for p-value computation by targeting specific identification cases. We then combine bootstrapped p-values across polar identification cases to form an asymptotically valid p-value approximation that is robust to any identification case. Our wild bootstrap procedure does not require knowledge of the covariance structure of the bootstrapped processes, whereas Andrews and Cheng’s (2012a, Econometrica 80, 2153–2211; 2013, Journal of Econometrics 173, 36–56; 2014, Econometric Theory 30, 287–333) simulation approach generally does. Our method allows for robust bootstrap critical value computation as well. Our bootstrap method (like conventional ones) does not lead to a consistent p-value approximation for test statistic functions like the supremum and average. Therefore, we smooth over the robust bootstrapped p-value as the basis for several tests which achieve the correct asymptotic level, and are consistent, for any degree of identification. They also achieve uniform size control. A simulation study reveals possibly large empirical size distortions in nonrobust tests when weak or nonidentification arises. One of our smoothed p-value tests, however, dominates all other tests by delivering accurate empirical size and comparatively high power.
This article presents a bootstrapped p-value white noise test based on the maximum correlation, for a time series that may be weakly dependent under the null hypothesis. The time series may be prefiltered residuals. The test statistic is a normalized weighted maximum sample correlation coefficient $\max_{1\leq h\leq \mathcal{L}_{n}}\sqrt{n}\,|\hat{\omega}_{n}(h)\hat{\rho}_{n}(h)|$, where $\hat{\omega}_{n}(h)$ are weights and the maximum lag $\mathcal{L}_{n}$ increases at a rate slower than the sample size n. We only require uncorrelatedness under the null hypothesis, along with a moment contraction dependence property that includes mixing and nonmixing sequences. We show Shao’s (2011, Annals of Statistics 35, 1773–1801) dependent wild bootstrap is valid for a much larger class of processes than originally considered. It is also valid for residuals from a general class of parametric models as long as the bootstrap is applied to a first-order expansion of the sample correlation. We prove the bootstrap is asymptotically valid without exploiting extreme value theory (standard in the literature) or recent Gaussian approximation theory. Finally, we extend Escanciano and Lobato’s (2009, Journal of Econometrics 151, 140–149) automatic maximum lag selection to our setting with an unbounded lag set that ensures a consistent white noise test, and find it works extremely well in controlled experiments.
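For intuition, the following Python sketch computes a statistic of this form for a univariate series. It is an illustrative sketch only: the function name is hypothetical, the equal weights are placeholders rather than the paper's $\hat{\omega}_{n}(h)$, and neither the dependent wild bootstrap calibration nor the automatic lag selection described above is reproduced.

```python
import numpy as np

def max_weighted_corr_stat(x, max_lag, weights=None):
    """Normalized weighted maximum sample correlation:
    max_{1 <= h <= L_n} sqrt(n) * |w_n(h) * rho_n(h)|."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    # Sample autocorrelations at lags 1..max_lag
    rho = np.array([np.sum(xc[h:] * xc[:-h]) / denom for h in range(1, max_lag + 1)])
    if weights is None:
        weights = np.ones(max_lag)  # placeholder: equal weights, not the paper's choice
    return np.sqrt(n) * np.max(np.abs(weights * rho))

# Example on simulated white noise, with the lag set growing slower than n
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
L_n = int(n ** 0.4)  # illustrative lag rate only
print(max_weighted_corr_stat(x, L_n))
```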
Registry-based trials have emerged as a potentially cost-saving study methodology. Early estimates of cost savings, however, conflated the benefits associated with registry utilisation and those associated with other aspects of pragmatic trial designs, which might not all be as broadly applicable. In this study, we sought to build a practical tool that investigators could use across disciplines to estimate the ranges of potential cost differences associated with implementing registry-based trials versus standard clinical trials.
Methods:
We built Markov simulation models to compare the unique costs associated with data acquisition, cleaning, and linkage under a registry-based trial design versus a standard clinical trial. We conducted one-way, two-way, and probabilistic sensitivity analyses, varying study characteristics over broad ranges, to determine thresholds at which investigators might optimally select each trial design.
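To make the sensitivity analysis concrete, the sketch below shows how a simple Monte Carlo probabilistic sensitivity analysis might compare data-related costs under the two designs. It is a deliberately simplified illustration, not the study's Markov model: every parameter range, the coordinator cost rate, and the single registry fee term are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
N_SIM = 10_000  # number of probabilistic sensitivity analysis draws

# Hypothetical parameter distributions -- placeholders, not the study's calibrated inputs
n_patients       = rng.integers(100, 5_000, N_SIM)      # enrolled patients
elements_per_pt  = rng.integers(10, 200, N_SIM)         # data elements per patient in the registry
abstraction_sec  = rng.uniform(2.0, 60.0, N_SIM)        # manual abstraction time per data field (s)
coordinator_rate = 30.0 / 3600.0                        # assumed coordinator cost, $ per second
registry_fee     = rng.uniform(5_000, 100_000, N_SIM)   # illustrative registry linkage/licensing cost

# Data-related cost of each design for every draw
standard_cost = n_patients * elements_per_pt * abstraction_sec * coordinator_rate
registry_cost = registry_fee                            # assumes linkage replaces manual abstraction

savings = standard_cost - registry_cost
print(f"Registry design cheaper in {np.mean(savings > 0):.1%} of draws; "
      f"median cost difference ${np.median(savings):,.0f}")
```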
Results:
Registry-based trials were more cost-effective than standard clinical trials 98.6% of the time. Data-related cost savings ranged from $4300 to $600,000 with variation in study characteristics. Cost differences were most sensitive to the number of patients in a study, the number of data elements per patient available in a registry, and the speed with which research coordinators could manually abstract data. Registry incorporation resulted in cost savings when as few as 3768 independent data elements were available and when manual data abstraction took as little as 3.4 seconds per data field.
Conclusions:
Registries offer important resources for investigators. When available, their broad incorporation may help the scientific community reduce the costs of clinical investigation. We offer here a practical tool for investigators to assess potential cost savings.
Optimal rheumatoid arthritis (RA) management requires coordinated care and consistent communication by health practitioners with patients. Suboptimal methotrexate use is a factor leading to increased use of biological disease modifying antirheumatic drugs (bDMARDs), which account for significant government drug expenditure. A multidisciplinary co-design approach was used to develop and implement a program aiming to improve early management and quality use of medicines (QUM) for people with RA in Australia.
Methods
Literature review and key informant interviews identified broad potential QUM issues in RA management. An initial exploratory multidisciplinary meeting prioritized QUM issues, identified audiences and perspectives, and scoped focus areas to address with education. Iteratively through co-design meetings and activities, program objectives were agreed, barriers and enablers for change explored, characteristics of intervention activities considered and rated, and program products developed and reviewed. Program evaluation included participation and distribution data, surveys and interviews, and analyses of general practice and Pharmaceutical Benefits Scheme (PBS) data.
Results
QUM issues addressed include: (i) timely initiation of conventional synthetic (cs) DMARDs; (ii) appropriate use and persistence with csDMARD therapy, especially methotrexate; and (iii) clarity around professional roles and best practice for prescribing, dispensing, and monitoring DMARDs, and managing lifestyle factors and other risks associated with RA. The educational program (October 2017 to June 2018) included: an article promoting key messages (emailed to ~115,000 health practitioners), prescriber feedback report based on PBS data (to all Australian rheumatologists), an RA action plan (completed by health practitioners for consumers), an interactive case study (553 participants), visits to 1200 pharmacies promoting key messages, a multidisciplinary webinar (431 live and 366 on-demand), fact sheets for consumers available through MedicineWise app (medicine management app for consumers), and social media activity.
Conclusions
A multidisciplinary co-design process has provided a model for developing a multifaceted QUM program incorporating and addressing multiple perspectives.
A renewed theoretical and empirical programme at the intersection of science and belief must begin by taking stock of our present resources. This chapter provides a top-level overview of the survey-based empirical data currently available. The goal is not to provide a traditional ‘literature review’ of existing studies, but rather to give an assessment of the data themselves – their substantive focus, promise and limitations. Moreover, this is not an exhaustive, encyclopaedic account of every survey measure related to science and religion. That would surely require its own volume. Instead, I focus on direct attempts to measure public attitudes about the relationship between science and religion. Practically, this means that I limit myself to measures of the so-called ‘conflict thesis’ (see the Introduction to this volume) and beliefs about evolution and human origins. I also limit my analyses to probability samples of the general population in the US (where the overwhelming bulk of this research has been conducted). Despite these caveats and boundaries, several important features of and limitations to the existing data emerge from this review. In short, I find that certain measures work well at a basic descriptive level but many of the important questions about why segments of the population hold to some positions and not others may require new measures and research strategies. The chapter concludes with some specific suggestions for what forms these new directions might take, many of which are taken up in various ways in the remaining contributions to this volume.
Survey items on the conflict thesis
In 1957, the National Association of Science Writers (NASW) sponsored a national poll on the American public's interest in science and technology (Davis, 1957). The Institute for Social Research at the University of Michigan administered it just a few months before Sputnik I launched. The survey instrument that was developed formed the basis for numerous public opinion questions that were later replicated in the US and around the world. In 1978, after a period of relative silence on these topics in national surveys, the National Science Foundation (NSF) recruited the political scientists Jon Miller and Kenneth Prewitt to design a national survey to follow up the original 1957 study (Miller et al., 1980). The NSF's Surveys on Public Understanding of Science and Technology were regularly administered between 1979 and 2001.
Communication deviance (CD) reflects features of the content or manner of a person's speech that may confuse the listener and inhibit the establishment of a shared focus of attention. The construct was developed in the context of the study of familial risks for psychosis based on hypotheses regarding its effects during childhood. It is not known whether parental CD is associated with nonverbal parental behaviors that may be important in early development. This study explored the association between CD in a cohort of mothers (n = 287) at 32 weeks gestation and maternal sensitivity with infants at 29 weeks in a standard play procedure. Maternal CD predicted lower overall maternal sensitivity (B = –.385; p < .001), and the effect was somewhat greater for sensitivity to infant distress (B = –.514; p < .001) than for sensitivity to nondistress (B = –.311; p < .01). After controlling for maternal age, IQ and depression, and for socioeconomic deprivation, the associations with overall sensitivity and sensitivity to distress remained significant. The findings provide new pointers to intergenerational transmission of vulnerability involving processes implicated in both verbal and nonverbal parental behaviors.
It is not known whether associations between child problem behaviours and maternal depression can be accounted for by comorbid borderline personality disorder (BPD) dysfunction.
Aim
To examine the contributions of maternal depression and BPD symptoms to child problem behaviours.
Method
Depression trajectories over the first year postpartum were generated using repeated measurements from a general population sample of 997 mothers recruited in pregnancy. In a stratified subsample of 251, maternal depression and BPD symptoms were examined as predictors of child problem behaviours at 2.5 years.
Results
Child problem behaviours were predicted by a high maternal depression trajectory prior to the inclusion of BPD symptoms. This association was no longer significant after the introduction of BPD symptoms.
Conclusions
Risks for child problem behaviours currently attributed to maternal depression may arise from more persistent and pervasive difficulties found in borderline personality dysfunction.
Early-life institutional deprivation produces disinhibited social engagement (DSE). Although DSE is portrayed as a childhood condition, little is known about the persistence of DSE-type behaviours into adulthood, their presentation in adulthood, and their impact on adult functioning.
Aims
We examine these issues in the young adult follow-up of the English and Romanian Adoptees study.
Method
A total of 122 of the original 165 Romanian adoptees who had spent up to 43 months as children in Ceauşescu's Romanian orphanages and 42 UK adoptees were assessed for DSE behaviours, neurodevelopmental and mental health problems, and impairment between ages 2 and 25 years.
Results
Young adult DSE behaviour was strongly associated with early childhood deprivation, with a sixfold increase for those who spent more than 6 months in institutions. However, although DSE overlapped with autism spectrum disorder and attention-deficit hyperactivity disorder symptoms, it was not, in itself, related to broader patterns of mental health problems or impairments in daily functioning in young adulthood.
Conclusions
DSE behaviour remained a prominent, but largely clinically benign, young adult feature of some adoptees who experienced early deprivation.
The Participatory Model of Atonement (PMA) offers an alternative view of Christian salvation, drawing on Pauline theology. It conceives of sin as a contagion which can usually be escaped only by dying. By ‘participating’ in Christ's death, the believer can escape its effects without having to die. This notion of ‘participation’ is obscure. I consider a possible way of clarifying it using metaphysical ideas taken from Jonathan Edwards. ‘Participation’ might involve becoming similar to Christ through the action of the Holy Spirit, to such a degree that a person might be called identical (in some sense) with Christ.
We consider the relative performance of two common approaches to multiple imputation (MI): joint multivariate normal (MVN) MI, in which the data are modeled as a sample from a joint MVN distribution; and conditional MI, in which each variable is modeled conditionally on all the others. In order to use the multivariate normal distribution, implementations of joint MVN MI typically assume that categories of discrete variables are probabilistically constructed from continuous values. We use simulations to examine the implications of these assumptions. For each approach, we assess (1) the accuracy of the imputed values; and (2) the accuracy of coefficients and fitted values from a model fit to completed data sets. These simulations consider continuous, binary, ordinal, and unordered-categorical variables. One set of simulations uses multivariate normal data, and one set uses data from the 2008 American National Election Studies. We implement a less restrictive approach than is typical when evaluating methods using simulations in the missing data literature: in each case, missing values are generated by carefully following the conditions necessary for missingness to be “missing at random” (MAR). We find that in these situations conditional MI is more accurate than joint MVN MI whenever the data include categorical variables.
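As a toy illustration of the conditional approach (not the authors' implementation), the sketch below uses scikit-learn's IterativeImputer with posterior sampling to generate several completed data sets under a MAR mechanism and pools a simple estimate across them. The data-generating process, the missingness model, and the number of imputations are arbitrary choices for the example; the joint MVN approach is not shown.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Toy data: two correlated continuous variables
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
X = np.column_stack([x1, x2])

# MAR missingness: probability that x2 is missing depends only on observed x1
missing = rng.random(n) < 1.0 / (1.0 + np.exp(-(x1 - 1.0)))
X_obs = X.copy()
X_obs[missing, 1] = np.nan

# Conditional MI: model each variable given the others, draw from the
# posterior predictive, and repeat to obtain M completed data sets
M = 5
completed = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X_obs)
    for m in range(M)
]

# Fit the analysis model to each completed data set and pool the estimates
slopes = [np.polyfit(Xc[:, 0], Xc[:, 1], 1)[0] for Xc in completed]
print("pooled slope estimate:", np.mean(slopes))
```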