Poor iron status is one of the most prevalent problems facing infants worldwide, in both developing and developed countries(1). A complex interplay of both dietary and non-dietary factors affects iron intake, absorption, and requirements, and subsequently iron status(2). We aimed to describe iron status in an ethnically diverse cohort of urban-dwelling infants. Data were collected from 364 infants aged 7.0 to 10.0 months living in two main urban centres in New Zealand (Auckland and Dunedin) between July 2020 and February 2022. Participants were grouped by total ethnicity, with any participants who did not identify as either Māori or Pacific categorised into a single ‘others’ group. Haemoglobin, plasma ferritin, soluble transferrin receptor (sTfR), C-reactive protein, and alpha-1-acid glycoprotein were obtained from a non-fasting venous blood sample. Inflammation was adjusted for using the Biomarkers Reflecting Inflammation and Nutritional Determinants of Anaemia (BRINDA) method(3). Body iron concentration (mg/kg body weight) was calculated using the ratio of sTfR and ferritin. A total of 96.3% of Pacific infants were iron sufficient, defined as body iron ≥0 mg/kg body weight and haemoglobin (Hb) ≥105 g/L, compared to 82.3% of Māori and 76.0% of ‘other’ (i.e. neither Māori nor Pacific) infants. ‘Other’ infants had the highest prevalence of iron deficiency overall, with 2.8% categorised with iron-deficiency anaemia (IDA) (body iron <0 mg/kg, haemoglobin <105 g/L), 11.8% with early ‘functional’ iron deficiency (body iron <0 mg/kg, haemoglobin ≥105 g/L), and 9.4% with iron depletion (ferritin <15 µg/L, in the absence of early ‘functional’ iron deficiency and iron deficiency anaemia). For Māori infants, 3.2% and 6.5% had IDA and early ‘functional’ iron deficiency respectively, and 8.1% were iron depleted. One (3.7%) Pacific infant was iron depleted, and the remainder were iron sufficient.
Plasma ferritin and body iron concentration were, on average, higher in Pacific compared to non-Pacific infants. These findings give an up-to-date and robust understanding of the iron status of infants by ethnicity, highlighting an unexpected finding that infants who are neither Māori nor Pacific may be at higher risk of poor iron status in NZ.
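The abstract above does not give the body iron equation, but the sTfR/ferritin ratio approach it references is commonly the Cook et al. (2003) formula. The following Python sketch assumes that standard formula and applies the cut-offs quoted in the abstract; it is an illustration of the method, not the study's own code:

```python
import math

def body_iron_mg_per_kg(stfr_mg_l: float, ferritin_ug_l: float) -> float:
    """Body iron (mg/kg) from the sTfR/ferritin ratio.

    Assumes the standard Cook et al. (2003) formula, with sTfR in mg/L
    and ferritin in ug/L; sTfR is multiplied by 1000 so both terms of
    the ratio are in ug/L.
    """
    ratio = (stfr_mg_l * 1000) / ferritin_ug_l
    return -(math.log10(ratio) - 2.8229) / 0.1207

def classify_iron_status(body_iron: float, hb_g_l: float,
                         ferritin_ug_l: float) -> str:
    """Apply the abstract's categories (simplified sketch: it does not
    handle anaemia with body iron >= 0, which falls outside them)."""
    if body_iron < 0 and hb_g_l < 105:
        return "iron-deficiency anaemia"
    if body_iron < 0:
        return "early 'functional' iron deficiency"
    if ferritin_ug_l < 15:
        return "iron depleted"
    return "iron sufficient"
```

For example, an infant with sTfR of 5 mg/L and ferritin of 30 µg/L has a positive body iron store and, with Hb ≥105 g/L, would be classed as iron sufficient under these definitions.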
Incorporating paleontological data into phylogenetic inference can greatly enrich our understanding of evolutionary relationships by providing insights into the diversity and morphological evolution of a clade over geological timescales. Phylogenetic analysis of fossil data has been significantly aided by the introduction of the fossilized birth–death (FBD) process, a model that accounts for fossil sampling through time. A decade on from the first implementation of the FBD model, we explore its use in more than 170 empirical studies, summarizing insights gained through its application. We identify a number of challenges in applying the model in practice: it requires a working knowledge of paleontological data and their complex properties, Bayesian phylogenetics, and the mechanics of evolutionary models. To address some of these difficulties, we provide an introduction to the Bayesian phylogenetic framework, discuss important aspects of paleontological data, and finally describe the assumptions of the models used in paleobiology. We also present a number of exemplar empirical studies that have used the FBD model in different ways. Through this review, we aim to provide clarity on how paleontological data can best be used in phylogenetic inference. We hope to encourage communication between model developers and empirical researchers, with the ultimate goal of developing models that better reflect the data we have and the processes that generated them.
In response to the COVID-19 pandemic, we implemented a plasma coordination center within two months to support transfusion for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain restraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
Although food insecurity affects a significant proportion of young children in New Zealand (NZ)(1), evidence of its association with dietary intake and sociodemographic characteristics in this population is lacking. This study aims to assess the household food security status of young NZ children and its association with energy and nutrient intake and sociodemographic factors. This study included 289 caregiver and child (1-3 years old) dyads from the same household in either Auckland, Wellington, or Dunedin, NZ. Household food security status was determined using a validated and NZ-specific eight-item questionnaire(2). Usual dietary intake was determined from two 24-hour food recalls, using the multiple source method(3). The prevalence of inadequate nutrient intake was assessed using the Estimated Average Requirement (EAR) cut-point method and full probability approach. Sociodemographic factors (i.e., socioeconomic status, ethnicity, caregiver education, employment status, household size and structure) were collected from questionnaires. Linear regression models were used to estimate associations with statistical significance set at p <0.05. Over 30% of participants had experienced food insecurity in the past 12 months. Of all eight indicator statements, “the variety of foods we are able to eat is limited by a lack of money,” had the highest proportion of participants responding “often” or “sometimes” (35.8%). Moderately food insecure children exhibited higher fat and saturated fat intakes, consuming 3.0 (0.2, 5.8) g/day more fat, and 2.0 (0.6, 3.5) g/day more saturated fat compared to food secure children (p<0.05). Severely food insecure children had lower g/kg/day protein intake compared to food secure children (p<0.05). In comparison to food secure children, moderately and severely food insecure children had lower fibre intake, consuming 1.6 (2.8, 0.3) g/day and 2.6 (4.0, 1.2) g/day less fibre, respectively. 
Severely food insecure children had the highest prevalence of inadequate calcium (7.0%) and vitamin C (9.3%) intakes, compared with food secure children [prevalence of inadequate intakes: calcium (2.3%) and vitamin C (2.8%)]. Household food insecurity was more common in those of Māori or Pacific ethnicity; living in areas of high deprivation; having a caregiver who was younger, not in paid employment, or had low educational attainment; living with ≥2 other children in the household; and living in a sole-parent household. Food insecure young NZ children consume a diet that exhibits lower nutritional quality in certain measures compared to their food-secure counterparts. Food insecurity was associated with various sociodemographic factors that are closely linked with poverty or low income. As such, there is an urgent need for poverty mitigation initiatives to safeguard vulnerable young children from the adverse consequences of food insecurity.
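The EAR cut-point method referred to above estimates the prevalence of inadequate intake as the proportion of a group whose usual intake falls below the Estimated Average Requirement. A minimal sketch of the idea follows (illustrative only; the study's full probability approach, and the method's validity conditions, are more involved):

```python
def ear_cutpoint_prevalence(usual_intakes, ear):
    """EAR cut-point method: prevalence of inadequacy is the proportion
    of the group whose usual intake falls below the EAR.

    Standard assumptions of the method (not checked here): intake and
    requirement are uncorrelated, requirements are roughly symmetric,
    and intake varies more between people than requirement does.
    """
    below = sum(1 for intake in usual_intakes if intake < ear)
    return below / len(usual_intakes)
```

For example, with usual intakes of 10, 20, 30 and 40 units and an EAR of 25, the estimated prevalence of inadequacy is 50%.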
To investigate the symptoms of SARS-CoV-2 infection, their dynamics and their discriminatory power for the disease using longitudinally, prospectively collected information reported at the time of their occurrence. We have analysed data from a large phase 3 clinical UK COVID-19 vaccine trial. The alpha variant was the predominant strain. Participants were assessed for SARS-CoV-2 infection via nasal/throat PCR at recruitment, vaccination appointments, and when symptomatic. Statistical techniques were implemented to infer estimates representative of the UK population, accounting for multiple symptomatic episodes associated with one individual. An optimal diagnostic model for SARS-CoV-2 infection was derived. The 4-month prevalence of SARS-CoV-2 was 2.1%; increasing to 19.4% (16.0%–22.7%) in participants reporting loss of appetite and 31.9% (27.1%–36.8%) in those with anosmia/ageusia. The model identified anosmia and/or ageusia, fever, congestion, and cough to be significantly associated with SARS-CoV-2 infection. Symptom dynamics differed markedly between the two groups: in PCR-positive participants symptoms began slowly, peaked later and lasted longer, whereas in PCR-negative participants they declined steadily, with fewer than 3 days of symptoms reported on average. Anosmia/ageusia peaked late in confirmed SARS-CoV-2 infection (day 12), indicating a low discrimination power for early disease diagnosis.
This research examines maternal smoking during pregnancy and risk for poorer executive function in siblings discordant for exposure. Data (N = 173 families) were drawn from the Missouri Mothers and Their Children study, a sample, identified using birth records (years 1998–2005), in which mothers changed smoking behavior between two pregnancies (Child 1 [older sibling]: Mage = 12.99; Child 2 [younger sibling]: Mage = 10.19). A sibling comparison approach was used, providing a robust test for the association between maternal smoking during pregnancy and different aspects of executive function in early-mid adolescence. Results suggested within-family (i.e., potentially causal) associations between maternal smoking during pregnancy and one working memory task (visual working memory) and one response inhibition task (color-word interference), with increased exposure associated with decreased performance. Maternal smoking during pregnancy was not associated with stop-signal reaction time, cognitive flexibility/set-shifting, or auditory working memory. Initial within-family associations between maternal smoking during pregnancy and visual working memory as well as color-word interference were fully attenuated in a model including child and familial covariates. These findings indicate that exposure to maternal smoking during pregnancy may be associated with poorer performance on some, but not all skills assessed; however, familial transmission of risk for low executive function appears more important.
Little is known about Se intakes and status in very young New Zealand children. However, Se intakes below recommendations and lower Se status compared with international studies have been reported in New Zealand (particularly South Island) adults. The Baby-Led Introduction to SolidS (BLISS) randomised controlled trial compared a modified version of baby-led weaning (infants feed themselves rather than being spoon-fed), with traditional spoon-feeding (Control). Weighed 3-d diet records were collected and plasma Se concentration measured using inductively coupled plasma mass spectrometry (ICP-MS). In total, 101 (BLISS n 50, Control n 51) 12-month-old toddlers provided complete data. The OR of Se intakes below the estimated average requirement (EAR) was no different between BLISS and Control (OR: 0·89; 95 % CI 0·39, 2·03), and there was no difference in mean plasma Se concentration between groups (0·04 μmol/l; 95 % CI −0·03, 0·11). In an adjusted model, consuming breast milk was associated with lower plasma Se concentrations (–0·12 μmol/l; 95 % CI −0·19, −0·04). Of the food groups other than infant milk (breast milk or infant formula), ‘breads and cereals’ contributed the most to Se intakes (12 % of intake). In conclusion, Se intakes and plasma Se concentrations of 12-month-old New Zealand toddlers were no different between those who had followed a baby-led approach to complementary feeding and those who followed traditional spoon-feeding. However, more than half of toddlers had Se intakes below the EAR.
The Eating Assessment in Toddlers FFQ (EAT FFQ) has been shown to have good reliability and comparative validity for ranking nutrient intakes in young children. With the addition of food items (n 4), we aimed to re-assess the validity of the EAT FFQ and estimate calibration factors in a sub-sample of children (n 97) participating in the Growing Up Milk – Lite (GUMLi) randomised control trial (2015–2017). Participants completed the ninety-nine-item GUMLi EAT FFQ and record-assisted 24-h recalls (24HR) on two occasions. Energy and nutrient intakes were assessed at months 9 and 12 post-randomisation and calibration factors calculated to determine predicted estimates from the GUMLi EAT FFQ. Validity was assessed using Pearson correlation coefficients, weighted kappa (κ) and exact quartile categorisation. Calibration was calculated using linear regression models on 24HR, adjusted for sex and treatment group. Nutrient intakes were significantly correlated between the GUMLi EAT FFQ and 24HR at both time points. Energy-adjusted, de-attenuated Pearson correlations ranged from 0·3 (fibre) to 0·8 (Fe) at 9 months and from 0·3 (Ca) to 0·7 (Fe) at 12 months. Weighted κ for the quartiles ranged from 0·2 (Zn) to 0·6 (Fe) at 9 months and from 0·1 (total fat) to 0·5 (Fe) at 12 months. Exact agreement ranged from 30 to 74 %. Calibration factors predicted up to 56 % of the variation in the 24HR at 9 months and 44 % at 12 months. The GUMLi EAT FFQ remained a useful tool for ranking nutrient intakes with similar estimated validity compared with other FFQ used in children under 2 years.
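The calibration described above fits a linear regression of 24HR intakes on FFQ estimates, so that FFQ values can be rescaled to predicted 24HR intakes. The study also adjusted for sex and treatment group; this simplified sketch omits those covariates and uses hypothetical intake values:

```python
import numpy as np

def calibrate_ffq(ffq: np.ndarray, recall: np.ndarray):
    """Fit recall = a + b * ffq by ordinary least squares.

    Returns the coefficients (a, b) and R^2, the proportion of variation
    in the 24HR intakes explained by the FFQ (the 'calibration' quantity
    reported in the abstract). Covariate adjustment is omitted here.
    """
    X = np.column_stack([np.ones_like(ffq), ffq])   # intercept + slope design
    coef, *_ = np.linalg.lstsq(X, recall, rcond=None)
    pred = X @ coef
    ss_res = np.sum((recall - pred) ** 2)
    ss_tot = np.sum((recall - recall.mean()) ** 2)
    return coef, 1 - ss_res / ss_tot
```

Given paired FFQ and 24HR estimates for a nutrient, `coef` gives the calibration factors and the R² corresponds to the "predicted up to 56% of the variation" figure quoted above.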
Gut microbiota data obtained by DNA sequencing are not only complex because of the number of taxa that may be detected within human cohorts, but also compositional because characteristics of the microbiota are described in relative terms (e.g., “relative abundance” of particular bacterial taxa expressed as a proportion of the total abundance of taxa). Nutrition researchers often use standard principal component analysis (PCA) to derive dietary patterns from complex food data, enabling each participant's diet to be described in terms of the extent to which it fits their cohort's dietary patterns. However, compositional PCA methods are not commonly used to describe patterns of microbiota in the way that dietary patterns are used to describe diets. This approach would be useful for identifying microbiota patterns that are associated with diet and body composition. The aim of this study is to use compositional PCA to describe gut microbiota profiles in 5-year-old children and explore associations between microbiota profiles, diet, body mass index (BMI) z-score, and fat mass index (FMI) z-score. This study uses cross-sectional data for 319 children who provided a faecal sample at 5 years of age. Their primary caregiver completed a 123-item quantitative food frequency questionnaire validated for foods of relevance to the gut microbiota. Body composition was determined using dual-energy x-ray absorptiometry, and BMI and FMI z-scores were calculated. Compositional PCA identified and described gut microbiota profiles at the genus level, and profiles were examined in relation to diet and body size. Three gut microbiota profiles were found. Profile 1 (positive loadings on Blautia and Bifidobacterium; negative loadings on Bacteroides) was not related to diet or body size. Profile 2 (positive loadings on Bacteroides; negative loadings on uncultured Christensenellaceae and Ruminococcaceae) was associated with a lower BMI z-score (r = -0.16, P = 0.003).
Profile 3 (positive loadings on Faecalibacterium, Eubacterium and Roseburia) was associated with higher intakes of fibre (r = 0.15, P = 0.007); total (r = 0.15, P = 0.009), and insoluble (r = 0.13, P = 0.021) non-starch polysaccharides; protein (r = 0.12, P = 0.036); meat (r = 0.15, P = 0.010); and nuts, seeds and legumes (r = 0.11, P = 0.047). Further regression analyses found that profile 2 and profile 3 were independently associated with BMI z-score and diet respectively. We encourage fellow researchers to use compositional PCA as a method for identifying further links between the gut, diet and obesity, and for developing the next generation of research in which the impact on body composition of dietary interventions that modify the gut microbiota is determined.
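Compositional PCA of the kind described above typically applies ordinary PCA to log-ratio-transformed relative abundances, so that results do not depend on the arbitrary total. The sketch below uses a centred log-ratio (clr) transform with a pseudocount for zeros; this is a common choice, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

def clr(counts: np.ndarray, pseudocount: float = 0.5) -> np.ndarray:
    """Centred log-ratio transform for compositional data.

    Each row is one sample's taxon counts; a pseudocount handles zeros.
    Each clr-transformed row sums to zero by construction.
    """
    x = counts + pseudocount
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

def compositional_pca(counts: np.ndarray, n_components: int = 3):
    """PCA on clr-transformed data via SVD.

    Returns per-sample profile scores and per-taxon loadings, analogous
    to the microbiota 'profiles' and genus loadings described above.
    """
    z = clr(counts)
    z = z - z.mean(axis=0)          # centre each taxon across samples
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    loadings = vt[:n_components].T
    return scores, loadings
```

Each child's microbiota can then be summarised by a few profile scores, which can be correlated with diet or BMI z-scores exactly as dietary pattern scores are used.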
Estimating speciation and extinction rates is essential for understanding past and present biodiversity, but is challenging given the incompleteness of the rock and fossil records. Interest in this topic has led to a divergent suite of independent methods—paleontological estimates based on sampled stratigraphic ranges and phylogenetic estimates based on the observed branching times in a given phylogeny of living species. The fossilized birth–death (FBD) process is a model that explicitly recognizes that the branching events in a phylogenetic tree and sampled fossils were generated by the same underlying diversification process. A crucial advantage of this model is that it incorporates the possibility that some species may never be sampled. Here, we present an FBD model that estimates tree-wide diversification rates from stratigraphic range data when the underlying phylogeny of the fossil taxa may be unknown. The model can be applied when only occurrence data for taxonomically identified fossils are available, but still accounts for the incomplete phylogenetic structure of the data. We tested this new model using simulations and focused on how inferences are impacted by incomplete fossil recovery. We compared our approach with a phylogenetic model that does not incorporate incomplete species sampling and to three fossil-based alternatives for estimating diversification rates, including the widely implemented boundary-crosser and three-timer methods. The results of our simulations demonstrate that estimates under the FBD model are robust and more accurate than the alternative methods, particularly when fossil data are sparse, as the FBD model incorporates incomplete species sampling explicitly.
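For readers unfamiliar with the FBD process, the constant-rate model is conventionally written in terms of three rates plus an extant-sampling probability; the re-parameterization below is the standard one from the original FBD literature, given here as background rather than as this paper's specific model:

```latex
% Constant-rate fossilized birth--death (FBD) parameters:
% \lambda = speciation rate, \mu = extinction rate,
% \psi = fossil recovery (sampling) rate, \rho = extant sampling probability.
% A common re-parameterization:
d = \lambda - \mu \quad \text{(net diversification)}, \qquad
r = \frac{\mu}{\lambda} \quad \text{(turnover)}, \qquad
s = \frac{\psi}{\mu + \psi} \quad \text{(fossil sampling proportion)}
```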
The spread of the Zika virus (ZIKV) in the Americas led to large outbreaks across the region and most of the Southern hemisphere. Of greatest concern were complications following acute infection during pregnancy. At the beginning of the outbreak, the risk to unborn babies and their clinical presentation was unclear. This report describes the methods and results of the UK surveillance response to assess the risk of ZIKV to children born to returning travellers. Established surveillance systems operating within the UK – the paediatric and obstetric surveillance units for rare diseases, and national laboratory monitoring – enabled rapid assessment of this emerging public health threat. A combined total of 11 women experiencing adverse pregnancy outcomes after possible ZIKV exposure were reported by the three surveillance systems: five miscarriages, two intrauterine deaths and four children with clinical presentations potentially associated with ZIKV infection. Sixteen women were diagnosed with ZIKV during pregnancy in the UK. Amongst the offspring of these women, there was unequivocal laboratory evidence of infection in only one child. In the UK, the number of congenital ZIKV infections among travellers returning from ZIKV-affected countries, and the associated risk, are very small.
Vulnerability to depression can be measured in different ways. Here we examine how genetic risk factors are inter-related for lifetime major depression (MD), self-reported current depressive symptoms and the personality trait Neuroticism.
Method
We obtained data from three population-based adult twin samples (Virginia n = 4672, Australia #1 n = 3598 and Australia #2 n = 1878) to which we fitted a common factor model where risk for ‘broadly defined depression’ was indexed by (i) lifetime MD assessed at personal interview, (ii) depressive symptoms, and (iii) neuroticism. We examined the proportion of genetic risk for MD deriving from the common factor v. specific to MD in each sample and then analyzed them jointly. Structural equation modeling was conducted in Mx.
Results
The best-fitting models in all samples included additive genetic and unique environmental effects. The proportions of genetic effects unique to lifetime MD and not shared with the broad depression common factor were estimated as 77, 61, and 65% in the three samples, respectively. A cross-sample mega-analysis model fit well and estimated that 65% of the genetic risk for MD was unique.
Conclusion
A large proportion of genetic risk factors for lifetime MD was not, in the samples studied, captured by a common factor for broadly defined depression utilizing MD and self-report measures of current depressive symptoms and Neuroticism. The genetic substrate for MD may reflect neurobiological processes underlying the episodic nature of its cognitive, motor and neurovegetative manifestations, which are not well indexed by current depressive symptoms and neuroticism.
Important Bird and Biodiversity Areas (IBAs) are sites identified as being globally important for the conservation of bird populations on the basis of an internationally agreed set of criteria. We present the first review of the development and spread of the IBA concept since it was launched by BirdLife International (then ICBP) in 1979 and examine some of the characteristics of the resulting inventory. Over 13,000 global and regional IBAs have so far been identified and documented in terrestrial, freshwater and marine ecosystems in almost all of the world’s countries and territories, making this the largest global network of sites of significance for biodiversity. IBAs have been identified using standardised, data-driven criteria that have been developed and applied at global and regional levels. These criteria capture multiple dimensions of a site’s significance for avian biodiversity and relate to populations of globally threatened species (68.6% of the 10,746 IBAs that meet global criteria), restricted-range species (25.4%), biome-restricted species (27.5%) and congregatory species (50.3%); many global IBAs (52.7%) trigger two or more of these criteria. IBAs range in size from < 1 km2 to over 300,000 km2 and have an approximately log-normal size distribution (median = 125.0 km2, mean = 1,202.6 km2). They cover approximately 6.7% of the terrestrial, 1.6% of the marine and 3.1% of the total surface area of the Earth. The launch in 2016 of the KBA Global Standard, which aims to identify, document and conserve sites that contribute to the global persistence of wider biodiversity, and whose criteria for site identification build on those developed for IBAs, is a logical evolution of the IBA concept. The role of IBAs in conservation planning, policy and practice is reviewed elsewhere. 
Future technical priorities for the IBA initiative include completion of the global inventory, particularly in the marine environment, keeping the dataset up to date, and improving the systematic monitoring of these sites.
BirdLife International’s Important Bird and Biodiversity Areas (IBA) Programme has identified, documented and mapped over 13,000 sites of international importance for birds. IBAs have been influential with governments, multilateral agreements, businesses and others in: (1) informing governments’ efforts to expand protected area networks (in particular to meet their commitments through the Convention on Biological Diversity); (2) supporting the identification of Ecologically or Biologically Significant Areas (EBSAs) in the marine realm; (3) identifying Wetlands of International Importance under the Ramsar Convention; (4) identifying sites of importance for species under the Convention on Migratory Species and its sister agreements; (5) identifying Special Protected Areas under the EU Birds Directive; (6) applying the environmental safeguards of international finance institutions such as the International Finance Corporation; (7) supporting the private sector to manage environmental risk in its operations; and (8) helping donor organisations like the Critical Ecosystems Partnership Fund (CEPF) to prioritise investment in site-based conservation. The identification of IBAs (and IBAs in Danger: the most threatened of these) has also triggered conservation and management actions at site level, most notably by civil society organisations and local conservation groups. IBA data have therefore been widely used by stakeholders at different levels to help conserve a network of sites essential to maintaining the populations and habitats of birds as well as other biodiversity. The experience of IBA identification and conservation is shaping the design and implementation of the recently launched Key Biodiversity Areas (KBA) Partnership and programme, as IBAs form a core part of the KBA network.
Prior research has documented shared heritable contributions to non-suicidal self-injury (NSSI) and suicidal ideation (SI) as well as NSSI and suicide attempt (SA). In addition, trauma exposure has been implicated in risk for NSSI and suicide. Genetically informative studies are needed to determine common sources of liability to all three self-injurious thoughts and behaviors, and to clarify the nature of their associations with traumatic experiences.
Methods
Multivariate biometric modeling was conducted using data from 9526 twins [59% female, mean age = 31.7 years (range 24–42)] from two cohorts of the Australian Twin Registry, some of whom also participated in the Childhood Trauma Study and the Nicotine Addiction Genetics Project.
Results
The prevalences of high-risk trauma exposure (HRT), NSSI, SI, and SA were 24.4, 5.6, 27.1, and 4.6%, respectively. All phenotypes were moderately to highly correlated. Genetic influences on self-injurious thoughts and behaviors and HRT were significant and highly correlated among men [rG = 0.59, 95% confidence interval (CI) (0.37–0.81)] and women [rG = 0.56 (0.49–0.63)]. Unique environmental influences were modestly correlated in women [rE = 0.23 (0.01–0.45)], suggesting that high-risk trauma may confer some direct risk for self-injurious thoughts and behaviors among females.
Conclusions
Individuals engaging in NSSI are at increased risk for suicide, and common heritable factors contribute to these associations. Preventing trauma exposure may help to mitigate risk for self-harm and suicide, either directly or indirectly via reductions in liability to psychopathology more broadly. In addition, targeting pre-existing vulnerability factors could significantly reduce risk for life-threatening behaviors among those who have experienced trauma.
The Taipan galaxy survey (hereafter simply ‘Taipan’) is a multi-object spectroscopic survey starting in 2017 that will cover 2π steradians over the southern sky (δ ≲ 10°, |b| ≳ 10°), and obtain optical spectra for about two million galaxies out to z < 0.4. Taipan will use the newly refurbished 1.2-m UK Schmidt Telescope at Siding Spring Observatory with the new TAIPAN instrument, which includes an innovative ‘Starbugs’ positioning system capable of rapidly and simultaneously deploying up to 150 spectroscopic fibres (and up to 300 with a proposed upgrade) over the 6° diameter focal plane, and a purpose-built spectrograph operating in the range from 370 to 870 nm with resolving power R ≳ 2000. The main scientific goals of Taipan are (i) to measure the distance scale of the Universe (primarily governed by the local expansion rate, H0) to 1% precision, and the growth rate of structure to 5%; (ii) to make the most extensive map yet constructed of the total mass distribution and motions in the local Universe, using peculiar velocities based on improved Fundamental Plane distances, which will enable sensitive tests of gravitational physics; and (iii) to deliver a legacy sample of low-redshift galaxies as a unique laboratory for studying galaxy evolution as a function of dark matter halo and stellar mass and environment. The final survey, which will be completed within 5 yrs, will consist of a complete magnitude-limited sample (i ⩽ 17) of about 1.2 × 106 galaxies supplemented by an extension to higher redshifts and fainter magnitudes (i ⩽ 18.1) of a luminous red galaxy sample of about 0.8 × 106 galaxies. Observations and data processing will be carried out remotely and in a fully automated way, using a purpose-built automated ‘virtual observer’ software and an automated data reduction pipeline. 
The Taipan survey is deliberately designed to maximise its legacy value by complementing and enhancing current and planned surveys of the southern sky at wavelengths from the optical to the radio; it will become the primary redshift and optical spectroscopic reference catalogue for the local extragalactic Universe in the southern sky for the coming decade.
Background. This paper examines genetic and environmental contributions to risk of cannabis dependence.
Method. Symptoms of cannabis dependence and measures of social, family and individual risk factors were assessed in a sample of 6265 young adult male and female Australian twins born 1964–1971.
Results. Symptoms of cannabis dependence were common: 11·0% of the sample (15·1% of men and 7·8% of women) reported two or more symptoms of dependence. Correlates of cannabis dependence included educational attainment, exposure to parental conflict, sexual abuse, major depression, social anxiety and childhood conduct disorder. However, even after control for the effects of these factors, there was evidence of significant genetic effects on risk of cannabis dependence. Standard genetic modelling indicated that 44·7% (95% CI = 15–72·2) of the variance in liability to cannabis dependence could be accounted for by genetic factors, 20·1% (95% CI = 0–43·6) could be attributed to shared environment factors and 35·3% (95% CI = 26·4–45·7) could be attributed to non-shared environmental factors. However, while there was no evidence of significant gender differences in the magnitude of genetic and environmental influences, a model which assumed both genetic and shared environmental influences on risks of cannabis dependence among men and shared environmental but no genetic influences among women provided an equally good fit to the data.
Conclusions. There was consistent evidence that genetic risk factors are important determinants of risk of cannabis dependence among men. However, it remains uncertain whether there are genetic influences on liability to cannabis dependence among women.
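The variance decomposition reported above (A ≈ 44·7%, C ≈ 20·1%, E ≈ 35·3%) comes from standard twin genetic modelling. As a minimal, illustrative sketch of the underlying logic, the classic Falconer formulas recover these three components from monozygotic (MZ) and dizygotic (DZ) twin correlations; the correlations below are hypothetical example values chosen so the components roughly echo the reported estimates, not figures from the study (which fitted a full liability-threshold model):

```python
def ace_estimates(r_mz: float, r_dz: float) -> dict:
    """Falconer's formulas for the ACE twin model:
    A = 2(rMZ - rDZ), C = 2*rDZ - rMZ, E = 1 - rMZ."""
    a = 2 * (r_mz - r_dz)   # additive genetic variance share
    c = 2 * r_dz - r_mz     # shared (common) environment share
    e = 1 - r_mz            # non-shared environment share
    return {"A": a, "C": c, "E": e}

# Hypothetical twin correlations (illustrative only)
components = ace_estimates(r_mz=0.65, r_dz=0.42)
# A ≈ 0.46, C ≈ 0.19, E ≈ 0.35; the three shares sum to 1.0
```

In practice, maximum-likelihood structural equation modelling is used instead of these point formulas, which is what yields the confidence intervals quoted in the abstract.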
Approximately half of the variation in wellbeing measures overlaps with variation in personality traits. Studies of non-human primate pedigrees and human twins suggest that this is due to common genetic influences. We tested whether personality polygenic scores for the NEO Five-Factor Inventory (NEO-FFI) domains and for item response theory (IRT) derived extraversion and neuroticism scores predict variance in wellbeing measures. Polygenic scores were based on published genome-wide association (GWA) results in over 17,000 individuals for the NEO-FFI and in over 63,000 for the IRT extraversion and neuroticism traits. The NEO-FFI polygenic scores were used to predict life satisfaction in 7 cohorts, positive affect in 12 cohorts, and general wellbeing in 1 cohort (maximal N = 46,508). Meta-analysis of these results showed no significant association between NEO-FFI personality polygenic scores and the wellbeing measures. IRT extraversion and neuroticism polygenic scores were used to predict life satisfaction and positive affect in almost 37,000 individuals from UK Biobank. Significant positive associations (effect sizes <0.05%) were observed between the extraversion polygenic score and the wellbeing measures, and a negative association was observed between the neuroticism polygenic score and life satisfaction. Furthermore, using GWA data, genetic correlations of -0.49 and -0.55 were estimated between neuroticism and life satisfaction and between neuroticism and positive affect, respectively. The moderate genetic correlation between neuroticism and wellbeing is in line with twin research showing that genetic influences on wellbeing are also shared with other independent personality domains.
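The polygenic scores described above are, in essence, weighted sums of allele counts, with per-allele weights taken from an independent GWA study. A minimal, hypothetical sketch of that construction (the SNP identifiers, weights, and genotypes below are invented for illustration and are not from the paper):

```python
# Per-allele effect-size weights from a discovery GWA study (hypothetical)
gwa_weights = {"rs0001": 0.02, "rs0002": -0.01, "rs0003": 0.005}

# One individual's allele counts (0, 1 or 2 copies of the effect allele)
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

# Polygenic score = sum over SNPs of (effect size x allele count)
polygenic_score = sum(gwa_weights[snp] * genotype[snp] for snp in gwa_weights)
# 0.02*2 + (-0.01)*1 + 0.005*0 = 0.03
```

Real analyses compute such scores over many thousands of SNPs (typically with tools such as PLINK) and then regress the phenotype, here a wellbeing measure, on the score.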
Persistent tobacco use and excessive alcohol consumption are major public health concerns worldwide. Both alcohol and nicotine dependence (AD, ND) are genetically influenced complex disorders that exhibit a high degree of comorbidity. To identify gene variants contributing to one or both of these addictions, we first conducted a pooling-based genomewide association study (GWAS) in an Australian population, using Illumina Infinium 1M arrays. Allele frequency differences were compared between pooled DNA from case and control groups for: (1) AD, 1224 cases and 1162 controls; (2) ND, 1273 cases and 1113 controls; and (3) comorbid AD and ND, 599 cases and 488 controls. Secondly, we carried out a GWAS in independent samples from the Netherlands for AD and for ND. Thirdly, we performed a meta-analysis of the 10,000 most significant AD- and ND-related SNPs from the Australian and Dutch samples. In the Australian GWAS, one SNP achieved genomewide significance (p < 5 x 10-8) for ND (rs964170 in ARHGAP10 on chromosome 4, p = 4.43 x 10-8) and three others for comorbid AD/ND (rs7530302 near MARK1 on chromosome 1 (p = 1.90 x 10-9), rs1784300 near DDX6 on chromosome 11 (p = 2.60 x 10-9) and rs12882384 in KIAA1409 on chromosome 14 (p = 4.86 x 10-8)). None of the SNPs achieved genomewide significance in the Australian/Dutch meta-analysis, but a gene network diagram based on the top results revealed overrepresentation of genes coding for ion channels and cell adhesion molecules. Further studies will be required before the detailed causes of comorbidity between AD and ND are understood.
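The genome-wide significance threshold p < 5 x 10-8 used in the abstract above is, by convention, a Bonferroni correction of α = 0.05 for roughly one million independent common-variant tests; the one-million figure is an assumption of that convention, not something stated in the abstract:

```python
# Conventional derivation of the GWAS significance threshold
# (assumes ~1,000,000 independent common-variant tests)
alpha = 0.05
independent_tests = 1_000_000
threshold = alpha / independent_tests  # 5e-8
```

Any SNP with a p-value below this threshold, such as rs964170 at p = 4.43 x 10-8, is declared genome-wide significant.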
To investigate familial influences on the full range of variability in attention and activity across adolescence, we collected maternal ratings of 339 twin pairs at ages 12, 14, and 16, and estimated the transmitted and new familial influences on attention and activity as measured by the Strengths and Weaknesses of Attention-Deficit/Hyperactivity Disorder Symptoms and Normal Behavior Scale. Familial influences were substantial for both traits across adolescence: genetic influences accounted for 54%–73% (attention) and 31%–73% (activity) of the total variance, and shared environmental influences accounted for 0%–22% of the attention variance and 13%–57% of the activity variance. The longitudinal stability of individual differences in attention and activity was largely accounted for by familial influences transmitted from previous ages. Innovations over adolescence were also partially attributable to familial influences. Studying the full range of variability in attention and activity may facilitate our understanding of attention-deficit/hyperactivity disorder's etiology and intervention.