Astronomical objects that change rapidly give us insight into extreme environments, allowing us to identify new phenomena, test fundamental physics, and probe the Universe on all scales. Transient and variable radio sources range from the cosmological, such as gamma-ray bursts, to much more local events, such as massive flares from stars in our Galactic neighbourhood. The capability to observe the sky repeatedly, over many frequencies and timescales, has allowed us to explore and understand dynamic phenomena in a way that has not been previously possible. In the past decade, there have been great strides forward as we prepared for the revolution in time domain radio astronomy that is being enabled by the SKA Observatory telescopes, the SKAO pathfinders and precursors, and other ‘next generation’ radio telescopes. Hence it is timely to review the current status of the field, and summarise the developments that have happened to get to our current point. This review focuses on image domain (or ‘slow’) transients, on timescales of seconds to years. We discuss the physical mechanisms that cause radio variability, and the classes of radio transients that result. We then outline what an ideal image domain radio transients survey would look like, and summarise the history of the field, from targeted observations to surveys with existing radio telescopes. We discuss methods and approaches for transient discovery and classification, and identify some of the challenges in scaling up current methods for future telescopes. Finally, we present our current understanding of the dynamic radio sky, in terms of source populations and transient rates, and look at what we can expect from surveys on future radio telescopes.
This paper reports the discovery and follow-up of four candidate redback spider pulsars: GPM J1723$-33$, GPM J1734$-28$, GPM J1752$-30$, and GPM J1815$-14$, discovered with the Murchison Widefield Array (MWA) in an imaging survey of the Galactic Plane. These sources are considered redback candidates based on their eclipsing variability, steep negative spectral indices, and potential Fermi $\gamma$-ray associations, with GPM J1723$-33$ and GPM J1815$-14$ lying within a Fermi 95\% error ellipse. Follow-up pulsation searches with MeerKAT confirmed pulsations from GPM J1723$-33$, while the non-detections of the other three are likely due to scattering by material ablated from their companion stars. We identify possible orbital periods by applying folding algorithms to the light curves and determine that all sources have short orbital periods ($<$24 h), consistent with redback spider systems. Following up the sources at multiple radio frequencies revealed frequency-dependent eclipses, with longer eclipses observed at lower frequencies. We place broad constraints on the eclipse medium, ruling out induced Compton scattering and cyclotron absorption. Three sources are spatially consistent with optical sources in Dark Energy Camera Plane Survey imaging, which may contain the optical counterparts. Each field is affected by strong dust extinction, and follow-up with large telescopes is needed to identify the true counterparts. Identifying potential radio counterparts to four previously unassociated Fermi sources brings us closer to understanding the origin of the unexplained $\gamma$-ray excess in the Galactic Centre.
Quality improvement programmes (QIPs) are designed to enhance patient outcomes by systematically introducing evidence-based clinical practices. The CONQUEST QIP focuses on improving the identification and management of patients with COPD in primary care. The process of developing CONQUEST, recruiting, preparing systems for participation, and implementing the QIP across three integrated healthcare systems (IHSs) is examined to identify and share lessons learned.
Approach and development:
This review is organized into three stages: 1) development, 2) preparing IHSs for implementation, and 3) implementation. In each stage, key steps are described with the lessons learned and how they can inform others interested in developing QIPs designed to improve the care of patients with chronic conditions in primary care.
Stage 1 was establishing and working with steering committees to develop the QIP Quality Standards, define the target patient population, assess current management practices, and create a global operational protocol. Additionally, potential IHSs were assessed for feasibility of QIP integration into primary care practices. Factors assessed included a review of technological infrastructure, QI experience, and capacity for effective implementation.
Stage 2 was preparation for implementation. Key was enlisting clinical champions to advocate for the QIP, secure participation in primary care, and establish effective communication channels. Preparation for implementation required obtaining IHS approvals, ensuring Health Insurance Portability and Accountability Act compliance, and devising operational strategies for patient outreach and clinical decision support delivery.
Stage 3 was developing three IHS implementation models. With insight into the local context from local clinicians, implementation models were adapted to work with the resources and capacity of the IHSs while ensuring the delivery of essential elements of the programme.
Conclusion:
Developing and launching a QIP across primary care practices requires extensive groundwork, preparation, and committed local champions to assist in building an adaptable environment that encourages open communication and is receptive to feedback.
To assess country-level progress toward these educational goals it is important to monitor trends in educational outcomes over time. The purpose of this article is to demonstrate how optimally predictive growth models can be constructed to monitor the pace of progress at which countries are moving toward (or away from) the education sustainable development goals as specified by the United Nations. A number of growth curve models can be specified to estimate the pace of progress; however, choosing one model and using it for predictive purposes assumes that the chosen model is the one that generated the data, and this choice runs the risk of “over-confident inferences and decisions that are more risky than one thinks they are” (Hoeting et al., 1999). To mitigate this problem, we adapt and apply Bayesian stacking to form mixtures of predictive distributions from an ensemble of individual models specified to predict country-level pace of progress. We demonstrate Bayesian stacking using country-level data from the Program for International Student Assessment. Our results show that Bayesian stacking yields better predictive accuracy than any single model as measured by the Kullback–Leibler divergence. Issues of Bayesian model identification and estimation for growth models are also discussed.
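The stacking step described in this abstract can be sketched as a convex optimisation over model weights on the simplex, chosen to maximise the average log predictive density of the mixture. The matrix of pointwise log predictive densities below is hypothetical, standing in for leave-one-out predictive densities from an ensemble of fitted growth models; this is an illustration of the general technique, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

def stacking_weights(lpd):
    """Bayesian stacking of predictive distributions.
    lpd: (n_points, n_models) array of pointwise log predictive
    densities, e.g. from leave-one-out cross-validation per model.
    Returns simplex weights maximising the mixture's log score."""
    n, k = lpd.shape
    # Subtract the per-point max before exponentiating for stability;
    # the dropped constant does not change the argmax over weights.
    dens = np.exp(lpd - lpd.max(axis=1, keepdims=True))

    def neg_log_score(w):
        return -np.sum(np.log(dens @ w + 1e-300))

    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    res = minimize(neg_log_score, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k, constraints=cons,
                   method="SLSQP")
    return res.x

# Toy example: the second model predicts every point best, so the
# stacking weights should concentrate on it.
rng = np.random.default_rng(0)
lpd = np.column_stack([rng.normal(-2.0, 0.1, 200),
                       rng.normal(-1.0, 0.1, 200),
                       rng.normal(-3.0, 0.1, 200)])
w = stacking_weights(lpd)
```

Unlike Bayesian model averaging, the weights here are chosen for predictive performance of the mixture rather than posterior model probability, which is why stacking can outperform any single model even when none of them generated the data.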
The purpose of this paper is to demonstrate and evaluate the use of Bayesian dynamic borrowing (Viele et al. in Pharm Stat 13:41–54, 2014) as a means of systematically utilizing historical information with specific applications to large-scale educational assessments. Dynamic borrowing via Bayesian hierarchical models is a special case of a general framework of historical borrowing where the degree of borrowing depends on the heterogeneity among historical data and current data. A joint prior distribution over the historical and current data sets is specified with the degree of heterogeneity across the data sets controlled by the variance of the joint distribution. We apply Bayesian dynamic borrowing to both single-level and multilevel models and compare this approach to other historical borrowing methods such as complete pooling, Bayesian synthesis, and power priors. Two case studies using data from the Program for International Student Assessment reveal the utility of Bayesian dynamic borrowing in terms of predictive accuracy. This is followed by two simulation studies that reveal the utility of Bayesian dynamic borrowing over simple pooling and power priors in cases where the historical data are heterogeneous compared to the current data, based on bias, mean squared error, and predictive accuracy. In cases of homogeneous historical data, Bayesian dynamic borrowing performs similarly to data pooling, Bayesian synthesis, and power priors. In contrast, for heterogeneous historical data, Bayesian dynamic borrowing performed at least as well, if not better, than other methods of borrowing with respect to mean squared error, percent bias, and leave-one-out cross-validation.
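The borrowing mechanism can be illustrated with a minimal sketch, assuming a normal–normal hierarchical model with known sampling standard errors (the models in the paper are richer). The between-data-set standard deviation `tau` plays the role of the joint-prior variance that controls the degree of borrowing; all numbers below are hypothetical.

```python
import numpy as np

def borrowed_estimate(y_hist, se_hist, y_curr, se_curr, tau):
    """Empirical-Bayes illustration of dynamic borrowing.
    Historical and current means share a common hyper-mean;
    tau is the between-data-set SD. Small tau -> strong borrowing
    (approaching complete pooling); large tau -> little borrowing
    (the estimate stays near the current-data mean)."""
    y = np.array([y_hist, y_curr], dtype=float)
    se = np.array([se_hist, se_curr], dtype=float)
    # Precision-weighted estimate of the hyper-mean (flat hyperprior)
    prec = 1.0 / (se**2 + tau**2)
    mu_hat = np.sum(prec * y) / np.sum(prec)
    # Conditional posterior mean of the current-data effect:
    # shrink the current mean toward the hyper-mean by a factor
    # determined by the ratio of sampling to between-set variance.
    w = (1.0 / se_curr**2) / (1.0 / se_curr**2 + 1.0 / tau**2)
    return w * y_curr + (1.0 - w) * mu_hat

# Heterogeneity assumed large: almost no borrowing from history.
est_hetero = borrowed_estimate(0.0, 0.05, 0.5, 0.05, tau=1.0)
# Heterogeneity assumed tiny: estimate moves toward the pooled mean.
est_pooled = borrowed_estimate(0.0, 0.05, 0.5, 0.05, tau=0.01)
```

In the full Bayesian treatment `tau` gets its own prior and is learned from the data, which is what makes the borrowing "dynamic": discrepant historical data inflate the estimated heterogeneity and automatically down-weight themselves.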
A general latent variable model is given which includes the specification of a missing data mechanism. This framework allows for an elucidating discussion of existing general multivariate theory bearing on maximum likelihood estimation with missing data. Here, missing completely at random is not a prerequisite for unbiased estimation in large samples, as when using the traditional listwise or pairwise present data approaches. The theory is connected with old and new results in the area of selection and factorial invariance. It is pointed out that in many applications, maximum likelihood estimation with missing data may be carried out by existing structural equation modeling software, such as LISREL and LISCOMP. Several sets of artificial data are generated within the general model framework. The proposed estimator is compared to the two traditional ones and found superior.
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust $\chi^2$ and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
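The maximum-entropy step can be illustrated on a single ordinal variable: subject to raw-moment constraints, the maximum-entropy pmf has exponential-family form, and the Lagrange multipliers can be solved for numerically. This is a generic sketch of the principle rather than the paper's multivariate procedure; the support and target moments below are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

def maxent_pmf(support, target_moments):
    """Maximum-entropy pmf on a finite support subject to raw-moment
    constraints E[X^r] = m_r, r = 1..R. The solution has the form
    p(x) proportional to exp(sum_r lam_r * x^r); we solve for the
    multipliers lam by zeroing the moment residuals."""
    s = np.asarray(support, dtype=float)
    R = len(target_moments)
    X = np.vstack([s ** (r + 1) for r in range(R)])  # moment features

    def pmf(lam):
        logits = lam @ X
        logits -= logits.max()          # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    def residual(lam):
        p = pmf(lam)
        return X @ p - np.asarray(target_moments, dtype=float)

    lam = fsolve(residual, np.zeros(R))
    return pmf(lam)

# Hypothetical 5-point ordinal scale with target mean 3.5 and
# target second raw moment 13.0 (i.e. variance 0.75).
p = maxent_pmf([1, 2, 3, 4, 5], [3.5, 13.0])
```

Adding third- and fourth-moment constraints in the same way is what lets skewness and kurtosis be fixed while the distribution remains as uninformative as possible otherwise.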
Issues of model selection have dominated the theoretical and applied statistical literature for decades. Model selection methods such as ridge regression, the lasso, and the elastic net have replaced ad hoc methods such as stepwise regression as a means of model selection. In the end, however, these methods lead to a single final model that is often taken to be the model considered ahead of time, thus ignoring the uncertainty inherent in the search for a final model. One method that has enjoyed a long history of theoretical developments and substantive applications, and that accounts directly for uncertainty in model selection, is Bayesian model averaging (BMA). BMA addresses the problem of model selection by not selecting a final model, but rather by averaging over a space of possible models that could have generated the data. The purpose of this paper is to provide a detailed and up-to-date review of BMA with a focus on its foundations in Bayesian decision theory and Bayesian predictive modeling. We consider the selection of parameter and model priors as well as methods for evaluating predictions based on BMA. We also consider important assumptions regarding BMA and extensions of model averaging methods to address these assumptions, particularly the method of Bayesian stacking. Simple empirical examples are provided and directions for future research relevant to psychometrics are discussed.
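As a minimal illustration of the averaging idea (not the paper's full decision-theoretic treatment), posterior model probabilities can be approximated from BIC values under equal prior model probabilities and used to weight each model's prediction. The BICs and point predictions below are hypothetical.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior model probabilities: w_m proportional to
    exp(-BIC_m / 2). Subtracting the minimum BIC stabilises exp()."""
    d = -0.5 * (np.asarray(bics, dtype=float) - np.min(bics))
    w = np.exp(d)
    return w / w.sum()

def bma_predict(model_preds, bics):
    """Model-averaged point prediction: weight each candidate model's
    prediction by its approximate posterior probability instead of
    committing to a single selected model."""
    return float(np.dot(bma_weights(bics), model_preds))

# Hypothetical BICs and predictions from three fitted models; the
# lowest-BIC model dominates the average but does not own it outright.
bics = [100.0, 104.0, 110.0]
preds = [2.0, 2.5, 3.0]
avg = bma_predict(preds, bics)
```

The averaged prediction sits between the best model's prediction and the others, which is exactly how BMA propagates model uncertainty that a single selected model would silently discard.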
Considering that causal mechanisms unfold over time, it is important to investigate the mechanisms over time, taking into account the time-varying features of treatments and mediators. However, identification of the average causal mediation effect in the presence of time-varying treatments and mediators is often complicated by time-varying confounding. This article aims to provide a novel approach to uncovering causal mechanisms in time-varying treatments and mediators in the presence of time-varying confounding. We provide different strategies for identification and sensitivity analysis under homogeneous and heterogeneous effects. Homogeneous effects are those in which each individual experiences the same effect, and heterogeneous effects are those in which the effects vary over individuals. Most importantly, we provide an alternative definition of average causal mediation effects that evaluates a partial mediation effect; the effect that is mediated by paths other than through an intermediate confounding variable. We argue that this alternative definition allows us to better assess at least a part of the mediated effect and provides meaningful and unique interpretations. A case study using ECLS-K data that evaluates kindergarten retention policy is offered to illustrate our proposed approach.
A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
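The second step of such an approach (here, stratification) can be sketched as follows, assuming the propensity scores have already been estimated by some first-step model, Bayesian or otherwise. The simulated data, the true effect of 2.0, and the five-strata choice are all illustrative.

```python
import numpy as np

def stratified_effect(ps, treat, y, n_strata=5):
    """Propensity-score stratification: split units into quantile
    strata of the estimated propensity score, take the treated-vs-
    control mean difference within each stratum, and average the
    differences weighted by stratum size."""
    edges = np.quantile(ps, np.linspace(0.0, 1.0, n_strata + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    effects, weights = [], []
    for k in range(n_strata):
        in_s = (ps > edges[k]) & (ps <= edges[k + 1])
        t = in_s & (treat == 1)
        c = in_s & (treat == 0)
        if t.any() and c.any():
            effects.append(y[t].mean() - y[c].mean())
            weights.append(in_s.sum())
    return float(np.average(effects, weights=weights))

# Simulated data: true treatment effect 2.0, confounded by x.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
p_true = 1.0 / (1.0 + np.exp(-x))      # true propensity
treat = rng.binomial(1, p_true)
y = 2.0 * treat + x + rng.normal(size=5000)
# Assume the first-step model recovered the propensity scores.
est = stratified_effect(p_true, treat, y)
```

The naive treated-vs-control difference is badly biased by the confounder, while the stratified estimate lands much closer to 2.0; in the two-step framing, the posterior uncertainty from the first-step propensity model propagates into this second step.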
Associations between childhood trauma, neurodevelopment, alcohol use disorder (AUD), and posttraumatic stress disorder (PTSD) are understudied during adolescence.
Methods
Using 1652 participants (51.75% female, baseline mean age = 14.3) from the Collaborative Study of the Genetics of Alcoholism, we employed latent growth curve models to (1) examine associations of childhood physical, sexual, and non-assaultive trauma (CPAT, CSAT, and CNAT) with repeated measures of alpha band EEG coherence (EEGc), and (2) assess whether EEGc trajectories were associated with AUD and PTSD symptoms. Sex-specific models accommodated sex differences in trauma exposure, AUD prevalence, and neural development.
Results
In females, CSAT was associated with higher mean levels of EEGc in left frontocentral (LFC, β = 0.13, p = 0.01) and interhemispheric prefrontal (PFI, β = 0.16, p < 0.01) regions, but diminished growth in LFC (β = −0.07, p = 0.02) and PFI (β = −0.07, p = 0.02). In males, CPAT was associated with lower mean levels (β = −0.17, p = 0.01) and increased growth (β = 0.11, p = 0.01) of LFC EEGc. The slope of LFC EEGc was inversely associated with AUD symptoms in females (β = −1.81, p = 0.01). Intercepts of right frontocentral and PFI EEGc were associated with AUD symptoms in males, but in opposite directions. Significant associations between EEGc and PTSD symptoms were also observed in trauma-exposed individuals.
Conclusions
Childhood assaultive trauma is associated with changes in frontal alpha EEGc and subsequent AUD and PTSD symptoms, though patterns differ by sex and trauma type. EEGc findings may inform emerging treatments for PTSD and AUD.
We present the Sydney Radio Star Catalogue, a new catalogue of stars detected at megahertz to gigahertz radio frequencies. It consists of 839 unique stars with 3 405 radio detections, more than doubling the previously known number of radio stars. We have included stars from large area searches for radio stars found using circular polarisation searches, cross-matching, variability searches, and proper motion searches, as well as presenting hundreds of newly detected stars from our search of Australian SKA Pathfinder observations. The focus of this first version of the catalogue is on objects detected in surveys using SKA precursor and pathfinder instruments; however, we will expand this scope in future versions. The 839 objects in the Sydney Radio Star Catalogue are distributed across the whole sky and range from ultracool dwarfs to Wolf-Rayet stars. We demonstrate that the radio luminosities of cool dwarfs are lower than the radio luminosities of more evolved sub-giant and giant stars. We use X-ray detections of 530 radio stars by the eROSITA soft X-ray instrument onboard the Spectrum Roentgen Gamma spacecraft to show that almost all of the radio stars in the catalogue are over-luminous in the radio, indicating that the majority of stars at these radio frequencies are coherent radio emitters. The Sydney Radio Star Catalogue can be found in VizieR or at https://radiostars.org.
We argue that proxy failure contributes to poor measurement practices in psychological science and that a tradeoff exists between the legibility and fidelity of proxies whereby increasing legibility can result in decreased fidelity.
Functional near-infrared spectroscopy (fNIRS) is a non-invasive functional neuroimaging method that takes advantage of the optical properties of hemoglobin to provide an indirect measure of brain activation via task-related relative changes in oxygenated hemoglobin (HbO). Its advantage over fMRI is that fNIRS is portable and can be used while walking and talking. In this study, we used fNIRS to measure brain activity in prefrontal and motor regions of interest (ROIs) during single- and dual-task walking, with the goal of identifying neural correlates.
Participants and Methods:
Nineteen healthy young adults [mean age=25.4 (SD=4.6) years; 14 female] engaged in five tasks: standing single-task cognition (serial-3 subtraction); single-task walking at a self-selected comfortable speed on a 24.5m oval-shaped course (overground walking) and on a treadmill; and dual-task cognition+walking on the same overground course and treadmill (8 trials/condition: 20 seconds standing rest, 30 seconds task). Performance on the cognitive task was quantified as the number of correct subtractions, number of incorrect subtractions, number of self-corrected errors, and percent accuracy over the 8 trials. Walking speed (m/sec) was recorded for all walking conditions. fNIRS data were collected on a system consisting of 16 sources, 15 detectors, and 8 short-separation detectors in the following ROIs: right and left lateral frontal (RLF, LLF), right and left medial frontal (RMF, LMF), right and left medial superior frontal (RMSF, LMSF), and right and left motor (RM, LM). Lateral and medial refer to the ROIs’ relative positions on the lateral prefrontal cortex. fNIRS data were analyzed in Homer3 using spline motion correction and the iterative weighted least squares method in the general linear model. Correlations between the cognitive/speed variables and ROI HbO data were computed using a Bonferroni adjustment for multiple comparisons.
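The correlation analysis described above can be sketched as follows; the variable names and data are hypothetical, and each correlation is flagged as significant only if its p-value clears the Bonferroni-adjusted threshold (alpha divided by the number of tests).

```python
import numpy as np
from scipy import stats

def bonferroni_correlations(behav, roi_hbo, alpha=0.05):
    """Pearson correlations between each behavioural variable and
    each ROI's HbO change. A result is flagged significant only when
    p < alpha / m, where m is the total number of tests performed."""
    m = len(behav) * len(roi_hbo)
    results = []
    for bname, b in behav.items():
        for rname, h in roi_hbo.items():
            r, p = stats.pearsonr(b, h)
            results.append((bname, rname, r, p, p < alpha / m))
    return results

# Hypothetical data for n = 18 participants: walking speed strongly
# tracks one ROI's HbO change and is unrelated to the other.
rng = np.random.default_rng(0)
speed = rng.normal(1.2, 0.1, 18)
behav = {"speed": speed}
roi = {"RMSF": 0.8 * speed + rng.normal(0.0, 0.02, 18),
       "LLF": rng.normal(0.0, 1.0, 18)}
res = bonferroni_correlations(behav, roi)
```

With small samples like this, the Bonferroni adjustment is conservative, so only strong correlations (such as the r(18)=.51–.75 values reported here) survive it.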
Results:
Subjects with missing cognitive data were excluded from analyses, resulting in sample sizes of 18 for the single-task cognition, dual-task overground walking, and dual-task treadmill walking conditions. During dual-task overground walking, there was a significant positive correlation between walking speed and relative change in HbO in RMSF [r(18)=.51, p<.05] and RM [r(18)=.53, p<.05]. There was a significant negative correlation between total number of correct subtractions and relative change in HbO in LMSF [r(18)=-.75, p<.05] and LM [r(18)=-.52, p<.05] during dual-task overground walking. No other significant correlations were identified.
Conclusions:
These results indicate that there is lateralization of the cognitive and motor components of overground dual-task walking. The right hemisphere appears to be more active the faster people walk during the dual-task. By contrast, the left hemisphere appears to be less active when people are working faster on the cognitive task (i.e., serial-3 subtraction). The latter results suggest that automaticity of the cognitive task (i.e., more total correct subtractions) is related to decreased brain activity in the left hemisphere. Future research will investigate whether there is a change in cognitive automaticity over trials and if there are changes in lateralization patterns in neurodegenerative disorders that are known to differentially affect the hemispheres (e.g., Parkinson’s disease).
The anesthesia workstation, commonly referred to as the “anesthesia machine,” is a complex and very specialized piece of equipment that is relatively unique in medical practice. It is, in essence, a device to control the delivery of medical gases to patients, including oxygen, air, nitrous oxide, and volatile anesthetics, along with a specialized ventilator adapted to operating room conditions. The safe use of the anesthesia workstation requires proper training, preuse checkout, and continuous monitoring of its function. The medical literature is replete with examples of patient harm from inappropriate use of the anesthesia workstation and from mechanical or electrical failure of its components. Additionally, volatile anesthetics, while valuable in medical practice, have a very low therapeutic index and manifest severe, and even fatal, side effects when administered improperly. Finally, many patients under general anesthesia are paralyzed for surgery and ventilated through an endotracheal tube. Their safety is completely dependent on the anesthesia professional’s use of the anesthesia workstation to deliver breathing gases, remove carbon dioxide from exhaled gas, and administer volatile anesthetics precisely.
We present a systematic search for radio counterparts of novae using the Australian Square Kilometre Array Pathfinder (ASKAP). Our search used the Rapid ASKAP Continuum Survey, which covered the entire sky south of declination $+41^{\circ}$ ($\sim$$34000$ square degrees) at a central frequency of 887.5 MHz, the Variables and Slow Transients Pilot Survey, which covered $\sim$$5000$ square degrees per epoch (887.5 MHz), and other ASKAP pilot surveys, which covered $\sim$200–2000 square degrees with 2–12 h integration times. We crossmatched radio sources found in these surveys over a two-year period, from 2019 April to 2021 August, with 440 previously identified optical novae, and found radio counterparts for four novae: V5668 Sgr, V1369 Cen, YZ Ret, and RR Tel. Follow-up observations with the Australia Telescope Compact Array confirm the ejecta thinning across all observed bands with spectral analysis indicative of synchrotron emission in V1369 Cen and YZ Ret. Our light-curve fit with the Hubble Flow model yields a value of $1.65\pm 0.17 \times 10^{-4} \rm \:M_\odot$ for the mass ejected in V1369 Cen. We also derive a peak surface brightness temperature of $250\pm80$ K for YZ Ret. Using Hubble Flow model simulated radio lightcurves for novae, we demonstrate that with a 5$\sigma$ sensitivity limit of 1.5 mJy in 15-min survey observations, we can detect radio emission up to a distance of 4 kpc if the ejecta mass is $\sim$$10^{-3}\rm \:M_\odot$, and up to 1 kpc if the ejecta mass is in the range $10^{-5}$–$10^{-3}\rm \:M_\odot$. Our study highlights ASKAP’s ability to contribute to future radio observations for novae within a distance of 1 kpc hosted on white dwarfs with masses $0.4$–$1.25\:\rm M_\odot$, and within a distance of 4 kpc hosted on white dwarfs with masses $0.4$–$1.0\:\rm M_\odot$.
We assessed patterns of enteric infections caused by 14 pathogens, in a longitudinal cohort study of sequelae in British Columbia (BC) Canada, 2005–2014. Our population cohort of 5.8 million individuals was followed for an average of 7.5 years/person; during this time, 40 523 individuals experienced 42 308 incident laboratory-confirmed, provincially reported enteric infections (96.4 incident infections per 100 000 person-years). Most individuals (38 882/40 523; 96%) had only one, but 4% had multiple concurrent infections or more than one infection across the study. Among individuals with more than one infection, the pathogens and combinations occurring most frequently per individual matched the pathogens occurring most frequently in the BC population. An additional 298 557 new fee-for-service physician visits and hospitalisations for enteric infections, which did not coincide with a reported enteric infection, also occurred, and some may represent unreported enteric infections. Our findings demonstrate that sequelae risk analyses should explore the possible impacts of multiple infections, and that estimating risk for individuals who may have had an unreported enteric infection is warranted.
The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified.