The First Large Absorption Survey in H i (FLASH) is a large-area radio survey for neutral hydrogen in and around galaxies in the intermediate redshift range $0.4\lt z\lt1.0$, using the 21-cm H i absorption line as a probe of cold neutral gas. The survey uses the ASKAP radio telescope and will cover 24,000 deg$^2$ of sky over the next five years. FLASH breaks new ground in two ways – it is the first large H i absorption survey to be carried out without any optical preselection of targets, and we use an automated Bayesian line-finding tool to search through large datasets and assign a statistical significance to potential line detections. Two Pilot Surveys, covering around 3000 deg$^2$ of sky, were carried out in 2019-22 to test and verify the strategy for the full FLASH survey. The processed data products from these Pilot Surveys (spectral-line cubes, continuum images, and catalogues) are public and available online. In this paper, we describe the FLASH spectral-line and continuum data products and discuss the quality of the H i spectra and the completeness of our automated line search. Finally, we present a set of 30 new H i absorption lines that were robustly detected in the Pilot Surveys, almost doubling the number of known H i absorption systems at $0.4\lt z\lt1$. The detected lines span a wide range in H i optical depth, including three lines with a peak optical depth $\tau\gt1$, and appear to be a mixture of intervening and associated systems. Interestingly, around two-thirds of the lines found in this untargeted sample are detected against sources with a peaked-spectrum radio continuum, which are only a minor (5–20%) fraction of the overall radio-source population. The detection rate for H i absorption lines in the Pilot Surveys (0.3 to 0.5 lines per 40 deg$^2$ ASKAP field) is a factor of two below the expected value. 
One possible reason for this is the presence of a range of spectral-line artefacts in the Pilot Survey data that have now been mitigated and are not expected to recur in the full FLASH survey. A future paper in this series will discuss the host galaxies of the H i absorption systems identified here.
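As a quick sanity check on the detection numbers quoted above (an illustrative back-of-envelope sketch only; the Pilot Surveys covered "around" 3000 deg$^2$, so the figures are approximate):

```python
# Rough consistency check of the FLASH Pilot Survey numbers quoted in the
# abstract above. This is an illustrative estimate, not part of the survey analysis.

pilot_area_deg2 = 3000        # approximate Pilot Survey sky coverage
field_area_deg2 = 40          # area of one ASKAP field
rate_per_field = (0.3, 0.5)   # detected H i lines per field, as quoted

n_fields = pilot_area_deg2 / field_area_deg2          # about 75 fields
detections = tuple(r * n_fields for r in rate_per_field)

# Roughly 22-38 lines expected from the quoted per-field rates, consistent
# with the 30 robust detections reported.
print(n_fields, detections)
```

This also illustrates the "factor of two" statement: at the originally expected rate of 0.6 to 1 line per field, the same area would have yielded roughly 45 to 75 detections.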
The Australian SKA Pathfinder (ASKAP) offers powerful new capabilities for studying the polarised and magnetised Universe at radio wavelengths. In this paper, we introduce the Polarisation Sky Survey of the Universe’s Magnetism (POSSUM), a groundbreaking survey with three primary objectives: (1) to create a comprehensive Faraday rotation measure (RM) grid of up to one million compact extragalactic sources across the southern $\sim50$% of the sky (20,630 deg$^2$); (2) to map the intrinsic polarisation and RM properties of a wide range of discrete extragalactic and Galactic objects over the same area; and (3) to contribute interferometric data with excellent surface brightness sensitivity, which can be combined with single-dish data to study the diffuse Galactic interstellar medium. Observations for the full POSSUM survey commenced in May 2023 and are expected to conclude by mid-2028. POSSUM will achieve an RM grid density of around 30–50 RMs per square degree with a median measurement uncertainty of $\sim$1 rad m$^{-2}$. The survey operates primarily over a frequency range of 800–1088 MHz, with an angular resolution of 20” and a typical RMS sensitivity in Stokes Q or U of 18 $\mu$Jy beam$^{-1}$. Additionally, the survey will be supplemented by similar observations covering 1296–1440 MHz over 38% of the sky. POSSUM will enable the discovery and detailed investigation of magnetised phenomena in a wide range of cosmic environments, including the intergalactic medium and cosmic web, galaxy clusters and groups, active galactic nuclei and radio galaxies, the Magellanic System and other nearby galaxies, galaxy halos and the circumgalactic medium, and the magnetic structure of the Milky Way across a very wide range of scales, as well as the interplay between these components. 
This paper reviews the current science case developed by the POSSUM Collaboration and provides an overview of POSSUM’s observations, data processing, outputs, and its complementarity with other radio and multi-wavelength surveys, including future work with the SKA.
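The headline RM-grid numbers above can be cross-checked with a trivial calculation (an illustrative sketch; the density range is the quoted 30–50 RMs per square degree):

```python
# Back-of-envelope check of the POSSUM RM-grid size quoted in the abstract above.
survey_area_deg2 = 20630            # southern ~50% of the sky covered by POSSUM
rm_density_range = (30, 50)         # quoted RMs per square degree

expected_rms = tuple(d * survey_area_deg2 for d in rm_density_range)

# (618900, 1031500): consistent with the quoted "up to one million" RMs.
print(expected_rms)
```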
Paediatric ventricular assist device patients, including those with single ventricle anatomy, are increasingly managed outside of the ICU. We used retrospective chart review of our single-centre experience to quantify adverse event rates and ICU readmissions for 22 complex paediatric patients on ventricular assist device support (15 with biventricular anatomy, 7 with single ventricle anatomy) after floor transfer. The median age was 1.65 years. The majority utilised the Berlin EXCOR (17, 77.3%). There were 9 ICU readmissions with a median length of stay of 2 days. Adverse events were noted in 9 patients (41%), with infection being most common (1.8 events per patient-year). There were no deaths. Single ventricle patients had a higher proportion of ICU readmissions and adverse events. ICU readmission rates were low, and adverse event rates were comparable to published rates, suggesting ventricular assist device patients can be safely managed on the floor.
Auditory verbal hallucinations (AVHs) in schizophrenia have been suggested to arise from failure of corollary discharge mechanisms to correctly predict and suppress self-initiated inner speech. However, it is unclear whether such dysfunction is related to motor preparation of inner speech, during which sensorimotor predictions are formed. The contingent negative variation (CNV) is a slow negative event-related potential that occurs prior to executing an action. A recent meta-analysis has revealed a large effect for CNV blunting in schizophrenia. Given that inner speech, similar to overt speech, has been shown to be preceded by a CNV, the present study tested the notion that AVHs are associated with inner speech-specific motor preparation deficits.
Objectives
The present study aimed to provide a useful framework for directly testing the long-held idea that AVHs may be related to inner speech-specific CNV blunting in patients with schizophrenia. This may hold promise for a reliable biomarker of AVHs.
Methods
Hallucinating (n=52) and non-hallucinating (n=45) patients with schizophrenia, along with matched healthy controls (n=42), participated in a novel electroencephalographic (EEG) paradigm. In the Active condition, they were asked to imagine a single phoneme at a cue moment while, precisely at the same time, being presented with an auditory probe. In the Passive condition, they were asked to passively listen to the auditory probes. The amplitude of the CNV preceding the production of inner speech was examined.
Results
Healthy controls showed a larger CNV amplitude (p = .002, d = .50) in the Active compared to the Passive condition, replicating previous findings of a CNV preceding inner speech. However, neither patient group showed a difference between the two conditions (p > .05). Importantly, a repeated-measures ANOVA revealed a significant interaction effect (p = .007, ηp² = .05). Follow-up contrasts showed that healthy controls exhibited a larger CNV amplitude in the Active condition than both the hallucinating (p = .013, d = .52) and non-hallucinating patients (p < .001, d = .88). No difference was found between the two patient groups (p = .320, d = .20).
Conclusions
The results indicated that motor preparation of inner speech in schizophrenia was disrupted. While the production of inner speech resulted in a larger CNV than passive listening in healthy controls, indicative of the involvement of motor planning, patients exhibited markedly blunted motor preparatory activity to inner speech. This may reflect dysfunction in the formation of corollary discharges. Interestingly, the deficits did not differ between hallucinating and non-hallucinating patients. Future work is needed to elucidate the specificity of the association between these inner speech-related motor preparation deficits and AVHs. Overall, this study provides evidence in support of atypical inner speech monitoring in schizophrenia.
Commentaries on the target article offer diverse perspectives on integrative experiment design. Our responses engage three themes: (1) Disputes of our characterization of the problem, (2) skepticism toward our proposed solution, and (3) endorsement of the solution, with accompanying discussions of its implementation in existing work and its potential for other domains. Collectively, the commentaries enhance our confidence in the promise and viability of integrative experiment design, while highlighting important considerations about how it is used.
The Australian SKA Pathfinder (ASKAP) radio telescope has carried out a survey of the entire Southern Sky at 887.5 MHz. The wide area, high angular resolution, and broad bandwidth provided by the low-band Rapid ASKAP Continuum Survey (RACS-low) allow the production of a next-generation rotation measure (RM) grid across the entire Southern Sky. Here we introduce this project as Spectral and Polarisation in Cutouts of Extragalactic sources from RACS (SPICE-RACS). In our first data release, we image 30 RACS-low fields in Stokes I, Q, U at 25$^{\prime\prime}$ angular resolution, across 744–1032 MHz with 1 MHz spectral resolution. Using a bespoke, highly parallelised software pipeline we are able to rapidly process wide-area spectro-polarimetric ASKAP observations. Notably, we use ‘postage stamp’ cutouts to assess the polarisation properties of 105912 radio components detected in total intensity. We find that our Stokes Q and U images have an rms noise of $\sim$80 $\mu$Jy PSF$^{-1}$, and our correction for instrumental polarisation leakage allows us to characterise components with $\gtrsim$1% polarisation fraction over most of the field of view. We produce a broadband polarised radio component catalogue that contains 5818 RM measurements over an area of $\sim$1300 deg$^{2}$ with an average error in RM of $1.6^{+1.1}_{-1.0}$ rad m$^{-2}$, and an average linear polarisation fraction of $3.4^{+3.0}_{-1.6}$%. We determine this subset of components using the conditions that the polarised signal-to-noise ratio is $>$8, the polarisation fraction is above our estimated polarised leakage, and the Stokes I spectrum has a reliable model. Our catalogue provides an areal density of $4\pm2$ RMs deg$^{-2}$, an increase of $\sim$4 times over the previous state-of-the-art (Taylor, Stil, & Sunstrum 2009, ApJ, 702, 1230). Having used just 3% of the RACS-low sky area, we have thus produced the third-largest RM catalogue to date.
This catalogue has broad applications for studying astrophysical magnetic fields; notably revealing remarkable structure in the Galactic RM sky. We will explore this Galactic structure in a follow-up paper. We will also apply the techniques described here to produce an all-Southern-sky RM catalogue from RACS observations. Finally, we make our catalogue, spectra, images, and processing pipeline publicly available.
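The quoted areal density follows directly from the catalogue size and survey area (a simple illustrative check of the numbers in the abstract above):

```python
# Back-of-envelope check of the SPICE-RACS areal density quoted above.
n_rms = 5818          # RM measurements in the first data release
area_deg2 = 1300      # approximate survey area in square degrees

density = n_rms / area_deg2

# About 4.5 RMs per square degree, within the quoted 4 +/- 2 RMs per deg^2
# (the quoted uncertainty reflects field-to-field variation, not this ratio).
print(round(density, 1))
```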
A porous material that has been contaminated with a hazardous chemical agent is typically decontaminated by applying a cleanser solution to the surface and allowing the cleanser to react into the porous material, neutralising the agent. The agent and cleanser are often immiscible fluids and so, if the porous material is initially saturated with agent, a reaction front develops with the decontamination reaction occurring at this interface between the fluids. We investigate the effect of different initial agent configurations within the pore space on the decontamination process. Specifically, we compare the decontamination of a material initially saturated by the agent with the situation when, initially, the agent only coats the walls of the pores (referred to as the ‘agent-on-walls’ case). In previous work (Luckins et al., European Journal of Applied Mathematics, 31(5):782–805, 2020), we derived homogenised models for both of these decontamination scenarios, and in this paper we explore the solutions of these two models. We find that, for an identical initial volume of agent, the decontamination time is generally much faster for the agent-on-walls case compared with the initially saturated case, since the surface area on which the reaction can occur is greater. However, for sufficiently deep spills of contaminant, or sufficiently slow reaction rates, decontamination in the agent-on-walls scenario can be slower. We also show that, in the limit of a dilute cleanser with a deep initial agent spill, the agent-on-walls model exhibits behaviour akin to a Stefan problem of the same form as that arising in the initially saturated model. The decontamination time is shown to decrease with both the applied cleanser concentration and the rate of the chemical reaction. However, increasing the cleanser concentration is also shown to result in lower decontamination efficiency, with an increase in the amount of cleanser chemical that is wasted.
Evaporation within porous media is both a multiscale and interface-driven process, since the phase change at the evaporating interfaces within the pores generates a vapour flow and depends on the transport of vapour through the porous medium. While homogenised models of flow and chemical transport in porous media allow multiscale processes to be modelled efficiently, it is not clear how the multiscale effects impact the interface conditions required for these homogenised models. In this paper, we derive a homogenised model, including effective interface conditions, for the motion of an evaporation front through a porous medium, using a combined homogenisation and boundary layer analysis. This analysis extends previous work for a purely diffusive problem to include both gas flow and the advective–diffusive transport of material. We investigate the effect that different microscale models describing the chemistry of the evaporation have on the homogenised interface conditions. In particular, we identify a new effective parameter, $\mathcal{L}$, the average microscale interface length, which modifies the effective evaporation rate in the homogenised model. Like the effective diffusivity and permeability of a porous medium, $\mathcal{L}$ may be found by solving a periodic cell problem on the microscale. We also show that the different microscale models of the interface chemistry result in fundamentally different fine-scale behaviour at, and near, the interface.
The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment's specific conditions. According to this view, which Alan Newell once characterized as “playing twenty questions with nature,” theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. Researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more-reliable, more-cumulative empirical and theoretical knowledge than the current paradigm – and with far greater efficiency.
Optimizing research on the developmental origins of health and disease (DOHaD) involves implementing initiatives maximizing the use of the available cohort study data; achieving sufficient statistical power to support subgroup analysis; and using participant data presenting adequate follow-up and exposure heterogeneity. It also involves being able to undertake comparison, cross-validation, or replication across data sets. To answer these requirements, cohort study data need to be findable, accessible, interoperable, and reusable (FAIR), and more particularly, it often needs to be harmonized. Harmonization is required to achieve or improve comparability of the putatively equivalent measures collected by different studies on different individuals. Although the characteristics of the research initiatives generating and using harmonized data vary extensively, all are confronted by similar issues. Having to collate, understand, process, host, and co-analyze data from individual cohort studies is particularly challenging. The scientific success and timely management of projects can be facilitated by an ensemble of factors. The current document provides an overview of the ‘life course’ of research projects requiring harmonization of existing data and highlights key elements to be considered from the inception to the end of the project.
The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each observed for a 12-min integration time, with between 5 and 13 repeats at cadences ranging from 1 day to 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified.
In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36$\times$12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown.
The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz.
The Neuropsychiatric Inventory (NPI) is predicated on the assumption that psychiatric symptoms are manifestations of disease. Biopsychosocial theories suggest behavioural changes viewed as psychiatric may also arise as a result of external behavioural triggers. Knowing the causes of psychiatric symptoms is important since the treatment and management of symptoms relies on this understanding.
Aims
This study sought to understand the causes of psychiatric symptoms recorded in care home settings by investigating qualitatively described symptoms in Neuropsychiatric Inventory-Nursing Home (NPI-NH) interviews.
Method
The current study examined the NPI-NH interviews of 725 participants across 50 care homes. The qualitatively described symptoms from each of the 12 subscales of the NPI were extracted: 347 interviews included at least one qualitatively described symptom (n = 651 descriptions). A biopsychosocial algorithm, developed through a process of independent researcher coding (n = 3), was applied to the symptom descriptions. This determined whether each description had predominantly psychiatric features, or features that were cognitive or attributable to other causes (i.e. issues with orientation and memory; expressions of need; poor care and communication; or understandable reactions).
Results
Our findings suggest that the majority (over 80%) of descriptions referred to symptoms with features that could be attributable to cognitive changes and external triggers (such as poor care and communication).
Conclusions
The findings suggest that, in its current form, the NPI-NH may over-attribute the incidence of psychiatric symptoms in care homes by overlooking triggers for behavioural changes. Measures of psychiatric symptoms should determine the causes of behavioural changes in order to guide treatments more effectively.
This work investigated the photophysical pathways for light absorption, charge generation, and charge separation in donor–acceptor nanoparticle blends of poly(3-hexylthiophene) and indene-C60-bisadduct. Optical modeling combined with steady-state and time-resolved optoelectronic characterization revealed that the nanoparticle blends experience a photocurrent limited to 60% of a bulk solution mixture. This discrepancy resulted from imperfect free charge generation inside the nanoparticles. High-resolution transmission electron microscopy and chemically resolved X-ray mapping showed that enhanced miscibility of materials did improve the donor–acceptor blending at the center of the nanoparticles; however, a residual shell of almost pure donor still restricted energy generation from these nanoparticles.
The decontamination of hazardous chemical agents from porous media is an important and critical part of the clean-up operation following a chemical weapon attack. Decontamination is often achieved through the application of a cleanser, which reacts on contact with an agent to neutralise it. While it is relatively straightforward to write down a model that describes the interplay of the agent and cleanser on the scale of the pores in the porous medium, it is computationally expensive to solve such a model over realistic spill sizes.
In this paper, we consider the homogenisation of a pore-scale model for the interplay between agent and cleanser, with the aim of generating simplified models that can be solved more easily on the spill scale but accurately capture the microscale structure and chemical activity. We consider two situations: one in which the agent completely fills local porespaces and one in which it does not. In the case when the agent does not completely fill the porespace, we use established homogenisation techniques to systematically derive a reaction–diffusion model for the macroscale concentration of cleanser. However, in the case where the agent completely fills the porespace, the homogenisation procedure is more in-depth and involves a two-timescale approach coupled with a spatial boundary layer. The resulting homogenised model closely resembles the microscale model with the effect of the porous material being incorporated into the parameters. The two models cater for two different spill scenarios and provide the foundation for further study of reactive decontamination.
We derive a mathematical model for the drawing of a two-dimensional thin sheet of viscous fluid in the direction of gravity. If the gravitational field is sufficiently strong, then a portion of the sheet experiences a compressive stress and is thus unstable to transverse buckling. We analyse the dependence of the instability and the subsequent evolution on the process parameters, and the mutual coupling between the weakly nonlinear buckling and the stress profile in the sheet. Over long time scales, the sheet centreline ultimately adopts a universal profile, with the bulk of the sheet under tension and a single large bulge caused by a small compressive region near the bottom, and we derive a canonical inner problem that describes this behaviour. The large-time analysis involves a logarithmic asymptotic expansion, and we devise a hybrid asymptotic–numerical scheme that effectively sums the logarithmic series.
We consider the spreading of a thin viscous droplet, injected through a finite region of a substrate, under the influence of surface tension. We neglect gravity and assume that there is a precursor layer covering the whole substrate and that the rate of injection is constant. We analyse the evolution of the film profile for early and late time, and obtain power-law dependencies for the maximum film thickness at the centre of the injection region and the position of an apparent contact line, which compare well with numerical solutions of the full problem. We relax the conditions on the injection rate to consider more general time-dependent and spatially varying forms. In the case of power-law injection of the form $t^{k}$, we observe a switch in the behaviour of the evolution of the film thickness for late time from increasing to decreasing at a critical value of $k$. We show that point-source injection can be treated as a limiting case of a finite-injection slot and the solutions exhibit identical behaviours for late time. Finally, we formulate the problem with thickness-dependent injection rate, discuss the behaviour of the maximum film thickness and the position of the apparent contact line and give power-law dependencies for these.
Recent investigations of a limestone solution cave on the Queen Charlotte Islands (Haida Gwaii) have yielded skeletal remains of fauna including late Pleistocene and early Holocene bears, one specimen of which dates to ca. 14,400 $^{14}$C yr B.P. This new fossil evidence sheds light on early postglacial environmental conditions in this archipelago, with implications for the timing of early human migration into the Americas.