Microwaves (MWs) have emerged as a promising sensing technology to complement optical methods for monitoring floating plastic litter. This study uses machine learning (ML) to identify optimal MW frequencies for detecting floating macroplastics (>5 cm) across the S-, C-, and X-bands. Data were obtained from dedicated wideband backscattering radio measurements conducted in a controlled indoor scenario that mimics deep-sea conditions. The paper presents new strategies to analyze the frequency-domain signals directly with ML algorithms, instead of first generating an image from those signals and then analyzing the image. We propose two ML workflows: one unsupervised, to characterize the difference in feature importance across the measured MW spectrum, and the other supervised, based on a multilayer perceptron, to study the detection accuracy on unseen data. For the tested conditions, the backscatter response of the plastic litter is optimal at X-band frequencies, achieving accuracies up to 90% and 80% for lower and higher water wave heights, respectively. Multiclass classification is also investigated to distinguish between different types of plastic targets. The ML results are interpreted in terms of the physical phenomena identified through numerical analysis and quantified through an energy-based metric.
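For readers who want a concrete starting point, here is a minimal sketch of the final stage of such a supervised workflow: a multilayer perceptron classifying frequency-domain backscatter samples. The feature layout, array shapes and random placeholder data are assumptions made for illustration, not the authors' actual pipeline.

```python
# Minimal sketch: binary detection of floating plastic from wideband
# backscatter spectra with a multilayer perceptron. Feature layout,
# band coverage and the placeholder data are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed data: each row is a backscatter magnitude spectrum sampled
# over S-, C- and X-band bins; labels mark water-only (0) vs plastic (1).
n_samples, n_bins = 600, 256
X = rng.normal(size=(n_samples, n_bins))      # placeholder spectra
y = rng.integers(0, 2, size=n_samples)        # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)        # per-bin normalisation
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", clf.score(scaler.transform(X_test), y_test))
```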
The so-called credibility revolution dominates empirical economics, with its promise of causal identification to improve scientific knowledge and, ultimately, policy. By examining the case of rural electrification in the Global South, this opinion paper exposes the limits of the evidence-based policy paradigm. The electrification literature boasts many studies using the credibility revolution toolkit, yet several systematic reviews demonstrate that the evidence is divided between very positive and muted effects. This bifurcation presents a challenge at the science-policy interface, where policymakers, lacking the resources to sift through the evidence, may be drawn to the results that serve their (agency's) interests. Interpretation is further complicated by unresolved methodological debates surrounding external validity, as well as by selective reporting and publication decisions. These features, we argue, are not particular to the electrification literature but inherent to the credibility revolution toolkit.
Employment and relationships are crucial for social integration. However, individuals with major psychiatric disorders often face challenges in both domains.
Aims
We investigated changes in employment and relationship status among patients across the affective and psychotic spectrum in comparison with healthy controls, examining whether diagnostic group or functional level influences these transitions.
Method
The sample from the longitudinal multicentric PsyCourse Study comprised 1260 patients with affective and psychotic spectrum disorders and 441 controls (mean age ± s.d., 39.91 ± 12.65 years; 48.9% female). Multistate Markov models were used to analyse transitions in employment and relationship status, focusing on transition intensities. The analyses comprised multiple multistate models, adjusted in different combinations for age, gender, employment or partnership status, diagnostic group and Global Assessment of Functioning (GAF) score, to quantify the impact of these covariates on the hazard of changing employment or relationship status.
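As a rough illustration of the multistate machinery, the sketch below estimates crude transition intensities for a two-state employment model from toy spell data; the records and state coding are invented for this example, and the actual analysis used covariate-adjusted multistate Markov models.

```python
# Minimal sketch: crude transition-intensity estimates for a two-state
# (employed/unemployed) continuous-time Markov model from panel spells.
# The toy records below are invented, not PsyCourse data.
import numpy as np

# Toy panel: (state_at_entry, time_spent_in_state, next_state)
spells = [
    (0, 2.0, 1), (0, 3.5, 1), (0, 4.0, 0),   # 0 = employed
    (1, 1.0, 0), (1, 2.5, 1), (1, 0.5, 0),   # 1 = unemployed
]

n_states = 2
transitions = np.zeros((n_states, n_states))
time_at_risk = np.zeros(n_states)

for state, dwell, nxt in spells:
    time_at_risk[state] += dwell
    if nxt != state:                 # spells ending in the same state
        transitions[state, nxt] += 1 # are treated as censored here

# Crude maximum-likelihood intensity: events / person-time at risk
q = transitions / time_at_risk[:, None]
print("estimated intensity matrix (off-diagonal entries):\n", q)
```

A hazard ratio between two groups then corresponds to the ratio of their estimated intensities for the same transition, here before any covariate adjustment.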
Results
The clinical group had a higher hazard of losing their partner (hazard ratio 1.46, P < 0.001) and their job (hazard ratio 4.18, P < 0.001) than the control group (adjusted for age and gender). Compared with controls, both clinical groups had a higher hazard of losing their partner (affective group, hazard ratio 2.69, P = 0.003; psychotic group, hazard ratio 3.06, P = 0.001) and their job (affective group, hazard ratio 3.43, P < 0.001; psychotic group, hazard ratio 4.11, P < 0.001). After additionally adjusting for GAF, the hazard ratios for losing partner and job decreased in both clinical groups compared with controls.
Conclusion
Patients face an increased hazard of job loss and relationship dissolution compared with healthy controls, and this is partially conditioned by diagnosis and functional level. These findings underscore the need for destigmatisation and for support in helping individuals manage their functional limitations.
The influence of Cu(II) on the hydrothermal and thermal transformations of a synthetic hectorite was investigated by a combined approach using mainly X-ray diffraction, thermal analyses, and electron paramagnetic resonance spectroscopy. The presence of Cu(II) during hydrothermal treatment increased the crystallite size. Cu(II) was both structure-bound and associated with the inner surfaces of the particles. Upon heating, structural destabilization of the hectorite began at ∼400°C as indicated by the formation of free radicals. Between 600 and 700°C, the hectorite converted to enstatite, and in the presence of Cu(II), to enstatite and richterite. The formation of richterite as an additional conversion product is explained by the creation of structural weakness due to structure-bound Cu(II) in F-containing hectorite. Our results suggest that traces of Cu(II), typical of natural environments, may influence the conversion products in high-temperature geochemical systems.
Edited by
Alik Ismail-Zadeh, Karlsruhe Institute of Technology, Germany; Fabio Castelli, Università degli Studi, Florence; Dylan Jones, University of Toronto; Sabrina Sanchez, Max Planck Institute for Solar System Research, Germany
Abstract: Variational data assimilation through the adjoint method is a powerful emerging technique in geodynamics. It allows one to retrodict past states of the Earth’s mantle as optimal flow histories relative to the current state, so that poorly known mantle flow parameters such as rheology and composition can be tested explicitly against observations gleaned from the geologic record. By yielding testable, time-dependent Earth models, the technique links observations from seismology, geology, mineral physics, and paleomagnetism in a dynamically consistent way, greatly enhancing our understanding of the solid Earth system. It motivates three research fronts. The first is computational, because the iterative nature of the technique, combined with the need for Earth models with high spatial and temporal resolution, makes the task a grand-challenge problem at the level of exascale computing. The second is seismological, because the seismic estimate of the mantle state provides key input information for retrodictions but entails substantial uncertainties. This calls for efforts to construct 3D reference and collaborative seismic models, and to account for seismic data uncertainties. The third is geological, because retrodictions necessarily use simplified Earth models and noisy input data. Synthetic tests show that retrodictions always reduce the final-state misfit, regardless of model and data error, so the quality of any retrodiction must be assessed by geological constraints on past mantle flow. Horizontal surface velocities are an input rather than an output of the retrodiction problem, but viable retrodiction tests can be linked to estimates of vertical lithosphere motion induced by mantle convective stresses.
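To make the adjoint idea concrete, the following sketch retrodicts the initial state of a toy linear model from its final state, with the misfit gradient computed by a backward (adjoint) sweep. The 2×2 operator, step sizes and iteration counts are illustrative assumptions, far removed from an actual mantle-convection code.

```python
# Minimal sketch of adjoint-based retrodiction: recover the initial
# state of a toy linear "flow" model from its final state by gradient
# descent, with the gradient supplied by the adjoint (transpose) model.
import numpy as np

dt, n_steps = 0.01, 200
A = np.array([[0.0, 1.0], [-1.0, -0.1]])     # toy damped oscillator
M = np.eye(2) + dt * A                        # one forward Euler step

def forward(x0):
    x = x0.copy()
    for _ in range(n_steps):
        x = M @ x
    return x

def adjoint_gradient(x0, d):
    """Gradient of J = 0.5*||x(T) - d||^2 w.r.t. the initial state."""
    lam = forward(x0) - d                     # terminal misfit
    for _ in range(n_steps):                  # backward adjoint sweep
        lam = M.T @ lam
    return lam

x_true = np.array([1.0, 0.0])
d = forward(x_true)                           # "present-day" observation

x0 = np.zeros(2)                              # first guess of past state
for _ in range(500):                          # iterative retrodiction
    x0 -= 0.5 * adjoint_gradient(x0, d)

print("retrodicted initial state:", x0)       # approaches x_true
```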
The Centre for Advanced Laser Applications in Garching, Germany, is home to the ATLAS-3000 multi-petawatt laser, dedicated to research on laser particle acceleration and its applications. A control system based on Tango Controls is implemented for both the laser and four experimental areas. The device server approach features high modularity, which, in addition to the hardware control, enables a quick extension of the system and allows for automated data acquisition of the laser parameters and experimental data for each laser shot. In this paper we present an overview of our implementation of the control system, as well as our advances in terms of experimental operation, online supervision and data processing. We also give an outlook on advanced experimental supervision and online data evaluation – where the data can be processed in a pipeline – which is being developed on the basis of this infrastructure.
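As an illustration of the device-server approach, here is a minimal sketch of a Tango device written with the PyTango high-level API. The class name and the pulse_energy attribute are invented for this example and do not correspond to the actual CALA servers.

```python
# Minimal sketch of a Tango device server (PyTango high-level API).
# The device class and its single attribute are hypothetical.
from tango.server import Device, attribute, run

class ShotDiagnostics(Device):
    """Exposes one per-shot laser parameter as a Tango attribute."""

    def init_device(self):
        super().init_device()
        self._pulse_energy = 0.0

    @attribute(dtype=float, unit="J")
    def pulse_energy(self):
        # A real server would read the hardware here.
        return self._pulse_energy

if __name__ == "__main__":
    run((ShotDiagnostics,))
```

One device server per instrument keeps the system modular: new hardware or a new per-shot data stream is added by registering another small class like this, rather than by changing a monolithic control program.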
Understanding the transfer of polychlorinated dibenzo-p-dioxins (PCDDs) and dibenzofurans (PCDFs) as well as polychlorinated biphenyls (PCBs) from oral exposure into cow’s milk is not purely an experimental endeavour, as it has produced a large corpus of theoretical work. This work consists of a variety of predictive toxicokinetic models in the realms of health and environmental risk assessment and risk management. Their purpose is to provide mathematical predictive tools to organise and integrate knowledge on the absorption, distribution, metabolism and excretion processes. Toxicokinetic models are based on more than 50 years of transfer studies summarised in part I of this review series. Here in part II, several of these models are described and systematically classified with a focus on their applicability to risk analysis as well as their limitations. This part of the review highlights the opportunities and challenges along the way towards accurate, congener-specific predictive models applicable to changing animal breeds and husbandry conditions.
Polychlorinated dibenzo-para-dioxins (PCDDs) and dibenzofurans (PCDFs) (collectively and colloquially referred to as ‘dioxins’) as well as polychlorinated biphenyls (PCBs) are persistent and ubiquitous environmental contaminants that may unintentionally enter and accumulate along the food chain. Owing to their chronic toxic effects in humans and bioaccumulative properties, their presence in feed and food requires particular attention. One important exposure pathway for consumers is consumption of milk and dairy products. Their transfer from feed to milk has been studied for the past 50 years to quantify the uptake and elimination kinetics. We extracted transfer parameters (transfer rate, transfer factor, biotransfer factor and elimination half-lives) in a machine-readable format from seventy-six primary and twenty-nine secondary literature items. Kinetic data for some toxicologically relevant dioxin congeners, and the elimination half-lives of dioxin-like PCBs, are still not available. A well-defined selection of transfer parameters from the literature was statistically analysed and shown to display high variability. To understand this variability, we discuss the data with an emphasis on influencing factors, such as experimental conditions, cow performance parameters and metabolic state. While no universal interpretation could be derived, a tendency towards increased transfer into milk appears to be connected to increases in milk yield and milk fat yield, as well as to periods of body fat mobilisation, for example during the negative energy balance after calving. Over the past decades, milk yield has increased to over 40 kg/d during high lactation, so more research is needed on how this impacts feed-to-food transfer for PCDD/Fs and PCBs.
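For orientation, the sketch below shows the arithmetic conventionally behind the extracted transfer parameters; all numbers are made up, and exact definitions vary somewhat between studies.

```python
# Illustrative arithmetic for the transfer parameters named above.
# The values are invented and serve only to show the common definitions.
import math

daily_intake_ng = 100.0        # congener ingested with feed [ng/day]
milk_yield_kg   = 30.0         # milk produced [kg/day]
conc_milk_ng_kg = 1.5          # congener concentration in milk [ng/kg]
k_elim_per_day  = 0.02         # first-order elimination rate [1/day]

excreted_ng = conc_milk_ng_kg * milk_yield_kg           # ng/day via milk
transfer_rate = 100.0 * excreted_ng / daily_intake_ng   # TR [%]
biotransfer_factor = conc_milk_ng_kg / daily_intake_ng  # BTF [day/kg]
half_life = math.log(2) / k_elim_per_day                # t1/2 [days]

print(f"TR = {transfer_rate:.1f} %, BTF = {biotransfer_factor:.3f} d/kg, "
      f"t1/2 = {half_life:.0f} d")
```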
Ordering is a well-established concept in mathematics and also plays an important role in many areas of computer science, where quasi-orderings, most notably well-founded quasi-orderings and well-quasi-orderings, are of particular interest. This paper deals with quasi-orderings on first-order terms and introduces a new notion of unification based on a special quasi-order, known as homeomorphic tree embedding. Historically, the development of unification theory began with the central notion of a most general unifier based on the subsumption order. A unifier $\sigma$ is most general if it subsumes any other unifier $\tau$, that is, if there is a substitution $\lambda$ with $\tau=_{E}\sigma\lambda$, where $E$ is an equational theory and $=_{E}$ denotes equality under $E$. Since there is in general more than one most general unifier for unification problems under an equational theory $E$, called E-unification, we have the notion of a complete and minimal set of unifiers under $E$ for a unification problem $\Gamma$, denoted $\mu\mathcal{U}\Sigma_{E}(\Gamma)$. This set is still the basic notion in unification theory today. Unfortunately, the subsumption quasi-order is not well founded, which is the reason why for certain equational theories there are solvable E-unification problems for which the set $\mu\mathcal{U}\Sigma_{E}(\Gamma)$ does not exist; such theories are called nullary in the unification hierarchy. In order to overcome this problem, and also to substantially reduce the number of most general unifiers, we extended the well-known encompassment order on terms to an encompassment order on substitutions (modulo $E$). Unification under the encompassment order is called essential unification, and if $\mu\mathcal{U}\Sigma_{E}(\Gamma)$ exists, then the complete set of essential unifiers $e\mathcal{U}\Sigma_{E}(\Gamma)$ is a subset of $\mu\mathcal{U}\Sigma_{E}(\Gamma)$. An interesting effect is that many E-unification problems with an infinite set of most general unifiers (under the subsumption order) reduce to problems with only finitely many essential unifiers. Moreover, there are equational theories $E$ for which the complete set of most general unifiers does not exist but the minimal and complete set of essential unifiers does. Unfortunately, the encompassment order is not a well-founded quasi-ordering either; that is, there are still theories with a solvable unification problem for which a minimal and complete set of essential unifiers does not exist. This paper deals with a third approach, namely the extension of the well-known homeomorphic embedding of terms to a homeomorphic embedding of substitutions (modulo $E$). We examine the minimal and complete set of E-unifiers under the quasi-order of homeomorphic embedment modulo an equational theory $E$, called $\varphi\mathcal{U}\Sigma_{E}(\Gamma)$, and propose an appropriate definitional framework based on the standard notions of unification theory, extended by notions for the tree embedding theorem, or Kruskal’s theorem as it is called. The main results are that for regular theories the minimal and complete set $\varphi\mathcal{U}\Sigma_{E}(\Gamma)$ always exists. If we restrict the E-embedding order to pure E-embedding, a well-known technique in logic programming and term rewriting where the difference between variables is ignored, the set $\varphi_{\pi}\mathcal{U}\Sigma_{E}(\Gamma)$ always exists and is even finite for any theory $E$.
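To make the central quasi-order concrete, the following sketch implements the homeomorphic embedding test on first-order terms in its "pure" variant, where the difference between variables is ignored; the term encoding as nested tuples is an assumption made for illustration.

```python
# Minimal sketch of the homeomorphic embedding test on first-order
# terms. Terms are tuples ('f', arg1, ...); variables are plain
# strings. This is the "pure" variant: variable identity is ignored.
def embeds(s, t):
    """True iff s is homeomorphically embedded in t."""
    if isinstance(s, str) and isinstance(t, str):
        return True                        # pure embedding: x embeds in y
    if isinstance(t, str):
        return False                       # a compound never embeds in a variable
    # diving rule: s embeds in some immediate subterm of t
    if any(embeds(s, ti) for ti in t[1:]):
        return True
    # coupling rule: same root symbol and argumentwise embedding
    return (not isinstance(s, str) and s[0] == t[0]
            and len(s) == len(t)
            and all(embeds(si, ti) for si, ti in zip(s[1:], t[1:])))

# f(x) embeds in f(g(f(y))) via diving into g(...) and coupling at f
print(embeds(('f', 'x'), ('f', ('g', ('f', 'y')))))   # True
```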
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, with its impact on our way of life, is affecting our experiences and mental health. Notably, individuals with mental disorders have been reported to have a higher risk of contracting SARS-CoV-2. Personality traits could represent an important determinant of preventative health behaviour and, therefore, the risk of contracting the virus.
Aims
We examined overlapping genetic underpinnings between major psychiatric disorders, personality traits and susceptibility to SARS-CoV-2 infection.
Method
Linkage disequilibrium score regression was used to explore the genetic correlations of coronavirus disease 2019 (COVID-19) susceptibility with psychiatric disorders and personality traits, based on data from the largest available respective genome-wide association studies (GWAS). In two cohorts (the PsyCourse study, n = 1346, and the HeiDE study, n = 3266), polygenic risk scores were used to analyse whether a genetic association between psychiatric disorders or personality traits and COVID-19 susceptibility exists in individual-level data.
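As a pointer to how the individual-level side of such an analysis works, here is a minimal sketch of a polygenic risk score as a weighted risk-allele count. The genotypes and effect sizes are invented; real pipelines add quality control, clumping or thresholding, and ancestry adjustment.

```python
# Minimal sketch of a polygenic risk score (PRS): a weighted allele
# count using per-variant effect sizes from a discovery GWAS.
# All numbers below are invented for illustration.
import numpy as np

# genotypes: individuals x variants, coded as 0/1/2 risk-allele counts
genotypes = np.array([[0, 1, 2, 1],
                      [2, 0, 1, 0],
                      [1, 1, 0, 2]])
gwas_beta = np.array([0.12, -0.05, 0.30, 0.08])   # log-odds per allele

prs = genotypes @ gwas_beta                        # one score per person
prs_z = (prs - prs.mean()) / prs.std()             # standardised score,
print(prs_z)                                       # used as a covariate
```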
Results
We observed no significant genetic correlations of COVID-19 susceptibility with psychiatric disorders. For personality traits, there was a significant genetic correlation of COVID-19 susceptibility with extraversion (P = 1.47 × 10⁻⁵; genetic correlation 0.284). Yet, this was not reflected in individual-level data from the PsyCourse and HeiDE studies.
Conclusions
We identified no significant correlation between genetic risk factors for severe psychiatric disorders and genetic risk for COVID-19 susceptibility. Among the personality traits, extraversion showed evidence of a positive genetic association with COVID-19 susceptibility in one setting but not in the other. Overall, these findings highlight a complex contribution of genetic and non-genetic components to the interaction between COVID-19 susceptibility and personality traits or mental disorders.
This paper studies the role of monetary policy for the dynamics of US mortgage debt, which accounts for the largest part of household debt. A time-varying parameter vector autoregressive (VAR) model allows us to study the variation in the sensitivity of mortgage debt to monetary policy. We find that an identically sized policy shock became less effective over time. We use a dynamic stochastic general equilibrium model to show that a fall in the share of adjustable rate mortgages (ARMs) can replicate this finding. Calibrating the model to the drop in the ARM share since the 1980s yields a decline in the sensitivity of housing debt to monetary policy which is quantitatively similar to the VAR results. A sacrifice ratio for mortgage debt reveals that a policy tightening directed toward reducing household debt became more expensive in terms of a loss in employment. Counterfactuals show that this result cannot be attributed to changes in monetary policy itself.
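For intuition, the sketch below shows the constant-parameter building block of such an analysis: fitting a small VAR and tracing the response of mortgage debt to a policy-rate impulse. The series are synthetic and the specification is illustrative; the paper's model additionally lets the parameters drift over time.

```python
# Minimal sketch: fit a bivariate VAR and compute the impulse response
# of mortgage debt to a policy-rate shock. Series names, lag length and
# the synthetic data are illustrative assumptions only.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
data = pd.DataFrame({
    "policy_rate":   rng.normal(size=T).cumsum() * 0.01,
    "mortgage_debt": rng.normal(size=T).cumsum() * 0.01,
})

res = VAR(data).fit(maxlags=2)
irf = res.irf(12)                    # impulse responses over 12 periods

# irfs[t, i, j]: response of variable i at horizon t to a shock in j
print(irf.irfs[:, data.columns.get_loc("mortgage_debt"),
                  data.columns.get_loc("policy_rate")])
```

The time-varying-parameter version re-estimates, in effect, such response paths at each date, which is what reveals the declining sensitivity of mortgage debt to an identically sized shock.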
In a ruling on 25 July 2018, the Court of Justice of the European Union (CJEU) concluded that organisms obtained by means of techniques/methods of mutagenesis constitute GMOs in the sense of Directive 2001/18, and that organisms obtained by means of techniques/methods of directed mutagenesis are not excluded from the scope of the Directive. Following the ruling, there has been much debate about its possible wider implications. In October 2019, the Council of the European Union requested the European Commission to submit, in light of the CJEU ruling, a study regarding the status of novel genomic techniques under Union law. For the purpose of the study, the Commission initiated stakeholder consultations early in 2020. Those consultations focused on the technical status of novel genomic techniques.
This article aims to contribute to the discussion on the legal status of organisms developed through novel genomic techniques, by offering some historical background to the negotiations on the European Union (EU) GMO Directives as well as a technical context to some of the terms in the Directive, and by analysing the ruling. The article advances that (i) the conclusion that organisms obtained by means of techniques/methods of mutagenesis constitute GMOs under the Directive means that the resulting organisms must comply with the GMO definition, ie the genetic material of the resulting organisms has been altered in a way that does not occur naturally by mating and/or natural recombination; (ii) the conclusion that organisms obtained by means of techniques/methods of directed mutagenesis were not intended to be excluded from the scope of the Directive is not inconsistent with the negotiation history of the Directive; (iii) whether an organism falls under the description of “obtained by means of techniques/methods of directed mutagenesis” depends on whether the genetic material of the resulting organisms has been altered in a way that does not occur naturally by mating and/or natural recombination. Finally, the article offers an analysis of the EU GMO definition, concluding that for an organism to be a GMO in the sense of the Directive, the technique used, as well as the genetic alterations of the resulting organism, must be considered.
The original description of the large and characteristic belemnite species Arctoteuthis bluethgeni Doyle was based on fragmentary material from a relatively uncertain stratigraphic interval in Kong Karls Land, Svalbard. Recent collections of a belemnite assemblage from the Lower Cretaceous Rurikfjellet Formation on Spitsbergen include numerous complete specimens, allowing a detailed description of the species. With the exception of a specimen reported from Arctic Canada, its distribution is restricted to Svalbard. Its stratigraphic range appears to be restricted to the upper Valanginian – lower Hauterivian, based on ages obtained from palynostratigraphy. A. bluethgeni is therefore considered to be a useful Lower Cretaceous guide fossil in the Boreal High Arctic.
Köhnlein (2016) proposes to represent the Franconian tone contrast as a difference in foot structure, whereby Accent 1 appears in lexically marked syllabic trochees and Accent 2 in default moraic trochees, as an alternative to analyses with an underlying privative tone for Accent 2. After sketching the two approaches, we argue against three arguments Köhnlein advances in favour of the metrical analysis. We then show that one of the disadvantages incurred by the metrically derived tonal representations is the introduction of a novel and otherwise unsupported concept of a single tone that incorporates two morphologically different but phonologically identical tones. We also evaluate Köhnlein's (2018) more recent proposal to use the syllabic trochee to account for subtractive plurals in tonal dialects. Finally, we compare the predictive powers of the metrical and tonal analyses of the Arzbach dialect.
This chapter addresses a special category of cases in which an asserted patent is, or has been declared to be, essential to the implementation of a collaboratively developed voluntary consensus standard, and the holder of that patent has agreed to license it to implementers of the standard on terms that are fair, reasonable, and nondiscriminatory (FRAND). This chapter explores how the existence of such a FRAND commitment may affect a patent holder’s entitlement to monetary damages and injunctive relief. In addition to issues of patent law, remedies law, and contracts law, we consider the effect of competition law on this issue.
The societal mega-trends of the past four decades, such as a globalizing economy and an aging society, have challenged the understanding of the state in OECD countries. The resulting “transformations of the state” are the subject of an interdisciplinary research agenda established at the Collaborative Research Center (CRC) 597 in Bremen, Germany. A total of twenty projects from political science, law, and economics explore changes of statehood which take place in two different dimensions: first, the internationalization and, second, the privatization of activities and functions which were traditionally performed by and ascribed to the democratic, constitutional and interventionist state. While the first research phase (2003-2006) aimed at establishing empirical descriptions of these internationalization and privatization processes, the current phase (2007-2010) is dedicated to explaining the observed changes in statehood. Within this general framework, the authors’ research project on “New Forms of Legal Certainty in Globalized Exchange Processes” deals with changes in the institutional organization of commerce.
This research aims to explore the submerged landscapes of the Pilbara of Western Australia, using predictive archaeological modelling, airborne LiDAR, marine acoustics, coring and diver survey. It also includes excavation and geophysical investigation of a submerged shell midden in Denmark to establish guidelines for the underwater discovery of such sites elsewhere.
Two- or three-dimensional avalanche-simulation models offer a wide range of applications; however, they demand a challenging model-verification process, accompanied by reliable determination of model-input parameters. We show that a verification process can be arranged with remote-monitoring data from an artificially triggered avalanche, leading to the calculation of the avalanche mass balance. Two numerical methods are applied to increase the quality of the parameter fit and to reduce the number of simulations. The quality of the parameter fit is verified by comparing measured and simulated run-out lengths. In addition, a cross-check is performed using velocities derived from Doppler radar measurements.
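The parameter-fitting step can be pictured with the minimal sketch below, which tunes a Voellmy-type friction pair so that a placeholder model reproduces the measured run-out length. The model function and all values are stand-ins, not the actual avalanche simulation.

```python
# Minimal sketch of run-out-based parameter fitting. The "simulation"
# is a toy stand-in for a 2D/3D avalanche model; values are invented.
import numpy as np
from scipy.optimize import minimize

measured_runout_m = 1520.0

def simulated_runout(params):
    """Placeholder for one avalanche simulation run."""
    mu, xi = params                     # Voellmy-type friction pair
    return 2000.0 * np.exp(-3.0 * mu) + 0.05 * xi   # toy response

def misfit(params):
    return (simulated_runout(params) - measured_runout_m) ** 2

fit = minimize(misfit, x0=[0.2, 1000.0], method="Nelder-Mead")
print("fitted (mu, xi):", fit.x)

# A cross-check would compare simulated front velocities at the fitted
# parameters against the Doppler-radar velocity estimates.
```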