Galaxy cluster X-ray cavities are inflated by relativistic jets that are ejected into the intracluster medium by active galactic nuclei (AGN). AGN jets prevent the establishment of the cooling flows predicted at cluster centres; although this heating process is not well understood, simulations have shown that the heating mechanism depends on the type of gas that fills the cavities. Thermal and non-thermal distributions of electrons will produce different cavity Sunyaev–Zel’dovich (SZ) effect signals, quantified by the ‘suppression factor’ f. This paper explores potential enhancements to prior constraints on the cavity gas type by simulating suppression factor observations with the Square Kilometre Array (SKA). Simulated observations of cluster cavities across a range of redshifts are used to predict the optimum way of measuring f in future observations. We find that the SKA can constrain the suppression factor in the cavities of cluster MS 0735.6+7421 (MS0735) in as little as 4 h, with a smallest observable value of $f \approx 0.42$. Additionally, while the SKA may distinguish between possible thermal and non-thermal suppression factor values within the cavities of MS0735 if it observes for more than 8 h, determining the gas type of other clusters will likely require observations at multiple frequencies. The effect of cavity line of sight (LOS) position is also studied, and degeneracies between LOS position and the measured value of f are found. Finally, we find that for small cavities (radius < 80 kpc) at high redshift ($z \approx 1.5$), the proposed high frequencies of the SKA (23.75–37.5 GHz) will be optimal, and that including MeerKAT antennas will improve all observations of this type.
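For reference, the suppression factor is commonly parameterised in cavity SZ analyses by scaling the Compton-y contribution along a line of sight through a cavity; this is an illustrative form of ours, not necessarily the exact model adopted in the paper:

$$y_{\mathrm{cav}} = (1 - f)\, y_{\mathrm{no\text{-}cav}}, \qquad y = \frac{\sigma_{\mathrm{T}}}{m_{\mathrm{e}} c^{2}} \int P_{\mathrm{e}}\, \mathrm{d}l,$$

so that $f = 0$ corresponds to cavity gas whose SZ signal is indistinguishable from the surrounding intracluster medium, and $f = 1$ to complete suppression of the SZ signal within the cavity.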
Sea ice floating in the Arctic Ocean is a constantly moving, growing, and melting layer. The seasonal cycle of sea-ice volume has an average amplitude of $10\,000\,\mathrm{km}^3$, or 9 trillion tonnes of sea ice. The role of dynamic redistribution of sea ice during winter growth can be observed by combining satellite remote sensing of ice thickness, concentration, and drift. Recent advances in the processing of CryoSat-2 radar altimetry data have allowed for the retrieval of summer sea-ice thickness, making possible a full year of purely remote-sensing-derived ice volume budget analysis.
Here, we present the closed volume budget of Arctic sea ice over the period October 2010–May 2022, revealing the key contributions to summer melt and to minimum sea-ice volume and extent. We show the importance of ice drift to the inter-annual variability in Arctic sea-ice volume and to the regional distribution of sea ice. Estimates of specific areas of sea-ice growth and melt provide key information on sea-ice over-production, the excess volume of ice growth compared to melt. The statistical uncertainty for each key region of the Arctic is presented, revealing how accurately Arctic sea-ice volume is currently known from observational sources.
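As an illustrative sketch (our decomposition, not necessarily the paper's exact formulation), a regional sea-ice volume budget closed from remote sensing can be written as

$$\frac{\mathrm{d}V}{\mathrm{d}t} = G - M + D,$$

where $V$ is the regional ice volume derived from thickness and concentration, $G$ and $M$ are the thermodynamic growth and melt terms, and $D$ is the dynamic volume flux convergence due to ice drift; over-production is then the excess of $G$ over $M$.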
In January 1649, James Butler, 1st Duke of Ormond, signed a treaty with the Catholic Confederacy, not knowing that the king on whose behalf he spoke was on trial in London. On January 30, 1649, Charles was executed, and a week later England became a republic, adopting a nonmonarchical form of government. A Council of State was created, and John Milton was appointed its “Secretary of Foreign Tongues.” The Council charged Milton to write observations on Ormond’s peace treaty and other recent documents from Ireland. The resulting Articles of Peace, the most geographically interesting reflection on Ireland to involve Milton’s work, offers a map of Ireland, a cultural and political geography overlaid on the ancient provinces of the island, to which Milton adds complicated interisland tensions, on the eve of Cromwell’s invasion of Ireland. Articles reflects the complexity of the situation in Ireland, ramified by English management: a Protestant Royalist signs an extraordinarily generous peace treaty with Irish Catholics; the Parliamentary representative in Dublin complains of English influence; the Ulster-Scots make the case for a Protestant Church in Ireland that is neither Anglican nor Episcopalian.
This paper will focus on the satellite threat to observational astronomy. As a member of the Satellite Constellation Working Group: Observatories Subgroup, I helped draft a recommendation for the Dark and Quiet Skies 2 Report, which I shall discuss. I shall also discuss SatHub activities, efforts to educate regular citizens, amateur astronomers, and professional astronomers about satellite constellations, and the outreach activities planned.
This paper presents an overview of the attitudes and awareness within the amateur astronomical community regarding modern satellite megaconstellation projects. A series of interviews and polls was conducted to assess the perspectives of this large community, aiming to uncover its concerns and issues. Additionally, the potential to involve respondents in community-driven projects focused on satellite data acquisition and processing was investigated. The objective is to enhance data quality for astronomical imagery protection and to improve existing tools for scientific research, including satellite tracking, trajectory prediction, and the detection of space debris and orbital objects.
This chapter presents an overview of methods used in clinical assessment, classification, and diagnosis. After outlining the range of assessment options available to clinicians, it describes the typical goals of assessment, including diagnosis, description, treatment planning, and prediction. It also introduces some of the most important variables that affect a clinician’s choices about how to conduct an assessment, including its purpose, the clinician’s theoretical views, the psychometric properties of available assessment instruments, and other contextual factors. The chapter discusses the strengths and weaknesses of human clinical judgment when compared to AI and other actuarial procedures, focusing especially on the errors that clinicians tend to make but strive to avoid. The chapter concludes with a discussion of how the results of clinical assessments are communicated to clients and third parties, and the factors and formats associated with assessment reports.
Over the years, the Serengeti has been a model ecosystem for answering basic ecological questions about the distribution and abundance of organisms, populations, and species, and about how different species interact with each other and with their environment. Tony Sinclair and many other researchers have addressed some of these questions, and continue to work on understanding important biotic and abiotic linkages that influence ecosystem functioning. In common with all types of scientific inquiry, ecologists use predictions to test hypotheses about ecological processes; this approach is highlighted by Sinclair’s research that explored why buffalo and wildebeest populations were rapidly expanding. Like other scientists, ecologists use observation, modeling, and experimentation to generate and test hypotheses. However, in contrast with much biological inquiry, ecologists ask questions that link numerous levels of the biological hierarchy, from molecular to global ecology.
Early modern printmakers trained observers to scan the heavens above as well as faces in their midst. Peter Apian printed the Cosmographicus Liber (1524) to teach lay astronomers their place in the cosmos, while also printing practical manuals that translated principles of spherical astronomy into useful data for weather watchers, farmers, and astrologers. Physiognomy, a genre related to cosmography, taught observers how to scrutinize profiles in order to sum up people's characters. Neither Albrecht Dürer nor Leonardo escaped the tenacious grasp of such widely circulating manuals called practica. Few have heard of these genres today, but the kinship of their pictorial programs suggests that printers shaped these texts for readers who privileged knowledge retrieval. Cultivated by images to become visual learners, these readers were then taught to hone their skills as observers. This book unpacks these and other visual strategies that aimed to develop both the literate eye of the reader and the sovereignty of images in the early modern world.
Radio interferometers can potentially detect the sky-averaged signal from the Cosmic Dawn (CD) and the Epoch of Reionisation (EoR) by studying the Moon as a thermal block to the foreground sky. The first step is to mitigate Earth-based radio frequency interference (RFI) reflections (Earthshine) from the Moon, which significantly contaminate the FM band ($\approx 88$–$110$ MHz), crucial to CD-EoR science. We analysed Murchison Widefield Array (MWA) phase I data from 72 to 180 MHz at 40 kHz resolution to understand the nature of Earthshine over three observing nights. We took two approaches to correct for the Earthshine component from the Moon. In the first method, we mitigated the Earthshine using the flux densities of the two components estimated from the data, while in the second method, we used simulated flux densities based on an FM catalogue. Using these methods, we were able to recover the expected Galactic foreground temperature of the patch of sky obscured by the Moon. We performed a joint analysis of the Galactic foregrounds and the Moon's intrinsic temperature $(T_{\mathrm{Moon}})$, assuming that the Moon has a constant thermal temperature across the three epochs. We found $T_{\mathrm{Moon}}$ to be $184.4\pm2.6\,\mathrm{K}$ and $173.8\pm2.5\,\mathrm{K}$ using the first and second methods, respectively, and the best-fit values of the Galactic spectral index $(\alpha)$ to be within the 5% uncertainty level when compared with global sky models. Compared with our previous work, these results improve constraints on the Galactic spectral index and the Moon's intrinsic temperature. We also simulated the Earthshine at the MWA between November and December 2023 to find observing times less affected by Earthshine. Such observing windows enable Earthshine avoidance and can be used to perform future global CD-EoR experiments using the Moon with the MWA.
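To illustrate the kind of joint fit described, here is a minimal sketch assuming a simple occultation model in which the Moon-minus-sky contrast is a constant Moon temperature minus a power-law Galactic foreground; the model form, reference frequency, and all numbers are our assumptions, not the paper's pipeline.

```python
# Minimal sketch of a joint fit for the Moon's brightness temperature and the
# Galactic spectral index, assuming a simple occultation model:
#   T_contrast(nu) = T_moon - T_gal0 * (nu / nu0)**alpha
# (illustrative only; the paper's actual likelihood and data handling differ).
import numpy as np
from scipy.optimize import curve_fit

NU0 = 150.0  # reference frequency in MHz (our choice)

def contrast_model(nu, t_moon, t_gal0, alpha):
    """Moon-minus-sky contrast: constant Moon temperature minus power-law foreground."""
    return t_moon - t_gal0 * (nu / NU0) ** alpha

# Mock data standing in for Earthshine-corrected MWA measurements (72-180 MHz).
rng = np.random.default_rng(0)
nu = np.linspace(72.0, 180.0, 50)                 # channel frequencies [MHz]
truth = (180.0, 300.0, -2.5)                      # T_moon [K], T_gal0 [K], alpha
sigma = np.full(nu.size, 5.0)                     # per-channel noise [K]
t_obs = contrast_model(nu, *truth) + rng.normal(0.0, 5.0, nu.size)

popt, pcov = curve_fit(contrast_model, nu, t_obs, p0=(200.0, 250.0, -2.7), sigma=sigma)
perr = np.sqrt(np.diag(pcov))
print(f"T_moon = {popt[0]:.1f} +/- {perr[0]:.1f} K, alpha = {popt[2]:.2f} +/- {perr[2]:.2f}")
```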
Many problems in astronomy and physics lend themselves to solutions from machine learning methods for the detection and classification of astronomical signals, and for model inference from those signals. The historic presentation of machine learning methods as ‘black boxes’ has generated pushback from some in the physics and astronomy communities regarding how useful they are for truly uncovering the physical laws that govern our world. Skepticism about the applicability of new computational methods in scientific inference is not new; we highlight connections between the machine learning context and previous computational paradigm shifts in astronomy. Moreover, several methodological advances challenge the assumption that machine learning ‘gives us answers that we can use but do not understand’ to standing physics questions. We summarize some machine learning data challenges used in astronomy and how challenges on different scales can be used to test different parts and use cases of our analysis methods.
The aim of this study was to observe the level of alcohol-based hand sanitizer use, mask use, and physical distancing across indoor community settings in Guelph, ON, Canada, and to identify potential barriers to practicing these behaviors.
Methods:
Shoppers were observed in June 2022 across 21 establishments. Discrete in-person observations were conducted and electronically recorded using smartphones. Multilevel logistic regression models were fitted to identify possible covariates for the 3 behavioral outcomes.
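As a rough sketch of the kind of multilevel model described, with a random intercept per establishment (the file and column names are hypothetical, not the study's codebook):

```python
# Sketch of a multilevel (random-intercept) logistic regression for one binary
# outcome, with shoppers nested within establishments. Names are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("observations.csv")  # hypothetical file: one row per shopper

model = BinomialBayesMixedGLM.from_formula(
    "wore_mask ~ shopped_alone + touch_free_entrance + covid_signage",
    {"establishment": "0 + C(establishment)"},  # random intercept per venue
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

A frequentist mixed-effects logistic fit (e.g. lme4::glmer in R) would be an equally natural choice; the variational Bayes fit above is simply what statsmodels provides for binomial mixed models.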
Results:
Of 946 observed shoppers, 69% shopped alone, 72% had at least 1 hand occupied, 26% touched their face, 29% physically distanced ≥ 2 m, 6% used hand sanitizer, and 29% wore masks. Sanitizer use was more commonly observed among people who wore masks and in establishments with coronavirus disease (COVID-19) signage posted at the entrance. Mask use was more commonly observed during days without precipitation and in establishments with some or all touch-free entrances. Shoppers more commonly physically distanced ≥ 2 m when they were shopping alone.
Conclusions:
These findings support evidence that environmental context influences COVID-19 preventive behaviors. Intervention efforts aimed at visible signage, tailored messaging, and redesigning spaces to facilitate preventive behaviors may be effective at increasing adherence during outbreaks.
This paper derives statistical models for predicting wintertime subseasonal temperature over the western US. The statistical models are trained on two separate datasets, namely observations and dynamical model simulations, and are based on least absolute shrinkage and selection operator (lasso). Surprisingly, statistical models trained on dynamical model simulations can predict observations better than observation-trained models. One reason for this is that simulations involve orders of magnitude more data than observational datasets.
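A minimal sketch of the train-on-simulations, evaluate-on-observations setup described above; the feature and target construction is schematic and all names and data here are placeholders, not the paper's datasets.

```python
# Sketch: fit a lasso model on (plentiful) dynamical-model simulations, then
# evaluate it on (scarce) observations. X rows stand in for predictor fields
# (e.g. lagged large-scale indices); y stands in for subseasonal western-US
# temperature. Synthetic data only, to illustrate the workflow.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(42)
n_sim, n_obs, n_feat = 20000, 400, 100   # simulations dwarf the observations
beta = np.zeros(n_feat)
beta[:5] = 1.0                           # sparse "true" signal

X_sim = rng.normal(size=(n_sim, n_feat))
y_sim = X_sim @ beta + rng.normal(scale=2.0, size=n_sim)
X_obs = rng.normal(size=(n_obs, n_feat))
y_obs = X_obs @ beta + rng.normal(scale=2.0, size=n_obs)

# Cross-validated choice of the lasso penalty on the large simulation set.
model = LassoCV(cv=5).fit(X_sim, y_sim)
print("skill on observations (R^2):", model.score(X_obs, y_obs))
```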
This opening chapter sets the stage for the book. Its first part describes the author’s first day at The Farm community and the personal journey that brought her to explore the aging experiences of hippies. The second part of the chapter provides information about the study that served as the basis for this book, and the third part briefly presents the book’s contents.
Upcoming large-scale surveys like LSST are expected to uncover approximately $10^5$ strong gravitational lenses within massive datasets. Traditional manual techniques are too time-consuming and impractical for such volumes of data. Consequently, machine learning methods have emerged as an alternative. In our prior work (Thuruthipilly et al. 2022), we introduced a self-attention-based machine learning model (transformers) for detecting strong gravitational lenses in simulated data from the Bologna Lens Challenge. These models offer advantages over simpler convolutional neural networks (CNNs) and competitive performance compared to state-of-the-art CNN models. We applied this model to the datasets from Bologna Lens Challenges 1 and 2 and to simulated Euclid data.
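For readers unfamiliar with the idea, a toy self-attention detector in PyTorch is sketched below; it illustrates the general pattern (convolutional tokens fed to a transformer encoder) and is not the architecture of Thuruthipilly et al.

```python
# Toy illustration of a self-attention (transformer) lens classifier: CNN
# features from an image cutout are treated as a token sequence for a
# transformer encoder. Shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class ToyLensDetector(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.conv = nn.Conv2d(1, d_model, kernel_size=4, stride=4)  # 64x64 -> 16x16 tokens
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)  # lens / non-lens logit

    def forward(self, x):                                 # x: (batch, 1, 64, 64)
        tokens = self.conv(x).flatten(2).transpose(1, 2)  # (batch, 256, d_model)
        encoded = self.encoder(tokens)                    # self-attention over tokens
        return self.head(encoded.mean(dim=1))             # pool tokens, score cutout

logits = ToyLensDetector()(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 1])
```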
In this chapter the data assimilation problem is introduced as a control theory problem for partial differential equations, with initial conditions, model error, and empirical model parameters as optional control variables. An alternative interpretation of data assimilation, as the processing of information in a dynamic-stochastic system, is also introduced. Both approaches are addressed in more detail throughout this book. The historical development of data assimilation is documented, starting from the early nineteenth-century works by Legendre, Gauss, and Laplace, through optimal interpolation and Kalman filtering, to modern data assimilation based on variational and ensemble methods, and finally to emerging methods such as particle filters. This history shows that data assimilation is not a new concept; it has been of scientific and practical interest for a long time. Part of the chapter introduces the common terminology and notation used in data assimilation, with special emphasis on the observation equation, observation errors, and observation operators. Finally, a basic linear estimation problem based on least squares is presented.
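For reference, the basic linear least-squares estimation problem the chapter builds toward can be written, in generic data assimilation notation (ours, not necessarily the book's), as the minimisation of

$$J(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \frac{1}{2}(\mathbf{y}-\mathbf{H}\mathbf{x})^{\mathrm{T}}\mathbf{R}^{-1}(\mathbf{y}-\mathbf{H}\mathbf{x}),$$

where $\mathbf{x}_b$ is the prior (background) state with error covariance $\mathbf{B}$, $\mathbf{y}$ the observations with error covariance $\mathbf{R}$, and $\mathbf{H}$ the linear observation operator; the minimiser is

$$\hat{\mathbf{x}} = \mathbf{x}_b + \mathbf{B}\mathbf{H}^{\mathrm{T}}\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathrm{T}}+\mathbf{R}\right)^{-1}\left(\mathbf{y}-\mathbf{H}\mathbf{x}_b\right).$$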
Chapter 10 describes Bayesian methods for parameter estimation and updating of structural reliability in the light of observations. The chapter begins with a description of the sources and types of uncertainties. Uncertainties are categorized as aleatory or epistemic; however, it is argued that this distinction is not fundamental and makes sense only within the universe of models used for a given project. The Bayesian updating formula is then developed as the product of a prior distribution and the likelihood function, yielding the posterior (updated) distribution of the unknown parameters. Selection of the prior and formulation of the likelihood are discussed in detail. Formulations are presented for parameters in probability distribution models, as well as in mathematical models of physical phenomena. Three formulations are presented for reliability analysis under parameter uncertainties: point estimate, predictive estimate, and confidence interval of the failure probability. The discussion then focuses on the updating of structural reliability in the light of observed events that are characterized by either inequality or equality expressions of one or more limit-state functions. Also presented is the updating of the distribution of random variables in the limit-state function(s) in the light of observed events, e.g., the failure or non-failure of a system.
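In generic notation (ours, not necessarily the chapter's), the updating formula described here takes the familiar form

$$f''(\boldsymbol{\theta}) = c\,L(\boldsymbol{\theta})\,f'(\boldsymbol{\theta}), \qquad c = \left[\int L(\boldsymbol{\theta})\,f'(\boldsymbol{\theta})\,\mathrm{d}\boldsymbol{\theta}\right]^{-1},$$

where $f'$ is the prior distribution of the unknown parameters, $L$ the likelihood of the observations, $f''$ the posterior (updated) distribution, and $c$ the normalising constant.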
Organizational ethnography has been crucial to the development of the field of Routine Dynamics since the beginning. It has altered the grain size of analysis and shifted the focus from the firm and its routines to the routine and the people, actions, and artefacts that bring it to life. The discovery-oriented nature of ethnographic research has challenged, and continues to challenge, the received wisdom about routines and their role in organizations. The majority of work in Routine Dynamics relies on ethnographic approaches and sensibilities. In this chapter, I review 43 studies and the various ways in which they draw on ethnography. Despite the wide variety of settings these studies have explored and the evidentiary approaches they draw on, I argue that Routine Dynamics research can draw on more novel and innovative forms of ethnographic research. This would allow scholars to address hitherto neglected aspects of routines, such as their emotional and aesthetic qualities; to engage with contemporary phenomena of societal concern, such as inequality, climate change, and epidemics; and to make Routine Dynamics research more practically relevant.
This chapter describes tools that have been used to measure the effectiveness of corrective feedback, ranging from classic instruments, such as interactive tasks, to innovative methods recently adopted from related fields, such as psychology and educational measurement. As part of describing these measurement tools, we also discuss how factors in their use, such as the instructions, the participants, and their roles, need to be considered when assessing the efficacy of feedback. We describe tools used in classrooms and laboratory settings, including introspective methods such as think-alouds, immediate recalls, stimulated recall, interviews, journals, blogs, and uptake sheets, as well as external measurements. We outline the use of tasks in both face-to-face and computer-mediated contexts. We conclude our chapter with a discussion of future directions in measuring the effectiveness of corrective feedback on linguistic development and of pedagogical implications.
The angular power spectrum is a natural tool for analysing the observed galaxy number count fluctuations. In a standard analysis, the angular galaxy distribution is sliced into concentric redshift bins and all correlations of its harmonic coefficients between bin pairs are considered, a procedure referred to as ‘tomography’. However, the unparalleled quality of data from upcoming spectroscopic galaxy surveys for cosmology will render this method computationally unfeasible as the number of bins grows. Here, we test against synthetic data a novel method, proposed in a previous study, to save computational time. In this method, the whole galaxy redshift distribution is subdivided into thick bins and the cross-bin correlations among them are neglected; each thick bin is, however, further subdivided into thinner bins, for which all the cross-bin correlations are retained. We create a simulated data set that we then analyse in a Bayesian framework. We confirm that the newly proposed method saves computational time and gives results that surpass those of the standard approach.
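To make the computational saving concrete, here is a toy count of the number of angular spectra each scheme requires (the bin numbers are hypothetical, not the paper's configuration):

```python
# Toy count of how many angular power spectra C_ell^{ij} must be modelled under
# full tomography versus the hybrid scheme (thin-bin cross-correlations kept
# only within each thick bin). Bin numbers are illustrative.
def n_spectra_full(n_bins: int) -> int:
    """All auto- and cross-spectra among n_bins redshift bins."""
    return n_bins * (n_bins + 1) // 2

def n_spectra_hybrid(n_thick: int, n_thin_per_thick: int) -> int:
    """Cross-spectra only inside each thick bin; thick-thick terms dropped."""
    return n_thick * n_spectra_full(n_thin_per_thick)

n_thick, n_thin = 10, 10                   # 100 thin bins in total
print(n_spectra_full(n_thick * n_thin))    # 5050 spectra, standard tomography
print(n_spectra_hybrid(n_thick, n_thin))   # 550 spectra, hybrid scheme
```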
Chapter 2 discusses the structure of gaseous protoplanetary disks. It begins by explaining how observations can be used to infer disk mass, disk structure, and stellar accretion rate. The vertical structure of a gas disk in hydrostatic equilibrium is derived, and the considerations that determine the surface density and temperature profile of a passive disk are described. The concept of the condensation sequence is outlined, along with the ionization and recombination processes that determine the ionization state. Physical processes that can produce large-scale structure in disks (zonal flows, vortices, and ice lines) are discussed.
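The standard textbook result for the vertical structure mentioned here, for a vertically isothermal disk in hydrostatic equilibrium, is

$$\rho(z) = \rho_0 \exp\!\left(-\frac{z^{2}}{2h^{2}}\right), \qquad h = \frac{c_s}{\Omega_{\mathrm{K}}},$$

where $\rho_0$ is the midplane density, $c_s$ the isothermal sound speed, and $\Omega_{\mathrm{K}}$ the Keplerian angular velocity.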