Ecosystem modeling, a pillar of the systems ecology paradigm (SEP), addresses questions such as: How much carbon and nitrogen are cycled within ecological sites, landscapes, or indeed the earth system? How are human activities modifying these flows? Models, when coupled with field and laboratory studies, represent the essence of the SEP in that they embody accumulated knowledge and generate hypotheses to test understanding of ecosystem processes and behavior. Initially, ecosystem models were used primarily to improve our understanding of how the biophysical aspects of ecosystems operate. Today, however, ecosystem models are widely used to predict how large-scale phenomena such as climate change and management practices affect ecosystem dynamics, and to assess the potential effects of these changes on economic activity and policy making. In sum, ecosystem models embedded in the SEP remain our best mechanism for integrating diverse types of knowledge about how the earth system functions and for making quantitative predictions that can be confronted with observations of reality. The modeling efforts discussed include the Century and DayCent ecosystem models, the grassland ecosystem model ELM, food web models, the Savanna model, agent-based and coupled-systems modeling, and Bayesian modeling.
The systems ecology paradigm (SEP) emerged in the late 1960s, at a time when societies throughout the world were beginning to recognize that their activities threatened the environment and natural resources. Management practices in rangelands, forests, agricultural lands, wetlands, and waterways were inadequate to meet the challenges of deteriorating environments, many of which the practices themselves had caused. Scientists recognized an immediate need to develop a knowledge base about how ecosystems function. That effort took nearly two decades, concluding in the 1980s with the acceptance that humans are components of ecosystems, not merely controllers and manipulators of lands and waters. While ecosystem science was being developed, management options based on it were shifting dramatically toward practices supporting sustainability, resilience, ecosystem services, biodiversity, and local-to-global interconnections of ecosystems. From this new knowledge of how ecosystems function, and from the application of the systems ecology approach, emerged collaboration among scientists, managers, decision-makers, and stakeholders, locally and globally. Today's concepts of ecosystem management and related ideas, such as sustainable agriculture, ecosystem health and restoration, the consequences of and adaptation to climate change, and many other important local-to-global challenges, are a direct result of the SEP.
From the accumulated knowledge of terrestrial ecosystem functioning and the application of the systems ecology paradigm, exemplified by the power of simulation modeling, tremendous strides have been made in linking the interactions of land, atmosphere, and water from local to global scales. By integrating ecosystem, atmospheric, soil, and, more recently, social science, plausible scenarios and even reasonable predictions about the outcomes of human activities are now possible. This chapter addresses the application of that knowledge to the effects of changing climates, human-caused nitrogen enrichment of ecosystems, and altered UV-B radiation. The primary linkages addressed are the C, N, S, and H2O cycles and UV-B radiation. Carbon dioxide exchange between land and the atmosphere, N additions to and losses from lands and waters, early studies of SO2 in grassland ecosystems, and the effects of UV-B radiation on ecosystems have been mainstays of the research described in this chapter. This knowledge has informed international and national climate assessments, for example those of the IPCC and the US National Climate Assessment, as well as the Paris Climate Accord. Likewise, it has been used to develop concepts and technologies related to sustainable agriculture, C sequestration, and food security.
The Square Kilometre Array (SKA) will be a remarkable instrument for pulsar astronomy. While the full SKA will be sensitive enough to detect all pulsars in the Galaxy visible from Earth, pulsar searches with SKA1 alone will discover enough pulsars to increase the currently known population by a factor of four, no doubt including a range of remarkable unknown sources. Real-time processing is needed to handle the 60 PB of pulsar search data collected per day, using a signal-processing pipeline required to perform more than 10 POps. Here we present the proposed design of the pulsar search engine for the SKA and discuss challenges and solutions for the pulsar search venture.
Thirty-seven lines from a population derived from the hull-less barley cultivar Penthouse were grown in a replicated trial over three seasons and assessed for grain yield. Following harvest, a rapid test of grain dimensions was applied to all samples to screen for novel variation in grain size and shape, as a possible means of detecting mutations. A range of grain and malt quality traits was also measured in two of the seasons to detect genotype × season interactions and to determine relationships between the measured traits. There were significant differences between years for all traits, and between genotypes for most. Genotype × season interaction was significant for grain dimensions and some malting traits, but a correlation between malting quality and grain dimensions was observed in only one season. Line 30 showed very high yield potential in a comparatively wet season and gave a higher alcohol yield per unit area than a hulled control variety, while lines 21 and 33 contained putative additional mutations. Line 21, previously observed to have higher enzyme activity, appeared to contain an additional dwarfing gene and was characterized by smaller grain, later ear emergence, and lower yield. Line 33, with malting potential, showed a considerably altered grain length-to-width ratio and will be further investigated as a possible globosum type.
Central line–associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line–associated BSI detection can improve the validity of surveillance.
Design.
Retrospective cohort study.
Setting.
Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers.
Methods.
Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes during the period 2004–2007 from which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line–days).
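The episode-level agreement statistic used here, Cohen's κ, compares observed agreement between two raters against the agreement expected by chance. A minimal illustrative sketch follows; this is not the study's analysis code, and the data and labels are hypothetical:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of episodes where the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal rate, summed over labels.
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(lbl) / n) * (rater_b.count(lbl) / n)
              for lbl in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: 1 = BSI, 0 = no BSI, one entry per blood culture episode.
audit_review = [1, 0, 1, 1, 0, 0, 1, 0]
surveillance = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(audit_review, surveillance), 3))  # → 0.5
```

A κ of 1 indicates perfect agreement with the reference standard and 0 indicates agreement no better than chance, which is why the study compares the κ of each surveillance method against audit review.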
Results.
We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ [95% confidence interval (CI)] = 0.44 [0.37–0.51]) than computer algorithm surveillance (κ [95% CI] [0.52–0.64]; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .001); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line–associated BSI rates.
Conclusions.
Compared with traditional surveillance of bloodstream infections, computer-automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid.
Infect Control Hosp Epidemiol 2014;35(12):1483–1490