While the majority of hepatitis E virus (HEV) infections in people worldwide are acquired from contaminated water or food, the number of reported cases of transfusion-transmitted HEV (TT-HEV) in blood transfusion recipients has also risen steadily. For most people, HEV infection is acute, self-limiting and asymptomatic. However, patients who are immunocompromised, especially transplant patients, are at much higher risk of developing chronic infection, which can progress to cirrhosis and liver failure, along with overall increased mortality. Because HEV seroprevalence is rising in the global population, and because TT-HEV infection can have serious clinical consequences for the patients most in need of blood transfusion, screening for TT-HEV has been gaining prominence as an important public health concern in both developing and developed countries. In this review, we summarise the evidence for and notable cases of TT-HEV infection, the various aspects of HEV screening protocols and recent trends in the implementation of broad-based TT-HEV blood screening programmes.
Wavelet theory is known to be a powerful tool for compressing and processing time series or images. It consists of projecting a signal onto an orthonormal basis of functions chosen to provide a sparse representation of the data. The first part of this article focuses on smoothing mortality curves by wavelet shrinkage. A chi-square test and a penalized likelihood approach are applied to determine the optimal degree of smoothing. The second part of this article is devoted to mortality forecasting. Wavelet coefficients exhibit clear trends for the Belgian population from 1965 to 2015; they are easy to forecast, yielding predicted future mortality rates. The wavelet-based approach is then compared with some popular actuarial models of Lee–Carter type fitted to the Belgian, UK, and US populations. The wavelet model outperforms all of them.
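As a rough illustration of wavelet shrinkage (a sketch only, not the paper's procedure: it uses PyWavelets with a db4 wavelet and the Donoho–Johnstone universal threshold, whereas the paper selects the degree of smoothing with a chi-square test and a penalized likelihood criterion):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_smooth(rates, wavelet="db4"):
    """Smooth a 1-D mortality curve by soft-thresholding its wavelet coefficients."""
    coeffs = pywt.wavedec(rates, wavelet)
    # Estimate the noise scale from the finest-level detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold (illustrative; the paper uses other criteria).
    thresh = sigma * np.sqrt(2 * np.log(len(rates)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(rates)]

# Hypothetical noisy log-mortality rates over ages 0-99.
ages = np.arange(100)
noisy = -9.0 + 0.085 * ages + 0.05 * np.random.randn(100)
smoothed = wavelet_smooth(noisy)
```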
For a test to be useful, it must be informative; that is, it must (at least some of the time) give different results depending on what is going on. In Chapter 1, we said we would simplify (at least initially) what is going on into just two homogeneous alternatives, D+ and D−. In this chapter, we consider the simplest type of tests, dichotomous tests, which have only two possible results (T+ and T−).
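A minimal worked example with hypothetical counts may make the 2×2 structure of a dichotomous test concrete (these are the standard accuracy measures, not counts from the chapter):

```python
# Hypothetical 2x2 table: test result (T+/T-) versus disease status (D+/D-).
tp, fp = 90, 40    # T+ counts among D+ and D-
fn, tn = 10, 160   # T- counts among D+ and D-

sensitivity = tp / (tp + fn)               # P(T+ | D+)
specificity = tn / (tn + fp)               # P(T- | D-)
lr_pos = sensitivity / (1 - specificity)   # likelihood ratio for T+
lr_neg = (1 - sensitivity) / specificity   # likelihood ratio for T-
print(sensitivity, specificity, lr_pos, lr_neg)  # 0.9 0.8 4.5 0.125
```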
A test should give the same or similar results when administered repeatedly to the same individual within a time too short for real biological variation to take place. Results should be consistent whether the test is repeated by the same observer or instrument or by different observers or instruments. This desirable characteristic of a test is called “reliability” or “reproducibility.”
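One standard way to quantify inter-observer reproducibility for a dichotomous test is Cohen's kappa, which corrects raw agreement for the agreement expected by chance; a minimal sketch with hypothetical counts (the chapter's exact treatment may differ):

```python
# Hypothetical agreement table: two observers reading the same 100 tests.
# rows = observer 1 (T+, T-); columns = observer 2 (T+, T-)
a, b = 40, 10   # observer 1 T+: observer 2 T+, T-
c, d = 5, 45    # observer 1 T-: observer 2 T+, T-
n = a + b + c + d

observed = (a + d) / n                    # raw agreement
p1, p2 = (a + b) / n, (a + c) / n         # marginal T+ rates
expected = p1 * p2 + (1 - p1) * (1 - p2)  # agreement expected by chance
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 3))  # 0.7
```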
We have learned how to quantify the accuracy of dichotomous (Chapter 2) and multilevel (Chapter 3) tests. In this chapter, we turn to critical appraisal of studies of diagnostic test accuracy, with an emphasis on problems with study design that affect the interpretation or credibility of the results. After a general discussion of an approach to studies of diagnostic tests, we will review some common biases to which studies of test accuracy are uniquely or especially susceptible and conclude with an introduction to systematic reviews of test accuracy studies.
While screening tests share some features with diagnostic tests, they deserve a chapter of their own because of important differences. Whereas we generally do diagnostic tests on sick people to determine the cause of their symptoms, we generally do screening tests on healthy people with a low prior probability of disease. The problems of false positives and harms of treatment loom larger. In Chapter 4, on evaluating studies of diagnostic test accuracy, we assumed that accurate diagnosis would lead to better outcomes. The benefits and harms of screening tests are so closely tied to the associated treatments that it is hard to evaluate diagnosis and treatment separately. Instead, we compare outcomes such as mortality between those who receive the screening test and those who don’t. We postponed our discussion of screening until after our discussion of randomized trials because randomized trials are a key element in the evaluation of screening tests. Finally, because decisions about screening are often made at the population level, political and other nonmedical factors are more influential. Thus, in this chapter, we focus explicitly on the question of whether doing a screening test improves health, not just on how it alters disease probabilities, and we pay particular attention to biases and nonmedical factors that can lead to excessive screening [1].
The clinical characteristics of patients with COVID-19 were analysed to identify the factors influencing prognosis and viral shedding time, to facilitate early detection of disease progression. A retrospective study was carried out on patients with COVID-19 in Tianjin, China. Logistic regression analysis was used to explore the relationships among prognosis, clinical characteristics and laboratory indexes, and the predictive value of the resulting model was assessed with receiver operating characteristic (ROC) curve analysis, calibration and internal validation. The duration of viral shedding was estimated using the Kaplan–Meier method, and prognostic factors were analysed by univariate log-rank tests and the Cox proportional hazards model. A total of 185 patients were included, 27 (14.6%) of whom were severely ill at the time of discharge and three (1.6%) of whom died. Our findings demonstrate that patients with an advanced age, diabetes, a low PaO2/FiO2 value or delayed treatment should be carefully monitored for disease progression to reduce the incidence of severe disease; hypoproteinaemia and the duration of fever warrant special attention. Timely intervention in symptomatic patients and a time from symptom onset to treatment of <4 days can shorten the duration of viral shedding.
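A minimal sketch of the survival-analysis machinery described above, using the open-source lifelines package (the data and column names here are hypothetical illustrations, not the study's data or software):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical cohort: viral shedding duration (days), event indicator
# (0 = still shedding at last follow-up), and two candidate predictors.
df = pd.DataFrame({
    "shedding_days":     [12, 20, 15, 30, 9, 25, 18, 11],
    "cleared":           [1, 1, 1, 0, 1, 1, 1, 1],
    "age":               [34, 67, 45, 71, 29, 58, 50, 41],
    "days_to_treatment": [2, 7, 3, 9, 1, 6, 4, 2],
})

# Kaplan-Meier estimate of the shedding-duration curve.
kmf = KaplanMeierFitter()
kmf.fit(df["shedding_days"], event_observed=df["cleared"])
print(kmf.median_survival_time_)

# Cox proportional hazards model for prognostic factors.
cph = CoxPHFitter()
cph.fit(df, duration_col="shedding_days", event_col="cleared")
cph.print_summary()
```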
In previous chapters, we discussed issues affecting evaluation and use of diagnostic tests: how to assess test reliability and accuracy, how to combine the results of tests with prior information to estimate disease probability, and how a test’s value depends on the decision it will guide and the relative cost of errors. In this chapter, we move from diagnosing prevalent disease to predicting incident outcomes. We will discuss the difference between diagnostic tests and risk predictions and then focus on evaluating predictions, specifically covering calibration, discrimination, net benefit calculations, and decision curves.
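Net benefit at a threshold probability p_t is conventionally computed as TP/n - (FP/n) x p_t/(1 - p_t), and a decision curve traces it across a range of thresholds; a minimal sketch with hypothetical predictions (this is the standard decision-curve formula, not code from the chapter):

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating everyone whose predicted risk >= threshold."""
    treat = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))  # treated and had the outcome
    fp = np.sum(treat & (y_true == 0))  # treated unnecessarily
    return tp / n - (fp / n) * threshold / (1 - threshold)

# Hypothetical outcomes and predicted risks for eight patients.
y = np.array([1, 0, 0, 1, 0, 1, 0, 0])
p = np.array([0.8, 0.2, 0.4, 0.7, 0.1, 0.6, 0.3, 0.5])
for t in (0.2, 0.4, 0.6):  # a decision curve sweeps many such thresholds
    print(t, round(net_benefit(y, p, t), 3))
```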
As we noted in the Preface and Chapter 1, because the purpose of doing diagnostic tests is often to determine how to treat the patient, we may need to quantify the effects of treatment to decide whether to do a test. For example, if the treatment for a disease provides a dramatic benefit, we should have a lower threshold for testing for that disease than if the treatment is of marginal or unknown efficacy. In Chapters 2, 3, and 6, we showed how the expected benefit of testing depends on the treatment threshold probability (PTT = C/[C + B]) in addition to the prior probability and test characteristics. In this chapter, we discuss how to quantify the benefits and harms of treatments (which determine C and B) using the results of randomized trials. In Chapter 9, we will extend the discussion to observational studies of treatment efficacy; in Chapter 10, we will look at screening tests themselves as treatments and how to quantify their efficacy.
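A quick worked example of the threshold formula, with hypothetical numbers: if treating a patient who turns out not to have the disease carries a net cost of C = 1 (in whatever utility units) and treating a patient who does have it yields a net benefit of B = 9, then PTT = C/(C + B) = 1/(1 + 9) = 0.1, so testing and treating make sense once the probability of disease exceeds 10%. A dramatic treatment benefit (large B) pushes the threshold down, consistent with the lower testing threshold described above.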
In the previous two chapters, we discussed using the results of randomized trials and observational studies to estimate treatment effects. We were primarily interested in measures of effect size and in problems with design (in randomized trials) and confounding (in observational studies) that could bias effect estimates. We did not focus on whether the apparent treatment effects could be a result of chance or attempt to quantify the precision of our effect estimates. The statistics used to help us with these issues – P-values and confidence intervals – are the subject of this chapter.
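For instance, a 95% confidence interval for a risk difference can be computed with the usual normal approximation; a minimal sketch with hypothetical trial counts (a standard formula, not an example from the chapter):

```python
import math

# Hypothetical two-arm trial: events / patients in each arm.
e_t, n_t = 30, 200   # treatment arm
e_c, n_c = 45, 200   # control arm
p_t, p_c = e_t / n_t, e_c / n_c

rd = p_t - p_c  # risk difference (the effect estimate)
se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
ci = (rd - 1.96 * se, rd + 1.96 * se)  # 95% confidence interval
print(rd, ci)  # -0.075 and roughly (-0.151, 0.001)
```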
We wrestled for a long time with the question of whether to include the term “evidence-based” in the title of the first edition of this book. Although both of us are firm believers in the principles and goals of evidence-based medicine (EBM), as articulated by its first proponents [1], we also knew that the term “evidence-based” would be viewed negatively by some potential readers [2–4]. We decided to keep “evidence-based” in the title and use this chapter to directly address some of the criticisms of EBM, many of which we believe have merit. We also recognize that, as elegant and satisfying as evidence-based diagnosis is, there are some very real cognitive barriers to applying it in a clinical setting. These barriers are the second topic of this chapter. Finally, we end the book with some thoughts on the future of evidence-based diagnosis and why it will be increasingly important.