Policy-making in local public administrations is still largely based on intuition rather than being backed by data and evidence. The goal of this work is to introduce the methodology and software tools for contributing toward transforming the existing intuition-based paradigm of policy-making into an evidence-driven approach enabled by heterogeneous sources of data already available in the city. More specifically, methods for data collection, efficient data storage, and data analysis are implemented to measure the economic activity, assess the environmental impact and evaluate the social consequences of certain policy decisions. Subsequently, the extracted pieces of evidence are used to inform, advise, monitor, evaluate, and revise the decisions made by policy planners. Our contribution in this work is to outline and deploy an easily extendable system architecture that harmonizes and analyzes heterogeneous data sources in ways that are found to be useful for policy-makers. To evaluate this architecture, we examine the case of a controlled parking system in the city of Thessaloniki and try to optimize its operation by balancing economic growth, environmental protection, and citizen satisfaction.
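The abstract does not spell out how such harmonization might be implemented. Purely as an illustration, the following minimal Python sketch joins two hypothetical city data feeds (parking transactions and air-quality readings) on a shared hourly time index; all file names, column names and pollutant fields are assumptions, not details from the study.

```python
import pandas as pd

# Hypothetical input feeds; file names, column names and pollutant fields
# are illustrative assumptions, not details taken from the study.
parking = pd.read_csv("parking_transactions.csv", parse_dates=["timestamp"])
air_quality = pd.read_csv("air_quality.csv", parse_dates=["timestamp"])

# Harmonize both sources to a common hourly granularity.
parking_hourly = (
    parking.set_index("timestamp")["fee_paid"]
    .resample("1H")
    .agg(["sum", "count"])
    .rename(columns={"sum": "revenue", "count": "sessions"})
)
air_hourly = (
    air_quality.set_index("timestamp")[["no2", "pm10"]]
    .resample("1H")
    .mean()
)

# Join into a single evidence table that downstream analyses can consume.
evidence = parking_hourly.join(air_hourly, how="inner")
print(evidence.describe())
```

An evidence table of this kind is the sort of intermediate artefact that could feed the economic, environmental and social indicators described above.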
Determination of antibodies against ToRCH antigens at the beginning of pregnancy allows assessment of both the maternal immune status and the risk of an adverse pregnancy outcome. Age-standardised seroprevalences were determined in sera from 1009 women of childbearing age residing in Mexico, Brazil, Germany, Poland, Turkey or China using a multiparametric immunoblot containing antigen substrates for antibodies against Toxoplasma gondii, rubella virus, cytomegalovirus (CMV), herpes simplex viruses (HSV-1, HSV-2), Bordetella pertussis, Chlamydia trachomatis, parvovirus B19, Treponema pallidum and varicella zoster virus (VZV). Seroprevalences of antibodies against HSV-1 were >90% in samples from Brazil and Turkey, whereas the other four countries showed lower mean age-adjusted seroprevalences (range: 62.5–87.9%). Samples from Brazilian women showed elevated seroprevalences of antibodies against HSV-2 (40.1%), C. trachomatis (46.8%) and B. pertussis (56.6%) compared to the other five countries. Seroprevalences of anti-T. gondii antibodies (0.5%) and anti-parvovirus B19 antibodies (7.5%) were low in samples from Chinese women compared to the other five countries. Samples from German women revealed a low age-standardised seroprevalence of anti-CMV antibodies (28.8%) compared to the other five countries. These global differences in the immune status of women of childbearing age advocate country-specific prophylaxis strategies to avoid infection with ToRCH pathogens.
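For readers unfamiliar with age standardisation, the adjusted figures reported here are essentially weighted averages of age-band-specific prevalences, with weights taken from a reference age distribution. A minimal sketch of that calculation follows; the age bands, weights and prevalences are made up for illustration and are not values from the study.

```python
# Direct age standardisation: weight each age band's observed prevalence by
# that band's share of a reference ("standard") population.
# All numbers below are made up for illustration; they are not study values.
age_specific_prevalence = {"18-25": 0.55, "26-35": 0.68, "36-45": 0.74}
reference_weights = {"18-25": 0.30, "26-35": 0.40, "36-45": 0.30}  # sums to 1

standardised = sum(
    age_specific_prevalence[band] * reference_weights[band]
    for band in age_specific_prevalence
)
print(f"Age-standardised seroprevalence: {standardised:.1%}")
```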
In this study, we explore the partial identification of nonseparable models with continuous endogenous and binary instrumental variables. We show that the structural function is partially identified when it is monotone or concave in the explanatory variable. D’Haultfœuille and Février (2015, Econometrica 83(3), 1199–1210) and Torgovitsky (2015, Econometrica 83(3), 1185–1197) prove the point identification of the structural function under a key assumption that the conditional distribution functions of the endogenous variable for different values of the instrumental variables have intersections. We demonstrate that, even if this assumption does not hold, monotonicity and concavity provide identification power. Point identification is achieved when the structural function is flat or linear with respect to the explanatory variable over a given interval. We compute the bounds using real data and show that our bounds are informative.
A perfect Kr-tiling in a graph G is a collection of vertex-disjoint copies of the clique Kr in G covering every vertex of G. The famous Hajnal–Szemerédi theorem determines the minimum degree threshold for forcing a perfect Kr-tiling in a graph G. The notion of discrepancy appears in many branches of mathematics. In the graph setting, one assigns the edges of a graph G labels from {−1, 1}, and one seeks substructures F of G that have ‘high’ discrepancy (i.e. the sum of the labels of the edges in F is far from 0). In this paper we determine the minimum degree threshold for a graph to contain a perfect Kr-tiling of high discrepancy.
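Stated symbolically, with the edge labelling described above, the discrepancy of a substructure is just the sum of its edge labels; the notation below is ours, not necessarily the paper's.

```latex
% Given an edge labelling f : E(G) -> {-1, +1}, the discrepancy of a
% substructure F of G is the sum of the labels of its edges:
\[
  \operatorname{disc}_f(F) \;=\; \sum_{e \in E(F)} f(e),
\]
% and F has ``high'' discrepancy when |disc_f(F)| is far from 0.
```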
Deaths are frequently under-estimated during emergencies, times when accurate mortality estimates are crucial for emergency response. This study estimates excess all-cause, pneumonia and influenza mortality during the coronavirus disease 2019 (COVID-19) pandemic using the 11 September 2020 release of weekly mortality data from the United States (U.S.) Mortality Surveillance System (MSS) from 27 September 2015 to 9 May 2020, using semiparametric and conventional time-series models in 13 states with high reported COVID-19 deaths and apparently complete mortality data: California, Colorado, Connecticut, Florida, Illinois, Indiana, Louisiana, Massachusetts, Michigan, New Jersey, New York, Pennsylvania and Washington. We estimated greater excess mortality than official COVID-19 mortality in the U.S. (excess mortality 95% confidence interval (CI) 100 013–127 501 vs. 78 834 COVID-19 deaths) and 9 states: California (excess mortality 95% CI 3338–6344 vs. 2849 COVID-19 deaths); Connecticut (excess mortality 95% CI 3095–3952 vs. 2932 COVID-19 deaths); Illinois (95% CI 4646–6111 vs. 3525 COVID-19 deaths); Louisiana (excess mortality 95% CI 2341–3183 vs. 2267 COVID-19 deaths); Massachusetts (95% CI 5562–7201 vs. 5050 COVID-19 deaths); New Jersey (95% CI 13 170–16 058 vs. 10 465 COVID-19 deaths); New York (95% CI 32 538–39 960 vs. 26 584 COVID-19 deaths); and Pennsylvania (95% CI 5125–6560 vs. 3793 COVID-19 deaths). Conventional model results were consistent with semiparametric results but less precise. Significant excess pneumonia deaths were also found for all locations, and we estimated hundreds of excess influenza deaths in New York. We find that official COVID-19 mortality substantially understates actual mortality; excess deaths cannot be explained entirely by official COVID-19 death counts. Mortality reporting lags appeared to worsen during the pandemic, when timeliness in surveillance systems was most crucial for improving pandemic response.
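Whichever baseline model is used, the core quantity is the gap between observed and expected deaths over the study period. A minimal sketch of that calculation follows; the weekly counts are made-up illustrations, and the study's actual baselines come from fitted semiparametric and conventional time-series models not reproduced here.

```python
# Excess mortality = observed deaths minus expected (baseline) deaths,
# accumulated over the study period. Counts below are made-up illustrations.
observed_weekly = [510, 530, 620, 750, 690]
expected_weekly = [500, 505, 510, 515, 520]  # e.g. from a fitted baseline model

excess_weekly = [o - e for o, e in zip(observed_weekly, expected_weekly)]
total_excess = sum(excess_weekly)
print("Weekly excess deaths:", excess_weekly)
print("Total excess deaths over the period:", total_excess)
```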
In his 1903 book Mankind in the Making, the British science-fiction novelist and social commentator Herbert George Wells (1866–1946) argued for a new type of political system in which society renounced any claim of absolute truths and people's ideas were based on presented facts – a system in which overall policy and public affairs in society were scientifically examined in the light of mathematical and statistical reasoning. Wells would go on to argue that
The great body of physical science, a great deal of the essential fact of financial science, and endless social and political problems are only accessible and only thinkable to those who have had a sound training in mathematical analysis, and the time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex world-wide States that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and write. (Wells, [1903] 2014)
Wells, who was a biologist by training and one of the top science-fiction writers of the time, lived in the age of modern scientific utopias, marked by the rise of industrialization and workers’ struggles. However, what makes Wells’ contribution so relevant today is that he was standing up against Eugenics at a time when other intellectuals, including some fellow socialists, were siding with this racist pseudoscientific idea.
Wells was not opposed to a science of heredity; nevertheless, he rejected the notion of Francis Galton (1822–1911), the father of modern statistics, that the state should intervene in order to breed human beings selectively. Positive traits such as beauty, health, capacity, and genius, as well as supposed negative traits such as criminality and alcoholism, says Wells, are in fact such complex entanglements of characteristics that ignorance and doubt bar our way. Still today, at the Rijksmuseum Boerhaave of science and medicine in Leiden, the Netherlands, visitors can see drawings of the facial angle, a geometrical system invented by the Dutch scientist Petrus Camper (1722–1789) and later used to justify slavery and racism.
While wrapping up this book, we witness the global crisis set in motion by the spread of the COVID-19 virus. According to the latest figures released in May 2020 by the European Centre for Disease Prevention and Control, there were almost 300,000 deaths worldwide at that time, and the consequences for the world economy seemed likely to be huge and long-lasting, although by then still unknown in detail. What was certain then, and remains certain now, is that many lives were shattered and whole communities devastated by this crisis. Indeed, by the time it was officially declared a pandemic by the World Health Organization (WHO), it was already a full-scale destructive force that resembled – at least in the eyes of those in places such as Guayaquil, Ecuador, and Bergamo, Italy – one of the Horsemen of the Apocalypse.
In response to these events, governments and political leaders engaged at different speeds in the implementation of emergency strategies that involved all sectors of their societies. The state of emergency forced them to act, although only for some of these leaders were the action and sense of urgency in fact ‘immediate’. This is because while some prioritized the health and lives of people, others believed that saving the economy was far more important. While these debates took place among the elites, it was doctors, nurses, first responders and, above all, ordinary people such as postal workers, cleaners and rubbish collectors who kept society afloat. The sheer number of ordinary people who suddenly became heroes, but who until those days were often invisible to the public eye, reminded us that society is not made by the few but by the many.
As the number of deaths climbed, fingers started to be pointed at culprits even before any peak was reached, perhaps in an attempt to deflect responsibility. Some leaders spoke every day while others remained silent. In that leadership vacuum, fake news and the misrepresentation of numbers spread globally as fast as the pandemic did. The battle for the hearts and minds of the public soon became an information war or ‘Infodemic’, as described by the director-general of the WHO, Tedros Adhanom Ghebreyesus.
Our central question, so far, has concerned the nature of the engagement between journalists and statistical data in the pursuit of quality. In addressing it, we also want to know whether quality statistics automatically lead to quality journalism and, if so, how information quality translates into journalistic quality. Does the nature of a statistic's source affect the news reporting? What is the purpose of statistics in news reporting? Do journalists emphasize a certain type of statistics? Which sources of statistics do journalists use most often? And how does the audience engage with statistically driven stories?
To answer these and other questions, we used a mixed-method approach in which we triangulated qualitative and quantitative methods, as this is widely regarded as a major research approach (Johnson, Onwuegbuzie & Turner, 2007). The importance of such triangulation lies in the validity of the results, which can lead to a more balanced and detailed answer to the research questions by comparing and contrasting different accounts of the same situation (Turner & Turner, 2009). The aim was to develop a ‘practical theory’ that would help to rationalize the issue under scrutiny (Altrichter, 2010; Altrichter, Posch & Somekh, 1993). We included content analysis, semi-structured interviews, close reading and focus groups, which allowed us to carry out a multilevel assessment of the data. The overall mixed-method design used in this research should be understood within a broader ‘cross-sectional design’ (F. L. Cook et al., 1983; Johnston & Brady, 2002), which allows a combination of quantitative and qualitative research.
Therefore, this chapter presents the analysis of the data collected followed by a discussion of the research findings that resulted from each method. It provides a detailed account of the findings, in the hope that these results will elucidate the uses of statistics in articulating the five quality dimensions in news reporting.
Overall, our findings suggest that journalists tend to use statistical information as a tool to fulfil their deontological expectations of producing quality journalism. However, as became clear from the interviews, one of the underlying motivations also seems to be the need to achieve credibility and authority, which entails, to some degree, building up the ability to persuade by means of trust.
Over the past years, governments and corporations have come to develop media strategies to ‘manage’ the dissemination of statistics among the public. As we have discussed in this book, this is part of a long history of using numbers to assert social control, one that in the twentieth century took the form of cybernetics. In more recent times, this has meant enshrining statistics and data in daily life by means of information and communication technologies used by individuals, couples, families and communities. In so doing, they have dedicated immense resources towards ‘controlling’ narratives and interpretations of the statistics released in the public sphere.
Moreover, mediatization practices that shape the way these numbers go out into the public are now at the centre of the controversies and issues that journalism confronts. This ‘mediatization of statistics communication’ (Lugo-Ocando & Lawson, 2017) is understood not as a policy-making process directed by the media, but rather as a policy process in which publicity-seeking activities and political decision-making become closely interlinked (Cater, 1965; T. Cook, 2005).
Statistics are far from being a neutral object in society and have their own politics and ideologies. They are also fundamental signifiers in the creation of social reality (Dorling & Simpson, 1999), displaying a politics of their own. Not that we ascribe agency to the output of a mathematical equation but that we see the equation itself as a human creation that has a history, a meaning and an intent. Over the years, as we have seen in this book, these numbers have gained a power of their own. From defining budgetary priorities to determining who can receive aid or buy a house, statistics and data, in more general terms, dominate human existence in many ways.
However, with the increasing presence of Big Data and the controlling power of algorithms, we are entering new, uncharted territory. No longer is it just about the state or corporations controlling the production and analysis of statistics, but also about the active management of these numbers by means of mediated representation in the public domain. Recognized today as a process
The assumption that quality can only be asserted through numbers holds both within journalism and within government, particularly as these numbers bring a sense of impartial and objective assessment to both the formulation and the evaluation of policies. Moreover, at the centre of this assumption is the deep-rooted belief that statistics can bring about the kind of scientifically based knowledge, transparency and trust needed to implement and analyse government policy. Nowhere was this more evident than during the Tony Blair-led New Labour government in the United Kingdom (1997–2001), when evidence-based policy became central to the way politicians sold their own agendas to the public (Hope, 2004, 2005). To be sure, the Command Paper released under the title Statistics: A Matter of Trust (1998) made it clear that
Quality needs to be assured. Official statistics must be sufficiently accurate and reliable for the purposes for which they are required. The production and presentation of official statistics needs to be free from political interference, and to be seen as such, so that the objectivity and impartiality of statistics is assured. (1998, p. 5)
In this sense, the UK Statistics Authority has adopted a structure broadly similar to that of the European Code, which sets out a number of high-level principles, each of which is further amplified by a series of more detailed practices (or ‘indicators’ in the European Code). Also, the UK Code of Practice for Official Statistics and the assessment programme that follows have been informed by, and are consistent with, both the UN Fundamental Principles of Official Statistics and the European Statistics Code of Practice.
According to Mark Pont, a member of the board of directors of the UK Statistics Authority, the European Code has proved an effective basis for the international process of ‘peer review’, and ‘the Statistics Authority believes that a similar approach will provide a sound foundation for the Statistics Authority's quality assessment function’ (Code of Practice: 8). One of the strong points of the Code is its emphasis on the role of the user, and the need for statistical producers to consider the wider use that is – or may be – made of statistics. In addition to meeting specific policy needs within government, there is increasing demand from people working in research, academia and journalism for statistics on many aspects of social and economic life.