The attempt to address Hilbert's decision problem and the negative result it achieved in Church's theorem led mathematicians of the thirties to clarify what an algorithm is. They offered many definitions, among which were the Herbrand–Gödel equations, Church's lambda calculus, Turing machines, Kleene's recursive functions, and rewrite rules. Each of these puts forward a language in which to express algorithms – nowadays, we would say that each defines a “programming language.”
Time has shown these definitions to be equivalent: if an algorithm can be defined in one of these programming languages, that definition can be translated into any of the others. This equivalence of definitions ranks among the greatest successes of computability theory: it means that an absolute concept of computation has been reached independent of the accidental form assumed by this or that algorithmic language.
Yet the mathematicians of the thirties faced an obvious question. Is this it? Is this “the” notion of computation? Or might someone in the future come up with other languages capable of expressing more algorithms? Most mathematicians of the thirties rejected this possibility; to them, the concept of computation, defined by Turing machines or by lambda calculus, was the right one. This thesis is called Church's thesis, although as before several mathematicians – particularly Turing – developed similar ideas.
THE COMMON CONCEPT OF COMPUTATION
Church's thesis asserts the identity of two concepts: the concept of computation as defined by lambda calculus, Turing machines, and so forth and the “common” concept of computation. One of the reasons Church's thesis is so hard to formulate precisely is that it is unclear what the common concept of computation is. Indeed, if you formulate Church's thesis by stating that no algorithmic language that might be put forward in the future will ever be more powerful than those we already know or, in other words, that all the algorithms we may come across in the future will be expressible in the languages we use today, you sound more like a fortune-teller looking into a crystal ball than a mathematician formulating a scientific theory.
As this mathematical journey draws to its conclusion, let us cast a look at the unresolved problems we have encountered along the way, which may outline the panorama of research to come.
We have seen that the theory of computability allows one to show that, in all theories, there exist short, provable propositions that have only long proofs; but the examples of such propositions we have today are mere artifices: the methods we know are too rudimentary to allow us to prove that real mathematical theorems, such as the four-color theorem, Hales's theorem, or others, have no short proofs. New methods must therefore be invented. Besides, the philosophical debate about the link between proof and explanation would be greatly clarified if one could state with certainty that a specific theorem has no short axiomatic proof.
Another question that remains unanswered to this day concerns the possibility of practicing mathematics without ever resorting to axioms. When axioms are compared to computation rules, they appear to be static objects: they are there, once and for all, as unchanging as they are true. Computation rules, on the contrary, enable mathematicians to do things – to shorten proofs, to create new ones, and so on. And, more importantly, thanks to the notion of confluence, computation rules interact with each other. As a consequence, every time one successfully replaces an axiom with a computation rule, there is cause to rejoice. Yet the fact that this is desirable does not always make it possible. In certain cases, we may have no other choice but to put up with axioms. The question is: in which cases, precisely?
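To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not the book's formalism): the axioms 0 + y = y and S(x) + y = S(x + y) are replaced by rewrite rules, and the proposition 2 + 2 = 4 is then established by computation rather than by an axiomatic proof.

```python
# A minimal sketch (not the book's formalism): terms are nested tuples,
# e.g. ("S", ("Z",)) for the numeral 1, and ("add", x, y) for addition.

def rewrite_step(term):
    """Apply one rewrite rule at the root if possible; otherwise recurse into subterms."""
    if isinstance(term, tuple) and term[0] == "add":
        _, x, y = term
        if x == ("Z",):                              # rule: add(Z, y) -> y      (replaces the axiom 0 + y = y)
            return y, True
        if isinstance(x, tuple) and x[0] == "S":     # rule: add(S(x), y) -> S(add(x, y))
            return ("S", ("add", x[1], y)), True
    if isinstance(term, tuple):
        for i, sub in enumerate(term[1:], start=1):
            new_sub, changed = rewrite_step(sub)
            if changed:
                return term[:i] + (new_sub,) + term[i + 1:], True
    return term, False

def normalize(term):
    """Rewrite until no rule applies; the result is the term's normal form."""
    changed = True
    while changed:
        term, changed = rewrite_step(term)
    return term

def numeral(n):
    return ("Z",) if n == 0 else ("S", numeral(n - 1))

# "2 + 2 = 4" is now obtained by computation, not by invoking axioms in a proof.
assert normalize(("add", numeral(2), numeral(2))) == numeral(4)
```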
Church's thesis has given us a glimpse of a new way of formulating natural laws. These would no longer be phrased as propositions but expressed by algorithms. Reformulating Newton's law in mechanics, or Ohm's law in electricity, should not pose much of a problem, but more recent theories such as quantum physics will require more work.
The idea that a proof is not constructed merely with axioms and rules of inference but also requires computation rules has come a long way: in the early seventies, this idea pervaded Martin-Löf's type theory and, since then, it has also been at the heart of several works on the computer processing of mathematical proofs. These works study mathematical theories and proofs as objects, which they consider from the outside: in other words, these are works of logic. Mathematics, however, never evolves under the sole influence of logic. For a change to occur, something must be brought to “field” mathematics – that is, to mathematical practice.
In order to determine whether this calling into question of the axiomatic method is trivial or essential, it is important that we, too, observe it from the field. Therefore, the following chapter shall contain examples – such as the four-color theorem, Morley's theorem, and Hales's theorem – that do not deal with logic but with geometry.
THE FOUR-COLOR THEOREM
In the middle of the nineteenth century, a new mathematical problem appeared: the four-color problem. When one colors in a map, one may choose to use a different color for each region on the map. A thriftier artist may decide to use the same color twice for countries which have no common border. In 1853, this idea led Francis Guthrie to seek and find a way of coloring in a map of the counties in Great Britain using only four colors. Since four neighboring counties sometimes touch one another pairwise, one cannot use fewer than four colors. As a consequence, the number of colors necessary to color in this map is exactly four.
The problem of the number of colors necessary to color in a map of British counties was thus solved, but Guthrie then wondered whether this property was specific to that map, or whether it might be extended to all maps. He formulated the hypothesis that all maps could be colored in with a maximum of four colors – yet he failed to prove it.
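The coloring problem itself can be phrased as a question about graphs: regions are vertices, shared borders are edges, and a coloring must give neighboring vertices different colors. The following is a small, illustrative backtracking sketch (the adjacency map is hypothetical, not Guthrie's map of British counties); with four mutually adjacent regions, as in the situation described above, four colors are both necessary and sufficient.

```python
# A small, illustrative sketch: map coloring as graph coloring by backtracking.
# The adjacency map below is hypothetical, not Guthrie's map of British counties.

def color_map(neighbors, colors=("red", "green", "blue", "yellow")):
    """Assign one of the given colors to each region so that no two
    neighboring regions share a color; return None if impossible."""
    regions = list(neighbors)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        for c in colors:
            if all(assignment.get(n) != c for n in neighbors[region]):
                assignment[region] = c
                if backtrack(i + 1):
                    return True
                del assignment[region]
        return False

    return assignment if backtrack(0) else None

# Four mutually adjacent regions: no three colors suffice, but four do.
example = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C"},
}
print(color_map(example))
```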
It would not be until the early 1970s that the axiomatic method would be challenged. Then, surprisingly enough, it was called into question simultaneously and independently in several branches of mathematics and computer science. Few of the main players in this episode in the history of mathematics were aware that they were pursuing the same aim: within logic, where researchers pushed forward the work of their predecessors on constructivity; within computer science; and within “real-world” mathematics. This chapter will focus on logic.
INTUITIONISTIC TYPE THEORY
In the late sixties, many breakthroughs sparked a revival of interest in constructivity. On one hand, the algorithmic interpretation of proofs was developed thanks to the work of Curry, de Bruijn, and Howard; on the other hand, William Tait, Per Martin-Löf, and Jean-Yves Girard proved cut elimination for new theories. Most important, Girard proved cut elimination for Church's type theory, a variation on set theory. It thus became possible to provide constructive mathematics with a general framework equivalent to Church's type theory or set theory. Martin-Löf offered one such framework: intuitionistic type theory.
Intuitionistic type theory was born of an ascetic approach to logic: in order to provide a minimal basis for mathematics, this theory aims not only to exclude the principle of excluded middle, but also to break free of three axioms in Church's type theory, which we will not explore in detail in this book, namely the axiom of extensionality, the axiom of choice, and the axiom of impredicative comprehension. In the early seventies, many mathematicians doubted, understandably, that a theory so weakened would be capable of expressing much at all. Thirty years later, however, we are forced to recognize that vast sections of mathematics have been successfully expressed in this theory and in some of its extensions, such as Thierry Coquand and Gérard Huet's calculus of constructions, for instance.
As we have seen, in the seventies, the idea began to germinate that a proof is not built solely with axioms and inference rules, but also with computation rules. This idea flourished simultaneously in several fields of mathematics and computer science – in Martin-Löf's type theory, in the conception of automated theorem proving programs and of proof checking programs, but also in real-world mathematics, especially in the demonstration of the four-color theorem. As is often the case, this idea did not emerge whole, complete, and simple, for it first began to bloom in specific contexts which each influenced it in their own way: the specialists of Martin-Löf's type theory viewed it as an appendix of the theory of definitions; automated theorem proving program designers considered it a tool to make automated theorem proving methods more efficient; proof checking program designers saw it as a means to skip small, simple steps in demonstrations; and mathematicians perceived it as a way of using computers to prove new theorems.
A few years after the emergence of a new idea, it is natural to question its impact and to seek a way of expressing it in the most general framework possible. This is what led this author, along with Thérèse Hardin and Claude Kirchner, in the late nineties, to reformulate the idea according to which mathematical proofs are not constructed solely with axioms and inference rules but also with computation rules in the broadest possible framework, namely in predicate logic. This drove us to define an extension of predicate logic, called “deduction modulo,” which is similar to predicate logic in every respect but one: in this extension, a proof is built with the three aforementioned ingredients.
Reconsidering this idea in its most basic form and rejecting the sophistication of type theory and automated theorem proving in favor of the simplicity and freshness of predicate logic enabled us to undertake both unification and classification tasks.
What does the notion of constructive proof have to do with the subject of this book, namely computation? The notions of computation and algorithm did not originally play as important a part in the theory of constructivity as they did in, say, the theory of computability. However, behind the notion of constructive proof lay the notion of algorithm.
CUT ELIMINATION
We have seen that proofs of existence resting on the principle of excluded middle do not always contain a witness. On the other hand, a proof of existence that does not use the excluded middle always seems to contain one, either explicitly or implicitly. Is it always so, and can this be demonstrated?
Naturally, the possibility of finding a witness in a proof does not depend solely on whether the demonstration uses the excluded middle; it also depends on the axioms used. For example, an axiom of the form “there exists …” is a proof of existence in itself, yet this proof produces no witness. The question of the presence or absence of witnesses in proofs that do not use the excluded middle thus branches into as many questions as there are theories – arithmetic, geometry, Church's type theory, set theory, and, to begin with, the simplest of all theories: the axiom-free theory.
One of the first demonstrations that a proof of existence constructed without calling upon the principle of excluded middle always includes a witness (at least implicitly) rests on an algorithm put forward in 1935 by Gerhard Gentzen, called the “cut elimination algorithm.” This algorithm, unlike those we have mentioned up to this point, is not applied to numbers, functional expressions, or computation rules – it is applied to proofs.
A proof can contain convoluted arguments, which are called “cuts.” Gentzen's algorithm aims at eliminating those cuts by reorganizing the proof in a more straightforward way.
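Gentzen's procedure operates on sequent-calculus proofs, and a faithful implementation is beyond a short example. As a hedged illustration only: via the algorithmic interpretation of proofs due to Curry, de Bruijn, and Howard, a cut in the implication fragment corresponds to a beta-redex – a function introduced and then immediately applied – and eliminating cuts corresponds to reducing such redexes. The sketch below implements that reduction on a toy representation of proof terms; variable names are assumed distinct, so naive substitution suffices.

```python
# A minimal sketch, via the Curry–Howard correspondence: a "cut" in the
# implication fragment is a proof that introduces A => B and immediately uses it,
# which corresponds to a beta-redex (lambda x. body) applied to an argument.
# Variable names are assumed distinct, so naive substitution suffices.

# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def substitute(term, name, value):
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        return term if term[1] == name else ("lam", term[1], substitute(term[2], name, value))
    return ("app", substitute(term[1], name, value), substitute(term[2], name, value))

def reduce_once(term):
    """Contract one beta-redex (one cut), if any; return (term, changed)."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, name, body), arg = term
        return substitute(body, name, arg), True
    if term[0] == "lam":
        body, changed = reduce_once(term[2])
        return ("lam", term[1], body), changed
    if term[0] == "app":
        fun, changed = reduce_once(term[1])
        if changed:
            return ("app", fun, term[2]), True
        arg, changed = reduce_once(term[2])
        return ("app", term[1], arg), changed
    return term, False

def eliminate_cuts(term):
    changed = True
    while changed:
        term, changed = reduce_once(term)
    return term

# (lambda x. x) applied to y -- a single cut -- reduces to the direct proof y.
print(eliminate_cuts(("app", ("lam", "x", ("var", "x")), ("var", "y"))))  # ('var', 'y')
```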
It has been hypothesized that the language of Twitter users is associated with the socioeconomic well-being those users experience in their physical communities (e.g., satisfaction with life in their states of residence). To test the relationship between language use and psychological experience, researchers textually processed tweets to extract mainly sentiment and subject matters (topics) and associated those two quantities with census indicators of well-being. They did so by focusing on geographically coarse-grained communities, the finest-grained of which were U.S. census areas. After briefly introducing those studies and describing the common steps they generally take, we offer a case study taken from our own work on geographically smaller communities: London census areas.
Introduction
Happiness has often been indirectly characterized by readily quantifiable economic indicators such as gross domestic product (GDP). Yet in recent years, policy makers have tried to change that and have introduced indicators that go beyond merely economic considerations. In 2010, the former French president Nicolas Sarkozy sought to include well-being in France's measurement of economic progress (Stratton, 2010). The UK prime minister David Cameron has introduced a series of policies, under the rubric “Big Society,” that seek to make society stronger by getting more people to run their own affairs locally. The idea shared by many governments around the world is to explore new ways of measuring community well-being and, on that basis, to put forward policies that promote quality of life (happiness) rather than material welfare (GDP).
Measuring the well-being of single individuals can be successfully accomplished by administering questionnaires such as the Satisfaction with Life (SWL) test, whose score effectively reflects the extent to which a person feels that his or her life is worthwhile (Diener, Diener, & Diener, 1995). To go beyond single individuals and measure the well-being of communities, one could administer SWL tests to community residents. But that would be costly and is thus done on limited population samples and once per year at best.
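The studies introduced above associate aggregated tweet sentiment with census indicators such as SWL. The following is only a hedged sketch of that association step – the area names, sentiment scores, and SWL values are invented, and the published pipelines are considerably more involved – showing the general shape of the correlation analysis.

```python
# A hedged sketch of the association step: aggregate tweet sentiment per area
# and correlate it with a census well-being indicator such as mean SWL score.
# The areas, sentiment scores, and SWL values below are made up for illustration.
from statistics import mean
from scipy.stats import pearsonr

tweet_sentiment = {                       # hypothetical per-tweet sentiment, grouped by area
    "Area-01": [0.31, 0.12, 0.45, 0.05],
    "Area-02": [-0.20, 0.02, -0.11, 0.08],
    "Area-03": [0.51, 0.40, 0.33, 0.29],
    "Area-04": [0.10, -0.05, 0.22, 0.18],
}
census_swl = {"Area-01": 7.4, "Area-02": 6.1, "Area-03": 7.9, "Area-04": 7.0}  # hypothetical

areas = sorted(tweet_sentiment)
avg_sentiment = [mean(tweet_sentiment[a]) for a in areas]
swl_scores = [census_swl[a] for a in areas]

r, p_value = pearsonr(avg_sentiment, swl_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```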
By Bella Robinson, Robert Power, and Mark Cameron, Commonwealth Scientific and Industrial Research Organisation
Twitter is a new data channel for emergency managers: a source of public information for situational awareness and a means of engaging with the community during disaster response and recovery activities. Twitter has been used successfully to identify emergency events, obtain crowdsourced information as an event unfolds, provide up-to-date information to the affected community from authoritative agencies, and conduct resource planning.
Introduction
Motivation
Natural disasters have increased in severity and frequency in recent years. According to Guha-Sapir et al. (2011), in 2010, 385 natural disasters killed over 297,000 people worldwide, impacted 217 million human lives, and cost the global economy an estimated US$123.9 billion. There are numerous examples from around the world: the 2004 Indian Ocean earthquake and tsunami; the more recent 2011 Tōhoku earthquake and tsunami, which damaged the Fukushima nuclear power station; hurricanes Katrina and Sandy in 2005 and 2012 respectively; the 2010 China floods, which caused widespread devastation; and Victoria's 2009 “Black Saturday” bushfires in Australia, killing 173 people and having an estimated A$2.9 billion in total losses (Stephenson, Handmer, & Haywood, 2012).
With urban development occurring on coastlines and spreading into rural areas, houses and supporting infrastructure are expanding into high-risk regions. The growing world population is moving into areas progressively more prone to natural disasters and unpredictable weather events. These events have been increasing in frequency and severity in recent years (Hawkins et al., 2012).
It has been recognized that information published by the general public on social media is relevant to emergency managers and that social media is a useful means of providing information to communities that may be impacted by emergency events (Lindsay, 2011; Anderson, 2012). To prepare and respond to such emergency situations effectively, it is critical that emergency managers have relevant and reliable information. For example, bushfire management is typically a regional government responsibility, and each jurisdiction has its own agency that takes the lead in coordinating community preparedness and responding to bushfires when they occur.
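One simple ingredient of such systems is filtering public posts for event-related terms. The sketch below is purely illustrative – the tweets and keyword list are invented – and a real system would read from the streaming API and also weigh location and source credibility.

```python
# A hedged sketch: flag tweets that may be relevant to a bushfire event using
# simple keyword matching. The tweets below are invented; a real system would
# read from the streaming API and also filter by location and credibility.

BUSHFIRE_KEYWORDS = {"bushfire", "grassfire", "smoke", "evacuate", "firefront", "#vicfires"}

def is_relevant(tweet_text):
    words = {w.strip(".,!?").lower() for w in tweet_text.split()}
    return bool(words & BUSHFIRE_KEYWORDS)

sample_tweets = [
    "Heavy smoke over the ridge, road closed near town #vicfires",
    "Great coffee this morning at the market",
    "Police asking residents to evacuate the valley now",
]

for tweet in sample_tweets:
    if is_relevant(tweet):
        print("RELEVANT:", tweet)
```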
Twitter is a social network with over 250 million active users who collectively generate more than 500 million tweets each day. In social sciences research, Twitter has been the focus of extensive study, largely owing to its openness in sharing its public data. Twitter exposes an extensive set of application programming interfaces (APIs) that can be used to collect a wealth of social data. In this chapter, we introduce these APIs and discuss how they can be used to conduct social sciences research. We also outline some issues that arise when using these APIs, as well as some strategies for collecting datasets that can give insight into a particular event.
Introduction
Twitter is a rich data source that provides several forms of information generated through the interaction of its users. These data can be harnessed to accomplish a variety of personalization and prediction tasks. Recently, Twitter data have been used to predict things as diverse as election results (Tumasjan et al., 2010; cf. Chapter 2) or the location of earthquakes (Sakaki et al., 2010; cf. Chapter 6). Twitter currently has over 250 million active users who collectively generate more than 500 million tweets each day. This creates a unique opportunity to conduct large-scale studies on user behavior. An important step before conducting such studies is the identification and collection of data relevant to the problem.
Twitter is an online social networking platform where registered users can create connections and share messages with other users. Messaging on Twitter is distinctive: messages are limited to at most 140 characters and are normally broadcast to all users on Twitter. The platform thus provides an avenue to share content with a large and diverse population using few resources. These interactions generate different kinds of information, which is made accessible to the public via APIs – interfaces to which requests for data can be submitted. In this chapter, we introduce different forms of Twitter data and illustrate the capabilities and restrictions imposed by the API on Twitter data analysis.
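As a hedged sketch of data collection through the REST search endpoint (v1.1, as it existed when this chapter was written; endpoints and parameters may have changed since), one might query for tweets matching a keyword using an OAuth-authenticated HTTP request. The credentials below are placeholders obtained by registering an application with Twitter.

```python
# A hedged sketch of collecting tweets through the REST search API (v1.1, as it
# existed when this chapter was written; the endpoint and parameters may have
# changed since). The credentials are placeholders from a registered application.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    "CONSUMER_KEY", "CONSUMER_SECRET",        # placeholders
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)

def search_tweets(query, count=100):
    """Return a list of tweet dictionaries matching the query."""
    response = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": query, "count": count, "lang": "en"},
        auth=auth,
    )
    response.raise_for_status()
    return response.json().get("statuses", [])

for tweet in search_tweets("#earthquake", count=10):
    print(tweet["user"]["screen_name"], ":", tweet["text"])
```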
The origin of mathematics is usually placed in Greece, in the fifth century B.C., when its two main branches were founded: arithmetic by Pythagoras and geometry by Thales and Anaximander. These developments were, of course, major breakthroughs in the history of this science. However, it does not go far enough back to say that mathematics has its source in antiquity. Its roots go deeper into the past, to an important period that laid the groundwork for the Ancient Greeks and that we might call the “prehistory” of mathematics. People did not wait until the fifth century to tackle mathematical problems – especially the concrete problems they faced every day.
ACCOUNTANTS AND LAND SURVEYORS
A tablet found in Mesopotamia and dating back to 2500 B.C. carries one of the oldest traces of mathematical activity. It records the solution to a mathematical problem that can be stated as follows: if a barn holds 1,152,000 measures of grain, and you have a barn's worth of grain, how many people can you give seven measures of grain to? Unsurprisingly, the result reached is 164,571 – a number obtained by dividing 1,152,000 by seven – which proves that Mesopotamian accountants knew how to do division long before arithmetic was born. It is even likely (although it is hard to know anything for certain in that field) that writing was invented in order to keep account books and that, therefore, numbers were invented before letters. Though it may be hard to stomach, we probably owe our whole written culture to a very unglamorous activity: accounting.
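The tablet's arithmetic is easy to verify today (a trivial check, assuming the grain is shared seven measures at a time):

```python
# Checking the tablet's division: 1,152,000 measures shared seven at a time.
people, remainder = divmod(1_152_000, 7)
print(people, remainder)   # 164571 people, with 3 measures left over
```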
Mesopotamian and Egyptian accountants not only knew how to multiply and divide, but had also mastered many other mathematical operations – they were able to solve quadratic equations, for instance. As for land surveyors, they knew how to measure the areas of rectangles, triangles, and circles.
Computation is revolutionizing our world, even the inner world of the 'pure' mathematician. Mathematical methods - especially the notion of proof - that have their roots in classical antiquity have seen a radical transformation since the 1970s, as successive advances have challenged the priority of reason over computation. Like many revolutions, this one comes from within. Computation, calculation, algorithms - all have played an important role in mathematical progress from the beginning - but behind the scenes, their contribution was obscured in the enduring mathematical literature. To understand the future of mathematics, this fascinating book returns to its past, tracing the hidden history that follows the thread of computation. Along the way it invites us to reconsider the dialog between mathematics and the natural sciences, as well as the relationship between mathematics and computer science. It also sheds new light on philosophical concepts, such as the notions of analytic and synthetic judgment. Finally, it brings us to the brink of the new age, in which machine intelligence offers new ways of solving mathematical problems previously inaccessible. This book is the 2007 winner of the Grand Prix de Philosophie de l'Académie Française.
The multidisciplinary nature of learning-games development is key to successful projects. In this book, field leaders in serious games and professionals in entertainment games share practical guidelines and lessons from their own experiences researching and developing learning games. This volume includes:
- The key elements of design and development that require particular attention from multiple disciplines to ensure success
- An overview of successful models and methods, and the trade-offs made throughout the process, to guide development
- Cohesive, multidisciplinary views of the issues that arise and of the techniques applied in order to produce effective learning games, grounded in specific experiences, community consensus, and analysis of successful learning games that have already been released
- The stories behind the games, to illustrate how final design and development decisions were reached
Aimed at professionals and academics interested in developing and researching learning games, it offers a comprehensive picture of the state of the art.
More than 500 million tweets are shared on Twitter each day, each devoting 140 characters to anything from announcements of dinner plans to calls for revolutions. Given its sheer size and the fact that most of its content is public, it is not surprising that Twitter has been extensively used to study human behavior on a global scale.
This book uses thematically grouped case studies to show how Twitter data can be used to analyze phenomena ranging from the number of people infected by the flu, to national elections, to tomorrow's stock prices. The idea for the book grew out of a three-hour tutorial on “Twitter and the Real World” given in October 2013 at the Conference on Information and Knowledge Management. Expanding on the topics covered in that tutorial, the book provides a wider thematic scope of research and the most recent scientific work.
All chapters are written by leading domain experts and take the reader to the forefront of the emerging new field of computational social science, using topical applications to illustrate the possibilities, advantages, and limitations of large, semi-structured social media data to gain insights into human behavior and social interaction. Although most of the authors are computer scientists, the book is intended for readers who have not been exposed to formal training on how to analyze “big data.” To this end, unnecessary implementation details are avoided. Chapter 1, “Analyzing Twitter Data,” introduces readers to the tools and skills used in these studies and gives pointers on how to take the first steps as a novice researcher.
The opening chapter lays the groundwork for the book by surveying the opportunities and challenges of a “Twitter socioscope.” The chapters that follow are written to be stand-alone case studies that can be reordered according to the reader's needs and preferences.
Twitter will undoubtedly evolve and its usage will change, but even if it comes to be replaced by “the next big thing,” the detailed time-stamped records of hundreds of millions of global users will continue to be one of the most valuable sources of data on human behavior ever assembled, and these case studies will remain useful as an introduction to the methods, research opportunities, and challenges that these data present.
Once it was established that automated proof was not keeping all its promises, mathematicians conceived a less ambitious project, namely that of proof checking. When one uses an automated theorem proving program, one enters a proposition and the program attempts to construct a proof of the proposition. On the other hand, when one uses a proof-checking program, one enters both a proposition and a presumed proof of it and the program merely verifies the proof, checking it for correctness.
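To illustrate what “merely verifying” a proof means, here is a toy sketch – not one of the real proof-checking systems discussed in this chapter – for a Hilbert-style presentation in which a proof is a list of formulas, each of which must be a premise or follow from two earlier lines by modus ponens. The checker only verifies; it never searches for a proof.

```python
# A toy sketch of proof checking (not one of the real systems discussed here):
# formulas are strings, implications are tuples ("->", A, B), a proof is a list
# of formulas, and each line must be a premise or follow from earlier lines by
# modus ponens. The checker verifies the proof; it never searches for one.

def follows_by_modus_ponens(formula, earlier):
    """Is there an earlier A together with an earlier A -> formula?"""
    return any(
        imp == ("->", a, formula)
        for a in earlier
        for imp in earlier
        if isinstance(imp, tuple) and imp[0] == "->"
    )

def check_proof(premises, proof, goal):
    earlier = []
    for line in proof:
        if line not in premises and not follows_by_modus_ponens(line, earlier):
            return False          # a line that is neither a premise nor a consequence
        earlier.append(line)
    return bool(proof) and proof[-1] == goal

# Premises: P, P -> Q, Q -> R.  Claimed proof of R: P, P -> Q, Q, Q -> R, R.
premises = ["P", ("->", "P", "Q"), ("->", "Q", "R")]
proof = ["P", ("->", "P", "Q"), "Q", ("->", "Q", "R"), "R"]
print(check_proof(premises, proof, "R"))   # True
```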
Although proof checking seems less ambitious than automated proof, it has been applied to more complex demonstrations and especially to real mathematical proofs. Thus, a large share of the first-year undergraduate mathematics syllabus has been checked by many of these programs. The second stage in this project was initiated in the nineties; it consisted in determining, with the benefit of hindsight, which parts of these proofs could be entrusted to the care of software and which ones required human intervention. The point of view of mathematicians of the nineties differed from that of the pioneers of automated proof: as we can see, the idea of a competition between man and machine had been replaced with that of cooperation.
One may wonder whether it really is useful to check mathematical proofs for correctness. The answer is yes, first, because even the most thorough mathematicians sometimes make little mistakes. For example, a proof-checking program revealed a mistake in one of Newton's demonstrations about how the motion of planets is subject to the gravitational attraction of the sun. While this mistake is easily corrected and does not challenge Newton's theories in the slightest, it does confirm that mathematical publications often contain errors. More seriously, throughout history, many theorems have been given myriad false proofs: the axiom of parallels, Fermat's last theorem (according to which, if n ≥ 3, there exist no positive integers x, y, and z such that xⁿ + yⁿ = zⁿ) and the four-color theorem (of which more in Chapter 12) were all allegedly “solved” by crank amateurs, but also by reputable mathematicians, sometimes even by great ones.
At the beginning of the twentieth century, two theories about computation developed almost simultaneously: the theory of computability and the theory of constructivity. In a perfect world, the schools of thought that produced these theories would have recognized how their work was connected and cooperated in a climate of mutual respect. Sadly, this was not the case, and the theories emerged in confusion and incomprehension. Not until the middle of the century would the links between computability and constructivity finally be understood.
To this day, traces of the old strife remain in the sometimes excessively ideological way in which these theories – both of which deserve a dispassionate presentation – are expounded. After our call for unity, we must nevertheless concede that, although both schools of thought developed concepts that turned out to be rather similar, the problems they set out to solve were quite different. Therefore, we will tackle these theories separately. Let's start with the notion of computability.
THE EMERGENCE OF NEW ALGORITHMS
As they each in turn attempted to clarify inference rules, Frege, Russell, and Hilbert contributed to the elaboration of predicate logic. Predicate logic, in keeping with the axiomatic conception of mathematics, consists of inference rules that enable proofs to be built, step by step, from axiom to theorem, without providing any scope for computation. Ignoring Euclid's algorithm, medieval arithmetic algorithms, and calculus, predicate logic signaled a return to the axiomatic vision passed down from the Greeks, which turns its back on computation.
In predicate logic, as in the axiomatic conception of mathematics, a problem is formulated as a proposition, and solving the problem amounts to proving that the proposition is true (or that it is false). What is new with predicate logic is that these propositions are no longer expressed in a natural language – say, English – but in a codified language made up of relational predicate symbols, coordinating conjunctions, variables, and quantifiers.
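For instance (an illustrative example, not one drawn from the book), the proposition “above every number there is a prime” might be written

∀x ∃y (Greater(y, x) ∧ Prime(y))

where Greater and Prime are predicate symbols, x and y are variables, ∧ (“and”) is a connective, and ∀ (“for all”) and ∃ (“there exists”) are quantifiers.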