This is one of the most studied problems in computer science, yet it is only touched upon in the basic algorithm and data structure courses offered at the undergraduate level. This chapter takes a step forward by first introducing universal hashing, which addresses some of the drawbacks of basic hash functions, and then moves on to describe several advanced approaches to hashing, such as perfect hashing, Cuckoo hashing, minimal ordered perfect hashing, and finally Bloom filters. The theoretical analysis and algorithm descriptions are enriched with figures and pseudocode, plus several running examples that guide the reader toward a better understanding of these important and advanced algorithmic concepts and tools.
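To give a flavour of one of the structures mentioned above, the following is a minimal Bloom filter sketch in Python; the array size m, the number of hash functions k, and the salted SHA-256 hashing are illustrative assumptions, not the chapter's own construction.

import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k salted hashes over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k          # illustrative parameter choices
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k array positions by hashing the item with k different salts.
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True may be a false positive.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("cuckoo")
print(bf.might_contain("cuckoo"), bf.might_contain("perfect"))  # True, (almost surely) False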
This chapter attacks an easy-to-state problem: selecting a subsequence of items uniformly at random from a given input sequence. This problem is the backbone of many randomized algorithms, and it admits solutions that are algorithmically challenging to design and analyze. In particular, this chapter deals with the two cases in which the size of the input sequence is either known or unknown to the algorithm, and it also addresses the cases in which the sequence length is smaller or larger than the internal memory of the computer. This will allow us to introduce another model of computation, the streaming model.
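As a concrete illustration of the unknown-length case, here is a minimal one-pass reservoir-sampling sketch in Python (classic Algorithm R); the function name and the use of random.randrange are illustrative assumptions, not necessarily the chapter's presentation.

import random

def reservoir_sample(stream, k):
    """Return k items chosen uniformly at random from an iterable of unknown length,
    using a single pass and O(k) memory."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir with the first k items
        else:
            j = random.randrange(i + 1)  # uniform index in 0..i
            if j < k:
                reservoir[j] = item      # replace an entry with probability k/(i+1)
    return reservoir

print(reservoir_sample(range(10**6), 5))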
Linear Algebra is a branch of mathematics that deals with linear equations, systems of linear equations, and their representation as functions between algebraic structures called vector spaces. Linear Algebra is an essential tool in many disciplines, such as engineering, statistics, and computer science, and is also central within mathematics, in areas such as analysis and geometry. Much like the definition of a field, a vector space is defined through a list of axioms, motivated by concrete observations in familiar spaces such as the standard two-dimensional plane and three-dimensional space. We begin this chapter by taking a closer look at real n-dimensional spaces and vectors, and then move on to discussing abstract vector spaces and linear maps. Our experience with sets and functions, developed in previous chapters, as well as certain proof techniques, will prove useful in our discussion.
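For instance, writing vector addition and scalar multiplication in the usual way for a vector space $V$ over $\mathbb{R}$, a few representative axioms read as follows (an illustrative excerpt, not the chapter's full list):

\begin{align*}
&\text{(commutativity)} && u + v = v + u && \text{for all } u, v \in V,\\
&\text{(additive identity)} && \text{there is } 0 \in V \text{ with } v + 0 = v && \text{for all } v \in V,\\
&\text{(distributivity)} && a(u + v) = au + av && \text{for all } a \in \mathbb{R} \text{ and } u, v \in V.
\end{align*}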
Answer set programming is a declarative logic programming paradigm geared towards solving difficult combinatorial search problems. While different logic programs can encode the same problem, their performance may vary significantly, and it is not always easy to identify which version of the program performs best. We present the system predictor (and its algorithmic backend) for estimating the grounding size of programs, a metric that can influence the performance of a system processing a program. We evaluate the impact of predictor when used as a guide for the rewritings produced by the answer set programming rewriting tools projector and lpopt. The results demonstrate the potential of this approach.
The digitalization of business organizations and of society in general has opened up the possibility of researching behaviours using large volumes of digital traces and electronic texts that capture behaviours and attitudes in a broad range of natural settings. How is the availability of such data changing the nature of qualitative, specifically interpretive, research and are computational approaches becoming the essence of such research? This chapter briefly examines this issue by considering the potential impacts of digital data on key themes associated with research, those of induction, deduction and meaning. It highlights some of the ‘nascent myths’ associated with the digitalization of qualitative research. The chapter concludes that while the changes in the nature of data present exciting opportunities for qualitative, interpretive researchers to engage with computational approaches in the form of mixed-methods studies, it is not believed they will become the sine qua non of qualitative information systems research in the foreseeable future.
We begin our journey by taking a closer look at some familiar notions, such as quadratic equations and inequalities. Rather than using mechanical computations and algorithms, we focus on more fundamental questions: Where does the quadratic formula come from, and how can we prove it? What are the rules that can be used with inequalities, and how can we justify them? These questions will lead us to examine a few proofs and mathematical arguments. We highlight some of the main features of a mathematical proof, and discuss the process of constructing mathematical proofs. We also review informally the types of numbers often used in mathematics and introduce relevant terminology.
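To make the first of these questions concrete, the quadratic formula can be recovered by completing the square; the short derivation below (assuming $a \neq 0$ and real coefficients with $b^2 - 4ac \geq 0$) is a sketch of the kind of argument the chapter develops, not its exact presentation.

\begin{align*}
ax^2 + bx + c = 0
&\iff x^2 + \frac{b}{a}x = -\frac{c}{a}\\
&\iff \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}\\
&\iff x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\end{align*}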
In this chapter, we introduce and discuss fundamental notions in mathematics: sets, functions, and axioms. Sets and functions show up everywhere in mathematics and science and are common tools used in mathematical arguments. Moreover, proving statements about sets and functions can further develop our proof-writing and communication skills. We also demonstrate, in Section 2.3, how axioms are used in mathematics as initial assumptions, from which other statements can be derived.
This chapter introduces the concept of ‘datafication momentum’, which is the tendency for datafication systems to receive more influence from social systems in their early stages and exert more influence on social systems in their mature stages. Due to datafication momentum, datafication systems are prone to be inscribed with the dark side of social systems in their earlier stages, and then amplify this dark side in their later stages (e.g., leading to outcomes like data-driven discrimination). The chapter calls on qualitative researchers to combat this risk with a ‘qualitative researchers as design thinkers’ mindset. In particular, it proposes ‘design forensics’ as a practice in which qualitative researchers integrate design-thinking principles with design ethnography to identify the risk of datafication and shape it to a more desirable end. The chapter introduces three design principles – empathetic datafication, datafication totality and reflective criticism – and discusses their implications for research and practice.
In this chapter, we take a step back to discuss, more generally, the language of mathematics and some proof techniques and strategies. In the previous chapters, we have seen numerous mathematical notions, theorems, proofs, and examples. As you have probably noticed, communicating mathematical arguments and ideas in a coherent and precise way is at the core of the subject.
Qualitative research provides an excellent opportunity to study digitalization. The purpose of this chapter is to explore the digitalization of government services by studying the longitudinal development of data-sharing practices across different parts of government in the United Kingdom. This chapter reports on a unique, qualitative, interpretive field study based on the author’s role as a participant observer and his analysis of the discourse and contents of the various documents presented in relation to both the creation and running of data-sharing practices in the United Kingdom. The chapter finds that, despite government addressing many of the concerns identified in the literature on data sharing, practical and perceptual issues remain – issues that tell us much about the state of digitalization of government services.