In this chapter we briefly discuss the goals of cryptography (Section 1.1). In particular, we discuss the basic problems of secure encryption, digital signatures, and fault-tolerant protocols. These problems lead to the notions of pseudorandom generators and zero-knowledge proofs, which are discussed as well.
Our approach to cryptography is based on computational complexity. Hence, this introductory chapter also contains a section presenting the computational models used throughout the book (Section 1.3), as well as a section presenting some elementary background from probability theory that is used extensively in the book (Section 1.2).
Finally, we motivate the rigorous approach employed throughout this book and discuss some of its aspects (Section 1.4).
Teaching Tip. Parts of Section 1.4 may be more suitable for the last lecture (i.e., as part of the concluding remarks) than for the first one (i.e., as part of the introductory remarks). This refers specifically to Sections 1.4.2 and 1.4.3.
Cryptography: Main Topics
Historically, the term “cryptography” has been associated with the problem of designing and analyzing encryption schemes (i.e., schemes that provide secret communication over insecure communication media). However, since the 1970s, problems such as constructing unforgeable digital signatures and designing fault-tolerant protocols have also been considered as falling within the domain of cryptography. In fact, cryptography can be viewed as concerned with the design of any system that needs to withstand malicious attempts to abuse it.
This paper presents a twelve-month evaluation of a Web-based CALL Project at London Guildhall University, highlighting its rationale and objectives within the design process, describing the adopted methodology and its findings, and shedding further light on students’ attitudes regarding CALL design, multimedia delivery and language learning practices. The evaluated CALL environment, stemming from previously identified user requirements, combines hypertext navigation with Java-written interactive CALL exercises and provides a built-in authoring mode for increased adaptability. Techniques used included peer evaluation and discussions in the formative stages of the design, as well as user walkthroughs, questionnaires and workshops in the summative phase of the process. Data related to requirements and feedback are presented, emphasising the user-centred approach and its integration within the design process itself. Finally, changes brought about by the evaluation are shown to illustrate the influence and impact the research undertaken has had on the original concepts and the design.
In this chapter, we present examples showing you how to apply MATLAB to problems in several different disciplines. Each example is presented as a MATLAB M-book. These M-books are illustrations of the kinds of polished, integrated, interactive documents that you can create with MATLAB, as augmented by the Word interface. The M-books are:
Illuminating a Room
Mortgage Payments
Monte Carlo Simulation
Population Dynamics
Linear Economic Models
Linear Programming
The 360° Pendulum
Numerical Solution of the Heat Equation
A Model of Traffic Flow
We have not explained all the MATLAB commands that we use; you can learn about the new commands from the online help. SIMULINK is used in A Model of Traffic Flow and as an optional accessory in Population Dynamics and Numerical Solution of the Heat Equation. Running the M-book on Linear Programming also requires an M-file found (in slightly different forms) in the SIMULINK and Optimization toolboxes.
The M-books require different levels of mathematical background and expertise in other subjects. Illuminating a Room, Mortgage Payments, and Population Dynamics use only high school mathematics. Monte Carlo Simulation uses some probability and statistics; Linear Economic Models and Linear Programming, some linear algebra; The 360° Pendulum, some ordinary differential equations; Numerical Solution of the Heat Equation, some partial differential equations; and A Model of Traffic Flow, differential equations, linear algebra, and familiarity with the function e^z for z a complex number.
Traditionally, logic has been regarded as the science of correct thinking or of making valid inferences. The former characterization of logic has strong psychological overtones—thinking is a psychological phenomenon—and few writers today think that logic can be a discipline that can successfully teach its students how to think, let alone how to think correctly. Furthermore, it is not obvious what “correct” thinking is. One can think “politically correct” thoughts without engaging in logic at all. We shall, at least for the moment, be well advised to leave psychology to one side, and focus on the latter characterization of logic: the science of making valid inferences.
To make an inference is to perform an act: It is to do something. But logic is not a compendium of exhortations: From “All men are mortal” and “Socrates is a man” do thou infer that Socrates is mortal! To see that this cannot be the case, note that “All men are mortal” has the implication that if Charles is a man, he is mortal, if John is a man, he is mortal, and so on, through the whole list of men, past and present, if not future. Furthermore, it is an implication of “All men are mortal” that if Fido (my dog) is a man, Fido is mortal; if Tabby is a man, Tabby is mortal, etc. And how about inferring “If Jane is a man, Jane is mortal”? As we ordinarily construe the premise, this, too, is a valid inference. We cannot follow the exhortation to perform all valid inferences: There are too many, they are too boring, and that, surely, is not what logic is about.
We form beliefs about the world from evidence and from inferences made from that evidence. Belief, as opposed to knowledge, consists of defeasible information. Belief is what we think is true, and it may or may not be true in the world. Knowledge, on the other hand, is what we are aware of as true, and it is always true in the world.
We make decisions and act according to our beliefs, yet they are not infallible. The inferences we base our beliefs on can be deductive or uncertain, employing any number of inference mechanisms to arrive at our conclusions, for instance, statistical, nonmonotonic, or analogical. We constantly have to modify our set of beliefs as we encounter new information. A new piece of evidence may complement our current beliefs, in which case we can hold on to our original beliefs in addition to this new evidence. However, because some of our beliefs can be derived from uncertain inference mechanisms, it is inevitable that we will at some point encounter some evidence that contradicts what we currently believe. We need a systematic way of reorganizing our beliefs, to deal with the dynamics of maintaining a reasonable belief set in the face of such changes.
The state of our beliefs can be modeled by a logical theory K, a deductively closed set of formulas. If a formula φ is considered accepted in a belief set, it is included in the corresponding theory K; if it is rejected, its negation ¬φ is in K. In general the theory is incomplete.
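In symbols, writing Cn(K) for the set of deductive consequences of K, the possible statuses of a formula φ relative to the belief set are:

    K = Cn(K);    φ is accepted iff φ ∈ K;    φ is rejected iff ¬φ ∈ K;

and since K is in general incomplete, for some φ neither φ ∈ K nor ¬φ ∈ K, in which case the status of φ is undetermined.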
Every morning at 2 A.M., as professors sleep and graduate students arrive to pull all-nighters, my computer diligently makes the rounds of the Websites of all major frontons, downloading the latest schedules and results and then running these files through Dario's parsing programs. After a few months of retrieval we had built a large-enough collection of jai alai data to justify some serious analysis. Our goal was to use all this data to measure the relative abilities of jai alai players and incorporate this information into our Monte Carlo simulation to make customized predictions for each match.
To get this job done, I had to bring another student onto the project, Meena Nagarajan. Meena was a different type of student than Dario. As a married woman with a young child, she realized that there is more to life than computers. She was returning to school to get her master's degree with the express goal of getting a lucrative job with a financial services company associated with Wall Street, as indeed she ultimately did. She realized that building a program-trading system for jai alai was a great way to learn how to build one for trading stocks, and she therefore signed on to work on the project.
Her undergraduate degree back in India was in applied mathematics; thus, she brought to the table an understanding of the meaning and limitations of statistics.
The idea behind evidential probability is a simple one. It consists of two parts: that probabilities should reflect empirical frequencies in the world, and that the probabilities that interest us—the probabilities of specific events—should be determined by everything we know about those events.
The first suggestions along these lines were made by Reichenbach [Reichenbach, 1949]. With regard to probability, Reichenbach was a strict limiting-frequentist: he took probability statements to be statements about the world, and to be statements about the frequency of one kind of event in a sequence of other events. But recognizing that what concerns us in real life is often decisions that bear on specific events—the next roll of the die, the occurrence of a storm tomorrow, the frequency of rain next month—he devised another concept that applied to particular events, that of weight. “We write P(a) = p thus admitting individual propositions inside the probability functor. The number p measures the weight of the individual proposition a. It is understood that the weight of the proposition was determined by means of a suitable reference class, …” [Reichenbach, 1949, p. 409]. Reichenbach appreciated the problem of the reference class: “… we may have reliable statistics concerning a reference class A and likewise reliable statistics for a reference class C, whereas we have insufficient statistics for the reference class A·C. The calculus of probability cannot help in such a case because the probabilities P(A, B) and P(C, B) do not determine the probability P(A · C, B)” [Reichenbach, 1949, p. 375]. The best the logician can do is to recommend gathering more data.
In this chapter we discuss pseudorandom generators. Loosely speaking, these are efficient deterministic programs that expand short, randomly selected seeds into much longer “pseudorandom” bit sequences (see illustration in Figure 3.1). Pseudorandom sequences are defined as computationally indistinguishable from truly random sequences by efficient algorithms. Hence the notion of computational indistinguishability (i.e., indistinguishability by efficient procedures) plays a pivotal role in our discussion. Furthermore, the notion of computational indistinguishability plays a key role also in subsequent chapters, in particular in the discussions of secure encryption, zero-knowledge proofs, and cryptographic protocols.
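In notation (anticipating the formal treatment in Section 3.2), and stated roughly: two probability ensembles {X_n} and {Y_n} are computationally indistinguishable if for every probabilistic polynomial-time distinguisher D and every polynomial p, for all sufficiently large n,

    | Pr[D(X_n) = 1] − Pr[D(Y_n) = 1] | < 1/p(n).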
The theory of pseudorandomness is also applied to functions, resulting in the notion of pseudorandom functions, which is a useful tool for many cryptographic applications.
In addition to definitions of pseudorandom distributions, pseudorandom generators, and pseudorandom functions, this chapter contains constructions of pseudorandom generators (and pseudorandom functions) based on various types of one-way functions. In particular, very simple and efficient pseudorandom generators are constructed based on the existence of one-way permutations. We highlight the hybrid technique, which plays a central role in many of the proofs. (For the first use and further discussion of this technique, see Section 3.2.3.)
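To make the flavor of such constructions concrete, here is a toy sketch in MATLAB of the classic Blum–Micali generator, which iterates a candidate one-way permutation and outputs one hard-core bit per step. The parameters below (the prime p = 23, generator g = 5, seed x = 7, and the choice of hard-core bit) are illustrative assumptions only; at this size the scheme has no security whatsoever.

    % Blum-Micali-style generator (toy parameters, illustration only).
    % Candidate one-way permutation: x -> g^x mod p on {1, ..., p-1}.
    p = 23;                       % small prime (far too small for security)
    g = 5;                        % a generator of the multiplicative group mod 23
    x = 7;                        % randomly chosen seed, 1 <= x <= p-1
    n = 16;                       % number of pseudorandom bits to output
    bits = zeros(1, n);
    for i = 1:n
        x = mod(g^x, p);          % iterate the permutation (exact at this size)
        bits(i) = x > (p-1)/2;    % one hard-core bit per iteration
    end
    disp(bits)

The seed-stretching structure is the point: a seed of a few bits determines an arbitrarily long output sequence. Roughly speaking, the hybrid technique then reduces distinguishing this output from a truly random string to predicting the hard-core bit, which the one-wayness assumption rules out.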
Organization. Basic discussions, definitions, and constructions of pseudorandom generators appear in Sections 3.1–3.4: We start with a motivating discussion (Section 3.1), proceed with a general definition of computational indistinguishability (Section 3.2), next present and discuss definitions of pseudorandom generators (Section 3.3), and finally present some simple constructions (Section 3.4). More general constructions are discussed in Section 3.5.
We consider a group of puppies, take what we know about that group as a premise, and infer, as a conclusion, something about the population of all puppies. Such an inference is clearly risky and invalid. It is nevertheless the sort of inference we must make and do make. Some such inferences are more cogent, more rational than others. Our business as logicians is to find standards that will sort them out.
Statistical inference includes inference from a sample to the population from which it comes. The population may be actual, as it is in public opinion polls, or hypothetical, as it is in testing an oddly weighted die (the population is then taken to be the hypothetical population of possible tosses or possible sequences of tosses of the die). Statistical inference is a paradigm example of uncertain inference.
Statistical inference is also often taken to include the uncertain inference we make from a population to a sample, as when we infer from the fairness of a coin that roughly half of the next thousand coin tosses we make will yield heads, a conclusion that might be false. Note that this is not probabilistic inference: the inference from the same premises to the conclusion that the probability is high that roughly half of the next thousand tosses will yield heads is deductive and (given the premises) not uncertain at all.
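For readers who want to check this numerically, a minimal MATLAB sketch follows; the reading of “roughly half” as between 450 and 550 heads is our own arbitrary choice.

    % Estimate the probability that 1000 fair-coin tosses yield
    % "roughly half" heads, here taken (arbitrarily) as 450 to 550.
    trials = 10000;
    heads = sum(rand(1000, trials) < 0.5);        % heads count in each trial
    p_est = mean(heads >= 450 & heads <= 550);    % fraction of trials in range
    fprintf('Estimated probability: %.4f\n', p_est)

The normal approximation to the binomial distribution (mean 500, standard deviation about 15.8) puts this probability near 0.998, and the simulation estimate should fall close to that value.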
The inference from a statistical premise about a population to a nonprobabilistic conclusion about part of that population is called direct inference. The inference from a premise about part of a population to the properties of the population as a whole is called inverse inference.
Many years passed. I received my doctorate in Computer Science from the University of Illinois with a thesis in computational geometry and found myself a faculty position in Computer Science at the State University of New York, Stony Brook.
Jai alai would have to wait awhile. When you are an Assistant Professor, your efforts revolve around getting tenure. Publish or perish isn't too far from the truth, but what you publish makes a big difference. I wouldn't have gotten tenure even if I had published 100 articles on jai alai, because this work wouldn't (and shouldn't) carry much respect with the powers that be in academic computer science.
But 6 years later I found myself a tenured Associate Professor of Computer Science. Tenure gives you the freedom to work on whatever you want. You have to teach your classes, and you have to do your committee work, but otherwise what you do with your time is pretty much up to you. If I wanted to devote a little effort to an interesting mathematical modeling problem, well, nobody was going to stop me.
By now my parents had retired to Florida, and each winter my brother Len and I would pay them a visit. Each visit included an obligatory trip to watch jai alai, and so on January 17, 1995, we spent the evening at the Dania fronton. On arrival, our first action was, as always, to buy a Pepe's Green Card. Our second step was to convince ourselves of its infallibility.
In this chapter we describe more of MATLAB's graphics commands and the most common ways of manipulating and customizing them. You can get a list of MATLAB graphics commands by typing help graphics (for general graphics commands), help graph2d (for two-dimensional graphing), help graph3d (for three-dimensional graphing), or help specgraph (for specialized graphing commands).
We have already discussed the commands plot and ezplot in Chapter 2. We will begin this chapter by discussing more uses of these commands, as well as the other most commonly used plotting commands in two and three dimensions. Then we will discuss some techniques for customizing and manipulating graphics.
Two-Dimensional Plots
Often one wants to draw a curve in the x-y plane, but with y not given explicitly as a function of x. There are two main techniques for plotting such curves: parametric plotting and contour or implicit plotting. We discuss these in turn in the next two subsections.
Parametric Plots
Sometimes x and y are both given as functions of some parameter. For example, the circle of radius 1 centered at (0,0) can be expressed in parametric form as x = cos(2πt), y = sin(2πt) where t runs from 0 to 1. Though y is not expressed as a function of x, you can easily graph this curve with plot, as follows:
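    t = 0:0.01:1;                      % closely spaced parameter values from 0 to 1
    plot(cos(2*pi*t), sin(2*pi*t))     % x = cos(2*pi*t), y = sin(2*pi*t)
    axis equal                         % equal scaling on both axes

The step size 0.01 is a typical choice that makes the curve look smooth. The command axis equal forces the same scale on both axes; without it the circle is drawn as an ellipse in the default rectangular window.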
EUROCALL continued to operate from the Language Institute at the University of Hull. The former CTI Centre for Modern Languages became part of a much larger Subject Centre for Languages, Linguistics and Area Studies, in turn part of the Learning and Teaching Support Network (LTSN) funded by the UK Higher Education funding bodies. The team at Hull is responsible for aspects of the Centre’s activities relating to communication and information technologies (C&IT), and consists of June Thompson, Fred Riley and Julie Venner, who serves as EUROCALL membership secretary. We were pleased to be joined in May 2000 by Janet Bartle, who is the Academic Co-ordinator, C&IT, for the Subject Centre.
The theory of Induction is the despair of philosophy—and yet all our activities are based upon it.
Alfred North Whitehead: Science and the Modern World, p. 35.
Introduction
Ever since Adam and Eve ate from the tree of knowledge, and thereby earned exile from Paradise, human beings have had to rely on their knowledge of the world to survive and prosper. And whether or not ignorance was bliss in Paradise, it is rarely the case that ignorance promotes happiness in the more familiar world of our experience—a world of grumbling bellies, persistent tax collectors, and successful funeral homes. It is no cause for wonder, then, that we prize knowledge so highly, especially knowledge about the world. Nor should it be cause for surprise that philosophers have despaired and do despair over the theory of induction: For it is through inductive inferences, inferences that are uncertain, that we come to possess knowledge about the world we experience, and the lamentable fact is that we are far from consensus concerning the nature of induction.
But despair is hardly a fruitful state of mind, and, fortunately, the efforts over the past five hundred years or so of distinguished people working on the problems of induction have come to far more than nought (albeit far less than the success for which they strove). In this century, the debate concerning induction has clarified the central issues and resulted in the refinement of various approaches to treating the issues. To echo Brian Skyrms, a writer on the subject [Skyrms, 1966], contemporary inductive logicians are by no means wallowing in a sea of total ignorance, and continued work promises to move us further forward.
The opposing football captains watch as the coin arcs, glinting, through the air before landing on the turf at the referee's feet. Heads. Whatchamacallit U. kicks off.
For thousands of years, people have depended on devices that yield outcomes beyond the control of human agency; it has been a way of consulting the gods, or the fates. For us, the point of tossing a coin to determine who kicks off is that the outcome is a matter of chance. The probability of heads is one-half: The coin could, with equal conformity to the laws of the physical world, have landed tails.
Typically, for probability, matters are not as simple as they seem. In ancient times the outcome of chance events—the toss of a knucklebone or a coin—was often taken to reveal the will of the gods. Even today many people take the outcome of a chance event, at least if they have wagered on it, to be a matter of “luck,” where luck plays the role of the old gods, and can be cajoled, sacrificed to, encouraged with crossed fingers and rabbit's feet. In most cases, however, chance events are understood to be outside of human control, and to yield the outcomes they yield in accord with the laws of probability.
The early probabilists (Pascal, Fermat, the Bernoullis, and Laplace) believed in a deterministic world in which chance events did not really exist. On that view, our belief in any chance event (say, the outcome of the toss of a six-sided die) is less than certain only as a consequence of the limitations of our knowledge.