We are presently witnessing mergers and takeovers of unprecedented speed and extent between companies once thought to have national identities, or at least clearly identifiable lines of products or services. On the day this paragraph was written, the British Vodafone AirTouch announced an Internet alliance with the French conglomerate Vivendi. The deal was conditional on Vodafone's hostile takeover of Germany's Mannesmann and, in the end, did establish a branded multi-access portal in Europe. About a week later, the takeover of Mannesmann was official – the biggest ever, and friendly. MCI's attempted takeover of Sprint is another example of a strategically advantageous combination of different information technologies. January 2000 saw CNN, NTV, and the Deutsche Handelsblatt (a direct competitor to the Financial Times) launch a multimedia product for stock market news that is accessible via television, printed newspapers, and the World Wide Web. And so it goes. Although many differing views are held regarding the causes and consequences of these phenomena, we would probably all agree that they reflect a certain shift of emphasis from a production-based economy to one grounded in the processing, marketing, and accessing of information. Whether the products themselves are merely “information” or systems for managing and processing vast amounts of data, information systems are seen as a crucial strategic means for organizing, improving, and maintaining more traditional production cycles.
In Protocol 2.5 (p. 21) we saw the twofold use of public-key cryptography (PKC) as a means for
making messages unintelligible before they are transmitted on an untrusted communication line; and
ensuring the authenticity of messages, or digital documents in general, by digitally signing them.
Protocol 2.5 exemplifies the dual role of private keys. They may be used to decipher a message that was encrypted with an agent's public key; on the other hand, they may be used to sign messages, and the signature can then be verified with the corresponding public key. Implementations usually employ different PKCs, or at least different parameters, for each of these functional roles of private keys. (See the exercises that follow for possible reasons.) It is beyond the scope of this text to discuss more advanced types of realizable digital signature systems, but we mention them in passing.
Protocol 4.1 (Fail-Stop Digital Signature)
A digital signature system has this property if a signer can prove that a message apparently signed with her key was in fact produced by a fraudulent attack and is therefore a fake.
Protocol 4.2 (Proxy Digital Signature)
A digital signature system has this property if a signer can give his authority to sign a message to someone else without revealing his secret signature key.
Protocol 4.3 (Designated-Confirmer Digital Signatures)
These are protocols that allow a signer to designate a confirmer, possibly herself, whose cooperation is necessary for the verification of digital signatures. This prevents the exact copying of digital signatures.
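Returning to the dual role of private keys discussed before Protocol 4.1, here is a minimal sketch in Python using the third-party cryptography package (an assumed dependency, not part of the text). For simplicity, the sketch reuses a single RSA key pair for both roles, whereas, as noted above, implementations usually employ different key material or at least different parameters for decryption and for signing.

# Minimal sketch: one RSA key pair used in its two roles.
# Assumes the third-party "cryptography" package; real systems would use
# separate keys/parameters for encryption and signing.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Role 1 (confidentiality): anyone may encrypt with the public key;
# only the holder of the private key can decrypt.
ciphertext = public_key.encrypt(b"session key for a symmetric cipher", oaep)
recovered = private_key.decrypt(ciphertext, oaep)

# Role 2 (authenticity): the private key signs; anyone holding the
# public key can verify the signature.
document = b"a digital document"
signature = private_key.sign(document, pss, hashes.SHA256())
public_key.verify(signature, document, pss, hashes.SHA256())  # raises InvalidSignature on failure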
Information is meaningful only if it flows from one location to another. Such flow can take on many forms. Information may flow from a filing cabinet into somebody's brain; it may pass through various departments of a commercial or military organization; it may be input into – and transformed by – computer programs. In any event, it is of paramount importance that sensitive information not be leaked to unauthorized agents during its flow through a network or program that processes information.
In Chapter 1, we encountered public-key cryptography as a technique for guaranteeing secure flow of confidential messages (e.g., a key for the Rijndael cipher) from one agent to another through an insecure communication channel. However, such secure information flow may be corrupted by the implementations of cryptographic algorithms – for example, the RSA and DES encryption modules shown in Figure 6.1 and Figure 3.4 (respectively). It is reasonably straightforward to write programs that provide the specified input–output functionality. Yet program variables, other programs, or other users of the operating system in which these programs run may be able to deduce information about the secret key of those public-key or symmetric cryptographic systems, either by observing the run-time behavior of these implementations or by analyzing their concrete syntax.
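The kind of leakage described here can be made concrete with a small hypothetical sketch (ours, not from the text): a function whose input–output behaviour matches its specification may still reveal a secret, bit by bit, through its observable run-time behaviour.

import time

SECRET_KEY = 0b1011  # a 4-bit "secret", for illustration only


def check_guess(guess: int) -> bool:
    """Specified functionality: report whether a 4-bit guess equals the secret."""
    return guess == SECRET_KEY


def leaky_check(guess: int) -> bool:
    """Functionally equivalent to check_guess for 4-bit guesses, but it compares
    the bits one at a time and stops at the first mismatch.  Its running time
    therefore depends on how many leading bits of the guess are correct, so an
    observer who times repeated calls can recover the secret bit by bit."""
    for i in reversed(range(4)):
        if ((guess >> i) & 1) != ((SECRET_KEY >> i) & 1):
            return False
        time.sleep(0.001)  # stands in for per-bit work that an attacker could measure
    return True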
Compilation to Boolean satisfiability has become a powerful paradigm for solving artificial intelligence problems. However, domains that require metric reasoning cannot be compiled efficiently to satisfiability even if they would otherwise benefit from compilation. We address this problem by combining techniques from the artificial intelligence and operations research communities. In particular, we introduce the LCNF (Linear Conjunctive Normal Form) representation that combines propositional logic with metric constraints. We present LPSAT (Linear Programming plus SATisfiability), an engine that solves LCNF problems by interleaving calls to an incremental Simplex algorithm with systematic satisfaction methods. We explore several techniques for enhancing LPSAT's efficiency and expressive power by adjusting the interaction between the satisfiability and linear programming components of LPSAT. Next, we describe a compiler that converts metric resource planning problems into LCNF for processing by LPSAT. Finally, the experimental section of the paper explores several optimisations to LPSAT, including learning from constraint failure and randomised cutoffs.
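As a schematic illustration of the representation (our reading; the paper's exact syntax may differ), an LCNF problem couples ordinary propositional clauses with linear constraints attached to designated trigger variables, for example

$(p \lor t_1) \land (\lnot p \lor t_2), \qquad t_1 \Rightarrow (x + 2y \le 6), \qquad t_2 \Rightarrow (x - y \ge 1),$

where $p$, $t_1$, and $t_2$ are propositional, $x$ and $y$ are real-valued, and a solution must satisfy the clauses together with every linear constraint whose trigger variable is assigned true.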
Mathematical programming (MP) and constraint programming (CP) are two techniques that are able to solve difficult industrial optimisation problems. The purpose of this paper is to compare them from an algorithmic and a modelling point of view. Algorithmic principles of each approach are described and contrasted. Some ways of combining both techniques are also introduced.
This paper presents several case studies which illustrate how constraint programming can benefit from the combination of global and local search techniques, offering a flexible and efficient platform for the design of combinatorial optimisation applications. For job-shop scheduling, we relate experiments with local search procedures that use global search to intensively explore a given neighbourhood, in the spirit of “shuffle” methods. For preemptive job-shop scheduling, two basic search strategies, Depth-First Search and Limited Discrepancy Search, are compared. For vehicle routing, we report on an Incremental Local Optimisation heuristic combined with Limited Discrepancy Search. Finally, we show how ad hoc algebras can considerably enhance the design of heuristics based on local and global search within a constraint-programming environment. Experiments on vehicle routing illustrate how such a language for “search and insert” control can enable automated tuning and the discovery of new strategies adapted to the typology of the problem instances at hand.
In this paper we describe the framework we have developed in KIDS (Kestrel Interactive Development System) for generating efficient constraint satisfaction programs. We have used KIDS to synthesise global search scheduling programs that have proved to be dramatically faster than other programs running the same data. We focus on the underlying ideas that lead to this efficiency. The key to the efficiency is the reduction of the size of the search space by an effective representation of sets of possible solutions (solution spaces) that allows efficient constraint propagation and pruning at the level of solution spaces. Moving to a solution space representation involves a problem reformulation. Having found a solution to the reformulated problem, an extraction phase extracts solutions to the original problem. We show how constraints from the original problem can be automatically reformulated and specialised in order to derive efficient propagation code automatically. Our solution methods exploit the semi-lattice structure of our solution spaces.
Many optimisation problems involve combinatorial constraints on continuous variables. An example of a combinatorial constraint is that at most one variable in a group of nonnegative variables may be positive. Traditionally, in the mathematical programming community, such problems have been modeled as mixed-integer programs by introducing auxiliary binary variables and additional constraints. Because the number of variables and constraints becomes larger and the combinatorial structure is not used to advantage, these mixed-integer programming models may not be solved satisfactorily, except for small instances. Traditionally, constraint programming approaches to such problems keep and use the combinatorial structure, but do not use linear programming bounds in the search for an optimal solution. Here we present a branch-and-cut approach that considers the combinatorial constraints without the introduction of binary variables. We review the development of this approach and show how strong constraints can be derived using ideas from polyhedral combinatorics. To illustrate the ideas, we present a production scheduling model that arises in the manufacture of fibre optic cables.
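To make the traditional mixed-integer modelling concrete (our illustration, not taken from the paper), the constraint that at most one of the nonnegative variables $x_1, \dots, x_n$ may be positive is commonly expressed with auxiliary binary variables $y_i$ and an assumed upper bound $M$ on each $x_i$:

$0 \le x_i \le M\,y_i, \qquad \sum_{i=1}^{n} y_i \le 1, \qquad y_i \in \{0,1\}, \qquad i = 1, \dots, n.$

The branch-and-cut approach described above avoids both the binary variables $y_i$ and the choice of the constant $M$.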
This is the second of two special issues focusing on the integration of artificial intelligence (AI) and operations research (OR) techniques for solving hard computational problems, with an emphasis on planning and scheduling. Both the AI and the OR community have developed sophisticated techniques to tackle such challenging problems. OR has relied heavily on mathematical programming formulations such as integer and linear programming, while AI has developed constraint-based search techniques and inference methods. Recently, we have seen a convergence of ideas, drawing on the individual strengths of these paradigms.
There is a big difference between building a prototype system and building a piece of production software. In his classic book The Mythical Man-Month, Frederick Brooks estimates that it takes nine times the effort to create a complete, reliable system as it does to write an initial program that starts to do the job.
With Meena's graduation, I needed a fresh student to turn our prototype into a production system. I got to know Roger Mailler when he took CSE 214, undergraduate Data Structures, with me in the fall of 1997. Roger was the bored-looking student in the front row – too bright and knowledgeable to get very much from the course, but too disciplined to cut class or hide in the back. Roger finished first out of the 126 students in the course (by a substantial margin) and was untainted by a programming assignment cheating scandal that claimed many of his classmates.
Roger is an interesting fellow whose career path to Stony Brook followed a very non-standard course. His first attempt at college (at the Rochester Institute of Technology) was, to be charitable, unsuccessful. In one year at RIT he amassed a grade point average (GPA) of 0.96, where 4.0 is an A and 1.0 is a D. Any mammal with a pulse ought to be able to do better. Indeed, this is the lowest GPA I've ever seen sustained over a full academic year because students capable of such performance usually manage to get themselves expelled before the year is out.
Classical logic (including first-order logic, which we studied in Chapter 2) is concerned with deductive inference. If the premises are true, the conclusions drawn using classical logic are always also true. Although this kind of reasoning is not inductive, in the sense that any conclusion we can draw from a set of premises is already “buried” in the premises themselves, it is nonetheless fundamental to many kinds of reasoning tasks. In addition to the study of formal systems such as mathematics, in domains such as planning and scheduling a problem can in many cases be constrained so that the reasoning involved is mainly deductive.
Because of this pervasiveness, many logics for uncertain inference incorporate classical logic at the core. Rather than replacing classical logic, we extend it in various ways to handle reasoning with uncertainty. In this chapter, we will study a number of these formalisms, grouped under the banner nonmonotonic reasoning. Monotonicity, a key property of classical logic, is given up, so that an addition to the premises may invalidate some previous conclusions. This models our experience: the world and our knowledge of it are not static; often we need to retract some previously drawn conclusion on learning new information.
Logic and (Non)monotonicity
One of the main characteristics of classical logic is that it is monotonic, that is, adding more formulas to the set of premises does not invalidate the proofs of the formulas derivable from the original premises alone. In other words, a formula that can be derived from the original premises remains derivable in the expanded premise set.
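In symbols (standard notation; the chapter's own notation may differ): if $\Gamma \vdash \varphi$, then $\Gamma \cup \Delta \vdash \varphi$ for every additional set of premises $\Delta$.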
This first volume contains only material on the basic tools of modern cryptography, that is, one-way functions, pseudorandomness, and zero-knowledge proofs. These basic tools are used in the construction of the basic applications (to be covered in the second volume). The latter will cover encryption, signatures, and general cryptographic protocols. In this appendix we provide brief summaries of the treatments of these basic applications.
Encryption: Brief Summary
Both private-key and public-key encryption schemes consist of three efficient algorithms: key generation, encryption, and decryption. The difference between the two types of schemes is reflected in the definition of security: The security of a public-key encryption scheme should also hold when the adversary is given the encryption key, whereas that is not required for private-key encryption schemes. Thus, public-key encryption schemes allow each user to broadcast its encryption key, so that any other user can send it encrypted messages (without needing to first agree on a private encryption key with the receiver). Next we present definitions of security for private-key encryption schemes. The public-key analogues can easily be derived by considering adversaries that get the encryption key as additional input. (For private-key encryption schemes, we can assume, without loss of generality, that the encryption key is identical to the decryption key.)
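In standard notation (ours; the text's own formalization may differ), such a scheme is a triple of algorithms $(G, E, D)$, where key generation $G(1^n)$ outputs a key pair $(e, d)$, and correctness requires that decryption inverts encryption:

$D_d(E_e(m)) = m \qquad \text{for every } (e, d) \text{ in the range of } G(1^n) \text{ and every plaintext } m.$

For private-key schemes one may take $e = d$ (as noted above); for public-key schemes $e$ may be published while $d$ is kept secret.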
Definitions
For simplicity, we consider only the encryption of a single message; however, this message can be longer than the key (which rules out information-theoretic secrecy [200]). We present two equivalent definitions of security.
In this chapter, we offer advice for dealing with some common problems that you may encounter. We also list and describe the most common mistakes that MATLAB users make. Finally, we offer some simple but useful techniques for debugging your M-files.
Common Problems
Problems manifest themselves in various ways: Totally unexpected or plainly wrong output appears; MATLAB produces an error message (or at least a warning); MATLAB refuses to process an input line; something that worked earlier stops working; or, worst of all, the computer freezes. Fortunately, these problems are often caused by several easily identifiable and correctable mistakes. What follows is a description of some common problems, together with a presentation of likely causes, suggested solutions, and illustrative examples. We also refer to places in the book where related issues are discussed.
Here is a list of the problems:
wrong or unexpected output,
syntax error,
spelling error,
error messages when plotting,
a previously saved M-file evaluates differently, and
computer won't respond.
Wrong or Unexpected Output
There are many possible causes for this problem, but they are likely to be among the following:
CAUSE: Forgetting to clear or reset variables.
SOLUTION: Clear or initialize variables before using them, especially in a long session.