We have presented sample λProlog programs to illustrate various computations throughout this book. Being able to execute and experiment with those programs should help the reader understand the λProlog programming language and the logic underlying it. To that end, this appendix presents a short introduction to the Teyjus implementation of λProlog. This system can be freely downloaded over the web. The various programs presented in the earlier chapters are also available in electronic form from the website associated with this book.
An overview of the Teyjus system
The Teyjus implementation of λProlog is based on two components. One component is the emulator of an abstract or virtual machine that has an instruction set and runtime system that realizes all the high-level computations implicit in a λProlog program. The second component is a compiler that translates λProlog programs into the instructions of the abstract machine.
Another important aspect of the Teyjus system is that it uses the modules language discussed in Chapter 6. A programmer must therefore organize the kind and type declarations and the clauses into modules and then attach signatures to such modules in order to mediate their external view. The compiler is responsible for taking a given module of λProlog code, certifying its internal consistency, ensuring that it satisfies its associated signature, and finally, translating it into a byte-code form. This byte-code form consists of a “header” part containing constant and type names and other related data structures as well as a sequence of instructions that can be run on the virtual machine once it has understood the header information. A critical part of the emulator is a loader that can read in such byte-code files and put the emulator in a state where it is ready to respond to user queries. The other part of the emulator is, of course, a byte-code interpreter that steps through instructions in the manner called for by the user input.
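As a concrete, hedged illustration of this organization, a Teyjus module is split across a signature file and a module file. The `lists` module below, with its `append` predicate, is our own minimal example rather than one taken from the book:

```
% File lists.sig: the external view of the module
sig lists.
type  append   list A -> list A -> list A -> o.

% File lists.mod: the clauses implementing that signature
module lists.
append nil L L.
append (X :: L1) L2 (X :: L3) :- append L1 L2 L3.
```

With Teyjus 2, such a module is typically compiled with `tjcc lists`, linked with `tjlink lists`, and then queried through the simulator via `tjsim lists`; the exact command names and options may vary between versions of the system.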
Chapter 1 discussed the use of first-order terms to represent data. This chapter describes logic programming over such representations using a typed variant of first-order Horn clauses. We begin this presentation by developing a view of logic programming that will allow us to introduce extensions smoothly in later chapters, leading eventually to the full set of logical features that underlie the λProlog language. From this perspective, we will take this paradigm of programming to have two defining characteristics. First, languages within the paradigm provide a relational approach to programming. In particular, relations over data descriptions are defined or axiomatized through formulas that use logical connectives and quantifiers. Second, the paradigm views computation as a search process. In the approach underlying λProlog, this view is realized by according to each logical symbol a fixed search-related interpretation. These interpretations lead, in turn, to specific programming capabilities.
The first two sections that follow provide a more detailed exposition of a general framework for logic programming along the lines just sketched. The rest of the chapter is devoted to presenting first-order Horn clauses as a specific elaboration of this framework.
First-order formulas
The first step toward allowing for the description of relations over objects represented by first-order terms is to ease a restriction on signatures: We permit the target types of constants to be ο. Constants that have this type are called relation or predicate symbols. Well-formed first-order expressions are otherwise constructed in the same fashion as that described in Section 1.3. Expressions that have the type ο in this setting are referred to as first-order atomic formulas.
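For instance (the names here are ours, introduced only for illustration), a signature can declare a type of nodes together with predicate symbols over that type; atomic formulas built from these symbols can then appear in facts and rules:

```
kind  node  type.
type  a, b, c     node.                % constants of type node
type  adjacent    node -> node -> o.   % a binary predicate symbol
type  path        node -> node -> o.

adjacent a b.                 % atomic formulas used as facts
adjacent b c.
path X Y :- adjacent X Y.
path X Z :- adjacent X Y, path Y Z.
```

Here `adjacent a b` is an atomic formula: an expression of type o whose head is a predicate symbol applied to first-order terms.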
Formal systems in computer science frequently involve specifications of computations over syntactic structures such as λ-terms, π-calculus expressions, first-order formulas, types, and proofs. This book is concerned, in part, with using higher-order logic to express such specifications. Properties are often associated with expressions by formal systems via syntax-based inference rules. Examples of such descriptions include presentations of typing and operational semantics. Logic programming, with its orientation around rule-based specifications, provides a natural framework for encoding and animating these kinds of descriptions. Variable binding is integral to most syntactic expressions, and its presence typically translates into side conditions accompanying inference rules. While many of the concepts related to binding, such as variable renaming, substitution, and scoping, are logically well understood, their treatment at a programming level is surprisingly difficult. We show here that a programming language based on a simply typed version of higher-order logic provides an elegant approach to performing computations over structures embodying binding.
The agenda just described has a prerequisite: We must be able to make sense of a higher-order logic as a programming language. This is a nontrivial task that defines a second theme that permeates this book. Usual developments of logic programming are oriented around formulas in clausal form with resolution as the sole inference rule. Sometimes a semantics-based presentation is also used, expanding typically into the idea of minimal (Herbrand) models.
This chapter considers the encoding of a process calculus within a higher-order logic programming language. Process calculi have been proposed in the literature as a means for modeling concurrent systems. The π-calculus in particular makes use of a sophisticated binding mechanism to encode communication between processes. Our goal here is to show that such binding mechanisms can be treated naturally using λ-tree syntax in λProlog. Since we do not discuss the π-calculus itself in any detail, a reader probably would need a prior exposure to this calculus to best appreciate the nuances of our encodings. However, our primary focus is on showing how a presentation of a formal system can be transformed into a complete and logically precise description in λProlog and how such a description can be used computationally. Thus a reader who has understood the earlier chapters also should be able to follow our development and perhaps will learn something about the π-calculus from it.
The first two sections of this chapter describe an abstract syntax representation for processes in the π-calculus and the specification of the standard transition relation over such processes. A highlight of this specification is that the transition rules are encoded in a completely logical fashion through the use of λ-tree syntax: The usual side conditions involving names are captured completely using binders and their mobility. Sections 11.3 and 11.4 discuss how our encoding can be used in analyzing computational behavior. This discussion also illuminates shortcomings of the logic programming setting in specifying what is known as the must behavior of processes. The last section further illustrates our approach to abstract syntax by showing the translation of a mapping of the λ-calculus under a call-by-name evaluation semantics into the π-calculus.
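To give a flavor of such a representation (the constructor names below are ours and may differ from those used in the chapter), process syntax can be encoded so that the input prefix and restriction use meta-level abstraction to capture object-level binding:

```
kind  n  type.    % names
kind  p  type.    % processes

type  null  p.                  % the inert process 0
type  par   p -> p -> p.        % parallel composition P | Q
type  out   n -> n -> p -> p.   % output prefix: send a name on a channel, then continue
type  in    n -> (n -> p) -> p. % input prefix: the continuation binds the received name
type  nu    (n -> p) -> p.      % restriction (nu x)P, with x bound by the abstraction
```

For example, the process x(y).ȳz.0 would be written `in x (y\ out y z null)`; renaming of the bound name y is then inherited from α-conversion at the meta level rather than imposed by side conditions.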
The previous chapters have dealt with logic programming in the context of first-order logic. We are now interested in moving the discussion to the setting of a higher-order logic. The particular logic that we will use for this purpose is one based on the simply typed λ-calculus, generalized to allow for a form of polymorphic typing. This underlying calculus has several nontrivial computational characteristics that themselves merit discussion. We undertake this task in this chapter, delaying the presentation of the higher-order logic and the logic programming language based on it until Chapter 5.
The first two sections of this chapter describe the syntax of the simply typed λ-calculus and an equality relation called λ-conversion that endows the expressions of this calculus with a notion of functionality. The λ-conversion operation brings with it considerable computational power. We discuss this aspect in Section 4.3. In the logic programming setting, λ-conversion will not be deployed directly as a computational device but instead will be used indirectly in the course of solving unification problems between λ-terms. A discussion of this kind of unification, commonly called higher-order unification, is the focus of the second half of this chapter. Section 4.4 presents a general format for such problems, introduces terminology relating to them, and tries to develop intuitions about the solutions to these problems. Section 4.5 begins to develop the structure for a procedure that might be used to solve higher-order unification problems; this discussion is incomplete and meant only as a prelude to the more detailed treatment of higher-order unification that appears in Chapter 8.
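A classic example may help fix intuitions about such problems; the signature here is ours, chosen only for illustration. Even a single equation between simply typed λ-terms can have several incomparable solutions:

```
% Assume a signature with
kind  i  type.
type  a  i.
type  f  i -> i -> i.

% The problem of unifying (F a) with (f a a), where F is a variable
% of type i -> i, has exactly four solutions, none an instance of another:
%   F = x\ f x x
%   F = x\ f a x
%   F = x\ f x a
%   F = x\ f a a
```

The absence of most general unifiers in cases like this one is part of what makes a procedure for higher-order unification more subtle than its first-order counterpart.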
The treatment of programs as objects is a theme common to systems such as interpreters, compilers, and program transformers. These systems typically use an abstract representation of programs that they then manipulate in accordance with the syntax-directed operational semantics of the underlying programming language. The λProlog language can capture such representation and manipulation of programs in a succinct and declarative manner. We illustrate this strength of λProlog by considering various computations over programs in a simple but representative functional language. In the first section we describe this language through its λ-tree syntax; we assume that the reader is sufficiently familiar with functional programming notions to be able to visualize a corresponding concrete syntax. In Section 10.2 we present two different specifications of evaluation with respect to this language. In Section 10.3 we consider the encoding of some transformations on programs that are driven by an analysis of their syntactic structure.
The miniFP programming language
The functional programming language that we use in this illustration is called miniFP. While miniFP is a typed language, in its encoding we initially treat its programs as being untyped: We later introduce a language of types and consider a program to be proper only if a type can be associated with it.
The core of the language of program expressions, then, is the untyped λ-calculus. We use the type tm for these expressions, and we encode them in the manner described in Section 7.1.2 for this calculus, with the difference that we use the symbol @ instead of app to represent the application of two expressions, and we write @ as an infix operator.
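Under these conventions, the declarations for this core might look as follows; the precedence level in the fixity declaration is illustrative, not the one fixed by the book:

```
kind  tm  type.                 % untyped object-level terms

type  abs  (tm -> tm) -> tm.    % object-level abstraction via meta-level binding
type  @    tm -> tm -> tm.      % object-level application, written infix
infixl  @  120.                 % the precedence level here is an assumption
```

With these declarations, the self-application of the identity function, for instance, is written `(abs x\ x) @ (abs x\ x)`.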
In a two-person red-and-black game, each player wants to maximize the probability of winning the entire fortune of his opponent by gambling repeatedly with suitably chosen stakes. We find that the multiplicativity (including submultiplicativity and supermultiplicativity) of the win probability function is important for the profiles (bold, timid) or (bold, bold) to be a Nash equilibrium. Surprisingly, a Nash equilibrium condition for the profile (bold, any strategy) is also given in terms of multiplicativity. Finally, we search for suitable conditions under which the profile (timid, timid) is also a Nash equilibrium.
In this paper, we study a traffic intersection with vehicle-actuated traffic signal control. Traffic lights stay green until all lanes within a group are emptied. Assuming general renewal arrival processes, we derive exact limiting distributions of the delays under heavy traffic (HT) conditions. Furthermore, we derive the light traffic (LT) limit of the mean delays for intersections with Poisson arrivals, and develop a heuristic adaptation of this limit to capture the LT behavior for other interarrival-time distributions. We combine the LT and HT results to develop closed-form approximations for the mean delays of vehicles in each lane. These closed-form approximations are quite accurate, very insightful, and simple to implement.
In this paper, we study some stochastic comparisons of the maxima in two multiple-outlier geometric samples based on the likelihood ratio order, hazard rate order, and usual stochastic order. We establish a sufficient condition on parameter vectors for the likelihood ratio ordering to hold. For the special case when n = 2, it is proved that the p-larger order between the two parameter vectors is equivalent to the hazard rate order as well as usual stochastic order between the two maxima. Some numerical examples are presented for illustrating the established results.
We examine the problem of optimally maintaining a stochastically degrading system using preventive and reactive replacements. The system's rate of degradation is modulated by an exogenous stochastic environment process, and the system fails when its cumulative degradation level first reaches a fixed deterministic threshold. The objective is to minimize the total expected discounted cost of preventively and reactively replacing such a system over an infinite planning horizon. To this end, we present and analyze a Markov decision process model. It is shown that, for each environment state, there exists an optimal threshold-type replacement policy. Additionally, empirical evidence suggests that, when the environment process is monotone, and the state-dependent degradation rates are totally ordered, the optimal threshold is monotone. Lastly, we derive closed-form bounds on the optimal thresholds.
A source of light is placed d inches from the center of a detection bar of length L ≥ d. The source spins very rapidly while shooting beams of light according to, say, a Poisson process with rate λ. The positions of the beams, relative to the center of the bar, are recorded for those beams that actually hit the bar. Which law best describes the time-average position of the beams that hit the bar given a fixed but long time horizon t? The answer is given in this paper by means of a weak convergence result, uniform in L and d, as t → ∞. Our approximating law includes as particular cases the Cauchy and Gaussian distributions.
In this paper, a new sufficient condition for comparing linear combinations of independent gamma random variables according to star ordering is given. This unifies some of the newly proved results on this problem. Equivalent characterizations between various stochastic orders are established by utilizing the new condition. The main results in this paper generalize and unify several results in the literature including those of Amiri, Khaledi, and Samaniego [2], Zhao [18], and Kochar and Xu [9].
We consider a memoryless loss system with a set of servers {1, …, J} and a set of customer types {1, …, I}. Servers are multi-type: server j works at rate μj and can serve a subset of customer types C(j). An arriving customer will go to the longest-idling server that can serve him, or be lost. We obtain a simple explicit steady-state distribution for this system and calculate various performance measures of the system in steady state. We provide some illustrative examples. We compare this system with a similar system discussed recently by Adan, Hurkens, and Weiss [1]. We also show that this system is insensitive: the results hold for general service time distributions as well.
We consider a generalized memoryless property that relates to Cantor's second functional equation, study its properties, and illustrate it with various examples.