Say that two sentences are factually equivalent when they describe the same facts or situations, understood as worldly items, i.e. as bits of reality rather than as representations of reality. The notion of factual equivalence is certainly of central interest to philosophical semantics, but it plays a role in a much wider range of philosophical areas. What is the logic of factual equivalence? This paper attempts to give a partial answer to this question by providing an answer to the following, more specific question: given a standard propositional language with negation, conjunction and disjunction as primitive operators, which sentences of the language should be taken to be factually equivalent by virtue of their logical form? The system for factual equivalence advocated in this paper is a proper fragment of the first-degree system for the logic of analytic equivalence put forward in the late seventies by R. B. Angell. I provide the system with two semantics, both formulated in terms of the notion of a situation’s being fittingly described by a linguistic item. In the final part of the paper I argue, contra a view I defended in my “Grounding and Truth-Functions” (2010), that the logic for factual equivalence I advocate here should be preferred to Angell’s logic if one wishes to follow the general conception of the relationship between factual equivalence and the notion of grounding put forward in the 2010 paper.
This paper gives a complete characterisation of the type isomorphisms definable by terms of a λ-calculus with intersection and union types. Unfortunately, when union is considered, the Subject Reduction property does not hold in general. However, it is well known that in the λ-calculus, independently of the type system considered, an isomorphism between two types can be realised only by invertible terms; notably, all invertible terms are linear. In this paper, the isomorphism of intersection and union types is therefore investigated using a relevant type system for linear terms that enjoys the Subject Reduction property. To characterise type isomorphism, a similarity relation between types and a type reduction are introduced. Types have a unique normal form with respect to the reduction rules, and two types are isomorphic if and only if their normal forms are similar.
My HOPE that this SPES series reaches completion has supported me over many years. These years have been devoted both to fixing the details of the operative scheme based on Spear's Theorem, which allows one to set up a Buchberger Theory over any effective associative ring and of which I have been aware since my 1988 preprint “Seven variations on standard bases”, and to satisfying my horror vacui by including all the relevant results of which I have been aware.
My horror vacui had the negative side effect of making the planned third book grow too much, forcing me to split it into two separate volumes. As a consequence, the structure I planned 12 years ago, which anticipated a Hegelian (or Dante-like) trilogy whose central focus was the Gröbnerian technology discussed in Volume II, was quite deformed, and the result appears as a (Wagner-like?) tetralogy.
This volume contains Part six, Algebraic Solving, and is where I complete the task set out in Part one by discussing all the recent approaches. These are mainly based on the results discussed in Volume II, which allow one to effectively manipulate the roots of a polynomial equation system, thus fulfilling the aim of “solving” as set out in Volume I according to the Kronecker–Duval Philosophy: Trinks’ Algorithm, the Gianni–Kalkbrener Theorem, the Stetter Algorithm, Dixon's resultant, the Cardinal–Mourrain Algorithm, Lazard's Solver, Rouillier's Rational Univariate Representation, the TERA Kronecker package.
Also covered are Macaulay's Matrix and the u-resultant, a historical tour of elimination from Bézout to Dixon (the last student of Cayley), and the Lagrange resolvent together with the investigation of it by Valibouze and Arnaudiès.
In Barendregt (1984), Corrado Böhm conjectured that every adequate (Barendregt 1984, 6.4.2 (ii)) numeral system of normal combinators has normal successor, predecessor and zero test. In this note, we give a counterexample to this conjecture. Our example is shown to have no normal zero test. Böhm has informed us that Intrigila (1994) has given an example with no normal successor. Our strategy, in terms of the ant-lion paradigm, is to pry open the trap so wide that it enters its active state before its jaws are shut.
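For readers unfamiliar with numeral systems of combinators, the Church numerals are the standard example of an adequate numeral system with normal-form successor, predecessor and zero test. The Python sketch below merely mimics those combinators with lambdas for illustration; it is not the counterexample constructed in the note, and the helper names (`to_int`, `pair`, etc.) are ours.

```python
# Church numerals mimicked with Python lambdas (illustrative only).
zero = lambda f: lambda x: x                      # \f x. x
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # \n f x. f (n f x)

def to_int(n):
    """Decode a Church numeral by applying (+1) n times to 0."""
    return n(lambda k: k + 1)(0)

# Church booleans, used by the zero test.
true  = lambda a: lambda b: a
false = lambda a: lambda b: b
is_zero = lambda n: n(lambda _: false)(true)

# Predecessor via the classic pairing trick: iterate (a, b) -> (b, b+1).
pair = lambda a: lambda b: lambda s: s(a)(b)
fst  = lambda p: p(true)
snd  = lambda p: p(false)
pred = lambda n: fst(n(lambda p: pair(snd(p))(succ(snd(p))))(pair(zero)(zero)))

three = succ(succ(succ(zero)))
print(to_int(three))                 # 3
print(to_int(pred(three)))           # 2
print(is_zero(zero)(True)(False))    # True
print(is_zero(three)(True)(False))   # False
```

In the pure λ-calculus all of these terms have normal forms; Böhm's conjecture was that every adequate numeral system of normal combinators admits such normal-form operations, which is exactly what the note's counterexample refutes.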
Increases in the use of automated theorem-provers have renewed focus on the relationship between the informal proofs normally found in mathematical research and fully formalised derivations. Whereas some claim that any correct proof will be underwritten by a fully formal proof, sceptics demur. In this paper I look at the relevance of these issues for formalism, construed as an anti-platonistic metaphysical doctrine. I argue that there are strong reasons to doubt that all proofs are fully formalisable, if formal proofs are required to be finitary, but that, on a proper view of the way in which formal proofs idealise actual practice, this restriction is unjustified and formalism is not threatened.
Agent communication languages (ACLs) are fundamental mechanisms that enable agents in multi-agent systems to communicate with one another in order to satisfy their individual and social goals, in both cooperative and competitive settings. Social approaches have been advocated to overcome the shortcomings of ACL semantics defined by mental approaches, i.e. in terms of agents’ mental notions. Over the last two decades, social commitments have been the subject of considerable research in some of those social approaches, as they provide a powerful representation for modeling and reasoning about multi-agent interactions in the form of mutual contractual obligations. They particularly provide a declarative, flexible, verifiable, and social semantics for ACL messages while respecting agents’ autonomy, heterogeneity, and openness.
In this manuscript, we go through prominent and predominant proposals in the literature to explore the state of the art on how temporal logics can be used to define a formal semantics for ACL messages in terms of social commitments and associated actions. We explain each proposal and point out if and how it meets seven crucial criteria, four of them introduced by Munindar P. Singh for a well-defined semantics of ACL messages. Far from deciding on the best proposal, our aim is to present the advantages (strengths) and limitations of those proposals to designers and developers using a concrete running example, and to compare them so that readers can make the best choice with regard to their needs. We explore and evaluate current specification languages and different verification techniques that have been discussed within those proposals to, respectively, specify and verify commitment-based protocols. We also investigate logical languages of actions advocated to specify, model, and execute commitment-based protocols in other contributed proposals. Finally, we suggest some solutions that can help address the identified limitations.
This article presents an overview of student difficulties in an introductory functional programming (FP) course taught in Haskell. The motivation for this study stems from our belief that many student difficulties can be alleviated by understanding the underlying causes of errors and by modifying the educational approach and, possibly, the teaching language accordingly. We analyze students' exercise submissions and categorize student errors first automatically, by compiler error message, and then manually, by the observed underlying cause. Our study complements earlier studies on the topic by applying computer and manual analysis while focusing on providing descriptive statistics of difficulties specific to FP languages. We conclude that the majority of student errors, regardless of cause, are reported by three different compiler error messages that are not well understood by students. In addition, syntactic features, such as precedence, the syntax of function application, and deeply nested expressions, cause difficulties throughout the course.
This third volume of four finishes the program begun in Volume 1 by describing all the most important techniques, mainly based on Gröbner bases, which allow one to manipulate the roots of the equation rather than just compute them. The book begins with the 'standard' solutions (Gianni–Kalkbrener Theorem, Stetter Algorithm, Cardinal–Mourrain result) and then moves on to more innovative methods (Lazard triangular sets, Rouillier's Rational Univariate Representation, the TERA Kronecker package). The author also looks at classical results, such as Macaulay's Matrix, and provides a historical survey of elimination, from Bézout to Cayley. This comprehensive treatment in four volumes is a significant contribution to algorithmic commutative algebra that will be essential reading for algebraists and algebraic geometers.
This special issue of Mathematical Structures in Computer Science is devoted to the fourteenth Italian Conference on Theoretical Computer Science (ICTCS), held at the University of Palermo, Italy, from 9th to 11th September 2013. ICTCS is the conference of the Italian Chapter of the European Association for Theoretical Computer Science and covers a wide spectrum of topics in Theoretical Computer Science, ranging from computational complexity to logic, from algorithms and data structures to programming languages, from combinatorics on words to distributed computing. For this reason, the contributions included here come from very different areas of Theoretical Computer Science. This special issue is motivated by the desire to give the people who presented their ideas at the 14th ICTCS the opportunity to publish papers on their work. Submitted papers underwent a careful and rigorous reviewing process, and 11 of them were selected for this special issue.
With this comprehensive guide you will learn how to apply Bayesian machine learning techniques systematically to solve various problems in speech and language processing. A range of statistical models is detailed, from hidden Markov models to Gaussian mixture models, n-gram models and latent topic models, along with applications including automatic speech recognition, speaker verification, and information retrieval. Approximate Bayesian inference methods based on MAP, evidence, asymptotic, VB, and MCMC approximations are presented, along with full derivations of calculations, useful notation, formulas, and rules. The authors address the difficulties of straightforward applications and provide detailed examples and case studies to demonstrate how you can successfully use practical Bayesian inference methods to improve the performance of information systems. This is an invaluable resource for students, researchers, and industry practitioners working in machine learning, signal processing, and speech and language processing.
In a data-driven society, individuals and companies encounter numerous situations where private information is an important resource. How can parties handle confidential data if they do not trust everyone involved? This text is the first to present a comprehensive treatment of unconditionally secure techniques for multiparty computation (MPC) and secret sharing. In a secure MPC, each party possesses some private data, while secret sharing provides a way for one party to spread information on a secret such that all parties together hold full information, yet no single party has all the information. The authors present basic feasibility results from the last 30 years, generalizations to arbitrary access structures using linear secret sharing, some recent techniques for efficiency improvements, and a general treatment of the theory of secret sharing, focusing on asymptotic results with interesting applications related to MPC.
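As a toy illustration of the secret-sharing idea described above (one party spreads information on a secret so that only sufficiently large coalitions can recover it), here is a minimal Shamir-style sketch over a prime field. The prime, threshold and parameter names are illustrative choices for this sketch, not constructions taken from the book.

```python
# Minimal Shamir secret sharing over GF(P): any t shares reconstruct the
# secret, while fewer than t shares reveal nothing about it.
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative)

def share(secret, n, t):
    """Split `secret` into n shares using a random degree-(t-1) polynomial
    whose constant term is the secret; any t shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, n=5, t=3)
print(reconstruct(shares[:3]))  # 123456789
```

This is the information-theoretic (unconditional) flavour of security the book treats: recovery needs no computational assumption, and any coalition below the threshold learns nothing.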
Written for mathematicians working with the theory of graph spectra, this book explores more than 400 inequalities for eigenvalues of the six matrices associated with finite simple graphs: the adjacency matrix, Laplacian matrix, signless Laplacian matrix, normalized Laplacian matrix, Seidel matrix, and distance matrix. The book begins with a brief survey of the main results and selected applications to related topics, including chemistry, physics, biology, computer science, and control theory. The author then proceeds to detail proofs, discussions, comparisons, examples, and exercises. Each chapter ends with a brief survey of further results. The author also points to open problems and gives ideas for further reading.
We propose a language-independent word normalisation method and exemplify it on modernising historical Slovene words. Our method relies on character-level statistical machine translation (CSMT) and uses only shallow knowledge. We present relevant data on historical Slovene, consisting of two (partially) manually annotated corpora and the lexicons derived from these corpora, containing historical word–modern word pairs. The two lexicons are disjoint, with one serving as the training set containing 40,000 entries, and the other as a test set with 20,000 entries. The data spans the years 1750–1900, and the lexicons are split into fifty-year slices, with all the experiments carried out separately on the three time periods. We perform two sets of experiments. In the first one – a supervised setting – we build a CSMT system using the lexicon of word pairs as training data. In the second one – an unsupervised setting – we simulate a scenario in which word pairs are not available. We propose a two-step method where we first extract a noisy list of word pairs by matching historical words with cognate modern words, and then train a CSMT system on these pairs. In both sets of experiments, we also optionally make use of a lexicon of modern words to filter the modernisation hypotheses. While we show that both methods produce significantly better results than the baselines, their accuracy, and which method works best, strongly correlate with the age of the texts, meaning that the choice of the best method will depend on the properties of the historical language which is to be modernised. As an extrinsic evaluation, we also compare the quality of part-of-speech tagging and lemmatisation directly on historical text and on its modernised words. We show that, depending on the age of the text, annotation on modernised words also produces significantly better results than annotation on the original text.
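The first step of the unsupervised setting described above can be sketched in a few lines: pair each historical word with its closest modern cognate by string similarity, keeping only confident matches as (noisy) training pairs for the CSMT step. The word lists and the 0.75 similarity cut-off below are invented for illustration; the paper's actual matching procedure may differ.

```python
# Toy cognate matching: build a noisy historical->modern word-pair list
# by nearest-neighbour string similarity.
import difflib

modern = ["hand", "house", "water", "night"]          # toy modern lexicon
historical = ["handt", "hauss", "watter", "nacht"]    # toy historical words

def best_match(word, vocab, cutoff=0.75):
    """Return the most similar modern word, or None below the cut-off."""
    hits = difflib.get_close_matches(word, vocab, n=1, cutoff=cutoff)
    return hits[0] if hits else None

pairs = [(h, m) for h in historical
         if (m := best_match(h, modern)) is not None]
print(pairs)  # -> [('handt', 'hand'), ('watter', 'water')]
```

Note that the list is noisy by design: dissimilar cognates ('hauss', 'nacht') are missed at this cut-off, which is why the extracted pairs only serve as approximate training data for the subsequent CSMT system.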
Van Wamelen [Math. Comp. 68 (1999) no. 225, 307–320] lists 19 curves of genus two over $\mathbf{Q}$ with complex multiplication (CM). However, for each curve, the CM-field turns out to be cyclic Galois over $\mathbf{Q}$, and the generic case of a non-Galois quartic CM-field did not feature in this list. The reason is that the field of definition in that case always contains the real quadratic subfield of the reflex field.
We extend Van Wamelen’s list to include curves of genus two defined over this real quadratic field. Our list therefore contains the smallest ‘generic’ examples of CM curves of genus two.
We explain our methods for obtaining this list, including a new height-reduction algorithm for arbitrary hyperelliptic curves over totally real number fields. Unlike Van Wamelen, we also give a proof of our list, which is made possible by our implementation of denominator bounds of Lauter and Viray for Igusa class polynomials.
We study elliptic curves over quadratic fields with isogenies of certain degrees. Let $n$ be a positive integer such that the modular curve $X_{0}(n)$ is hyperelliptic of genus ${\geqslant}2$ and such that its Jacobian has rank $0$ over $\mathbb{Q}$. We determine all points of $X_{0}(n)$ defined over quadratic fields, and we give a moduli interpretation of these points. We show that, with a finite number of exceptions up to $\overline{\mathbb{Q}}$-isomorphism, every elliptic curve over a quadratic field $K$ admitting an $n$-isogeny is $d$-isogenous, for some $d\mid n$, to the twist of its Galois conjugate by a quadratic extension $L$ of $K$. We determine $d$ and $L$ explicitly, and we list all exceptions. As a consequence, again with a finite number of exceptions up to $\overline{\mathbb{Q}}$-isomorphism, all elliptic curves with $n$-isogenies over quadratic fields are in fact $\mathbb{Q}$-curves.
Let $G$ be a compact connected Lie group with a maximal torus $T$. In the context of Schubert calculus we present the integral cohomology $H^{\ast }(G/T)$ by a minimal system of generators and relations.
We show how to efficiently evaluate functions on Jacobian varieties and their quotients. We deduce an algorithm to compute $(l,l)$ isogenies between Jacobians of genus two curves in quasi-linear time in the degree $l^{2}$.