This volume contains nine survey articles based on the invited lectures given at the 24th British Combinatorial Conference, held at Royal Holloway, University of London in July 2013. This biennial conference is a well-established international event, with speakers from around the world. The volume provides an up-to-date overview of current research in several areas of combinatorics, including graph theory, matroid theory and automatic counting, as well as connections to coding theory and Bent functions. Each article is clearly written and assumes little prior knowledge on the part of the reader. The authors are some of the world's foremost researchers in their fields, and here they summarise existing results and give a unique preview of cutting-edge developments. The book provides a valuable survey of the present state of knowledge in combinatorics, and will be useful to researchers and advanced graduate students, primarily in mathematics but also in computer science and statistics.
Nominal sets provide a promising new mathematical analysis of names in formal languages based upon symmetry, with many applications to the syntax and semantics of programming language constructs that involve binding, or localising names. Part I provides an introduction to the basic theory of nominal sets. In Part II, the author surveys some of the applications that have developed in programming language semantics (both operational and denotational), functional programming and logic programming. As the first book to give a detailed account of the theory of nominal sets, it will be welcomed by researchers and graduate students in theoretical computer science.
In this paper, I provide a thorough discussion and reconstruction of Bernard Bolzano’s theory of grounding and a detailed investigation into the parallels between his concept of grounding and current notions of normal proofs. Grounding (Abfolge) is an objective ground-consequence relation among true propositions that is explanatory in nature. The grounding relation plays a crucial role in Bolzano’s proof-theory, and it is essential for his views on the ideal buildup of scientific theories. Occasionally, similarities have been pointed out between Bolzano’s ideas on grounding and cut-free proofs in Gentzen’s sequent calculus. My thesis is, however, that they bear an even stronger resemblance to the normal natural deduction proofs employed in proof-theoretic semantics in the tradition of Dummett and Prawitz.
A property of finite graphs is called non-deterministically testable if it has a ‘certificate’ such that once the certificate is specified, its correctness can be verified by random local testing. In this paper we study certificates that consist of one or more unary and/or binary relations on the nodes, in the case of dense graphs. Using the theory of graph limits, we prove that non-deterministically testable properties are also deterministically testable.
Balakrishnan and Zhao do an excellent job in this issue of reviewing recent advances on stochastic comparisons between order statistics from independent and heterogeneous observations with proportional hazard rates and with gamma, geometric, and negative binomial distributions. The relation between various stochastic orders and the majorization order of the heterogeneous parameters concerned is highlighted. Some examples are presented to illustrate the main results while pointing out potential directions for further discussion.
The traveling salesman problem (TSP) is one of the most fundamental optimization problems. We consider the β-metric traveling salesman problem (Δβ-TSP), i.e., the TSP restricted to graphs satisfying the β-triangle inequality c({v,w}) ≤ β(c({v,u}) + c({u,w})), for some cost function c and any three vertices u, v, w. The well-known path matching Christofides algorithm (PMCA) guarantees an approximation ratio of 3β²/2 and is the best known algorithm for the Δβ-TSP, for 1 ≤ β ≤ 2. We provide a complete analysis of the algorithm. First, we correct an error in the original implementation that may produce an invalid solution. Using a worst-case example, we then show that the algorithm cannot guarantee a better approximation ratio. The example can also be used for the PMCA variants for the Hamiltonian path problem with zero and one prespecified endpoints. For two prespecified endpoints, we cannot reuse the example, but we construct another worst-case example to show the optimality of the analysis also in this case.
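To make the β-triangle inequality concrete, the following sketch (function and variable names are ours, not the paper's) computes the smallest β for which a given symmetric matrix of positive pairwise costs satisfies c({v,w}) ≤ β(c({v,u}) + c({u,w})) for all triples of distinct vertices:

```python
from itertools import permutations

def smallest_beta(cost):
    """Smallest beta such that the symmetric cost matrix satisfies the
    beta-triangle inequality c(v,w) <= beta * (c(v,u) + c(u,w)) for all
    triples of distinct vertices (costs assumed positive)."""
    n = len(cost)
    beta = 0.0
    for v, u, w in permutations(range(n), 3):
        beta = max(beta, cost[v][w] / (cost[v][u] + cost[u][w]))
    return beta

# Three points on a line at coordinates 0, 1, 2 form an ordinary metric,
# so the smallest admissible beta is exactly 1.
line = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(smallest_beta(line))  # → 1.0
```

A cost function that violates the ordinary triangle inequality, such as distances 1, 1, 3 on a triangle, needs β > 1; the TSP on such instances is exactly the regime 1 ≤ β ≤ 2 studied in the paper.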
We provide an algorithm for listing all minimal 2-dominating sets of a tree of order n in time 𝒪(1.3248ⁿ). This implies that every tree has at most 1.3248ⁿ minimal 2-dominating sets. We also show that this bound is tight.
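For readers unfamiliar with the objects being counted: a set D is 2-dominating if every vertex outside D has at least two neighbours in D, and minimal if no proper subset of D is itself 2-dominating. The brute-force enumerator below (exponential time, illustration only; it is not the paper's algorithm, and all names are ours) makes the definition concrete on small trees:

```python
from itertools import combinations

def is_2_dominating(adj, dom):
    """Every vertex outside dom must have at least 2 neighbours in dom."""
    return all(len(adj[v] & dom) >= 2 for v in adj if v not in dom)

def minimal_2_dominating_sets(adj):
    """List all minimal 2-dominating sets by brute force.  Enumerating
    subsets in order of increasing size means any 2-dominating proper
    subset of a candidate has already been recorded, so minimality is a
    simple containment check."""
    vertices = list(adj)
    found = []
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            dom = set(subset)
            if is_2_dominating(adj, dom) and not any(s < dom for s in found):
                found.append(dom)
    return found

# The path 0-1-2-3 has exactly two minimal 2-dominating sets:
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(minimal_2_dominating_sets(path))  # → [{0, 1, 3}, {0, 2, 3}]
```

Note that every leaf must belong to every 2-dominating set, since a leaf outside D cannot have two neighbours in D; this is why the star K₁,₃ has only one minimal 2-dominating set, namely its three leaves.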
We discuss how much space is sufficient to decide whether a unary given number n is a prime. We show that O(log log n) space is sufficient for a deterministic Turing machine, if it is equipped with an additional pebble movable along the input tape, and also for an alternating machine, if the space restriction applies only to its accepting computation subtrees. In other words, the language “n is a prime” is in pebble–DSPACE(log log n) and also in accept–ASPACE(log log n). Moreover, if the given n is composite, such machines are able to find a divisor of n. Since O(log log n) space is too small to write down a divisor, which might require Ω(log n) bits, the witness divisor is indicated by the input head position at the moment when the machine halts.
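For concreteness, here is a plain trial-division reference implementation of the decision problem itself, operating on a unary input (a tally of n ones). It makes no attempt to respect the O(log log n) space bound and only pins down what is being decided; all names are ours, not the paper's:

```python
def unary_prime(tally):
    """Decide whether the length n of a unary input is prime.  If n is
    composite, also return a witness divisor (returned explicitly here;
    the space-bounded machines in the paper can only *point* at it with
    the input head position)."""
    n = len(tally)
    if n < 2:
        return False, None
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False, d       # n is composite; d is a witness divisor
        d += 1
    return True, None             # n is prime

print(unary_prime('1' * 7))   # → (True, None)
print(unary_prime('1' * 9))   # → (False, 3)
```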
We give several new applications of the wreath product of forest algebras to the study of logics on trees. These include new simplified proofs of necessary conditions for definability in CTL and first-order logic with the ancestor relation; a sequence of identities satisfied by all forest languages definable in PDL; and new examples of languages outside CTL, along with an application to the question of what properties are definable in both CTL and LTL.
This paper is a sequel to Beall (2011), in which I both give and discuss the philosophical import of a ‘classical collapse’ result for the propositional (multiple-conclusion) logic LP+. Feedback on such ideas prompted a spelling out of the first-order case. My aim in this paper is to do just that: namely, explicitly record the first-order result(s), including the collapse results for K3+ and FDE+.
The purpose of this paper is to investigate categoricity arguments conducted in second-order logic and the philosophical conclusions that can be drawn from them. We provide a way of seeing such a result, so to speak, through a first-order lens divested of its second-order garb. Our purpose is to draw into sharper relief exactly what is involved in this kind of categoricity proof and to highlight the fact that we should be cautious before drawing powerful philosophical conclusions from it.
This study designed and developed a web-based reading strategy training program and investigated students’ use of its features and EFL teachers’ and students’ perceptions of the program. The recent proliferation of online reading materials has made information easily available to L2 readers; however, L2 readers’ ability to deal with them requires the development of specific reading strategies. The researcher therefore constructed a web-based strategy training program on the basis of L2 reading strategy research and pedagogy. The program offers four types of reading strategy functions (Global, Problem-solving, Support, and Socio-affective) through 15 strategy buttons: Keyword, Preview, Prediction, Outline, Summary, Semantic Mapping, Pronunciation, Speed Reading, Dictionary, Translation, Grammar, Highlight, Notebook, Music Box, and My Questions. Forty college teachers and thirty-two EFL students in Taiwan were invited to use and evaluate this program. The researcher tracked students’ use of the functions, and teachers and students completed a survey and written reflections that documented their perceptions of the program. Both groups gave positive feedback on the program's user-friendly interface design and the effectiveness of its strategy function keys for enhancing reading comprehension and motivating learning. They also thought highly of the site's extensive offerings of reading opportunities supported by effective reading aids and a computerized classroom management system, features not available in large traditional classes. There was, however, a gap between what teachers thought and what students did. The teachers thought highly of Global strategies, whereas students regarded Support strategies as more useful. The low-proficiency group's heavy use of Support strategies explained this gap. The high-proficiency group's more frequent use of Global strategies echoed teachers’ preference for teaching Global strategies. 
This connection suggests that teachers should provide more explicit training to encourage all students to use Global strategies for overall textual understanding.
The deductive method has ruled mathematics for the last 2500 years; now it is the turn of the inductive method. We make a start by using the C-finite ansatz to enumerate tilings of skinny plane regions, inspired by a Mathematics Magazine problem proposed by Donald Knuth.
to be described below. In fact, more accurately, this article accompanies these packages, written by DZ, and the many output files, produced by SBE, that discover and prove deep enumeration theorems; these are linked from the webpage of this article: http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/ritsuf.html.
How it all started: April 5, 2012
During one of the Rutgers University Experimental Mathematics Seminar dinners, the name of Don Knuth came up, and two of the participants, David Nacin, who was on sabbatical from William Paterson University, and first-year graduate student Patrick Devlin, mentioned that they had recently solved a problem that Knuth proposed in Mathematics Magazine [5]. The problem was:
1868. Proposed by Donald E. Knuth, Stanford University, Stanford, California.
Let n ≥ 2 be an integer. Remove the central (n − 2)² squares from an (n + 2) × (n + 2) array of squares. In how many ways can the remaining squares be covered with 4n dominoes?
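The removed block leaves a width-2 frame of 8n squares, so 4n dominoes are exactly enough. For small n the count can be obtained by direct recursion: repeatedly take the first uncovered square in row-major order and cover it with a domino extending right or down. A minimal sketch (names ours, usable only for small n):

```python
def knuth_tilings(n):
    """Count domino tilings of an (n+2) x (n+2) array of squares with the
    central (n-2)^2 squares removed, by direct recursion (small n only)."""
    size = n + 2
    hole = {(r, c) for r in range(2, n) for c in range(2, n)}
    cells = frozenset((r, c) for r in range(size) for c in range(size)
                      if (r, c) not in hole)

    def count(free):
        if not free:
            return 1
        r, c = min(free)                        # first uncovered square
        total = 0
        for nb in ((r, c + 1), (r + 1, c)):     # cover it with a domino
            if nb in free:                      # extending right or down
                total += count(free - {(r, c), nb})
        return total

    return count(cells)

print(knuth_tilings(2))  # n = 2 leaves the full 4 x 4 board → 36 tilings
```

For n = 2 nothing is removed and the function returns the classical count of 36 domino tilings of the 4 × 4 board; the C-finite ansatz discussed in this article is what replaces such brute force for large n.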
Many combinatorial problems can be formulated as “Can I transform configuration 1 into configuration 2, if only certain transformations are allowed?”. An example of such a question is: given two k-colourings of a graph, can I transform the first k-colouring into the second one, by recolouring one vertex at a time, and always maintaining a proper k-colouring? Another example is: given two solutions of a SAT-instance, can I transform the first solution into the second one, by changing the truth value one variable at a time, and always maintaining a solution of the SAT-instance? Other examples can be found in many classical puzzles, such as the 15-Puzzle and Rubik's Cube.
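On small instances the k-recolouring question above can be settled by brute force: the configurations are the proper k-colourings, two being adjacent when they differ in one vertex, and reachability is a graph search. A minimal Python sketch (all names ours):

```python
from collections import deque

def recolourable(edges, start, target, k):
    """Can `start` be turned into `target` by recolouring one vertex at a
    time, keeping every intermediate colouring a proper k-colouring?
    Colourings are tuples of colours 0..k-1 indexed by vertex."""
    n = len(start)
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    seen, queue = {start}, deque([start])
    while queue:
        col = queue.popleft()
        if col == target:
            return True
        for v in range(n):
            for c in range(k):
                # recolour v to c if no neighbour already has colour c
                if c != col[v] and all(c != col[u] for u in adj[v]):
                    nxt = col[:v] + (c,) + col[v + 1:]
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
    return False

# On the path 0-1-2 with 3 colours, (0,1,0) reaches (1,0,1) ...
print(recolourable([(0, 1), (1, 2)], (0, 1, 0), (1, 0, 1), 3))           # → True
# ... but on a triangle every proper 3-colouring is frozen.
print(recolourable([(0, 1), (1, 2), (0, 2)], (0, 1, 2), (1, 0, 2), 3))   # → False
```

The triangle example shows why these problems are subtle: every vertex of a 3-coloured triangle sees both other colours, so no single recolouring is possible at all.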
In this survey we shall give an overview of some older and some more recent work on this type of problem. The emphasis will be on the computational complexity of the problems: how hard is it to decide if a certain transformation is possible or not?
Introduction
Reconfiguration problems are combinatorial problems in which we are given a collection of configurations, together with some transformation rule(s) that allows us to change one configuration to another. A classic example is the so-called 15-puzzle (see Figure 1): 15 tiles are arranged on a 4 × 4 grid, with one empty square; neighbouring tiles can be moved to the empty slot. The normal aim is, given an initial configuration, to move the tiles to the position with all numbers in order (right-hand picture in Figure 1). Readers of a certain age may remember Rubik’s cube and its relatives as examples of reconfiguration puzzles (see Figure 2).
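The 15-puzzle also illustrates a recurring theme of this survey: reachability can sometimes be decided by an invariant rather than by search. The classical solvability test (a standard fact, not specific to this survey; names below are ours) combines the permutation parity of the tiles with the row of the empty square:

```python
def solvable_15_puzzle(board):
    """Classical solvability test for the 4 x 4 fifteen-puzzle: a position
    is reachable from the solved position iff the number of inversions
    among the tiles plus the blank's row counted from the bottom
    (1-based) is odd.  `board` is a row-major tuple of 16 entries with 0
    denoting the empty square."""
    tiles = [t for t in board if t != 0]
    inversions = sum(1 for i in range(len(tiles))
                       for j in range(i + 1, len(tiles))
                       if tiles[i] > tiles[j])
    blank_row_from_bottom = 4 - board.index(0) // 4
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = tuple(range(1, 16)) + (0,)
print(solvable_15_puzzle(solved))    # → True
# Sam Loyd's famous position with only 14 and 15 swapped is unreachable:
swapped = tuple(range(1, 14)) + (15, 14, 0)
print(solvable_15_puzzle(swapped))   # → False
```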
In 1982 Truemper gave a theorem that characterizes graphs whose edges can be labeled so that all chordless cycles have prescribed parities. The characterization states that this can be done for a graph G if and only if it can be done for all induced subgraphs of G that are of a few specific types, that we will call Truemper configurations. Truemper was originally motivated by the problem of obtaining a co-NP characterization of bipartite graphs that are signable to be balanced (i.e. bipartite graphs whose node-node incidence matrices are balanceable matrices).
The configurations that Truemper identified in his theorem ended up playing a key role in understanding the structure of several seemingly diverse classes of objects, such as regular matroids, balanceable matrices and perfect graphs. In this survey we view all these classes, and more, through the excluded Truemper configurations, focusing on the algorithmic consequences, trying to understand what structurally enables efficient recognition and optimization algorithms.
Introduction
Optimization problems such as coloring a graph, or finding the size of a largest clique or stable set are NP-hard in general, but become polynomially solvable when some configurations are excluded. On the other hand they remain difficult even when seemingly quite a lot of structure is imposed on an input graph. For example, determining whether a graph is 3-colorable remains NP-complete for triangle-free graphs with maximum degree 4 [92].