Alright, so now we've got this beautiful theory of quantum mechanics, and the possibly-even-more-beautiful theory of computational complexity. Clearly, with two theories this beautiful, you can't just let them stay single – you have to set them up, see if they hit it off, etc.
And that brings us to the class BQP: Bounded-Error Quantum Polynomial-Time. We talked in Chapter 7 about BPP, or Bounded-Error Probabilistic Polynomial-Time. Informally, BPP is the class of computational problems that are efficiently solvable in the physical world if classical physics is true. Now we ask, what problems are efficiently solvable in the physical world if (as seems more likely) quantum physics is true?
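For reference, here is one standard way to state these definitions more formally (the constant 2/3 is conventional; any constant strictly between 1/2 and 1 gives the same classes by amplification):

\[
\begin{aligned}
L \in \mathsf{BPP} \;\Longleftrightarrow\;& \text{there is a polynomial-time randomized algorithm } A \text{ such that}\\
& x \in L \;\Rightarrow\; \Pr[A(x)\ \text{accepts}] \ge \tfrac{2}{3}, \qquad
  x \notin L \;\Rightarrow\; \Pr[A(x)\ \text{accepts}] \le \tfrac{1}{3};\\
L \in \mathsf{BQP} \;\Longleftrightarrow\;& \text{the same condition holds with } A \text{ a uniformly generated, polynomial-size quantum circuit family.}
\end{aligned}
\]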
To me it’s sort of astounding that it took until the 1990s for anyone to really seriously ask this question, given that all the tools for asking it were in place by the 1960s or even earlier. It makes you wonder, what similarly obvious questions are there today that no one’s asking?
Last chapter, we talked about whether quantum states should be thought of as exponentially long vectors, and I brought up the class BQP/qpoly and concepts like quantum advice. Actually, the main reason I care about this is something I didn't mention last time: it bears on whether we should expect quantum computing to be fundamentally possible at all. There are people, like Leonid Levin and Oded Goldreich, who just take it as obvious that quantum computing must be impossible. Part of their argument is that it's extravagant to imagine a world where describing the state of 200 particles takes more bits than there are particles in the universe. To them, this is a clear indication that something is going to break down. So part of the reason I like to study the power of quantum proofs and quantum advice is that it helps us answer the question of whether we really should think of a quantum state as encoding an exponential amount of information.
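To make the "extravagance" complaint concrete, here is a back-of-the-envelope sketch in Python. The 10^80 figure for the number of particles in the observable universe is only a rough conventional estimate, and exactly where the amplitude count overtakes it depends on how much precision you charge for; the point is just the doubling with each added particle.

```python
import math

# A general pure state of n qubits is specified by 2^n complex amplitudes.
# Compare that raw count against a rough conventional estimate of ~10^80
# particles in the observable universe (the estimate is an assumption here).

PARTICLE_ESTIMATE = 10 ** 80

def amplitude_count(n_qubits: int) -> int:
    """Number of complex amplitudes in a general n-qubit state."""
    return 2 ** n_qubits

for n in (100, 200, 300):
    a = amplitude_count(n)
    print(f"{n} particles -> 2^{n} is about 10^{int(math.log10(a))} amplitudes")

# Smallest n at which the amplitude count alone passes the particle estimate:
crossover = math.ceil(math.log2(PARTICLE_ESTIMATE))
print(f"the count exceeds ~10^80 once n >= {crossover}")
```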
So, on to the Eleven Objections.
1. Works on paper, not in practice.
2. Violates Extended Church–Turing Thesis.
3. Not enough “real physics.”
4. Small amplitudes are unphysical.
5. Exponentially large states are unphysical.
6. Quantum computers are just souped-up analog computers.
7. Quantum computers aren't like anything we've ever seen before.
8. Quantum mechanics is just an approximation to some deeper theory.
9. Decoherence will always be worse than the fault-tolerance threshold.
10. We don't need fault-tolerance for classical computers.
11. Errors aren't independent.
What I did was write out every skeptical argument against the possibility of quantum computing that I could think of. We'll just go through them, and make commentary along the way. Let me just start by saying that my point of view has always been rather simple: it's entirely conceivable that quantum computing is impossible for some fundamental reason. If so, then that's by far the most exciting thing that could happen for us. That would be much more interesting than if quantum computing were possible, because it would change our understanding of physics. To have a quantum computer capable of factoring 10,000-digit integers is the relatively boring outcome – the outcome we'd expect based on the theories we already have.
I'm going to talk about the title question, but first, a little digression. In science, there's this traditional hierarchy where you have biology on top, and chemistry underlies it, and then physics underlies chemistry. If the physicists are in a generous mood, they'll say that math underlies physics. Then, computer science is over somewhere with soil engineering or some other nonscience.
Now, my point of view is a bit different: computer science is what mediates between the physical world and the Platonic world. With that in mind, “computer science” is a bit of a misnomer; maybe it should be called “quantitative epistemology.” It's sort of the study of the capacity of finite beings such as us to learn mathematical truths. I hope I’ve been showing you some of that.
How do we reconcile this with the notion that any actual implementation of a computer must be based on physics? Wouldn’t the order of physics and CS be reversed?
Well, by similar logic one could say that any mathematical proof has to be written on paper, and therefore physics should go below math in the hierarchy. Or one could say that math is basically a field that studies whether particular kinds of Turing machine will halt or not, and so CS is the ground that everything else sits on. Math is then just the special case where the Turing machines enumerate topological spaces or do something else that mathematicians care about. But then, the strange thing is that physics, especially in the form of quantum probability, has lately been seeping down the intellectual hierarchy, contaminating the “lower” levels of math and CS. This is how I’ve always thought about quantum computing: as a case of physics not staying where it’s supposed to in the intellectual hierarchy! If you like, I’m professionally interested in physics precisely to the extent that it seeps down into the “lower” levels, which are supposed to be the least arbitrary ones, and forces me to rethink what I thought I understood about those levels.
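As a toy illustration of the "math as halting questions" viewpoint, consider Goldbach's conjecture, that every even number greater than 2 is a sum of two primes (my choice of conjecture, purely for illustration). The conjecture is true if and only if the following brute-force search runs forever, so deciding it is exactly deciding whether one particular Turing machine halts:

```python
def is_prime(k: int) -> bool:
    """Trial-division primality check; fine for a toy example."""
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

def goldbach_search() -> int:
    """Halts (returning a counterexample) if and only if Goldbach's conjecture is false."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n  # an even number that is not a sum of two primes
        n += 2
```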
We've seen that if we want to make progress in complexity, then we need to talk about asymptotics: not which problems can be solved in 10,000 steps, but for which problems can instances of size n be solved in cn^2 steps as n goes to infinity? We met TIME(f(n)), the class of all problems solvable in O(f(n)) steps, and SPACE(f(n)), the class of all problems solvable using O(f(n)) bits of memory.
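Written out in the usual notation (just restating the verbal definitions above):

\[
\mathrm{TIME}(f(n)) = \{\, L \;:\; \text{some Turing machine decides } L \text{ in } O(f(n)) \text{ steps on inputs of length } n \,\},
\]
\[
\mathrm{SPACE}(f(n)) = \{\, L \;:\; \text{some Turing machine decides } L \text{ using } O(f(n)) \text{ bits of memory on inputs of length } n \,\}.
\]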
But if we really want to make progress, then it's useful to take an even coarser-grained view: one where we distinguish between polynomial and exponential time, but not between O(n^2) and O(n^3) time. From this remove, we think of any polynomial bound as “fast,” and any exponential bound as “slow.”
Now, I realize people will immediately object: what if a problem is solvable in polynomial time, but the polynomial is n^50000? Or what if a problem takes exponential time, but the exponential is 1.00000001^n? My answer is pragmatic: if cases like that regularly arose in practice, then it would’ve turned out that we were using the wrong abstraction. But so far, it seems like we’re using the right abstraction. Of the big problems solvable in polynomial time – matching, linear programming, primality testing, etc. – most of them really do have practical algorithms. And of the big problems that we think take exponential time – theorem-proving, circuit minimization, etc. – most of them really don't have practical algorithms. So, that’s the empirical skeleton holding up our fat and muscle.
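For what it's worth, here is a quick numerical check (in Python, using the constants from the text) of how large the input would have to be before the "slow" exponential 1.00000001^n actually overtakes the "fast" polynomial n^50000; the crossover value is approximate.

```python
import math

# Compare n^50000 against 1.00000001^n by comparing their logarithms:
# 50000 * ln(n)  versus  n * ln(1.00000001).

POLY_EXPONENT = 50_000
EXP_BASE = 1.000_000_01

def poly_log(n: float) -> float:
    return POLY_EXPONENT * math.log(n)

def exp_log(n: float) -> float:
    return n * math.log(EXP_BASE)

# The difference exp_log - poly_log changes sign exactly once on this range,
# so a bisection on the predicate "exponential has overtaken polynomial" works.
lo, hi = 2.0, 1e20
while hi / lo > 1.001:
    mid = math.sqrt(lo * hi)
    if exp_log(mid) > poly_log(mid):
        hi = mid
    else:
        lo = mid

print(f"1.00000001^n overtakes n^50000 around n = {hi:.3g}")  # roughly 1.6e14
```

So even this deliberately absurd pair only switches places at inputs of size around 10^14, which is part of why the polynomial/exponential abstraction has held up in practice.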