In this chapter, we consider Kitaev's honeycomb lattice model (Kitaev, 2006). This is an analytically tractable spin model that gives rise to quasiparticles with Abelian as well as non-Abelian statistics. Some of its properties are similar to those of the fractional quantum Hall effect, which has been studied experimentally in great detail even though it evades exact analytical treatment (Moore and Read, 1991). Due to its simplicity, the honeycomb lattice model is likely to be the first topological spin model to be realised in the laboratory, e.g., with optical lattice technology (Micheli et al., 2006). Understanding its properties can facilitate its physical realisation and can provide useful insight into the mechanisms underlying topological insulators and the fractional quantum Hall effect.
The honeycomb lattice model comprises interacting spin-½ particles arranged on the sites of a honeycomb lattice. It is remarkable that such a simple model can support a rich variety of topological behaviours. For certain values of its couplings, Abelian anyons emerge that behave like the toric code anyons. For another coupling regime, non-Abelian anyons emerge that correspond to the Ising anyons. The latter are manifested as vortex-like configurations of the original spin model that can be effectively described by Majorana fermions. These are fermionic fields that are antiparticles of themselves. They were first introduced in the context of high-energy physics (Majorana, 1937) and have become increasingly important in the analysis of solid state phenomena (Wilczek, 2009).
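To fix the notation (a minimal sketch, following Kitaev (2006)), each link of the honeycomb lattice is labelled x, y or z according to its direction, and the two spins on a link interact through the corresponding Pauli operator,

H = -J_x \sum_{x\text{-links}} \sigma^x_j \sigma^x_k - J_y \sum_{y\text{-links}} \sigma^y_j \sigma^y_k - J_z \sum_{z\text{-links}} \sigma^z_j \sigma^z_k,

where J_x, J_y and J_z are the coupling strengths. The toric-code-like Abelian phase arises when one of the couplings dominates, while the non-Abelian behaviour emerges in the regime of comparable couplings, once a gap is opened, for example by a weak magnetic field.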
Topological quantum computation encodes and manipulates information exclusively by means of anyons. To study the computational power of anyons we examine their fusion and braiding properties in a systematic way. This allows us to identify a Hilbert space in which quantum information can be encoded fault-tolerantly. We also identify unitary evolutions that serve as logical gates. It is an amazing fact that fundamental properties, such as particle statistics, can be employed to perform quantum computation. As we shall see below, the resilience of these intrinsic particle properties against environmental perturbations is responsible for the fault-tolerance of topological quantum computation.
Anyons are physically realised as quasiparticles in topological systems. Most of the microscopic details of these quasiparticles are not relevant for the description of anyons. This provides additional resilience of topological quantum computation against errors in the control of the quasiparticles. In particular, the principles of topological quantum computation are independent of the underlying physical system, so we do not discuss the properties of any specific system in this chapter. This abstraction might create a conceptual vacuum, as many intrinsic properties of the system might appear to be absent. For example, we shall not be concerned with the trapping and transport of anyons or with the geometrical characteristics of their evolutions. In this chapter we treat anyons as classical fundamental particles with internal quantum degrees of freedom, much like spin. Moreover, we assume that we have complete control over the topological system, in terms of initial-state preparation and final-state identification.
To perform topological quantum computation we first need to experimentally realise anyons in a topological system. These systems are characterised by intriguing non-local quantum correlations that give rise to the anyonic statistics. What diagnostic tools do we have to identify whether a given system is topological or not? Different phases of matter are characterised by their symmetries. This information is captured by order parameters. Usually, order parameters are defined in terms of local operators that can be measured in the laboratory. For example, the magnetisation of a spin system is given as the expectation value of a single spin with respect to the ground state. Such local properties can efficiently describe fascinating physical phenomena, such as ferromagnetism, and can pinpoint quantum phase transitions.
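As a minimal illustration, the magnetisation of a spin-½ system can be written as the ground-state expectation value of a single Pauli operator,

m = \langle \psi_0 | \sigma^z_i | \psi_0 \rangle,

which involves only the local operator \sigma^z_i at site i and can therefore be measured in the laboratory; a non-zero m signals, for instance, ferromagnetic order.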
But what about topological systems? Experimentally, we usually identify the topological character of systems, such as the fractional quantum Hall liquids, by probing the anyonic properties of their excitations (Miller et al., 2007). However, topological order should be a characteristic of the ground state (Thouless et al., 1982; Wen, 1995). The natural question arises: is it possible to identify a property of the ground state of a system that implies anyonic excitations? The theoretical background that made it possible to answer this question came from entropic considerations of simple topological models. Hamma et al. (2005) studied the entanglement entropy of the toric code ground state and noticed an unusual behaviour.
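The behaviour they observed is conventionally summarised (in notation not used above, so this is offered only as orientation) by the scaling of the entanglement entropy of a region whose boundary has length L,

S(L) \approx \alpha L - \gamma,

where the constant correction \gamma, known as the topological entanglement entropy, vanishes for conventional gapped ground states but is non-zero for topologically ordered ones; for the toric code \gamma = \ln 2.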
The birth of topological quantum computation took place when Alexei Kitaev (2003) made the ingenious step of turning a quantum error correcting code into a many-body interacting system. In particular, he defined a Hamiltonian whose eigenstates are also states of a quantum error correcting code. Beyond the inherited error correcting characteristics, topological systems protect the encoded information through the presence of a Hamiltonian that energetically penalises transformations between states. This opens the door for employing a large variety of many-body effects to combat errors.
Storing or manipulating information with a real physical system is naturally subject to errors. To obtain a reliable outcome from a computation we need to be certain that the processed information remains resilient to errors at all times. To overcome errors we need to detect and correct them. The error detection process is based on actively monitoring the system and on the possibility of identifying errors without destroying the encoded information. Error correction uses the outcome of error detection and performs the appropriate steps to correct the error, thus reconstructing the original information.
Classical error correction uses redundancy, spreading information over many copies so that errors can be detected, for example by majority voting, and then corrected. Similarly, quantum error correction aims to detect and correct errors of stored quantum information. Quantum states cannot be cloned (Wootters and Zurek, 1982), so this repetition encoding cannot be employed directly.
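As a minimal illustration of the classical case, a bit can be encoded redundantly as

0 \mapsto 000, \qquad 1 \mapsto 111,

so that a single bit-flip error, e.g. 000 \to 010, is detected and corrected by majority voting. Producing such independent copies of an unknown quantum state is precisely what the no-cloning theorem forbids.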
The study of anyonic systems as computational means has led to the exciting discovery of a new quantum algorithm. This algorithm provides a novel paradigm that fundamentally differs from searching (Grover, 1996) and factoring (Shor, 1997) algorithms. It is based on the particular behaviour of anyons and its goal is to evaluate Jones polynomials (Jones, 1985, 2005). These polynomials are topological invariants of knots and links, i.e., they depend on the global characteristics of their strands and not on their local geometry. The Jones polynomials were first connected to topological quantum field theories by Edward Witten (1989). Since then they have found applications in various areas of research, such as biology for DNA reconstruction (Nechaev, 1996) and statistical physics (Kauffman, 1991).
The best known classical algorithm for the exact evaluation of the Jones polynomial demands exponential resources (Jaeger et al., 1990). Employing anyons requires only polynomial resources to produce an approximate answer to this problem (Freedman et al., 2003b). Evaluating Jones polynomials by manipulating anyons resembles running an analogue computer. Indeed, the idea is analogous to a classical setup in which a wire is wrapped several times around a solenoid that confines magnetic flux. By measuring the current that runs through the wire one can obtain the number of times the wire was wrapped around the solenoid, i.e., their linking number. Similarly, by creating anyons and spanning links with their worldlines we are able to extract the Jones polynomials of these links (Kauffman and Lomonaco, 2006).
In the previous chapters we introduced anyons and their properties, presented how to perform topological quantum computation and studied several examples of topological models. There is a wide variety of research topics concerned with topological quantum computation. Among the many open questions, two are of singular importance. The first natural question is: which physical systems can support non-Abelian anyons? Realising non-Abelian anyons in the laboratory is of fundamental and practical interest. Such exotic statistical behaviour has not yet been encountered in nature. The physical realisation of non-Abelian anyons would be the first step towards the identification of a technological platform for topological quantum computation. The second question concerns the efficiency of topological systems in combating errors. It has been proven that the effect of coherent environmental errors, in the form of local Hamiltonian perturbations, can be suppressed efficiently without degrading the topologically encoded information (Bravyi et al., 2010). However, no mechanism is yet known that can protect topological order from incoherent probabilistic errors. Topological systems nevertheless constitute a rich and versatile medium that allows imaginative proposals to be developed (Chesi et al., 2010; Hamma et al., 2009).
Regarding the first question, we can identify two main categories of physical proposals for the realisation of two-dimensional topological systems: systems defined on the continuum and discrete systems defined on a lattice. It is natural to ask which of these architectures are the most promising to realise in the laboratory.
Symmetries play a central role in physics. They dictate what one can change in a physical system without affecting any of its properties. You might have encountered symmetries like translational symmetry, where a system remains unchanged if it is spatially translated by an arbitrary distance. Similarly, a system with rotational symmetry is invariant under rotations. Some symmetries, like the ones mentioned above, give information about the structure of the system. Others have to do with the more fundamental physical framework that we adopt. An example of this is the invariance under Lorentz transformations in relativistic physics.
Other types of symmetries can be even more subtle. For example, it is rather self-evident that physics should remain unchanged if we exchange two identical point-like particles. Nevertheless, this fundamental property, which we call statistical symmetry, gives rise to rich and beautiful physics. In three spatial dimensions it dictates the existence of bosons and fermions. These are particles with very different quantum mechanical properties. Their wave function acquires a +1 or a -1 phase, respectively, whenever two particles are exchanged. A direct consequence of this is that many bosons can occupy the same state, whereas fermions can only be stacked together with each particle occupying a different state.
When one considers two spatial dimensions, a wide variety of statistical behaviours is possible. Apart from bosonic and fermionic behaviours, arbitrary phase factors, or even non-trivial unitary evolutions, can be obtained when two particles are exchanged (Leinaas and Myrheim, 1977).
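In terms of the two-particle wave function, an exchange of identical particles acts as

\psi(r_2, r_1) = e^{i\theta} \psi(r_1, r_2),

with \theta = 0 for bosons and \theta = \pi for fermions. In two spatial dimensions \theta can take any value, giving Abelian anyons, and exchanges can even act as non-trivial unitary matrices on a degenerate state space, giving non-Abelian anyons.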
A completely different interpretation of the measurement problem, one which many professional scientists have found attractive if only because of its mathematical elegance, was first suggested by Hugh Everett III in 1957 and is known variously as the ‘relative state’, ‘many-worlds’ or ‘branching-universe’ interpretation. This viewpoint gives no special role to the conscious mind and to this extent the theory is completely objective, but we shall see that many of its other consequences are in their own way just as revolutionary as those discussed in the previous chapter.
The essence of the many-worlds interpretation can be illustrated by again considering the example of the 45° polarised photon approaching the H/V detector. Remember what we demonstrated in Chapters 2 and 4: from the wave point of view a 45° polarised light wave is equivalent to a superposition of a horizontally polarised wave and a vertically polarised wave. If we were able to think purely in terms of waves, the effect of the H/V polariser on the 45° polarised wave would be simply to split the wave into these two components. These would then travel through the H and V channels respectively, half the original intensity being detected in each. In contrast, photons cannot be split, but they can be considered to be in a superposition state until a measurement ‘collapses’ the system into one or other of its possible outcomes.
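In the standard state-vector notation (not used elsewhere in this passage, so this is only a compact restatement), the 45° polarised photon can be written as the equal superposition

|45°⟩ = (|H⟩ + |V⟩)/√2,

so that a measurement in the H/V basis yields each outcome with probability 1/2.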
Quantum physics is the theory that underlies nearly all our current understanding of the physical universe. Since its invention some sixty years ago the scope of quantum theory has expanded to the point where the behaviour of subatomic particles, the properties of the atomic nucleus and the structure and properties of molecules and solids are all successfully described in quantum terms. Yet, ever since its beginning, quantum theory has been haunted by conceptual and philosophical problems which have made it hard to understand and difficult to accept.
As a student of physics some twenty-five years ago, one of the prime fascinations of the subject to me was the great conceptual leap quantum physics required us to make from our conventional ways of thinking about the physical world. As students we puzzled over this, encouraged to some extent by our teachers who were nevertheless more concerned to train us how to apply quantum ideas to the understanding of physical phenomena. At that time it was difficult to find books on the conceptual aspects of the subject - or at least any that discussed the problems in a reasonably accessible way. Some twenty years later when I had the opportunity of teaching quantum mechanics to undergraduate students, I tried to include some references to the conceptual aspects of the subject and, although there was by then a quite extensive literature, much of this was still rather technical and difficult for the non-specialist.
My aims in preparing this second edition have been to simplify and clarify the discussion, wherever this could be done without diluting the content, and to update the text in the light of developments during the last 17 years. The discussion of non-locality and particularly the Bell inequalities in Chapter 3 is an example of both of these. The proof of Bell's theorem has been considerably simplified, without, I believe, damaging its validity, and reference is made to a number of important experiments performed during the last decade of the twentieth century. I am grateful to Lev Vaidman for drawing my attention to the unfairness of some of my criticisms of the ‘many worlds’ interpretation, and to him and Simon Saunders for their attempts to lead me to an understanding of how the problem of probabilities is addressed in this context. Chapter 6 has been largely rewritten in the light of these, but I am sure that neither of the above will wholeheartedly agree with my conclusions.
Chapter 7 has been revised to include an account of the influential spontaneous-collapse model developed by G. C. Ghirardi, A. Rimini and T. Weber. Significant recent experimental work in this area is also reviewed. There has been considerable progress in the understanding of irreversibility, which is discussed in Chapters 8, 9 and 10. Chapter 9, which emphasised ideas current in the 1980s, has been left largely alone, but the new Chapter 10 deals with developments since then.
The 1935 paper by Einstein, Podolsky and Rosen represented the culmination of a long debate that had begun soon after quantum theory was developed in the 1920s. One of the main protagonists in this discussion was Niels Bohr, a Danish physicist who worked in Copenhagen until, like so many other European scientists of his time, he became a refugee in the face of the German invasion during the Second World War. As we shall see, Bohr's views differed strongly from those of Einstein and his co-workers on a number of fundamental issues, but it was his approach to the fundamental problems of quantum physics that eventually gained general, though not universal, acceptance. Because much of Bohr's work was done in Copenhagen, his ideas and those developed from them have become known as the ‘Copenhagen interpretation’. In this chapter we shall discuss the main ideas of this approach. We shall try to appreciate its strengths as well as attempting to understand why some believe that there are important questions left unanswered.
When Einstein said that ‘God does not play dice’, Bohr is said to have replied ‘Don't tell God what to do!’ The historical accuracy of this exchange may be in doubt, but it encapsulates the differences in approach of the two men. Whereas Einstein approached quantum physics with doubts, and sought to reveal its incompleteness by demonstrating its lack of consistency with our everyday ways of thinking about the physical universe, Bohr's approach was to accept the quantum ideas and to explore their consequences for our everyday ways of thinking.
The last two chapters have described two extreme views of the quantum measurement problem. On the one hand it was suggested that the laws of quantum physics are valid for all physical systems, but break down in the assumed non-physical conscious mind. On the other hand the many-worlds approach assumes that the laws of physics apply universally and that a branching of the universe occurs at every measurement or measurement-like event. However, although in one sense these represent opposite extremes, what both approaches have in common is a desire to preserve quantum theory as the one fundamental universal theory of the physical universe, able to explain equally well the properties of atoms and subatomic particles on the one hand and detectors, counters and cats on the other. In this chapter we explore an alternative possibility, that quantum theory may have to be modified before it can explain the behaviour of large-scale macroscopic objects as well as microscopic systems. We will require that any such modification preserves the principle of weak reductionism discussed towards the end of Chapter 5.
The first point to be made is that the problems we have been discussing seem to make very little difference in practice. As we emphasised in Chapter 1, quantum physics has been probably the most successful theory of modern science. Wherever it can be tested, be it in the exotic behaviour of fundamental particles or the operation of the silicon chip, quantum predictions have always been in complete agreement with experimental results.
‘God’, said Albert Einstein, ‘does not play dice’. This famous remark by the author of the theory of relativity was not intended as an analysis of the recreational habits of a supreme being but expressed his reaction to the new scientific ideas, developed in the first quarter of the twentieth century, which are now known as quantum physics. Before we can fully appreciate why one of the greatest scientists of modern times should have been led to make such a comment, we must first try to understand the context of scientific and philosophical thought that had become established by the end of the nineteenth century and what it was about the ‘new physics’ that presented such a radical challenge to this consensus.
What is often thought of as the modern scientific age began in the sixteenth century, when Nicholas Copernicus proposed that the motion of the stars and planets should be described on the assumption that it is the sun, rather than the earth, which is the centre of the solar system. The opposition, not to say persecution, that this idea encountered from much of the establishment of that time is well known, but this was unable to prevent a revolution in thinking whose influence has continued to the present day. From that time on, the accepted test of scientific truth has increasingly been observation and experiment rather than religious or philosophical dogma.
Irreversibility, strengthened by the idea of strong mixing, has been discussed in the last two chapters. We reached the conclusion that, once such processes have been involved in a quantum measurement, it is in principle impossible to perform an interference experiment that would demonstrate the continued existence of a superposition. It is then ‘safe’ to assume that the system has ‘really’ collapsed into a state corresponding to one of the possible measurement outcomes. Does this mean that the measurement problem has been solved? Clearly it has for all practical purposes, as has been pointed out several times in earlier chapters. But it may still not be sufficient to provide a completely satisfactory solution in principle: in particular, we note that we have still not properly addressed the question of actualisation outlined at the end of the first section of Chapter 8.
In the present chapter we discuss an interpretation of quantum physics that was developed during the last 15 or so years of the twentieth century and is based on the idea of describing quantum processes in terms of ‘consistent histories’. As we shall see, the resulting theory has much in common with the Copenhagen interpretation discussed in Chapter 4 and when applied to measurement it connects with the viewpoint, discussed in the last chapter, in which irreversible processes are to be taken as the primary reality.