Compressed sensing is an exciting, rapidly growing field, attracting considerable attention in electrical engineering, applied mathematics, statistics and computer science. This book provides the first detailed introduction to the subject, highlighting theoretical advances and a range of applications, as well as outlining numerous remaining research challenges. After a thorough review of the basic theory, many cutting-edge techniques are presented, including advanced signal modeling, sub-Nyquist sampling of analog signals, non-asymptotic analysis of random matrices, adaptive sensing, greedy algorithms and use of graphical models. All chapters are written by leading researchers in the field, and consistent style and notation are utilized throughout. Key background information and clear definitions make this an ideal resource for researchers, graduate students and practitioners wanting to join this exciting research area. It can also serve as a supplementary textbook for courses on computer vision, coding theory, signal processing, image processing and algorithms for efficient data processing.
As one of the most comprehensive machine learning texts around, this book does justice to the field's incredible richness, but without losing sight of the unifying principles. Peter Flach's clear, example-based approach begins by discussing how a spam filter works, which gives an immediate introduction to machine learning in action, with a minimum of technical fuss. Flach provides case studies of increasing complexity and variety with well-chosen examples and illustrations throughout. He covers a wide range of logical, geometric and statistical models and state-of-the-art topics such as matrix factorisation and ROC analysis. Particular attention is paid to the central role played by features. The use of established terminology is balanced with the introduction of new and useful concepts, and summaries of relevant background material are provided with pointers for revision if necessary. These features ensure Machine Learning will set a new standard as an introductory textbook.
How does Google sell ad space and rank webpages? How does Netflix recommend movies and Amazon rank products? How can you influence people on Facebook and Twitter and can you really reach anyone in six steps? Why doesn't the internet collapse under congestion and does it have an Achilles' heel? Why are you charged per gigabyte for mobile data and how can Skype and BitTorrent be free? How are cloud services so scalable and why is WiFi slower at hotspots than at home? Driven by twenty real-world questions about our networked lives, this book explores the technology behind the multi-trillion dollar internet and wireless industries. Providing easily understandable answers for the casually curious, alongside detailed explanations for those looking for in-depth discussion, this thought-provoking book is essential reading for students in engineering, science and economics, for network industry professionals and anyone curious about how technological and social networks really work.
A systematic, unified treatment of orthogonal transform methods for signal processing, data analysis and communications, this book guides the reader from mathematical theory to problem solving in practice. It examines each transform method in depth, emphasizing the common mathematical principles and essential properties of each method in terms of signal decorrelation and energy compaction. The different forms of Fourier transform, as well as the Laplace, Z-, Walsh–Hadamard, Slant, Haar, Karhunen–Loève and wavelet transforms, are all covered, with discussion of how each transform method can be applied to real-world experimental problems. Numerous practical examples and end-of-chapter problems, supported by online Matlab and C code and an instructor-only solutions manual, make this an ideal resource for students and practitioners alike.
Discover what is involved in designing the world's most popular and advanced consumer product to date: the phone in your pocket. With this essential guide you will learn how the dynamics of the market, and the pace of technology innovation, constantly create new opportunities which design teams utilize to develop new products that delight and surprise us. Explore core technology building blocks, such as chipsets and software components, and see how these components are built together through the design lifecycle to create unique handset designs. Learn key design principles to reduce design time and cost, and best practice guidelines to maximize opportunities to create a successful product. A range of real-world case studies is included to illustrate key insights. Finally, emerging trends in the handset industry are identified, and the global impact those trends could have on future devices is discussed.
With ever-increasing demands on capacity, quality of service, speed, and reliability, current Internet systems are under strain and under review. Combining contributions from experts in the field, this book captures the most recent and innovative designs, architectures, protocols, and mechanisms that will enable researchers to successfully build the next-generation Internet. A broad perspective is provided, with topics including innovations at the physical/transmission layer in wired and wireless media, as well as the support for new switching and routing paradigms at the device and sub-system layer. The proposed alternatives to TCP and UDP at the data transport layer for emerging environments are also covered, as are the novel models and theoretical foundations proposed for understanding network complexity. Finally, new approaches for pricing and network economics are discussed, making this ideal for students, researchers, and practitioners who need to know about designing, constructing, and operating the next-generation Internet.
AND SO WE HAVE come to the end of our journey through the ‘making sense of data’ landscape. We have seen how machine learning can build models from features for solving tasks involving data. We have seen how models can be predictive or descriptive; learning can be supervised or unsupervised; and models can be logical, geometric, probabilistic or ensembles of such models. Now that I have equipped you with the basic concepts to understand the literature, there is a whole world out there for you to explore. So it is only natural for me to leave you with a few pointers to areas you may want to learn about next.
One thing that we have often assumed in the book is that the data comes in a form suitable for the task at hand. For example, if the task is to label e-mails, we conveniently learn a classifier from data in the form of labelled e-mails. For tasks such as class probability estimation I introduced the output space (for the model) as separate from the label space (for the data), because the model outputs (class probability estimates) are not directly observable in the data and have to be reconstructed. An area where the distinction between data and model output is much more pronounced is reinforcement learning. Imagine you want to learn how to be a good chess player. This could be viewed as a classification task, but then you would need a teacher to score every move.
TWO HEADS ARE BETTER THAN ONE – a well-known proverb suggesting that two minds working together can often achieve better results. If we read ‘features’ for ‘heads’ then this is certainly true in machine learning, as we have seen in the preceding chapters. But we can often further improve things by combining not just features but whole models, as will be demonstrated in this chapter. Combinations of models are generally known as model ensembles. They are among the most powerful techniques in machine learning, often outperforming other methods. This comes at the cost of increased algorithmic and model complexity.
The topic of model combination has a rich and diverse history, to which we can only partly do justice in this short chapter. The main motivations came from computational learning theory on the one hand, and statistics on the other. It is a well-known statistical intuition that averaging measurements can lead to a more stable and reliable estimate because we reduce the influence of random fluctuations in single measurements. So if we were to build an ensemble of slightly different models from the same training data, we might be able to similarly reduce the influence of random fluctuations in single models. The key question here is how to achieve diversity between these different models. As we shall see, this can often be achieved by training models on random subsets of the data, and even by constructing them from random subsets of the available features.
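To make this intuition concrete: the average of n independent measurements with variance σ² has variance σ²/n, and ensembles aim for an analogous reduction across models. The following is a minimal, hypothetical Python sketch of one way to obtain diverse models, combining bootstrap samples of the data with random subsets of the features; it assumes NumPy and scikit-learn are available, the toy data and names such as n_models are invented for illustration, and it is not presented as the book's own algorithm.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Toy data: two noisy clusters in five dimensions (invented for illustration).
    X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(1, 1, (50, 5))])
    y = np.array([0] * 50 + [1] * 50)

    n_models = 10
    models, feature_subsets = [], []
    for _ in range(n_models):
        rows = rng.integers(0, len(X), len(X))                # bootstrap sample of the data
        cols = rng.choice(X.shape[1], size=3, replace=False)  # random subset of the features
        models.append(DecisionTreeClassifier().fit(X[rows][:, cols], y[rows]))
        feature_subsets.append(cols)

    # Combine the diverse models by majority vote.
    votes = np.array([m.predict(X[:, cols]) for m, cols in zip(models, feature_subsets)])
    print("ensemble accuracy:", ((votes.mean(axis=0) > 0.5).astype(int) == y).mean())

Each model sees a different random view of the same training data, which is precisely the kind of diversity this chapter goes on to discuss.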
TREE MODELS ARE among the most popular models in machine learning. For example, the pose recognition algorithm in the Kinect motion sensing device for the Xbox game console has decision tree classifiers at its heart (in fact, an ensemble of decision trees called a random forest about which you will learn more in Chapter 11). Trees are expressive and easy to understand, and of particular appeal to computer scientists due to their recursive ‘divide-and-conquer’ nature.
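The recursive 'divide-and-conquer' nature is easy to see in code. The following deliberately naive Python sketch grows a tree by splitting on a feature and recursing on each part, stopping at pure nodes; the split choice (simply the first remaining feature) and the toy animals are invented for illustration and do not reproduce the learning algorithm developed in this chapter.

    from collections import Counter

    def grow_tree(rows, labels, features):
        # Base cases: a pure node, or no features left -> predict the majority label.
        if len(set(labels)) <= 1 or not features:
            return Counter(labels).most_common(1)[0][0]
        f = features[0]  # naive split choice; real learners pick the most informative feature
        left = [i for i, r in enumerate(rows) if r[f]]
        right = [i for i, r in enumerate(rows) if not r[f]]
        if not left or not right:  # the split fails to divide the data
            return Counter(labels).most_common(1)[0][0]
        rest = features[1:]
        return (f,
                grow_tree([rows[i] for i in left], [labels[i] for i in left], rest),
                grow_tree([rows[i] for i in right], [labels[i] for i in right], rest))

    # Hypothetical toy usage: classify three animals by two boolean features.
    print(grow_tree(
        [{"gills": True, "long": True},
         {"gills": False, "long": True},
         {"gills": False, "long": False}],
        ["fish", "mammal", "mammal"],
        ["gills", "long"]))  # -> ('gills', 'fish', 'mammal')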
In fact, the paths through the logical hypothesis space discussed in the previous chapter already constitute a very simple kind of tree. For instance, the feature tree in Figure 5.1 (left) is equivalent to the path in Figure 4.6 (left) on p.117. This equivalence is best seen by tracing the path and the tree from the bottom upward.
The left-most leaf of the feature tree represents the concept at the bottom of the path, covering a single positive example.
The next concept up in the path generalises the literal Length = 3 into Length = [3,5] by means of internal disjunction; the added coverage (one positive example) is represented by the second leaf from the left in the feature tree.
By dropping the condition Teeth = few, we add another two covered positives.
Dropping the ‘Length’ condition altogether (or extending the internal disjunction with the one remaining value ‘4’) adds the last positive, and also a negative; a toy sketch of this kind of coverage check follows below.
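A small, hypothetical sketch can make internal disjunction and coverage concrete. A concept below allows a set of values per feature; widening the set for Length from {3} to {3, 5} generalises the concept and extends its coverage. The examples are invented stand-ins, not the actual data behind Figures 4.6 and 5.1.

    def covers(concept, instance):
        # A concept covers an instance if each feature's value is among the allowed values.
        return all(instance[f] in allowed for f, allowed in concept.items())

    # Invented examples: (instance, is_positive).
    examples = [
        ({"Length": 3, "Teeth": "few"}, True),
        ({"Length": 5, "Teeth": "few"}, True),
        ({"Length": 4, "Teeth": "many"}, False),
    ]

    for concept in [{"Length": {3}, "Teeth": {"few"}},      # the most specific concept
                    {"Length": {3, 5}, "Teeth": {"few"}}]:  # after internal disjunction
        n = sum(1 for ex, _ in examples if covers(concept, ex))
        print(f"{concept} covers {n} example(s)")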