This chapter is dedicated to reinforcement learning (RL), one of the three main learning paradigms covered in this book (together with regression and classification). In RL, an agent learns from and responds to its environment, modeled as a Markov decision process (MDP), by following a policy that selects an action at each state of the MDP so as to maximize the total accumulated reward; the ultimate goal is to find the optimal policy, that is, the best action to take at each state. Unlike the optimization problems considered previously, which maximize (or minimize) certain objective functions, RL achieves its goal by the general method of dynamic programming (whereas linear and quadratic programming address constrained optimization), which solves a complex problem by breaking it into a set of subproblems solved recursively. Specifically, the main method for RL is the Q-learning algorithm, which finds the optimal policy by selecting the best action based on the expected total reward at every state and for every action available at that state. Toward the end of the chapter, more advanced versions of RL are briefly discussed, building on previously learned methods such as neural networks and deep learning.
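To make the abstract concrete, here is a minimal sketch of tabular Q-learning on a toy MDP. The environment (a five-state corridor with a reward at the rightmost state) and all parameter values are illustrative choices, not taken from the chapter.

```python
import numpy as np

# Tabular Q-learning on a toy 1-D corridor MDP (states 0..4, goal at state 4).
# All names and parameter values here are illustrative, not from the chapter.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Environment: move left/right; reward 1 only on reaching the goal."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1)

for _ in range(500):                 # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)            # greedy policy derived from learned Q
```

After training, the greedy policy reads "go right" at every non-terminal state, which is the optimal policy for this toy MDP.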
The goal of this chapter is to prepare for the subsequent discussion of various artificial neural network (ANN) learning algorithms by introducing some basic concepts in neural networks, together with two biologically inspired examples, the Hebbian and Hopfield networks, which illustrate how an ANN based on a simple learning rule can achieve meaningful results, even though these networks are not widely used in machine learning practice. Specifically, the Hebbian learning network mimics the associative nature of the brain as a simple model of associative memory, and the Hopfield network further shows how a pattern can be stored and then recalled from a noisy and incomplete copy of itself, a function commonly demonstrated by the brain.
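The store-then-recall behavior described above can be sketched in a few lines. The network size, the stored patterns, and the synchronous update schedule below are illustrative choices, not the chapter's specific construction.

```python
import numpy as np

# Hebbian storage and noisy recall in a small Hopfield network.
# The size, patterns, and update schedule are illustrative choices.
n = 16
patterns = np.array([
    [1, -1] * 8,          # alternating +/- pattern
    [1] * 8 + [-1] * 8,   # block pattern (orthogonal to the first)
])

# Hebbian learning rule: W = sum_p x_p x_p^T, with a zero diagonal
# (no self-connections).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(x, n_iter=10):
    """Synchronous updates: each neuron takes the sign of its weighted input."""
    x = x.copy()
    for _ in range(n_iter):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Corrupt a stored pattern in 3 positions, then recall the original
noisy = patterns[0].copy()
noisy[[0, 5, 11]] *= -1
restored = recall(noisy)
```

Here `restored` equals the first stored pattern: the corrupted copy falls into the basin of attraction of the memorized state, which is exactly the associative-memory behavior the chapter describes.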
Confidently analyze, interpret, and act on financial data with this practical introduction to the fundamentals of financial data science. Step-by-step introductions to core topics will equip you with a solid foundation for applying data science techniques to complex real-world financial problems. Extract meaningful insights as you learn how to use data to make informed, data-driven decisions, with over 50 examples and case studies and hands-on MATLAB and Python code. Explore cutting-edge techniques and tools in machine learning for financial data analysis, including deep learning and natural language processing. Accessible to readers without a specialized background in finance or machine learning, and including coverage of data representation and visualization, data models and estimation, principal component analysis, clustering methods, optimization tools, mean/variance portfolio optimization, and financial networks, this is the ideal introduction for financial services professionals and graduate students in finance and data science.
Applications of cryptography are plentiful in everyday life. This guidebook is about the security analysis, or 'cryptanalysis', of the basic building blocks on which these applications rely. Rather than covering a variety of techniques at an introductory level, this book provides a comprehensive and in-depth treatment of linear cryptanalysis. The subject is introduced from a mathematical point of view, providing an overview of the most influential papers on linear cryptanalysis and placing them in a consistent framework based on linear algebra. A large number of examples and exercises are included, drawing upon practice as well as theory. The book is accessible to students with no prior knowledge of cryptography. It covers linear cryptanalysis starting from the basics, including linear approximations and trails, correlation matrices, automatic search, and key-recovery techniques, up to advanced topics such as multiple and multidimensional linear cryptanalysis, zero-correlation approximations, and the geometric approach.
This chapter introduces linear cryptanalysis from the point of view that historically led to its discovery. This “original” description has the advantage of being concrete, but it is not very effective. However, it raises important questions that motivate later chapters.
The main extensions of linear cryptanalysis were introduced in previous chapters; they are multiple, multidimensional, and zero-correlation linear cryptanalysis. However, these are far from the only extensions proposed in the literature. This chapter is a tour of some of the most important proposals. Most of the extensions of linear cryptanalysis discussed in this chapter are partly conjectural: they show how certain combinatorial properties might be used to attack cryptographic primitives, but do not provide a clear way to analyze or find these properties. Chapter 11 returns to this issue.
This appendix collects some important facts about the normal distribution. These results are used throughout this book, and in particular in Chapters 4, 6, and 7.
In Chapter 1, we estimated the correlations of linear approximations by finding a suitable linear trail and applying the piling-up lemma, but this approach relied on an unjustified independence assumption. This chapter puts the piling-up lemma and linear cryptanalysis in general on a more solid theoretical foundation. This is achieved by using the theory of correlation matrices. Daemen proposed these matrices in 1994 to simplify the description of linear cryptanalysis.
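As a small illustration of the idea behind correlation matrices, the sketch below computes the correlation matrix C[v, u] = 2^(-n) Σ_x (-1)^(v·S(x) ⊕ u·x) of a toy 3-bit S-box and checks that composing the map corresponds to multiplying correlation matrices, which recovers the piling-up computation without any independence assumption (all trails between two masks are summed, not just one). The S-box itself is an arbitrary illustrative permutation, not one from the book.

```python
import numpy as np

# Correlation matrix of a toy 3-bit S-box, following the definition
# C[v, u] = 2^-n * sum_x (-1)^{v.S(x) XOR u.x}.
# This particular S-box is illustrative, not taken from the book.
S = [0, 1, 3, 6, 7, 4, 5, 2]
n = 3

def dot(a, b):
    """Inner product over GF(2): parity of the bitwise AND."""
    return bin(a & b).count("1") & 1

def correlation_matrix(f, n):
    C = np.zeros((2**n, 2**n))
    for v in range(2**n):          # output mask
        for u in range(2**n):      # input mask
            C[v, u] = sum((-1) ** (dot(v, f[x]) ^ dot(u, x))
                          for x in range(2**n)) / 2**n
    return C

C = correlation_matrix(S, n)

# Composition corresponds to matrix multiplication: the correlation
# matrix of S o S equals C @ C. This sums over ALL intermediate masks
# (all trails), so no independence assumption is needed.
SS = [S[S[x]] for x in range(2**n)]
C2 = correlation_matrix(SS, n)
```

Checking `C2` against `C @ C` confirms the composition theorem exactly on this toy example; the piling-up lemma amounts to keeping only one term (one trail) of each entry of the product.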
In the previous chapters, and in Chapters 4 and 6 in particular, we already encountered methods for testing hypotheses. We used these statistical tests to determine if a given empirical correlation corresponds to the real key, or to an incorrect key. This chapter takes a more systematic look at statistical testing and derives methods that are—in some particular sense—best possible.
In this chapter, we rebuild the theory of linear cryptanalysis one last time. One of the reasons for doing this was already mentioned in Chapter 9: there are various combinatorial properties that might be useful, but for which there are no analytic methods. However, before attempting to address this issue, we must take a step back and try to improve our understanding of linear cryptanalysis.
In Chapter 1, it was explained how linear approximations can be used to set up key-recovery attacks using Matsui’s Algorithm 1 or 2. This chapter takes a closer look at Algorithm 2 and its improvements. The most important improvement, and the main topic of this chapter, is the “fast Fourier transform method.”
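The core computational trick can be sketched as follows. In the fast Fourier transform method, the per-key correlation counters form a matrix–vector product with a matrix whose (k, x) entry depends only on k XOR x; such an XOR-convolution is diagonalized by the Walsh–Hadamard transform, reducing the cost from O(N²) to O(N log N). The data tables below are random stand-ins, not an actual attack.

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform, O(N log N)."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# The key-recovery matrix has entries M[k, x] = f(k XOR x), so the
# matrix-vector product M @ v (one counter per key guess) is an
# XOR-convolution, computable as fwht(fwht(f) * fwht(v)) / N.
# The tables f and v below are random stand-ins for illustration.
N = 8
rng = np.random.default_rng(1)
f = rng.integers(-1, 2, N).astype(float)   # stand-in for the (-1)^{...} table
v = rng.integers(0, 10, N).astype(float)   # stand-in for the data counters
M = np.array([[f[k ^ x] for x in range(N)] for k in range(N)])
fast = fwht(fwht(f) * fwht(v)) / N         # same result as M @ v, but faster
```

The transform-domain product gives exactly the same counters as the naive matrix–vector product, which is the source of the speed-up discussed in this chapter.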
Chapter 11 reconstructs the theory of linear cryptanalysis from a more general point of view. To do this, we need to cover some mathematical ground. We first discuss linear algebra over the field of complex numbers, and then turn to the Fourier analysis of functions on a finite Abelian group. Both of these topics play a central role in Chapter 11.
Determining the effectiveness of linear cryptanalysis is an application of statistical theory. In this chapter, we review some basic concepts from statistics and discuss how they are used to estimate the cost of linear attacks, and Matsui’s second algorithm in particular.
Traditionally, linear cryptanalysis exploits linear approximations with atypically high absolute correlation. In this chapter, we discuss instead how linear approximations with correlation zero can be used. This variant of linear cryptanalysis is called zero-correlation linear cryptanalysis.