Markov decision processes (MDPs) are at the core of reinforcement learning theory. Like Markov chains, MDPs involve an underlying Markovian process that evolves from one state to another, with the probability of reaching a new state depending only on the most recent state. Unlike Markov chains, however, MDPs also involve agents and the actions these agents take, so the next state depends on the action chosen in the current state. MDPs therefore provide a powerful framework for exploring state spaces and learning from actions and rewards.
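As a minimal illustration of this dependence on actions, the Python sketch below samples transitions from a toy two-state MDP. The states, actions, transition probabilities, and rewards are invented for this example and are not taken from the chapter.

import random

# A toy MDP: transitions[state][action] is a list of
# (next_state, probability, reward) triples. All numbers
# here are made up purely for illustration.
transitions = {
    "s0": {
        "stay": [("s0", 0.9, 0.0), ("s1", 0.1, 1.0)],
        "move": [("s1", 0.8, 1.0), ("s0", 0.2, 0.0)],
    },
    "s1": {
        "stay": [("s1", 1.0, 0.5)],
        "move": [("s0", 1.0, 0.0)],
    },
}

def step(state, action):
    """Sample the next state and reward. Unlike a Markov chain,
    the transition distribution depends on the chosen action."""
    outcomes = transitions[state][action]
    idx = random.choices(range(len(outcomes)),
                         weights=[p for (_, p, _) in outcomes])[0]
    next_state, _, reward = outcomes[idx]
    return next_state, reward

state = "s0"
for _ in range(5):
    action = random.choice(["stay", "move"])  # a random policy
    state, reward = step(state, action)
    print(action, "->", state, "reward:", reward)

Replacing the random policy with one learned from the observed rewards is precisely the problem that reinforcement learning addresses.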
In the feedforward networks and convolutional neural networks (CNNs) studied in the previous chapters, the training data was assumed to be static, with no sequential relation among the samples. Using this data, we were able to train the networks to perform reliable classification tasks. There are many applications, however, where the input data is sequential in nature, with one sample following another in some ordered manner, as happens with words in a sentence.
The material in the last three chapters focused on the use of neural network structures for the solution of inference (regression and classification) problems. In this chapter, we use the same networks to develop two generative methods whose purpose is to generate samples from the same underlying distribution as the training data.
We studied in Chapters 29 and 30 the mean-square error (MSE) criterion in some detail, and applied it to the problem of inferring an unknown (or hidden) variable from the observation of another variable when the two are related by means of a linear regression model or a state-space model.
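For context, the MSE criterion takes the following standard form, written here with generic symbols (x for the hidden variable, y for the observation; the notation is illustrative and may differ from the book's own):

\[
\widehat{x}^{\,\mathrm{MSE}} \;=\; \operatorname*{argmin}_{\widehat{x}(\cdot)} \; \mathbb{E}\big(x - \widehat{x}(y)\big)^2 \;=\; \mathbb{E}[\,x \mid y\,]
\]

That is, the optimal estimator under quadratic loss is the conditional mean of the unknown given the observation.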
The mean-square-error (MSE) criterion (27.17) is one notable example of the Bayesian approach to statistical inference. In the Bayesian approach, both the unknown quantity and the observation are treated as random variables, and an estimator for the unknown is sought by minimizing the expected value of some loss function. In the previous chapter, we focused exclusively on the quadratic loss for scalar unknowns. In this chapter, we consider more general loss functions, which lead to other types of inference solutions such as the mean-absolute error (MAE) and the maximum a-posteriori (MAP) estimators. We will also derive the famed Bayes classifier as a special case when the realizations of the unknown are limited to discrete values.
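The standard correspondences between loss functions and Bayesian estimators, sketched here in the same generic notation (x the unknown, y the observation; symbols chosen for illustration), are:

\[
(x - \widehat{x})^2 \;\Rightarrow\; \widehat{x} = \mathbb{E}[\,x \mid y\,] \quad (\text{MSE}),
\]
\[
|x - \widehat{x}| \;\Rightarrow\; \widehat{x} = \operatorname{median}(x \mid y) \quad (\text{MAE}),
\]
\[
\text{0/1 loss} \;\Rightarrow\; \widehat{x} = \operatorname*{argmax}_{x} \, p(x \mid y) \quad (\text{MAP}).
\]

In the binary case, with the unknown taking the values ±1, the MAP rule reduces to the Bayes classifier: declare +1 whenever P(x = +1 | y) ≥ 1/2, and −1 otherwise.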
Where, indeed, do cultural concepts come from? Whorf was the first to propose a general way of understanding the emergence of cultural concepts, so he will be our main guide as we build on our earlier lectures to see how cultural categories are formed from the confluence of grammatical structure, denotational domains, and the sociocultural practice of using textualized language within broader historical process. We’ll also draw on the work of Hilary Putnam in the philosophy of language as well as examples from the sociocultural anthropology of Stanley Tambiah so as to generate our own account of cultural conceptualization.
In the last lecture, we focused on ritual and ritualized uses of language that seem to bring into being (that is, indexically entail) certain contextual conditions. They do this as a function of the occurrence of some formulaic (that is, densely and rigidly metricalized) linguistic form-tokens. We might say that such form-tokens render salient and explicit a current contextual focus of the emerging interactional text. By interactional text, here we mean the social coordination through which the pantomime of interaction is interpreted by participants along dimensions of social identity and eventhood. In this way, analyzing discourse as the mediator of social life rests on understanding how both big and little pieces of denotational text come to serve as the effective signals of who – as sociological types – the interactants “are” or “seem to be” at every phase of interaction and how they perform social acts. When “appropriate,” such social acts are licensed by who/what the interactants are, and when “inappropriate,” they challenge or make a bid for re-definition of self and/or other(s).
Identifies the major features of major depressive episodes, dysthymic episodes, manic episodes, and hypomanic episodes. Describes the essential features of major depressive disorder and persistent depressive disorder. Describes the essential features of bipolar I and bipolar II disorder. Describes the essential features of premenstrual dysphoric disorder, disruptive mood dysregulation disorder, and prolonged grief disorder. Describes the models and treatments for mood disorders.
Describes the symptoms associated with psychotic disorders. Compares the positive and negative symptoms of psychosis. Summarizes the epidemiology, diagnostic criteria, and clinical features of the psychotic disorders. Discusses current theories of the etiology of psychotic disorders. Describes common side effects of antipsychotic medications. Discusses the psychosocial treatments of psychotic disorders.