Cortical excitability has been proposed as a novel neurophysiological marker of neurodegeneration in Alzheimer’s dementia (AD). However, the link between cortical excitability and structural changes in AD is not well understood.
Objective:
To assess the relationship between cortical excitability and motor cortex thickness in AD.
Methods:
In 62 participants with AD (38 females, mean ± SD age = 74.6 ± 8.0 years) and 47 healthy control (HC) individuals (26 females, mean ± SD age = 71.0 ± 7.9 years), the transcranial magnetic stimulation (TMS) resting motor threshold (rMT) was determined, and T1-weighted MRI scans were obtained. Skull-to-cortex distance was measured manually for each participant using the MNI coordinates of the motor cortex (x = −40, y = −20, z = 52).
Results:
The mean skull-to-cortex distance did not differ significantly between participants with AD (22.9 ± 4.3 mm) and HC (21.7 ± 4.3 mm). Participants with AD had lower motor cortex thickness than healthy individuals (t(92) = −4.4, p < 0.001) and lower rMT (i.e., higher excitability) than HC (t(107) = −2.0, p = 0.045). In the combined sample, rMT correlated positively with motor cortex thickness (r = 0.2, df = 92, p = 0.036); however, this association did not remain significant after controlling for age, sex, and diagnosis.
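A minimal sketch of these analyses in Python, run on synthetic data with arbitrary illustrative values (the variable names and group parameters below are assumptions, not the study's measurements):

```python
# Hedged sketch of the reported analyses on synthetic data; variable names
# (rmt, thickness, age, sex, dx) and the group means are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n_ad, n_hc = 62, 47
df = pd.DataFrame({
    "dx": ["AD"] * n_ad + ["HC"] * n_hc,
    "sex": rng.choice(["F", "M"], size=n_ad + n_hc),
    "age": np.r_[rng.normal(74.6, 8.0, n_ad), rng.normal(71.0, 7.9, n_hc)],
    "rmt": np.r_[rng.normal(45, 8, n_ad), rng.normal(48, 8, n_hc)],          # arbitrary values
    "thickness": np.r_[rng.normal(2.3, 0.2, n_ad), rng.normal(2.5, 0.2, n_hc)],  # arbitrary values
})

ad, hc = df[df.dx == "AD"], df[df.dx == "HC"]
print(stats.ttest_ind(ad.thickness, hc.thickness))   # group difference in cortical thickness
print(stats.ttest_ind(ad.rmt, hc.rmt))               # group difference in rMT
print(stats.pearsonr(df.thickness, df.rmt))          # unadjusted correlation
# Association of rMT with thickness after controlling for age, sex, and diagnosis
print(smf.ols("rmt ~ thickness + age + C(sex) + C(dx)", data=df).fit().summary())
```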
Conclusions:
Patients with AD have decreased cortical thickness in the motor cortex and higher motor cortex excitability. This suggests that cortical excitability may be a marker of neurodegeneration in AD.
This study aimed to investigate the influence of feelings of guilt among cancer patients on their health behavior, with a specific focus on the use of complementary and alternative medicine (CAM).
Methods
A multicentric cross-sectional study was conducted, involving 162 oncological patients, assessing sociodemographic variables, feelings of guilt, patient activation, self-efficacy, and CAM usage. The Shame-Guilt-Scale was employed to measure guilt, with subscales including punitive guilt, self-criticism (actions), moral perfectionism, and empathy-reparation. To assess patient activation and self-efficacy, we used the German version of the Patient Activation Measure 13 and the Short Scale for Measuring General Self-Efficacy Beliefs, respectively. To evaluate CAM usage, we used a standardized instrument from the working group Prevention and Integrative Oncology of the German Cancer Society. Statistical analyses, including regression models, were employed to examine potential associations.
Results
Female gender was associated with more frequent CAM usage, and younger patients more often used holistic and mind-body methods. No significant association was found between feelings of guilt and CAM usage. Patients experienced guilt most strongly in relation to empathy and reparation for their own actions.
Significance of results
Our results do not support the hypothesis of a direct link between guilt and CAM usage. Guilt may be an important aspect of psychological support for cancer patients; with respect to counselling on CAM, however, it does not appear to play an important part in understanding patients’ motivations.
A formal framework for measuring change in sets of dichotomous data is developed, and implications of the principle of specific objectivity of results within this framework are investigated. Building upon the concept of specific objectivity as introduced by G. Rasch, three equivalent formal definitions of that postulate are given, and it is shown that they lead to latent additivity of the parametric structure. If, in addition, the observations are assumed to be locally independent realizations of Bernoulli variables, a family of models necessarily follows that is isomorphic to a logistic model with additive parameters, determining an interval scale for latent trait measurement and a ratio scale for quantifying change. Adding the further assumption of generalizability over subsets of items from a given universe yields a logistic model which allows a multidimensional description of individual differences and a quantitative assessment of treatment effects; as a special case, a unidimensional parameterization is also introduced and a unidimensional latent trait model for change is derived. As a side result, the relationship between specific objectivity and additive conjoint measurement is clarified.
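For orientation, the resulting logistic model with additive parameters can be sketched as follows (the notation is illustrative and not taken verbatim from the paper):

\[ P(X_{vit} = 1) = \frac{\exp(\theta_{vi} + \delta_{vt})}{1 + \exp(\theta_{vi} + \delta_{vt})}, \]

where θ_vi locates person v on the latent trait addressed by item i, and δ_vt additively collects the treatment, interaction, and trend effects acting on person v up to occasion t; change is then quantified on the δ scale.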
The partial credit model is considered under the assumption of a certain linear decomposition of the item × category parameters δ_ih into “basic parameters” α_j. This model is referred to as the “linear partial credit model”. A conditional maximum likelihood algorithm for estimation of the α_j is presented, based on (a) recurrences for the combinatorial functions involved, and (b) a “quasi-Newton” approach, the so-called Broyden-Fletcher-Goldfarb-Shanno (BFGS) method; (a) guarantees numerically stable results, and (b) avoids the direct computation of the Hessian matrix, yet produces a sequence of positive definite matrices B_k, k = 1, 2, ..., converging to the asymptotic variance-covariance matrix of the CML estimates of the α_j. The practicality of these numerical methods is demonstrated both by means of simulations and by an empirical application to the measurement of treatment effects in patients with psychosomatic disorders.
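The linear decomposition at the heart of the model can be written compactly as (the weights w_ihj are fixed, known design constants; the notation is illustrative):

\[ \delta_{ih} = \sum_{j} w_{ihj}\,\alpha_j, \]

so that hypotheses about the item × category parameters translate into linear hypotheses about the basic parameters α_j.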
Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called “unconditional” (UML) and the “conditional” (CML) maximum-likelihood estimation equations in the dichotomous Rasch model are given. The basic critical condition is essentially the same for UML and CML estimation. For complete data matrices A, it is formulated both as a structural property of A and in terms of the sufficient marginal sums. In case of incomplete data, the condition is equivalent to complete connectedness of a certain directed graph. It is shown how to apply the results in practical uses of the Rasch model.
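For the complete-data case, the graph condition can be checked directly; the sketch below uses one common edge convention (an arc from item i to item j whenever some person solved i but failed j), which is an assumption and may differ in detail from the paper's formulation:

```python
# Sketch of the digraph-connectedness check for a complete 0/1 data matrix
# (persons x items); the edge convention is one common choice, not necessarily
# the paper's exact formulation.
import numpy as np
import networkx as nx

def cml_solution_unique(data):
    n_items = data.shape[1]
    g = nx.DiGraph()
    g.add_nodes_from(range(n_items))
    for i in range(n_items):
        for j in range(n_items):
            if i != j and np.any((data[:, i] == 1) & (data[:, j] == 0)):
                g.add_edge(i, j)   # some person solved item i but failed item j
    return nx.is_strongly_connected(g)

rng = np.random.default_rng(0)
print(cml_solution_unique(rng.integers(0, 2, size=(200, 10))))
```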
Two linearly constrained logistic models which are based on the well-known dichotomous Rasch model, the ‘linear logistic test model’ (LLTM) and the ‘linear logistic model with relaxed assumptions’ (LLRA), are discussed. Necessary and sufficient conditions for the existence of unique conditional maximum likelihood estimates of the structural model parameters are derived. Methods for testing composite hypotheses within the framework of these models and a number of typical applications to real data are mentioned.
The polytomous unidimensional Rasch model with equidistant scoring, also known as the rating scale model, is extended in such a way that the item parameters are linearly decomposed into certain basic parameters. The extended model is denoted as the linear rating scale model (LRSM). A conditional maximum likelihood estimation procedure and a likelihood-ratio test of hypotheses within the framework of the LRSM are presented. Since the LRSM is a generalization of both the dichotomous Rasch model and the rating scale model, the present algorithm is suited for conditional maximum likelihood estimation in these submodels as well. The practicality of the conditional method is demonstrated by means of a dichotomous Rasch example with 100 items, of a rating scale example with 30 items and 5 categories, and in the light of an empirical application to the measurement of treatment effects in a clinical study.
This paper discusses a new form of specifying and normalizing a Linear Logistic Test Model (LLTM) as suggested by Bechger, Verstralen, and Verhelst (Psychometrika, 2002). It is shown that there are infinitely many ways to specify the same normalization. Moreover, the relationship between some of their results and equivalent previous results in the literature is clarified, and it is shown that the goals of estimating and testing a single element of the weight matrix, for which they propose new methods, can be reached by means of simple, well-known tools already implemented in published LLTM software.
The paper addresses three neglected questions from IRT. In Section 1, the properties of the “measurement” of ability or trait parameters and item difficulty parameters in the Rasch model are discussed. It is shown that the solution to this problem is rather complex and depends both on general assumptions about properties of the item response functions and on assumptions about the available item universe. Section 2 deals with the measurement of individual change or “modifiability” based on a Rasch test. A conditional likelihood approach is presented that (a) yields an ML estimator of modifiability for given item parameters, (b) allows one to test hypotheses about change by means of a Clopper-Pearson confidence interval for the modifiability parameter, and (c) allows one to estimate modifiability jointly with the item parameters. Uniqueness results for all three methods are also presented. In Section 3, the Mantel-Haenszel method for detecting DIF is discussed under a novel perspective: What is the most general framework within which the Mantel-Haenszel method correctly detects DIF of a studied item? The answer is that this is a 2PL model where, however, all discrimination parameters are known and the studied item has the same discrimination in both populations. Since these requirements would hardly be satisfied in practical applications, the case of constant discrimination parameters, that is, the Rasch model, is the only realistic framework. A simple Pearson χ² test for DIF of one studied item is proposed as an alternative to the Mantel-Haenszel test; moreover, this test is generalized to the case of two items simultaneously studied for DIF.
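For reference, a minimal sketch of the classical Mantel-Haenszel procedure discussed here (not of the paper's proposed χ² alternative), stratifying by rest score; the 0/1 response matrix, the binary group labels, and the helper name mh_dif are hypothetical:

```python
# Hedged sketch of a Mantel-Haenszel DIF check for one studied item, stratified
# by rest score (total score excluding the studied item); data layout is assumed.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

def mh_dif(resp, group, item):
    rest = resp.sum(axis=1) - resp[:, item]
    tables = []
    for s in np.unique(rest):
        mask = rest == s
        tab = np.array([
            [np.sum(resp[mask & (group == g), item] == 1),
             np.sum(resp[mask & (group == g), item] == 0)]
            for g in (0, 1)
        ])
        if tab.sum(axis=1).min() > 0:   # keep strata where both groups are observed
            tables.append(tab)
    return StratifiedTable(tables).test_null_odds()

rng = np.random.default_rng(1)
resp = rng.integers(0, 2, size=(300, 12))   # hypothetical 0/1 response matrix
group = rng.integers(0, 2, size=300)        # hypothetical reference/focal labels
print(mh_dif(resp, group, item=2))
```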
The LLRA (linear logistic model with relaxed assumptions; Fischer, 1974, 1977a, 1977b, 1983a) was developed, within the framework of generalized Rasch models, for assessing change in dichotomous item score matrices between two points in time; it allows one to quantify change on latent trait dimensions and to explain change in terms of treatment effects, treatment interactions, and a trend effect. A remarkable feature of the model is that unidimensionality of the item set is not required. The present paper extends this model to designs with any number of time points and even with different sets of items presented on different occasions, provided that one unidimensional subscale is available per latent trait. Unidimensionality assumptions within subscales are thus combined with multidimensionality of the item set. Conditional maximum likelihood methods for parameter estimation and hypothesis testing are developed, and a necessary and sufficient condition for unique identification of the model, given the data, is derived. Finally, a sample application is presented.
Objectives: Patients with mild cognitive impairment (MCI) employ compensatory cognitive processes to maintain independence in day-to-day functioning, as compared to patients with Alzheimer’s dementia (AD). The dorsolateral prefrontal cortex (DLPFC) supports cognitive compensation in normal aging and MCI. Using Paired Associative Stimulation combined with Electroencephalography (PAS-EEG), we have previously shown that patients with AD have impaired DLPFC plasticity compared to healthy control (HC) individuals. The aim of this study is to examine whether DLPFC plasticity in individuals with MCI is preserved compared to those with AD and HC, as a potential mechanism underlying cognitive compensation in MCI.
Methods: We analyzed combined cross-sectional data from 47 AD, 16 MCI, and 40 HC participants from three different studies that assessed DLPFC plasticity using PAS-EEG. PAS-EEG assesses DLPFC plasticity via the induction of Long Term Potentiation (LTP)-like activity, hereafter referred to as PAS-LTP. Using multiple regression, we compared PAS-LTP in MCI to PAS-LTP in AD and HCs, after adjusting for age and gender.
Results: Among the 47 participants with AD (mean [SD] age = 75.3 [7] years), 29 were women and 18 were men; among the 16 participants with MCI (mean [SD] age = 74.8 [6] years), 11 were women and 5 were men; and among the 40 HCs (mean [SD] age = 76.4 [5.1] years), 22 were women and 18 were men. After adjusting for age and gender, there was a significant effect of diagnostic group on PAS-LTP [F(2,95) = 4.19, p = 0.018, between-group comparison η2 = 0.81]. Post-hoc comparisons showed that participants with MCI had higher PAS-LTP (mean [SD] = 1.31 [0.49]) than those with AD (mean [SD] = 1.09 [0.28]) (Bonferroni-corrected p = 0.042), but did not differ from HCs (mean [SD] = 1.25 [0.33]) (Bonferroni-corrected p = 1.0).
Conclusions: Our findings indicate that plasticity is preserved in the DLPFC among individuals with MCI, supporting the hypothesis that DLPFC plasticity contributes to cognitive compensation and to delaying progression to AD. Thus, further enhancement or longer preservation of DLPFC plasticity in individuals with MCI could further delay the onset of AD in this population.
Introducing the fundamentals of digital communication with a robust bottom-up approach, this textbook is designed to equip senior undergraduate and graduate students in communications engineering with the core skills they need to assess, compare, and design state-of-the-art digital communication systems. Delivering a fast, concise grounding in key algorithms, concepts, and mathematical principles, it provides all the mathematical tools needed to understand modern digital communications. The authors prioritise readability and accessibility to get students quickly up to speed on key topics in digital communication, and include all relevant derivations. Over 70 carefully designed multi-part end-of-chapter problems, comprising more than 360 individual questions, gauge student understanding and translate knowledge to real-world problem solving. The book is accompanied online by interactive visualizations of signals, downloadable Matlab code, and solutions for instructors.
Intersymbol interference (ISI) occurs for linear dispersive channels (i.e., channels whose transfer function is not flat within the transmission band). Hence, an obvious strategy to avoid ISI is to divide the transmission band into a large number of subbands that are used individually in parallel. If these subbands are narrow enough, the fluctuations of the channel transfer function within each subband can be ignored and no linear distortions occur that would have to be equalized. In this chapter, we study this idea in the particular form of orthogonal frequency-division multiplexing (OFDM). It is shown that, even when starting from the frequency-division multiplexing idea, the key principle behind OFDM is blockwise transmission and the use of suitable transformations at transmitter and receiver. We analyze OFDM in detail and show how the resulting parallel data transmission can be used in an optimum way. OFDM is compared with the equalization schemes discussed in the previous chapter and incorporated into the unified description framework.
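A minimal numerical sketch of this principle (assuming QPSK subcarrier symbols, a hypothetical three-tap channel, and perfect channel knowledge at the receiver; the parameter choices are illustrative, not the chapter's):

```python
# Toy OFDM link: blockwise IDFT/DFT with a cyclic prefix and one-tap equalization.
import numpy as np

N, CP = 64, 16                                  # subcarriers, cyclic-prefix length
rng = np.random.default_rng(0)

bits = rng.integers(0, 2, size=(2, N))
qpsk = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)  # frequency-domain symbols

tx = np.fft.ifft(qpsk) * np.sqrt(N)             # IDFT: one OFDM block in the time domain
tx_cp = np.concatenate([tx[-CP:], tx])          # prepend cyclic prefix

h = np.array([1.0, 0.5, 0.25])                  # hypothetical dispersive channel
rx_cp = np.convolve(tx_cp, h)[: CP + N]         # channel acts circularly on the block thanks to the CP

rx = np.fft.fft(rx_cp[CP:]) / np.sqrt(N)        # remove CP, DFT back to the frequency domain
H = np.fft.fft(h, N)                            # channel frequency response
eq = rx / H                                     # one-tap equalization per subcarrier

print(np.allclose(eq, qpsk))                    # True: ISI-free recovery of the QPSK symbols
```

Because the cyclic prefix turns the linear channel convolution into a circular one over each block, every subcarrier experiences only a single complex gain, which is exactly why no time-domain equalizer is needed.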
In carrier-modulated (digital) communication, the transmit signal has spectral components in a band around a so-called carrier frequency. Here, a baseband transmit signal is upconverted to obtain the radio-frequency (RF) transmit signal, and the RF receive signal is downconverted to obtain the baseband receive signal. The processing of transmit and receive signals is done as far as possible in the baseband domain. The aim of this chapter is to develop a mathematically precise, compact representation of real-valued RF signals, independent of the actual center frequency (or carrier frequency), by equivalent complex baseband (ECB) signals. In addition, the transforms of the corresponding systems and stochastic processes into the ECB domain and back are covered in detail. Conditions for wide-sense stationary and cyclostationary stochastic processes in the ECB domain are discussed.
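The core relationship can be stated compactly in standard notation (the symbols may differ from the chapter's, and some conventions include an additional factor of √2 so that the ECB signal has the same power as the RF signal):

\[ s_{\mathrm{RF}}(t) = \mathrm{Re}\left\{ s_{\mathrm{ECB}}(t)\, \mathrm{e}^{\mathrm{j} 2\pi f_{\mathrm{c}} t} \right\}, \]

where f_c is the carrier frequency; the amplitude and phase of the complex ECB signal carry the information, while the dependence on the carrier itself is removed.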
In digital frequency modulation, in particular frequency-shift keying (FSK), information is represented solely by the instantaneous frequency, whereas the amplitude of the ECB signal and thus the envelope of the RF signal are constant. Therefore, efficient power amplification is possible, which is an important advantage of digital frequency modulation. Even though the frequency and phase of a carrier signal are tightly related (the instantaneous frequency is given by the derivative of the phase), differentially encoded PSK and FSK fall into different families. Moreover, in FSK the continuity of the carrier phase plays an important role, resulting in continuous-phase FSK (CPFSK). A generalization of CPFSK leads to continuous-phase modulation (CPM), similar to the generalization of MSK to Gaussian MSK discussed in Chapter 4. A brief introduction to CPM is presented, and we highlight in particular the coding inherent in CPFSK and CPM. For the characterization and analysis, the general signal space concept derived in Chapter 6 is applied.
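In standard notation (which may differ from the chapter's symbols), a CPM signal can be written as

\[ s(t) = A \cos\bigl(2\pi f_{\mathrm{c}} t + \varphi(t)\bigr), \qquad \varphi(t) = 2\pi h \sum_{k} a[k]\, q(t - kT), \]

where a[k] are the data symbols, h is the modulation index, and q(t) is the phase response obtained by integrating the frequency pulse g(t). CPFSK corresponds to a rectangular frequency pulse of duration T; because the phase accumulates contributions from all past symbols, the signal carries memory, which is the inherent coding referred to above.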
An overview of digital communications techniques is given. The notions of source, transmitter, channel, receiver, and sink are explained. Examples of digital communication schemes and their respective applications are given. The main quantities and performance measures are introduced and summarized. The fundamental trade-off between power efficiency and bandwidth efficiency is characterized.