Processing and extracting actionable information, such as fault or anomaly indicators, from vibration telemetry is both challenging and critical for an accurate assessment of mechanical system health and for subsequent predictive maintenance. In the setting of predictive maintenance for populations of similar assets, the knowledge gained from any single asset should be leveraged to improve predictions across the entire population. In this paper, a novel transfer learning approach to population-level health monitoring is presented and applied to monitoring multiple rotating plant assets in a power generation scenario. The focus is on detecting statistical anomalies in time series of telemetry data as a means of identifying deviations from the typical operating regime, a challenging task because each machine is observed under different operating regimes. The proposed methodology effectively transfers information across different assets, automatically identifying segments with common statistical characteristics and using them to enrich the training of the local supervised learning models. The proposed solution leads to a substantial reduction in mean squared error relative to a baseline model.
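The abstract does not spell out the algorithm, but the core idea, pooling source-asset segments whose summary statistics match the target asset and then training a local model on the enriched set, can be sketched as follows. Everything here (AR(1) telemetry, ridge regression on lagged values, a standard-deviation matching rule) is an illustrative assumption, not the paper's actual method.

```python
# Minimal sketch of cross-asset information transfer for telemetry modelling.
# Hypothetical illustration only, not the paper's algorithm.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def make_asset(n=400, scale=1.0):
    """Simulate vibration-like telemetry: an AR(1) signal."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.8 * x[t - 1] + rng.normal(0, scale)
    return x

def segments(x, width=50):
    return [x[i:i + width] for i in range(0, len(x) - width, width)]

def lagged_xy(seg, lags=3):
    """Predict each sample from its preceding lagged values."""
    X = np.column_stack([seg[i:len(seg) - lags + i] for i in range(lags)])
    return X, seg[lags:]

target = make_asset(scale=1.0)                              # data-poor asset
sources = [make_asset(scale=s) for s in (1.0, 1.05, 3.0)]   # rest of the fleet

local = segments(target)[:2]        # only a little local training data

# Transfer: pool source segments whose summary statistic (std here)
# is close to the target's typical segment statistic.
ref_std = np.mean([s.std() for s in local])
pooled = list(local)
for src in sources:
    pooled += [s for s in segments(src) if abs(s.std() - ref_std) < 0.3]

def fit_and_score(train_segs, test_seg):
    Xs, ys = zip(*(lagged_xy(s) for s in train_segs))
    model = Ridge().fit(np.vstack(Xs), np.concatenate(ys))
    Xt, yt = lagged_xy(test_seg)
    return mean_squared_error(yt, model.predict(Xt))

test = segments(target)[-1]
print("baseline MSE:", fit_and_score(local, test))
print("transfer MSE:", fit_and_score(pooled, test))
```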
Pulmonary atresia with intact ventricular septum is a rare congenital cardiac lesion with significant anatomical heterogeneity. Surgical planning of borderline cases remains challenging and is primarily based on echocardiography. The aim was to identify echocardiographic parameters that correlate with surgical outcome and to develop a discriminatory calculator.
Methods:
A retrospective review of all cases of pulmonary atresia with intact ventricular septum managed at a statewide tertiary paediatric cardiac centre between 2004 and 2020 was performed. Demographic, clinical, and echocardiographic data were collected. Logistic regression was used to develop a discriminatory tool for prediction of biventricular repair.
Results:
Forty patients were included. Overall mortality was 27.5% (n = 11) and confined to patients managed as univentricular (11 vs 0, p = 0.027). Patients who underwent univentricular palliation were more likely to have an associated coronary artery abnormality (17 vs 3, p = 0.001). Fifteen surviving patients (51.7%) achieved biventricular circulation while 14 (48.3%) required one-and-a-half or univentricular palliation. Nineteen patients (47.5%) underwent percutaneous pulmonary valve perforation. No patients without tricuspid regurgitation achieved biventricular repair. The combination of tricuspid valve/mitral valve annulus dimension ratio and right ventricle/left ventricle length ratio identified biventricular management with a sensitivity of 93% and specificity of 96%. An online calculator has been made available.
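As an illustration only, a calculator of the kind described might combine the two ratios through a logistic model. The coefficients below are invented for demonstration; the published online calculator's actual formula is not reproduced here.

```python
# Hypothetical sketch of a two-ratio discriminatory calculator.
# Coefficients b0, b1, b2 are invented for illustration only.
import math

def biventricular_probability(tv_mv_ratio: float, rv_lv_ratio: float,
                              b0: float = -10.0, b1: float = 7.0,
                              b2: float = 6.0) -> float:
    """Logistic model: larger TV/MV and RV/LV ratios favour biventricular repair."""
    z = b0 + b1 * tv_mv_ratio + b2 * rv_lv_ratio
    return 1.0 / (1.0 + math.exp(-z))

# Example: a borderline case with TV/MV = 0.7 and RV/LV = 0.8.
print(f"P(biventricular) = {biventricular_probability(0.7, 0.8):.2f}")
```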
Conclusion:
Pulmonary atresia with intact ventricular septum is a challenging condition with significant early and interstage morbidity and mortality risk. Patient outcomes were comparable to internationally reported data. Right ventricle/left ventricle length and tricuspid valve/mitral valve annulus dimension ratios identified a biventricular pathway with a high level of sensitivity and specificity. Absent tricuspid regurgitation was associated with a univentricular outcome.
Diagnosis of acute ischemia typically relies on evidence of ischemic lesions on magnetic resonance imaging (MRI), a limited diagnostic resource. We aimed to determine associations of clinical variables and acute infarcts on MRI in patients with suspected low-risk transient ischemic attack (TIA) and minor stroke and to assess their predictive ability.
Methods:
We conducted a post-hoc analysis of the Diagnosis of Uncertain-Origin Benign Transient Neurological Symptoms (DOUBT) study, a prospective, multicenter cohort study investigating the frequency of acute infarcts in patients with low-risk neurological symptoms. The primary outcome parameter was defined as diffusion-weighted imaging (DWI)-positive lesions on MRI. Logistic regression analysis was performed to evaluate associations of clinical characteristics with MRI-DWI positivity. Model performance was evaluated by Harrell's c-statistic.
Results:
In 1028 patients, age (odds ratio (OR) 1.03, 95% confidence interval (CI) 1.01–1.05), motor (OR 2.18, 95% CI 1.27–3.65) or speech symptoms (OR 2.53, 95% CI 1.28–4.80), and no previous identical event (OR 1.75, 95% CI 1.07–2.99) were positively associated with MRI-DWI positivity. Female sex (OR 0.47, 95% CI 0.32–0.68), dizziness and gait instability (OR 0.34, 95% CI 0.14–0.69), normal exam (OR 0.55, 95% CI 0.35–0.85) and resolved symptoms (OR 0.49, 95% CI 0.30–0.78) were negatively associated. Symptom duration and any additional symptoms/symptom combinations were not associated. The predictive ability of the model was moderate (c-statistic 0.72, 95% CI 0.69–0.77).
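For readers unfamiliar with this analysis pattern, the following sketch fits a logistic regression to simulated clinical covariates and reports odds ratios and a c-statistic (equivalent to the ROC AUC for a binary outcome). The data and effect sizes are simulated, loosely echoing the directions reported above; they are not DOUBT data.

```python
# Illustrative sketch: logistic regression on clinical covariates plus a
# c-statistic. All data below are simulated for demonstration purposes.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(60, 12, n)
motor = rng.binomial(1, 0.3, n)
female = rng.binomial(1, 0.5, n)

# Simulated outcome loosely mimicking the reported direction of effects.
logit = -3 + 0.03 * age + 0.8 * motor - 0.7 * female
dwi_pos = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, motor, female]))
fit = sm.Logit(dwi_pos, X).fit(disp=0)
print("odds ratios:", np.exp(fit.params[1:]))
print("c-statistic:", roc_auc_score(dwi_pos, fit.predict(X)))
```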
Conclusion:
Detailed clinical information is helpful in assessing the risk of ischemia in patients with low-risk neurological events, but a predictive model had only moderate discriminative ability. Patients with clinically suspected low-risk TIA or minor stroke require MRI to confirm the diagnosis of cerebral ischemia.
Operationalization guidance is needed to support health technology assessment (HTA) bodies considering implementing lifecycle HTA (LC-HTA) approaches. The 2022 Health Technology Assessment International (HTAi) Global Policy Forum (GPF) established a Task Force to develop a position paper on LC-HTA. In its first paper, the Task Force established a definition and framework for LC-HTA in order to tailor it to specific decision problems. This second paper focused on the provision of practical operational guidance to implement LC-HTA. Detailed descriptions of the three LC-HTA operational steps are provided (defining the decision problem, sequencing of HTA activities, and developing optimization criteria) and accompanied by worked examples and an operationalization checklist with 20 different questions for HTA bodies to consider when developing an LC-HTA approach. The questions were designed to be applicable across different types of HTA and scenarios, and require adaptation to local jurisdictions, remits, and context.
The 2022 Health Technology Assessment International (HTAi) Global Policy Forum (GPF) established the goal of developing a position statement and framework for lifecycle HTA (LC-HTA), through a Task Force leveraging multi-stakeholder monthly discussions and GPF member input. The Task Force developed a working definition: LC-HTA is a systematic process utilizing sequential HTA activities to inform decision making where the evidence base, the health technology itself, or the context in which it is applied, has a potential to meaningfully change at different points in its LC. Four key scenarios were identified where it was considered that an LC-HTA approach would add sufficient value to HTA bodies and their key stakeholders to justify the additional resource burden. Based on the four scenarios, a high-level LC-HTA framework was developed consisting of (i) defining the decision problem, (ii) sequencing of HTA activities, and (iii) developing optimization criteria. Subsequently, the Task Force developed operationalization guidance for LC-HTA in a companion paper.
Taking a step-by-step approach to modelling neurons and neural circuitry, this textbook teaches students how to use computational techniques to understand the nervous system at all levels, using case studies throughout to illustrate fundamental principles. Starting with a simple model of a neuron, the authors gradually introduce neuronal morphology, synapses, ion channels and intracellular signalling. This fully updated new edition contains additional examples and case studies on specific modelling techniques, suggestions on different ways to use this book, and new chapters covering plasticity, modelling extracellular influences on brain circuits, modelling experimental measurement processes, and choosing appropriate model structures and their parameters. The online resources offer exercises and simulation code that recreate many of the book's figures, allowing students to practice as they learn. Requiring an elementary background in neuroscience and high-school mathematics, this is an ideal resource for a course on computational neuroscience.
Intracellular molecular signalling plays a crucial role in modulating ion channel dynamics, synaptic plasticity and, ultimately, the behaviour of the whole cell. In this chapter, we investigate ways of modelling intracellular signalling systems. We focus on calcium, as it plays an extensive role in many cell functions. Included are models of intracellular buffering systems, ionic pumps and calcium-dependent processes. This leads us to outline other intracellular signalling pathways involving more complex enzymatic reactions and cascades. We introduce the well-mixed approach to modelling these pathways and explore its limitations. Rule-based modelling can be used when full specification of a signalling network is infeasible. When small numbers of molecules are involved, stochastic approaches are necessary and we consider both population-based and particle-based methods for stochastic modelling. Movement of molecules through diffusion must be considered in spatially inhomogeneous systems.
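As a flavour of the population-based stochastic methods mentioned, the following is a minimal Gillespie-style simulation of a single calcium buffering reaction, Ca + B ⇌ CaB. The rate constants and molecule counts are illustrative assumptions, not values from the chapter.

```python
# Minimal Gillespie (stochastic simulation algorithm) sketch for Ca + B <-> CaB.
# Rates and counts are illustrative.
import numpy as np

rng = np.random.default_rng(2)
kf, kb = 0.002, 0.5          # forward/backward rate constants (illustrative)
ca, b, cab = 100, 200, 0     # molecule counts
t, t_end = 0.0, 10.0

while t < t_end:
    a1 = kf * ca * b         # propensity of binding
    a2 = kb * cab            # propensity of unbinding
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1 / a0)          # time to next reaction event
    if rng.random() < a1 / a0:            # choose which reaction fires
        ca, b, cab = ca - 1, b - 1, cab + 1
    else:
        ca, b, cab = ca + 1, b + 1, cab - 1

print(f"t={t:.2f}: Ca={ca}, B={b}, CaB={cab}")
```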
This chapter introduces the physical principles underlying the models of electrical activity of neurons. Starting with the neuronal cell membrane, we explore how its permeability to different ions and the maintenance by ionic pumps of concentration gradients across the membrane underpin the resting membrane potential. We show how these properties can be represented by an equivalent electrical circuit, which allows us to compute the response of the membrane potential over time to input current. We conclude by describing the integrate-and-fire neuron model, which is based on the equivalent electrical circuit.
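The integrate-and-fire model lends itself to a compact sketch. The following leaky integrate-and-fire simulation uses typical textbook parameter values, which are assumptions rather than figures taken from this chapter.

```python
# Leaky integrate-and-fire sketch based on the equivalent electrical circuit:
# C dV/dt = -(V - E_L)/R + I, with a spike-and-reset rule. Values are typical
# textbook choices, assumed for illustration.
import numpy as np

C, R, E_L = 1.0, 10.0, -70.0      # capacitance (nF), resistance (MOhm), rest (mV)
V_th, V_reset = -54.0, -80.0      # spike threshold and reset (mV)
dt, T, I = 0.1, 200.0, 2.0        # time step (ms), duration (ms), current (nA)

V, spikes = E_L, []
for step in range(int(T / dt)):
    dV = (-(V - E_L) / R + I) / C      # membrane equation
    V += dt * dV
    if V >= V_th:                      # spike: record time and reset
        spikes.append(step * dt)
        V = V_reset

print(f"{len(spikes)} spikes; first at {spikes[0]:.1f} ms" if spikes else "no spikes")
```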
So far, we have been discussing how to model accurately the electrical and chemical properties of neurons and how these cells interact within the networks of cells forming the nervous system. The existence of a correct structure is essential for proper functioning of the nervous system, and we now discuss modelling of the development of the nervous system. Most existing models of developmental processes are not as widely accepted as, for example, the Hodgkin–Huxley model of nerve impulse propagation. They are designed on the basis of usually unverified assumptions to test a particular theory for neural development. Our aim is to cast light on the different types of issues that arise when constructing a model of development through discussing several case examples of models applied to particular neural developmental phenomena. We look at models constructed at the levels of individual neurons and of ensembles of nerve cells.
When modelling networks of neurons, it is generally not possible to represent each neuron of the real system in the model. It is therefore essential to carry out appropriate simplifications, which raise many design questions: how each neuron should be modelled, how many neurons the model network should contain and how the neurons should interact. To illustrate how these questions are addressed, networks using various types of model neuron are described. In some cases, the properties of each model neuron are represented directly in the model; in others, a single unit represents the averaged properties of a population of neurons. We then look at several large-scale models intended to model specific brain areas. In some of these models, the model neurons are based on cells reconstructed from extensive anatomical and physiological measurements. The advantages and disadvantages of these different types of model are discussed.
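To illustrate the second option, a unit standing for the averaged activity of a population, here is a minimal firing-rate sketch in the spirit of Wilson–Cowan models, with one excitatory and one inhibitory pool. All parameter values are illustrative.

```python
# Firing-rate sketch: each variable is the averaged activity of a neural
# population (E = excitatory pool, I = inhibitory pool). Illustrative values.
import numpy as np

def f(x):
    """Sigmoidal population response function."""
    return 1.0 / (1.0 + np.exp(-x))

w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # coupling strengths
tau, dt, ext = 10.0, 0.1, 2.0                    # time constant (ms), step, drive
E, I = 0.1, 0.1                                  # initial population rates

for _ in range(int(500 / dt)):                   # Euler integration
    dE = (-E + f(w_ee * E - w_ei * I + ext - 4)) / tau
    dI = (-I + f(w_ie * E - w_ii * I - 4)) / tau
    E, I = E + dt * dE, I + dt * dI

print(f"steady rates: E={E:.3f}, I={I:.3f}")
```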
This chapter covers a spectrum of models for both chemical and electrical synapses. Different levels of detail are delineated in terms of model complexity and suitability for different situations. These range from empirical models of voltage waveforms to more detailed kinetic schemes, and to complex stochastic models, including vesicle recycling and release. Simple static models that produce the same postsynaptic response for every presynaptic action potential are compared with more realistic models incorporating short-term dynamics that produce facilitation and depression of the postsynaptic response. Different postsynaptic receptor-mediated excitatory and inhibitory chemical synapses are described. Electrical connections formed by gap junctions are considered.
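The contrast between static synapses and those with short-term dynamics can be sketched compactly. The following compares a fixed response amplitude with a simple depressing synapse in the style of Tsodyks–Markram resource dynamics; the parameter values are illustrative assumptions.

```python
# Static vs depressing synapse sketch: each presynaptic spike releases a
# fraction U of the available resources x, which recover with time constant
# tau_rec. Values are illustrative.
import numpy as np

U, tau_rec = 0.5, 300.0                 # release fraction, recovery (ms)
spike_times = np.arange(0, 200, 20.0)   # regular 50 Hz presynaptic train

x, last_t = 1.0, 0.0
for t in spike_times:
    x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)  # resource recovery
    release = U * x                     # amplitude of this postsynaptic response
    x -= release                        # deplete resources
    last_t = t
    print(f"t={t:5.0f} ms  static amp={U:.2f}  depressing amp={release:.3f}")
```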
Modelling a neural system involves selecting the mathematical form of the model’s components, such as neurons, synapses and ion channels, and assigning values to the model’s parameters. These choices may be guided by matching the known biology, by fitting a suitable function to data or by computational simplicity. Only a few parameter values may be available through existing experimental measurements or computational models; it will then be necessary to estimate parameters from experimental data or through optimisation of model output. Here we outline the many mathematical techniques available and discuss how to specify suitable criteria against which a model can be optimised. For many models, ranges of parameter values may provide equally good outcomes against performance criteria. Exploring the parameter space can lead to valuable insights into how particular model components contribute to particular patterns of neuronal activity, and it is important to establish the sensitivity of the model to particular parameter values.
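A minimal sketch of this workflow: define a criterion (here, least-squares error against data) and hand it to a general-purpose optimiser. The exponential model, synthetic data and parameter values are assumptions chosen for illustration.

```python
# Parameter estimation sketch: fit model parameters by optimising a
# least-squares criterion against (synthetic) recorded data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.linspace(0, 50, 200)
true_a, true_tau = 3.0, 12.0
data = true_a * np.exp(-t / true_tau) + rng.normal(0, 0.05, t.size)

def loss(params):
    """Sum-of-squares criterion between model output and data."""
    a, tau = params
    return np.sum((a * np.exp(-t / tau) - data) ** 2)

result = minimize(loss, x0=[1.0, 5.0], method="Nelder-Mead")
print("fitted a, tau:", result.x)   # should recover roughly (3.0, 12.0)
```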
The nervous system consists not only of neurons but also of other cell types, such as glial cells, which can be modelled using the same principles as neurons. The extracellular space (ECS) contains ions and molecules that affect the activity of both neurons and glial cells, as does the transport of signalling molecules, oxygen and cell nutrients through the irregular ECS landscape. This chapter shows how to model such transport processes, involving both diffusion and electrical drift. This formalism also explains the formation of dense nanometre-thick ion layers around membranes (Debye layers). When ion transport in the ECS stems from electrical drift alone, the formalism reduces to volume conductor theory, which is commonly used to model electrical potentials around cells in the ECS. Finally, the chapter outlines how to model ionic and molecular dynamics not only in the ECS but also in entire brain tissue comprising neurons, glial cells and blood vessels.
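The combination of diffusion and electrical drift can be sketched with a one-dimensional finite-difference scheme based on the Nernst–Planck flux. The grid, constants and boundary handling below are illustrative simplifications, not a tissue-calibrated model.

```python
# 1D electrodiffusion sketch: concentration evolves under the Nernst-Planck
# flux J = -D dc/dx - (D z F / R T) c dV/dx, with dc/dt = -dJ/dx.
# All constants are illustrative (arbitrary units).
import numpy as np

D, z = 1.0, 1                      # diffusion coefficient, ion valence
beta = 0.5                         # zF/(RT) lumped into one constant here
nx, dx, dt = 100, 0.1, 1e-4
c = np.ones(nx); c[40:60] = 2.0    # initial concentration bump
V = -np.linspace(0, 1, nx)         # fixed linear potential (constant field)

dVdx = np.gradient(V, dx)
for _ in range(2000):
    dcdx = np.gradient(c, dx)
    J = -D * dcdx - D * z * beta * c * dVdx    # Nernst-Planck flux
    c -= dt * np.gradient(J, dx)               # continuity equation
    c[0], c[-1] = c[1], c[-2]                  # crude zero-flux boundaries

print(f"concentration peak now at grid index {c.argmax()}")
```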
In this book, we have aimed to explain the principles of computational neuroscience by showing how the underlying mechanisms are being modelled, together with presenting critical accounts of examples of their use. In some chapters, we have placed the modelling work described in its historical context where we felt this would be interesting and useful. We now make some brief comments about where the field of computational neuroscience came from and where it might be going.