The Latent Position Model (LPM) is a popular approach for the statistical analysis of network data. A central aspect of this model is that it assigns nodes to random positions in a latent space, such that the probability of an interaction between each pair of individuals or nodes is determined by their distance in this latent space. A key feature of this model is that it allows one to visualize nuanced structures via the latent space representation. The LPM can be further extended to the Latent Position Cluster Model (LPCM), to accommodate the clustering of nodes by assuming that the latent positions are distributed following a finite mixture distribution. In this paper, we extend the LPCM to accommodate missing network data and apply this to non-negative discrete weighted social networks. By treating missing data as “unusual” zero interactions, we propose a combination of the LPCM with the zero-inflated Poisson distribution. Statistical inference is based on a novel partially collapsed Markov chain Monte Carlo algorithm, where a Mixture-of-Finite-Mixtures (MFM) model is adopted to automatically determine the number of clusters and optimal group partitioning. Our algorithm features a truncated absorb-eject move, which is a novel adaptation of an idea commonly used in collapsed samplers, within the context of MFMs. Another aspect of our work is that we illustrate our results on 3-dimensional latent spaces, maintaining clear visualizations while achieving more flexibility than 2-dimensional models. The performance of this approach is illustrated via three carefully designed simulation studies, as well as four different publicly available real networks, where some interesting new perspectives are uncovered.
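As a rough illustration of the modelling idea described above (a sketch, not the authors' actual implementation), the following Python snippet draws one dyad's weight from a zero-inflated Poisson whose rate decays with latent-space distance. All parameter names and values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def zip_edge_weight(z_i, z_j, alpha=1.0, p_struct_zero=0.2, rng=rng):
    """Draw a non-negative edge weight under a zero-inflated Poisson
    whose rate decays with latent-space distance (illustrative only)."""
    dist = np.linalg.norm(z_i - z_j)   # Euclidean distance in latent space
    rate = np.exp(alpha - dist)        # log-linear link, a common LPM choice
    if rng.random() < p_struct_zero:   # "unusual" zero, e.g. an unobserved dyad
        return 0
    return rng.poisson(rate)           # ordinary Poisson count otherwise

# Two nodes in a 3-dimensional latent space
z1, z2 = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.5, 0.0])
w = zip_edge_weight(z1, z2)
```

The zero-inflation component is what lets the model treat a missing observation as an "unusual" zero rather than as evidence of no interaction.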
The generalized Gompertz distribution—an extension of the standard Gompertz distribution as well as the exponential distribution and the generalized exponential distribution—offers more flexibility in modeling survival or failure times as it introduces an additional parameter, which can account for different shapes of hazard functions. This enhances its applicability in various fields such as actuarial science, reliability engineering and survival analysis, where more complex survival models are needed to accurately capture the underlying processes. The effect of heterogeneity has generated increased interest in recent times. In this article, multivariate chain majorization methods are exploited to develop stochastic ordering results for extreme-order statistics arising from independent heterogeneous generalized Gompertz random variables with increased degree of heterogeneity.
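For concreteness, one commonly used parameterization of the generalized Gompertz CDF (the exact notation in the article may differ) is

```latex
F(x) = \left[ 1 - \exp\!\left( -\frac{\lambda}{c}\,\bigl(e^{c x} - 1\bigr) \right) \right]^{\theta},
\qquad x > 0, \; \lambda, c, \theta > 0 .
```

Here $\theta$ is the additional shape parameter: $\theta = 1$ recovers the standard Gompertz distribution, letting $c \to 0$ yields the generalized exponential distribution $\bigl(1 - e^{-\lambda x}\bigr)^{\theta}$, and both together give the exponential distribution.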
The chapter begins with a discussion of intelligence in simple unicellular organisms, followed by that of animals with complex nervous systems. Surprisingly, even organisms that do not have a central brain can navigate their complex environments, forage, and learn. In organisms with a central nervous system, neurons and synapses in the brain provide the elementary basis of intelligence and memory. Neurons generate action potentials that represent information. Synapses hold memory and control signal transmission between neurons. A key feature of biological neural circuits is plasticity, that is, their ability to modify circuit properties based both on stimuli and on the time intervals between them. This represents one form of learning. The biological brain is not static but continuously evolves based on experience. The field of AI seeks to learn from biological neural circuitry, emulate aspects of intelligence and learning, and build physical devices and algorithms that can demonstrate features of animal intelligence. Neuromorphic computing therefore requires a paradigm shift in the design of semiconductors, as well as algorithmic foundations built not necessarily for perfection but for learning.
This chapter explores fundamental analytical techniques in data science, distinguishing between data analysis (backward-looking) and data analytics (forward-looking prediction).
Six key analysis categories are covered:
Descriptive Analysis examines current data through statistical measures (mean, median, mode) and visualizations to understand "what is happening."
Diagnostic Analytics investigates "why something happened" using correlation analysis, emphasizing the distinction between correlation and causation.
Predictive Analytics forecasts future outcomes using historical data and regression analysis.
Prescriptive Analytics determines optimal courses of action by analyzing potential decisions.
Exploratory Analysis discovers unknown relationships through visualization when questions aren’t predetermined.
Mechanistic Analysis examines exact variable changes and their effects.
The chapter emphasizes statistical literacy as essential for data scientists, covering key concepts like variable types, frequency distributions, measures of centrality and dispersion, and regression modeling. Hands-on examples demonstrate applications across business, healthcare, and social sciences.
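The measures of centrality and dispersion named above can be computed directly with Python's standard library; the ages below are a made-up sample used purely for illustration:

```python
import statistics as st

ages = [23, 25, 25, 31, 38, 38, 38, 45]   # hypothetical sample

mean = st.mean(ages)      # 263 / 8 = 32.875
median = st.median(ages)  # (31 + 38) / 2 = 34.5
mode = st.mode(ages)      # 38, the most frequent value

stdev = st.stdev(ages)    # sample standard deviation (dispersion)
```

Comparing the three centrality measures on the same data already shows why all of them matter: a few large values pull the mean away from the median and mode.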
This chapter focuses on applying data science and machine learning techniques to real-world problems using R. It covers four main applications: clinical data analysis, Reddit data collection and analysis, YouTube video statistics analysis, and large-scale processing of Yelp review data.
The chapter begins with exploring clinical data from a dermatology study, demonstrating visual exploration, gradient descent regression, random forest classification, and k-means clustering techniques. It then transitions to social media analysis, specifically working with Reddit APIs to collect and analyze posts, examining relationships between variables like post length, scores, and upvotes.
The YouTube section covers API authentication and data collection for video statistics analysis. Finally, the Yelp analysis demonstrates big data processing techniques, exploring user behavior patterns through correlation analysis, regression modeling, and clustering of review data.
The chapter emphasizes practical API usage, data visualization, statistical testing, and the importance of understanding both the problem and data before analysis.
This chapter focuses on using Python for statistical analysis in data science. It begins with statistics essentials, teaching how to calculate descriptive statistics like mean, median, variance, and standard deviation using NumPy. The chapter covers data visualization techniques using Matplotlib to create histograms, bar charts, and scatterplots for exploring data patterns. Key topics include importing data using Pandas DataFrames, performing correlation analysis to measure relationships between variables, and conducting statistical inference through hypothesis testing. Students learn to implement t-tests for comparing means between two groups and ANOVA for comparing multiple groups. The chapter emphasizes practical applications through hands-on examples, from analyzing family age data to comparing exam scores across different classes. These statistical techniques form the foundation for more advanced data science work, enabling students to extract meaningful insights from datasets and make data-driven decisions.
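The two-group comparison described above can be sketched with a pooled two-sample t statistic using only the standard library; the exam scores are invented, and in practice one would call `scipy.stats.ttest_ind` (or `scipy.stats.f_oneway` for the ANOVA case):

```python
import statistics as st

scores_a = [72, 68, 75, 70, 74, 69]   # hypothetical exam scores, class A
scores_b = [78, 82, 80, 77, 81, 79]   # hypothetical exam scores, class B

mean_a, mean_b = st.mean(scores_a), st.mean(scores_b)
var_a, var_b = st.variance(scores_a), st.variance(scores_b)
n_a, n_b = len(scores_a), len(scores_b)

# Pooled (equal-variance) two-sample t statistic
sp2 = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t_stat = (mean_a - mean_b) / (sp2 * (1 / n_a + 1 / n_b)) ** 0.5
# t ≈ -5.93 here: class B's mean score is clearly higher
```

A t statistic this far from zero would be compared against the t distribution with n_a + n_b − 2 degrees of freedom to obtain the p-value.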
This chapter offers an in-depth discussion of various nanoelectronic and nanoionic synapses along with the operational mechanisms, capabilities and limitations, and directions for further advancements in this field. We begin with overarching mechanisms to design artificial synapses and learning characteristics for neuromorphic computing. Silicon-based synapses using digital CMOS platforms are described followed by emerging device technologies. Filamentary synapses that utilize nanoscale conducting pathways for forming and breaking current shunting routes within two-terminal devices are then discussed. This is followed by ferroelectric devices wherein polarization states of a switchable ferroelectric layer are responsible for synaptic plasticity and memory. Insulator–metal transition-based synapses are described wherein a sharp change in conductance of a layer due to external stimulus offers a route for compact synapse design. Organic materials, 2D van der Waals, and layered semiconductors are discussed. Ionic liquids and solid gate dielectrics for multistate memory and learning are presented. Photonic and spintronic synapses are then discussed in detail.
This chapter provides a comprehensive introduction to supervised learning techniques for classification problems. It begins with logistic regression for binary classification, explaining the sigmoid function and gradient ascent optimization. The chapter then covers softmax regression for multi-class problems, followed by k-nearest neighbors (kNN) as an intuitive distance-based classifier.
Decision trees are explored in detail, including entropy, information gain, and the ID3 algorithm, along with derived decision rules and association rules. Random forests are presented as an ensemble method that addresses overfitting by combining multiple decision trees.
The chapter covers Naive Bayes classification, which applies Bayes’ theorem under a deliberately "naive" assumption of feature independence. Finally, Support Vector Machines (SVMs) are introduced for both linear and non-linear classification using maximum margin hyperplanes.
Each technique includes hands-on R programming examples with real datasets, practical applications, and exercises to reinforce learning concepts.
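The book's examples are in R, but the sigmoid and gradient-ascent mechanics of logistic regression can be sketched compactly in Python; the one-feature toy dataset below is invented:

```python
import math

def sigmoid(z):
    # Clamp extreme arguments to avoid math.exp overflow
    if z < -30:
        return 0.0
    if z > 30:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

# Toy one-feature dataset with binary labels (hypothetical)
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(1000):
    # Gradient ascent on the log-likelihood of the logistic model
    gw = sum((y - sigmoid(w * x + b)) * x for x, y in zip(xs, ys))
    gb = sum(y - sigmoid(w * x + b) for x, y in zip(xs, ys))
    w += lr * gw
    b += lr * gb

# After training, predicted probabilities should separate the classes
p_low, p_high = sigmoid(w * 0.5 + b), sigmoid(w * 4.0 + b)
```

The update direction is the gradient of the log-likelihood, which is why this is gradient *ascent*; flipping the sign and maximizing the negative log-likelihood gives the equivalent gradient-descent formulation.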
This paper investigates the negation-free fragment of the bi-connexive logic 2C, called 2C$_-$, from the perspective of bilateralist proof-theoretic semantics (PTS). It is argued that eliminating primitive negation has two important conceptual consequences. First, it requires a reconceptualization of contradictory logics: in a bilateralist framework, contradiction need not be understood in terms of negation inconsistency, but rather as the coexistence of proofs and refutations for certain formulas within a non-trivial system. Second, it challenges the standard definition of connexive logics, which is typically given in terms of negation-based schemata. Instead, a rule-based conception of connexivity, grounded in bilateralist PTS, is proposed. This reconception avoids dependence on the validation of specific formula schemata and thereby also dependence on negation. The paper also addresses the issue of proof–refutation duality in the absence of strong negation, which can be formalized and recovered at a meta-level by extending the system with a two-sorted typed $\lambda $-calculus.
This chapter focuses on applying data science and machine learning techniques to real-world problems using Python. It covers four main applications: clinical data analysis, Reddit data collection and analysis, YouTube video statistics analysis, and large-scale processing of Yelp review data.
The chapter begins with exploring clinical data from a dermatology study, demonstrating visual exploration, gradient descent regression, random forest classification, and k-means clustering techniques. It then transitions to social media analysis, specifically working with Reddit APIs to collect and analyze posts, examining relationships between variables like post length, scores, and upvotes.
The YouTube section covers API authentication and data collection for video statistics analysis. Finally, the Yelp analysis demonstrates big data processing techniques, exploring user behavior patterns through correlation analysis, regression modeling, and clustering of review data.
The chapter emphasizes practical API usage, data visualization, statistical testing, and the importance of understanding both the problem and data before analysis.