Social and developmental psychology are often viewed as distinct subdisciplines, each with its own theories and methodologies. However, this book seeks to bridge that divide by proposing an integrative framework that considers various levels of analysis, from the individual to the societal. It emphasizes the interplay of fundamental concepts such as intra- and inter-group conflict and change across these levels. By revisiting and renewing foundational theories of development, the book introduces the concept of 'genetic social psychology.' This approach is applied to the complex case of the Cyprus conflict, as well as other conflict and post-conflict scenarios, uncovering transformative possibilities for both theory and practice. Ultimately, this work advocates for a broader, more cohesive understanding of psychological processes in social contexts, addressing contemporary challenges and enhancing our grasp of human behavior.
Understanding how conversation is produced, represented in memory, and utilized in daily social interaction is crucial to comprehending how human communication occurs and how it might be modeled. This book seeks to take a step toward this goal by providing a comprehensive, interdisciplinary overview of conversation memory research and related phenomena that transcends the foundations of cognitive psychology. It covers a wide range of conversation memory topics, including theoretical approaches, representation in long-term memory, gender, race, and ethnicity effects, methodological issues, conversation content, social cognition, lifespan development, nonverbal correlates, personality and individual differences, disability, and conversation memory applications. Featuring new content reflecting the historical development of the conversation memory field alongside an extensive reference list, the book provides a complete, single-source reference work for conversational remembering research that should be of interest across disciplines.
The book provides a detailed analysis of important work in queer and trans studies over the past thirty years. Stretching from early figures (such as Eve Kosofsky Sedgwick, Judith Butler, Cathy Cohen, José Muñoz, and Sandy Stone) to the most recent scholarship, it offers a rich account of these fields' major ideas and contributions while indicating how they have evolved. Centering race and empire, the book offers extended discussion of work in Black, Indigenous, Latinx, and Asian American studies as well as engaging the Global South. The Introduction further addresses historical considerations of sexuality and gender identity, and queer and trans temporalities, while also providing a robust account of social and political movements that preceded the emergence of queer and trans studies as scholarly fields. Accessible for those unfamiliar with these areas of study, it is also a great resource for those already working in them.
Over the past fifteen years, there has been a growing interest in altering legal rules to redistribute wealth, with many scholars believing that neoclassical economic theory is biased against redistribution. Yet a growing number of progressive scholars are pushing back against this view. Toward an Inframarginal Revolution offers a fresh perspective on the redistribution of wealth by legal scholars who argue that the neoclassical concept of the gains from trade provides broad latitude for redistribution that will not harm efficiency. They show how policymakers can redistribute wealth via taxation, price regulation, antitrust, consumer law, and contract law by focusing on the prices at which inframarginal units of production change hands. Progressive and eye-opening, this volume uses conservative economic concepts to make a compelling case for radically redistributing wealth. This title is part of the Flip it Open Programme and may also be available open access. Check our website Cambridge Core for details.
Kant had thoughts on language, but his account of language is not explicit and cannot be found in any dedicated section of his works, so it needs to be philosophically reconstructed. The chapters in this volume investigate Kant's views on language from unique perspectives. They demonstrate that Kant's notions of thinking, knowing, communicating, and acting have implications for the philosophy of language: from the problem of empirical concept-formation to the categorial structure of experience, from the exhibition of aesthetic ideas to the role of analogies and metaphors, from poetry as the art of language to the moral relevance of rhetoric and the problem of persuasion, and from the source of Kant's philosophical vocabulary to the role of language in defining 'humanity'. The volume offers a new and distinctive interpretive context in which Kant's approach to language can be critically appreciated.
In a 1962 meeting at the White House, Iran's last monarch, Mohammad Reza Pahlavi, complained to US President John F. Kennedy that 'America treats Turkey as a wife, and Iran as a concubine.' Taking this protest as a critical starting point, this book examines the transnational history of comparisons between Türkiye and Iran from Cold War-era modernization theory to post-9/11 studies of 'moderate Islam'. Perin E. Gürel explores how US policymakers and thought leaders strategically used comparisons to advance shifting agendas, while stakeholders in Türkiye and Iran responded by anticipating, manipulating, and reshaping US-driven narratives. Juxtaposing dominant US-based comparisons with representations originating from Iran and Türkiye, Gürel's interdisciplinary and multilingual research uncovers unexpected twists: comparisons didn't always reinforce US authority but often reflected and encouraged the rise of new ideologies. This book offers fresh insight into the complexities of US-Middle Eastern relations and the enduring impact of comparativism on international relations.
Now in its fourth edition, this best-selling, highly praised text has been fully revised and updated with expanded sections on propensity analysis, sensitivity analysis, and emulation trials. As before, it focuses on easy-to-follow explanations of complicated multivariable techniques, including logistic regression, proportional hazards analysis, and Poisson regression. The perfect introduction for medical researchers, epidemiologists, public health practitioners, and health service researchers, this book describes how to perform and interpret multivariable analysis, using plain language rather than mathematical formulae. It takes advantage of the availability of user-friendly software that allows novices to conduct complex analyses without programming experience, ensuring that these analyses are set up and interpreted correctly. Numerous tables, graphs, and tips help to demystify the process of performing multivariable analysis. The text is illustrated with many up-to-date examples from the published literature that enable readers to model their analyses after well-conducted research, increasing the chances of top-tier publication.
In this book, Jonathan Valk asks a deceptively simple question: What did it mean to be Assyrian in the second millennium BCE? Extraordinary evidence from Assyrian society across this millennium enables an answer to this question. The evidence includes tens of thousands of letters and legal texts from an Assyrian merchant diaspora in what is now modern Turkey, as well as thousands of administrative documents and bombastic royal inscriptions associated with the Assyrian state. Valk develops a new theory of social categories that facilitates an understanding of how collective identities work. Applying this theoretical framework to the so-called Old and Middle Assyrian periods, he pieces together the contours of Assyrian society in each period, as revealed in the abundance of primary evidence, and explores the evolving construction of Assyrian identity as well. Valk's study demonstrates how changing historical circumstances condition identity and society, and that the meaning we assign to identities is ever in flux.
How do languages capture and represent the sounds of the world? Is this a universal phenomenon? Drawing on data from 124 different languages, this innovative book offers a detailed exploration of onomatopoeia, which are imagic icons of sound events. It provides comprehensive analysis from both theoretical and empirical perspectives, and identifies the prototypical semiotic, phonological, morphological, syntactic, word-formation, and socio-pragmatic features of onomatopoeia. Supported with numerous examples from the sample languages, the book highlights the varied scope of onomatopoeia in different languages, its relationship to ideophones and interjections, and the role of sound symbolism, particularly phonesthemes, in onomatopoeia-formation. It introduces an onomasiological model of onomatopoeia-formation, identifies onomatopoeic patterns, and specifies the factors affecting the similarities and differences between onomatopoeias standing for the same sound event. Filling a major gap in language studies, it is essential reading for researchers and students of phonology, morphology, semiotics, poetics, and linguistic typology.
Ethnic majorities and minorities are produced over time by the same processes that define national borders and create national institutions. Minority Identities in Nigeria traces how western Niger Delta communities became political minorities first, through colonial administrative policies in the 1930s; and second, by embracing their minority status to make claims for resources and representation from the British government in the 1940s and 50s. This minority consciousness has deepened in the post-independence era, especially under the pressures of the crude oil economy. Blending discussion of local and regional politics in the Niger Delta with the wider literature on developmental colonialism, decolonization, and nationalism, Oghenetoja Okoh offers a detailed historical analysis of these communities. This study moves beyond a singular focus on the experience of crude oil extraction, exploring a longer history of state manipulation and exploitation in which minorities are construed as governable citizens.
• To understand the working principle of support vector machine (SVM).
• To comprehend the rules for identification of correct hyperplane.
• To understand the concepts of support vectors, maximum margin, and positive and negative hyperplanes.
• To apply an SVM classifier to linear and non-linear datasets.
• To understand the process of mapping data points to higher dimensional space.
• To comprehend the working principle of the SVM Kernel.
• To highlight the applications of SVM.
10.1 Support Vector Machines
Support vector machines (SVMs) are supervised machine learning (ML) models used to solve regression and classification problems; however, they are most widely used for classification. The main goal of an SVM is to segregate the n-dimensional space into labels or classes by defining a decision boundary, or hyperplane. In this chapter, we shall explore SVMs for solving classification problems.
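As a brief preview, and using conventional SVM notation (the symbols introduced later in this chapter may differ slightly), a linear SVM places the decision boundary at a hyperplane and, among all separating hyperplanes, chooses the one with the largest margin:

```latex
% Separating hyperplane and decision rule (standard linear SVM notation)
\mathbf{w}\cdot\mathbf{x} + b = 0, \qquad \hat{y} = \operatorname{sign}(\mathbf{w}\cdot\mathbf{x} + b)

% Maximum-margin training problem for linearly separable data
\min_{\mathbf{w},\,b}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}
\quad \text{subject to} \quad y_i\,(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 \ \ \text{for all training points } i
```

The geometric margin between the two classes equals 2/||w||, so minimizing ||w|| maximizes the separation; the training points that satisfy the constraint with equality are the support vectors discussed later in this chapter.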
10.1.1 SVM Working Principle
SVM Working Principle | Parteek Bhatia, https://youtu.be/UhzBKrIKPyE
To understand the working principle of the SVM classifier, we will take a standard ML problem where we want a machine to distinguish between a peach and an apple based on their size and color.
Let us suppose the size of the fruit is represented on the X-axis and the color of the fruit on the Y-axis. The distribution of the dataset of apples and peaches is shown in Figure 10.1.
To perform this classification, we must provide the machine with a sample stock of fruits and label each fruit in the stock as an “apple” or a “peach”. For example, suppose we have a labeled dataset of some 100 fruits with corresponding labels, i.e., “apple” or “peach”. When this data is fed into the machine, it analyzes these fruits and trains itself. Once training is completed, if a new fruit comes into the stock, the machine will classify whether it is an “apple” or a “peach”.
Most traditional ML algorithms learn by observing the perfect apples and perfect peaches in the stock, i.e., they train themselves on the ideal apples (apples that are very typical of apples in terms of their size and color) and the ideal peaches (peaches that are very typical of peaches in terms of their size and color). These standard samples are likely to be found in the heart of the stock. The heart of the stock is shown in Figure 10.2.
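To make the fruit example concrete, the short sketch below trains a linear SVM on a small synthetic “size versus color” dataset and classifies a new fruit. The numbers and the single-number color encoding are illustrative assumptions rather than data from this chapter, and scikit-learn's SVC is used here as one convenient implementation.

```python
# A minimal sketch of the apple-vs-peach example using scikit-learn's SVC.
# The dataset is synthetic and purely illustrative: size is in arbitrary units
# and color is encoded as a single number (e.g., position on a yellow-to-red scale).
import numpy as np
from sklearn.svm import SVC

# Feature matrix: [size, color]; labels: 0 = peach, 1 = apple
X = np.array([
    [6.0, 0.20], [6.5, 0.30], [5.8, 0.25], [6.2, 0.15],   # peaches
    [8.0, 0.80], [8.5, 0.90], [7.8, 0.85], [8.2, 0.75],   # apples
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# A linear kernel suffices here because the two clusters are linearly separable
clf = SVC(kernel="linear")
clf.fit(X, y)

# Classify a new, unseen fruit by its size and color
new_fruit = np.array([[7.9, 0.70]])
print("apple" if clf.predict(new_fruit)[0] == 1 else "peach")

# The support vectors are the borderline fruits that define the margin
print("Support vectors per class:", clf.n_support_)
```

Unlike the “heart of the stock” strategy described above, the fitted SVM keeps only the borderline fruits (the support vectors) to define its decision boundary.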
After careful study of this chapter, students should be able to do the following:
LO1: Identify stress concentration in machine members.
LO2: Explain stress concentration from the theory of elasticity approach.
LO3: Calculate stress concentration due to a circular hole in a plate.
LO4: Analyze stress concentration due to an elliptical hole in a plate.
LO5: Evaluate notch sensitivity.
LO6: Create designs for reducing stress concentration.
9.1 INTRODUCTION [LO1]
Stresses given by relatively simple equations in the strength of materials for structures or machine members are based on the assumed continuity of the elastic medium. However, the presence of a discontinuity destroys the assumed regularity of the stress distribution in a member, and a sudden increase in stress occurs in the neighborhood of the discontinuity. In designing machines, it is impossible to avoid abrupt changes in cross-section, holes, notches, shoulders, etc. Abrupt changes in cross-section also occur at the roots of gear teeth and the threads of bolts. Some examples are shown in Figure 9.1.
Any such discontinuity acts as a stress raiser. In fact, material discontinuities such as non-metallic inclusions in metals, casting defects, and residual stresses from welding may also act as stress raisers. In this chapter, however, we shall consider only the geometric discontinuities that arise from design considerations of structures or machine parts.
Many theoretical methods and experimental techniques have been developed to determine stress concentrations in different structural and mechanical systems. In order to understand the concept, we shall begin with a plate with a centrally located hole. The plate is subjected to uniformly distributed tensile loading at the ends, as shown in Figure 9.2.
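For orientation, the classical result for this configuration (the Kirsch solution, which the detailed analysis that follows is expected to reproduce) gives the tangential stress on the boundary of a small circular hole of radius a in a wide plate under remote uniaxial tension σ, with θ measured from the loading direction:

```latex
% Kirsch solution: tangential stress on the hole boundary (r = a)
\sigma_{\theta}\big|_{r=a} = \sigma\,(1 - 2\cos 2\theta),
\qquad
\sigma_{\theta,\max} = 3\sigma \ \text{ at } \theta = \pm 90^{\circ},
\qquad
K_t = \frac{\sigma_{\max}}{\sigma_{\mathrm{nom}}} = 3
```

Thus the stress at the points of the hole boundary transverse to the loading direction is three times the nominal stress, independent of the hole size, which is why even a small hole is a significant stress raiser.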
This chapter develops new measures of American economic and security hierarchy using a Bayesian latent measurement model. It discusses the challenges in measuring hierarchy and the advantages of the latent variable approach. The chapter details the construction of the measures, incorporating various indicators such as trade dependence, foreign aid, alliances, and troop deployments. It validates the measures by examining their relationship with key outcomes and comparing them to existing data. The new measures provide a foundation for testing the book’s arguments.
• To define machine learning (ML) and discuss its applications.
• To learn the differences between traditional programming and ML.
• To understand the importance of labeled and unlabeled data and their various uses in ML.
• To understand the working principles of supervised, unsupervised, and reinforcement learning.
• To understand key terms like data science, data mining, artificial intelligence, and deep learning.
1.1 Introduction
In today’s data-driven world, information flows through the digital landscape like an untapped river of potential. Within this vast data stream lies the key to unlocking a new era of discovery and innovation. Machine learning (ML), a revolutionary field, acts as the gateway to this wealth of opportunities. With its ability to uncover patterns, make predictive insights, and adapt to evolving information, ML has transformed industries, redefined technology, and opened the door to limitless possibilities. This book is your gateway to the fascinating realm of ML—a journey that empowers you to harness the power of data, enabling you to build intelligent systems, make informed decisions, and explore the boundless possibilities of the digital age.
ML has emerged as the dominant approach for solving problems in the modern world, and its wide-ranging applications have made it an integral part of our lives. From search engines to social networking sites, everything is powered by ML algorithms. Your favorite search engine uses ML algorithms to get you the appropriate search results. Smart home assistants like Alexa and Siri use ML to serve us better. The influence of ML on our day-to-day activities is so pervasive that we often do not even realize it. Online shopping sites like Amazon, Flipkart, and Myntra use ML to recommend products. Facebook uses ML to curate our feeds, and Netflix and YouTube use ML to recommend videos based on our interests.
Data is growing exponentially with the Internet and smartphones, and ML has made this data more usable and meaningful. Social media, entertainment, travel, mining, medicine, bioinformatics: almost any field you could name uses ML in some form.
To understand the role of ML in the modern world, let us first discuss the applications of ML.
After careful study of this chapter, students should be able to do the following:
LO1: Identify the difference between engineering mechanics and the theory of elasticity approach.
LO2: Explain yielding and brittle fracture.
LO3: Describe the stress–strain behavior of common engineering materials.
LO4: Compare hardness, ductility, malleability, toughness, and creep.
LO5: Explain different hardness measurement techniques.
1.1 INTRODUCTION [LO1]
Mechanics is one of the oldest physical sciences, dating back to the times of Aristotle and Archimedes. The subject deals with force, displacement, and motion. The concepts of mechanics have been used to solve many mechanical and structural engineering problems through the ages. Because of its intriguing nature, many great scientists including Sir Isaac Newton and Albert Einstein delved into it for solving intricate problems in their own fields.
Engineering mechanics and mechanics of materials developed over centuries with a few experiment-based postulates and assumptions, particularly to solve engineering problems in designing machines and structural parts. Problems are many and varied. However, in most cases, the requirement is to ensure sufficient strength, stiffness, and stability of the components, and eventually those of the whole machine or structure. In order to do this, we first analyze the forces and stresses at different points in a member, and then select materials of known strength and deformation behavior to withstand the stress distribution within tolerable deformation and stability limits. The methodology has now developed to the extent of computational codes that take into account the full-field stress, strain, and deformation behavior, together with material characteristics, to predict the probability of failure of a component at its weakest point. Inputs from the theory of elasticity and plasticity, mathematical and computational techniques, materials science, and many other branches of science are needed to develop such sophisticated codes.
The theory of elasticity also developed, but as a topic in applied mathematics, and engineers took little notice of it until recently, when critical analyses of components in high-speed machinery, vehicles, aerospace technology, and many other applications became necessary. The types of problems considered in elementary strength of materials and in the theory of elasticity are similar, but the approaches are different. The strength of materials approach is generally simpler: the emphasis is on finding practical solutions to a problem using simplifying assumptions.
After careful study of this chapter, students should be able to do the following:
LO1: Describe constitutive equations.
LO2: Relate the elastic constants.
LO3: Recognize boundary value problems.
LO4: Explain St. Venant's principle.
LO5: Describe the principle of superposition.
LO6: Illustrate the uniqueness theorem.
LO7: Develop stress function approach.
4.1 CONSTITUTIVE EQUATIONS [LO1]
So far, we have discussed the strain and stress analysis in detail. In this chapter, we shall link the stress and strain by considering the material properties in order to completely describe the elastic, plastic, elasto-plastic, visco-elastic, or other such deformation characteristics of solids. These are known as constitutive equations, or in simpler terms the stress–strain relations. There are endless varieties of materials and loading conditions, and therefore development of a general form of constitutive equation may be challenging. Here we mainly consider linear elastic solids along with their mechanical properties and deformation behavior.
The fundamental relation between stress and strain was first given by Robert Hooke in 1676 in the most simplified manner as “Force varies as the stretch”. This implies a load–deflection relation that was later interpreted as a stress–strain relation. Following this, we can write P = kδ, where P is the force, δ is the stretch or elongation, and k is the spring constant. For linear elastic materials, this can also be written as σ = Eε, where σ is the stress, ε is the strain, and E is the modulus of elasticity. For nonlinear elasticity, we may write in a simplistic manner σ = Eεⁿ, where n ≠ 1.
Hooke's law, based on this fundamental relation, is given as the stress–strain relation; in its most general form, each stress component is a function of all the strain components, as shown in equation (4.1.1).
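Equation (4.1.1) is not reproduced here, but in standard tensor notation the generalized Hooke's law it refers to is commonly written as follows, with the isotropic special case shown for reference:

```latex
% Generalized Hooke's law: each stress component is a linear function of
% all strain components through the fourth-order elasticity tensor C_{ijkl}
\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}

% Isotropic linear elastic solid: only two independent constants remain,
% the Lame parameters lambda and mu
\sigma_{ij} = \lambda\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\,\varepsilon_{ij}
```

Symmetry of the stress and strain tensors, together with the existence of a strain energy function, reduces the number of independent elastic constants from 81 to 21 for a fully anisotropic solid, and to just two in the isotropic case.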
This chapter explores the question of whether the epistemology of the secret of international law and the necessities it puts in place can be resisted. No definite answer to that question is sought here, and only tentative reflections on the possibility of resisting the epistemology of the secret are provided in the following paragraphs. The chapter proceeds as follows. It starts by elaborating on why it matters to spare no effort to resist the epistemology of the secret and to rein in its consequences. It then recalls that a mere termination or discontinuation of the epistemology of the secret, of its necessities, and of all the literary, hermeneutical, critical, economic, and ideological attitudes it entails is an impossibility. Resistance, it is subsequently argued, can only take the form of an act of obnubilation, a notion whose concrete implications for international legal thought and practice are then spelled out.
• To understand the concept of artificial neural network (ANN).
• To comprehend the working of the human brain as an inspiration for the development of neural networks.
• To understand the mapping of human brain neurons to an ANN.
• To understand the working of ANN with case studies.
• To understand the role of weights in building ANN.
• To perform forward and backward propagation to train neural networks.
• To understand different activation functions like the threshold function, sigmoid function, rectified linear unit function, and hyperbolic tangent function.
• To find the optimized values of the weights that minimize the cost function by using the gradient descent approach and the stochastic gradient descent algorithm.
• To understand the concept of the mini-batch method.
16.1 Introduction to Artificial Neural Network
Neural networks and deep learning are the buzzwords of modern-day computer science. If you think that these are the latest entrants in the field, you probably have a misconception. Neural networks have been around for quite some time; they have only recently gained momentum, making a huge positive impact on computer science.
Artificial neural networks (ANNs) were invented in the 1960s and 1970s. They became a part of common tech talk, and people started thinking that this machine learning (ML) technique would solve all the complex problems that were challenging researchers at the time. But soon, those hopes and expectations died off over the following decade.
The decline could not be attributed to any loopholes in neural networks themselves; the major reason was the “technology” of the time. The technology back then was not up to the standard needed to support neural networks, which require a lot of data for training and huge computational resources for building the model. During that period, both data and computing power were scarce. Hence, neural networks remained only on paper rather than taking center stage in machines solving real-world problems.
Later, at the beginning of the 21st century, we saw major improvements in storage techniques, resulting in a reduced cost per gigabyte of storage. Humanity also witnessed a huge rise in big data due to the Internet boom and smartphones.