This chapter revisits Charles Tilly’s bellicist theory of state-building, arguing that while war is a focal point, a broader array of external factors plays a crucial role in shaping developmental paths. It emphasizes the importance of hierarchical relationships within the international system in understanding state-building processes. The chapter examines foreign-imposed state-building and argues that the effectiveness of external state-building efforts depends on the goals and authority of external actors. It concludes that incorporating hierarchy is essential for a comprehensive explanation of state-building.
This chapter examines the intended and unintended consequences of American hierarchy on partner states. It analyzes the impact of increased state capacity resulting from American economic hierarchy on civil conflict, human rights, democratization, and inequality. The results suggest that economic hierarchy reduces conflict and human rights abuses, and promotes democracy, primarily through direct effects rather than via increased state capacity. However, both economic and security hierarchy exacerbate political inequalities. The chapter highlights the complex implications of American hierarchy.
This chapter introduces the book’s main argument that American economic hierarchy has enhanced property rights and state capacity in partner states over the past forty years, challenging the conventional view that United States’ involvement undermines state-building. It outlines the conceptual framework, focusing on extractive capacity and hierarchy as key concepts. The chapter previews the argument, highlights the book’s contributions to system-level theories, state-building research, and international development literature, and outlines the plan for the book.
• To know the limitations of traditional neural networks for image recognition.
• To understand the working principles of convolutional neural network (CNN).
• To understand the architecture of CNN.
• To know the importance of convolution layer, max pooling, flattening, and full connection layer of CNN model.
• To understand the process of training a CNN model.
• To decide the optimal number of epochs to train a neural network.
18.1 Image Recognition
Technological development in image recognition using convolutional neural networks (CNNs) has advanced far beyond our imagination. Let us consider the comic scene shown in Figure 18.1, as it provides interesting insights into the development of image recognition and depicts a plausible scenario from a decade ago. Here, a manager asks his computer programmer to "Develop an app which can check whether the user is in a national park or not, when he clicks some photo!" Since this is an easy and feasible task, the computer programmer responds that it will take merely a few hours. But the manager's curiosity grows, and he further asks the programmer to check whether the image is of a bird or not. Surprisingly, the programmer responds, "I need a research team and five years for this task."
This surprised the manager, as he expected it to be an easy task. But the programmer, who knows the field well, understands that it is one of the more complex problems in computer science.
In the last decade, we have found solutions to many complex problems in computing. Yet for some 50 years, we struggled with the problems of image recognition. Thanks to the efforts of researchers and computer scientists across the globe, we can now solve these problems. Even a three-year-old child can identify a bird in a photo, but finding a way for computers to do the same task was no cake-walk; hence, it took almost 50 years!
We have finally found a promising approach for object recognition using deep CNN in recent years. In this chapter, we will discuss the working principle and concepts of CNN, a deep neural network approach to solving the problem of image recognition.
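Before moving on, the core operations we will meet later in the chapter — convolution, max pooling, and flattening — can be illustrated with a minimal numpy sketch. The toy 4×4 image and the edge-detecting kernel below are illustrative choices, not taken from the chapter:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A 4x4 "image" with a vertical edge, and a small edge-detecting kernel.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

fmap = conv2d(image, kernel)   # 3x3 feature map; strong response along the edge
pooled = max_pool(fmap)        # 2x2 pooling shrinks it and keeps the peak
flat = pooled.reshape(-1)      # flattening: input for a fully connected layer
```

The feature map responds (value 2) exactly where the dark-to-bright edge sits, which is the intuition behind learned convolution filters.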
After careful study of this chapter, students should be able to do the following:
LO1: Describe strain energy in different loading conditions.
LO2: Explain the principle of superposition and reciprocal relations.
LO3: Apply the first theorem of Castigliano.
LO4: Analyze the theorem of virtual work.
LO5: Apply the dummy load method.
12.1 INTRODUCTION [LO1]
There are in general two approaches to solving equilibrium problems in solid mechanics: Eulerian and Lagrangian. The first approach deals with vectors such as force and moments, and considers the static equilibrium and compatibility equations to solve the problems. In the second approach, scalars such as work and energy are used, and here solutions to problems are based on the principle of conservation of energy. There are many situations where the second approach is more advantageous, and here some powerful methods, such as the method of virtual work, based on this approach, are used.
A full treatment of the Eulerian and Lagrangian approaches to solving solid mechanics problems is much more involved. Here, however, we have described them in a simplified manner, which is sufficient as a prologue to the present discussion on energy methods.
In mechanics, energy is defined as the capacity to do work, and this may exist in different forms. We are concerned here with elastic strain energy, which is a form of potential energy stored in a body on which some work is done by externally applied forces. Here it is assumed that the material remains elastic when work has been done so that all the energy is recoverable and no permanent deformation occurs. This means that strain energy U = work done. If the load is applied gradually in straining, the material load–extension graph is as shown in Figure 12.1, and we may write U = ½ Pδ.
The hatched portion of the load–extension graph represents the strain energy and the unhatched portion ABD represents the complementary energy that is utilized in some advanced energy methods of solution.
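As an illustration (assuming a prismatic bar of length L, cross-sectional area A, and modulus of elasticity E under a gradually applied axial load P), the stored strain energy follows directly from U = ½ Pδ:

```latex
% Extension of an elastic axial bar under gradual loading
\delta = \frac{PL}{AE}
% Substituting into U = \tfrac{1}{2} P \delta gives the strain energy
U = \frac{1}{2}\, P \delta = \frac{P^{2} L}{2AE}
```

This form of U is what the theorems later in the chapter differentiate with respect to loads.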
• To comprehend the concept of multiple linear regression.
• To understand the process of handling nominal or categorical attributes and the concept of label encoding, one-hot encoding, and dummy variables.
• To build the multiple regression model.
• To understand the need for the P-value, its concept, and the process to calculate it.
• To comprehend various variable selection methods.
• To comprehend the concept of polynomial linear regression.
• To understand the importance of the degree of independent variables.
7.1 Introduction to Multiple Linear Regression
In simple linear regression, we have one dependent variable and only one independent variable. As we discussed in the previous chapter, the stipend of a research scholar is dependent on his years of research experience.
But most of the time, the dependent variable is influenced by more than one independent variable. When the dependent variable depends on more than one independent variable, the model is known as multiple linear regression.
Figure 7.1 indicates the difference between simple and multiple regressions mathematically.
In Figure 7.1(b), b0 is the constant, while x1, x2, …, xn are the independent variables on which the dependent variable y depends. Note that mathematically, multiple linear regression is derived similarly to simple linear regression. The major difference is that multiple linear regression has several independent variables, x1 to xn, instead of the single independent variable x1 of simple linear regression. Correspondingly, b1 to bn are the coefficients of these independent variables.
The price prediction of a house can be viewed as a multiple linear regression problem, where factors such as plot size, number of bedrooms, and location play a significant role in determining the price. Unlike simple linear regression, which relies solely on plot size, multiple linear regression considers various features to accurately estimate the house price.
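As a sketch, the house-price example can be fit by ordinary least squares with numpy. All numbers below are illustrative, not from a real dataset:

```python
import numpy as np

# Hypothetical houses: plot size (sq. m), bedrooms, and a 0/1 location flag.
X = np.array([[120, 2, 0],
              [150, 3, 0],
              [180, 3, 1],
              [210, 4, 1],
              [250, 4, 1]], dtype=float)
y = np.array([30, 38, 52, 60, 70], dtype=float)  # price (illustrative units)

# Fit y = b0 + b1*x1 + b2*x2 + b3*x3 by least squares.
A = np.column_stack([np.ones(len(X)), X])   # prepend a column of 1s for b0
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b3 = coeffs

# Predict the price of a new house: 200 sq. m, 3 bedrooms, prime location.
predicted = b0 + b1 * 200 + b2 * 3 + b3 * 1   # → 57.0 for these toy numbers
```

These toy points happen to lie exactly on one plane, so the fit is exact; with real data the least-squares solution minimizes the residual instead.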
Let us understand this concept further with a real-life example. Consider a case where a venture capitalist has hired a data scientist to analyze different companies’ data to predict their profit. Identifying the company with maximum profit will help the venture capitalist select the company in which he could invest in the near future to earn maximum profit.
• To understand the conceptual framework for the implementation of recurrent neural network (RNN) using the long short-term memory model.
• To perform data pre-processing on the time series data.
• To install and import TensorFlow, Keras, and other desired packages.
• To build the architecture of RNN.
• To learn the procedure for compiling RNN.
• To perform a fit operation on the compiled RNN model.
• To prepare the test dataset in desired data structure.
• To perform visualization and analysis of the results.
21.1 Introduction
In recent years, the recurrent neural network (RNN) has become one of the most prominent neural networks for predicting values from time series data. A time series is a collection of data points ordered at even intervals in time; time series analysis is therefore useful when we want to study how some parameter changes over time. In an RNN, the output from the previous step is fed into the input of the current state. Thus, an RNN can predict the next letter of a word, the next word of a sentence, or diesel prices, stock prices, or the weather. In all these tasks, there is a need to remember previous values; consequently, the output at time t+1 depends on the output at time t. In this chapter, we will implement an RNN to predict the trend of diesel prices.
We will implement the RNN through long short-term memory (LSTM) units, using stacked layers of LSTM. Plain RNNs suffer from the vanishing gradient and exploding gradient problems, whereas LSTMs contain gates through which they can regulate the flow of information. Because of this advantage, LSTMs are commonly used to implement RNNs.
21.2 Implementation of RNN in Python
Problem Statement: In this experiment, our primary objective is to predict the trend of diesel prices in Delhi, i.e., whether diesel prices in Delhi will witness an upward or downward trend in the near future. It is difficult to predict the exact future price of diesel on a particular day, as our dataset for this problem is quite small. Hence, using an RNN model (implemented through stacked layers of LSTM), we will predict an approximate value of future diesel prices in Delhi, which helps to predict the trend (upward or downward) accurately.
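Before any model can be fit, the time series must be reshaped into supervised sequences — the 3-D (samples, timesteps, features) structure that LSTM layers expect. A minimal numpy sketch of this pre-processing step, using illustrative prices rather than the actual dataset:

```python
import numpy as np

# Illustrative daily diesel prices (the real dataset is not reproduced here).
prices = np.array([88.1, 88.3, 88.2, 88.6, 88.9,
                   89.0, 88.8, 89.2, 89.5, 89.4])

def make_sequences(series, timesteps):
    """Build windows of `timesteps` past values (X) and the next value (y),
    shaped (samples, timesteps, 1) as expected by recurrent layers."""
    X, y = [], []
    for i in range(timesteps, len(series)):
        X.append(series[i - timesteps:i])   # the look-back window
        y.append(series[i])                 # the value to predict
    return np.array(X).reshape(-1, timesteps, 1), np.array(y)

X_train, y_train = make_sequences(prices, timesteps=3)
# X_train.shape == (7, 3, 1): 7 samples, each a window of 3 past prices.
```

With the data in this shape, stacked LSTM layers can be trained on (X_train, y_train); the look-back length of 3 is an assumption chosen for the toy series.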
The concluding chapter reflects on the future of American hierarchy and state development in light of the book’s findings. It discusses potential changes in American economic priorities and the rise of new hierarchies in the international system. The chapter explores the implications for partner states and highlights the need for further research on the role of nonstate actors, such as firms and international organizations. It also considers the normative implications of the book’s findings and underscores the importance of understanding the complex effects of hierarchy on state-building.
This chapter spells out the notion of the epistemology of the secret. It unpacks the two main components of the epistemology of the secret of international law: the necessary presence of hidden, unknown, invisible content in the texts, practices, actors, effects, representations, past, etc. of international law (what is called in this book the necessity of secret content) and the necessity for international lawyers to reveal such hidden, unknown, invisible content (what is called in this book the necessity of revelation). The chapter distinguishes the epistemology of the secret of international law from the hermeneutics of suspicion, the idea of an ideology of secretism and the idea of an economy of secrets.
• To understand the different steps performed to implement classifier algorithms in Python.
• To import the desired libraries and dataset.
• To split the dataset into training and testing datasets.
• To perform feature scaling on the data.
• To build the different classifications models and predict the test set results.
• To make the confusion matrix for the result analysis.
• To visualize the training and testing set result.
• To implement important classification algorithms like decision tree, random forest, naive Bayes, k-NN, logistic regression, and support vector machine.
11.1 Introduction to Classification Algorithms and Steps for Its Implementation
In this chapter, we will implement several classification algorithms of machine learning (ML) in Python. For the same, we will consider the Purchase Alexa Dataset, which contains multiple users’ information and their decision to buy Alexa in terms of “YES” or “NO”. The dataset features several independent columns and a labeled class attribute. Our main objective is to predict whether a user will buy Alexa or not (class attribute). The class predictions are based on the input attributes composed of several independent columns.
To better understand, we will use the same dataset, i.e., Purchase Alexa Dataset, to implement all the classification algorithms such as decision tree, random forest, naive Bayes, k-NN, logistic regression, and support vector machine (SVM). This will allow us to compare the results and the working of different classification algorithms. We will start by creating a pre-processing data template for our dataset, which shall remain common for all the classification models. Later, we will implement a classification model for each algorithm mentioned above. Then the performance of the model under study would be analyzed using the confusion matrix and the performance metrics. Finally, the training and testing results would be visualized graphically to better understand the working of classifiers.
A stepwise approach to implementing a classification model in Python is as follows:
Step 1: Importing libraries—Importing of libraries is required to perform various functions and steps related to data pre-processing and building the classifiers.
Step 2: Loading dataset—Download and load the desired dataset to Spyder.
Step 3: Splitting the dataset into training and testing datasets—Splitting the loaded dataset into training and testing subsets.
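These steps, together with the scaling, model-building, and confusion-matrix stages named in the objectives, can be sketched end-to-end with numpy alone. The generated two-feature dataset below merely stands in for the Purchase Alexa Dataset, and the from-scratch k-NN is one of the chapter's listed algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2: libraries and data. A small synthetic stand-in dataset:
# two features per user, binary "buy" label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Step 3: split into training and testing subsets (80/20).
idx = rng.permutation(len(X))
train, test = idx[:160], idx[160:]
X_train, y_train = X[train], y[train]
X_test, y_test = X[test], y[test]

# Feature scaling: standardize using training statistics only.
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
X_test = (X_test - mu) / sigma

# Build the model and predict: a from-scratch k-NN classifier.
def knn_predict(X_tr, y_tr, X_te, k=5):
    preds = []
    for x in X_te:
        d = np.linalg.norm(X_tr - x, axis=1)        # distances to training points
        nearest = y_tr[np.argsort(d)[:k]]           # labels of the k nearest
        preds.append(np.bincount(nearest).argmax()) # majority vote
    return np.array(preds)

y_pred = knn_predict(X_train, y_train, X_test)

# Confusion matrix [[TN, FP], [FN, TP]] and accuracy for result analysis.
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_test, y_pred):
    cm[t, p] += 1
accuracy = cm.trace() / cm.sum()
```

Swapping `knn_predict` for any other classifier (decision tree, logistic regression, etc.) leaves the surrounding template — split, scale, predict, evaluate — unchanged, which is exactly why the chapter reuses one pre-processing template for all models.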
This chapter draws attention to systems of thought other than international law that are similarly articulated around a postulation of the necessary presence of some content or substance deemed to be hidden in some way (what is called here the necessity of secret content) and/or the necessary performance of an act of revelation of some content or substance previously unknown (what is called here the necessity of revelation). Attention is drawn to the epistemologies of the secret at work in Greek logocentric thought, in the Christian governance of the mind, in modern thought, in the idea of critique inherited from modern thought, in bourgeois literature, in Freudian psychoanalysis, in structuralist thought, and in poststructuralist thought.
• To import the necessary libraries and load the dataset.
• To split the dataset into training and testing datasets.
• To build the simple linear model and make predictions.
• To visualize the training set and testing set results.
• To calculate mean absolute error, mean squared error, and root mean squared error.
In this chapter, we are going to implement simple linear regression in Python. To implement this concept, we will analyze how the stipend of a researcher is related to their years of research experience. Our aim is to predict the stipend of a researcher based on his/her research experience.
Problem Statement and Dataset
To perform this task, we will consider a dataset consisting of two attributes: ResearchExperience and Stipend. The dataset contains 30 observations from which to draw the correlation between research experience and the corresponding stipend. A research institute aims to find this correlation so that management can offer an appropriate stipend to new research scholars based on their years of research experience, rather than deciding arbitrarily. Clearly, the stipend is directly proportional to the research experience: the higher the experience, the higher the stipend. We will use a simple linear regression model to solve this problem.
Let us quickly refresh our concepts of the simple linear regression model. We know that simple linear regression fits the best straight line to capture the relationship between research experience and stipend. Though the dataset is quite simple, it has great business value: the model will help the institute predict the stipend of a researcher based on their experience. Therefore, using this model, the stipend of new researchers can be easily predicted, and this would also bring transparency to the management's decisions. Here, ResearchExperience (the independent variable) will be our X and lie on the horizontal axis, while Stipend (the variable to be predicted, i.e., the dependent variable) will be Y and lie on the vertical axis, as shown in Figure 6.1.
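A minimal numpy sketch of this model, including the error metrics from the chapter objectives. The numbers below are illustrative stand-ins for the 30-observation dataset, which is not reproduced here:

```python
import numpy as np

# Illustrative data: years of research experience vs. stipend (in thousands).
experience = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
stipend = np.array([22, 25, 31, 33, 38, 40, 46, 49], dtype=float)

# Fit Stipend = b0 + b1 * ResearchExperience by least squares.
b1, b0 = np.polyfit(experience, stipend, 1)   # polyfit returns [slope, intercept]
predicted = b0 + b1 * experience

# Error metrics: mean absolute error, mean squared error, root mean squared error.
mae = np.mean(np.abs(stipend - predicted))
mse = np.mean((stipend - predicted) ** 2)
rmse = np.sqrt(mse)

# Predict the stipend of a new researcher with 5.5 years' experience.
new_stipend = b0 + b1 * 5.5
```

The positive slope b1 confirms the expected relationship: stipend grows with experience, and the small RMSE shows how closely the line fits these (deliberately near-linear) toy points.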