• To define machine learning (ML) and discuss its applications.
• To learn the differences between traditional programming and ML.
• To understand the importance of labeled and unlabeled data and their various uses in ML.
• To understand the working principles of supervised, unsupervised, and reinforcement learning.
• To understand key terms such as data science, data mining, artificial intelligence, and deep learning.
1.1 Introduction
In today’s data-driven world, information flows through the digital landscape like an untapped river of potential. Within this vast data stream lies the key to unlocking a new era of discovery and innovation. Machine learning (ML), a revolutionary field, acts as the gateway to this wealth of opportunities. With its ability to uncover patterns, generate predictive insights, and adapt to evolving information, ML has transformed industries, redefined technology, and opened the door to limitless possibilities. This book is your entry point to the fascinating realm of ML—a journey that empowers you to harness the power of data, enabling you to build intelligent systems, make informed decisions, and explore the boundless possibilities of the digital age.
ML has emerged as the dominant approach for solving problems in the modern world, and its wide-ranging applications have made it an integral part of our lives. From search engines to social networking sites, much of what we use is powered by ML algorithms. Your favorite search engine uses ML algorithms to return the most relevant results. Smart home assistants like Alexa and Siri use ML to serve us better. ML influences our day-to-day activities so pervasively that we often do not even notice it. Online shopping sites like Amazon, Flipkart, and Myntra use ML to recommend products. Facebook uses ML to curate our feeds, while Netflix and YouTube use ML to recommend videos based on our interests.
Data is growing exponentially with the spread of the Internet and smartphones, and ML has made this data more usable and meaningful. Social media, entertainment, travel, mining, medicine, bioinformatics: almost any field you could name uses ML in some form.
To understand the role of ML in the modern world, let us first discuss the applications of ML.
After careful study of this chapter, students should be able to do the following:
LO1: Identify the difference between engineering mechanics and the theory of elasticity approach.
LO2: Explain yielding and brittle fracture.
LO3: Describe the stress–strain behavior of common engineering materials.
LO4: Compare hardness, ductility, malleability, toughness, and creep.
LO5: Explain different hardness measurement techniques.
1.1 INTRODUCTION [LO1]
Mechanics is one of the oldest physical sciences, dating back to the times of Aristotle and Archimedes. The subject deals with force, displacement, and motion. The concepts of mechanics have been used to solve many mechanical and structural engineering problems through the ages. Because of its intriguing nature, many great scientists, including Sir Isaac Newton and Albert Einstein, delved into it to solve intricate problems in their own fields.
Engineering mechanics and mechanics of materials developed over centuries from a few experiment-based postulates and assumptions, particularly to solve engineering problems in designing machines and structural parts. The problems are many and varied. In most cases, however, the requirement is to ensure sufficient strength, stiffness, and stability of the components, and eventually of the whole machine or structure. To do this, we first analyze the forces and stresses at different points in a member, and then select materials of known strength and deformation behavior to withstand the stress distribution within tolerable deformation and stability limits. The methodology has now developed to the point where computer codes take into account the full-field stress, strain, and deformation behavior, together with material characteristics, to predict the probability of failure of a component at its weakest point. Inputs from the theory of elasticity and plasticity, mathematical and computational techniques, materials science, and many other branches of science are needed to develop such sophisticated codes.
The theory of elasticity also developed, but as a topic in applied mathematics, and engineers took little notice of it until recently, when critical analyses of components in high-speed machinery, vehicles, aerospace technology, and many other applications became necessary. The types of problems considered in elementary strength of materials and in the theory of elasticity are similar, but the approaches are different. The strength of materials approach is generally simpler; the emphasis is on finding practical solutions to a problem using simplifying assumptions.
This chapter explores one of the key facets of animals’ lack of legal inclusion in the context of civil law: their lack of legal standing. That animals are unable to take legal action to seek redress for the wrongs done to them is particularly salient given – as the previous chapter demonstrated through the discussion of animal welfare laws – that animals’ only other avenue for seeking legal justice is severely limited. This chapter explores the issue of animal standing by comparing various court cases that have involved animal plaintiffs, including the Animal Legal Defense Fund’s case on behalf of Justice the horse, and PETA’s cases on behalf of Naruto the macaque and orcas held captive at SeaWorld. These cases have hinged on whether these animals – and indeed, non-human animals in general – can be regarded as having the legal and cognitive capacities that tend to be associated with legal subjecthood. With the courts in these cases finding that animals lack these capacities, the issue of standing exemplifies the broader problem that animals face in the legal system: they are rendered more or less invisible to it because they lack the right legal status.
After careful study of this chapter, students should be able to do the following:
LO1: Describe constitutive equations.
LO2: Relate the elastic constants.
LO3: Recognize boundary value problems.
LO4: Explain St. Venant's principle.
LO5: Describe the principle of superposition.
LO6: Illustrate the uniqueness theorem.
LO7: Develop the stress function approach.
4.1 CONSTITUTIVE EQUATIONS [LO1]
So far, we have discussed strain and stress analysis in detail. In this chapter, we shall link stress and strain through the material properties in order to completely describe the elastic, plastic, elasto-plastic, visco-elastic, or other such deformation characteristics of solids. These relations are known as constitutive equations or, in simpler terms, stress–strain relations. There is an endless variety of materials and loading conditions, and therefore developing a general form of constitutive equation is challenging. Here we mainly consider linear elastic solids along with their mechanical properties and deformation behavior.
The fundamental relation between stress and strain was first given by Robert Hooke in 1676 in the most simplified manner as, “Force varies as the stretch”. This implies a load–deflection relation that was later interpreted as a stress–strain relation. Following this, we can write P = kδ, where P is the force, δ is the stretch or elongation, and k is the spring constant. For linear elastic materials this can also be written as σ = Eε, where σ is the stress, ε is the strain, and E is the modulus of elasticity. For nonlinear elasticity, we may write, in a simplistic manner, σ = Eεⁿ, where n ≠ 1.
Hooke's law, based on this fundamental relation, is given as the stress–strain relation; in its most general form, each stress component is a function of all the strain components, as shown in equation (4.1.1).
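Equation (4.1.1) itself is not reproduced in this excerpt. As a sketch of the standard general form (the book's notation may differ), each stress component is a linear function of all the strain components through the elastic constants, σᵢⱼ = Cᵢⱼₖₗ εₖₗ (i, j, k, l = 1, 2, 3, with summation over repeated indices), which for an isotropic linear elastic solid reduces to σᵢⱼ = λδᵢⱼεₖₖ + 2μεᵢⱼ, where λ and μ are the Lamé constants.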
Limited research has been devoted to investigating assumptions about competition dynamics established through a neoliberal lens. Advocates argue that competition fosters innovation and benefits consumers by incentivizing private enterprises to develop better products or services at more competitive prices than their rivals. Critics argue that competition exacerbates inequality by disproportionately rewarding high achievers. Rewarding high achievers reflects the meritocratic aspect of competition, which has been widely assumed to be rooted in the individualistic culture of Western countries. Contrary to this assumption, the ideology of meritocratic competition thrived in ancient, collectivist Asian countries. Moreover, the assumed linear relationship between individualism, competition, and inequality is contradicted by economic literature, which suggests that more individualistic nations display lower income inequality. Despite extensive economic and cultural examination of competition, its political dimensions remain understudied. This interdisciplinary book challenges conventional assumptions about competition, synthesizing evidence across economics, culture, and politics.
• To understand the concept of artificial neural network (ANN).
• To comprehend the working of the human brain as an inspiration for the development of neural networks.
• To understand the mapping of human brain neurons to an ANN.
• To understand the working of ANN with case studies.
• To understand the role of weights in building ANN.
• To perform forward and backward propagation to train neural networks.
• To understand different activation functions, such as the threshold function, sigmoid function, rectified linear unit (ReLU) function, and hyperbolic tangent function (a short sketch follows this list).
• To find the optimal values of the weights that minimize the cost function using the gradient descent approach and the stochastic gradient descent algorithm.
• To understand the concept of the mini-batch method.
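A minimal Python sketch of the activation functions and the basic gradient-descent weight update listed in the objectives above (illustrative only; the chapter develops these ideas with its own notation, and the helper names here are hypothetical):

```python
import numpy as np

# Common activation functions named in the objectives.
def threshold(x):
    # Step function: outputs 1 where the input is non-negative, else 0.
    return np.where(x >= 0, 1.0, 0.0)

def sigmoid(x):
    # Squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified linear unit: max(0, x), applied element-wise.
    return np.maximum(0.0, x)

def tanh(x):
    # Hyperbolic tangent: output in the interval (-1, 1).
    return np.tanh(x)

# One gradient-descent step: move the weights against the cost gradient.
def gradient_descent_step(weights, grad, learning_rate=0.01):
    return weights - learning_rate * grad
```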
16.1 Introduction to Artificial Neural Network
Neural networks and deep learning are the buzzwords of modern-day computer science. If you think they are the latest entrants in this field, you probably have a misconception: neural networks have been around for quite some time, and they have only recently taken off, making a huge positive impact on computer science.
Artificial neural networks (ANNs) were invented in the 1960s and 1970s. They became part of common tech talk, and people started thinking that this machine learning (ML) technique would solve all the complex problems that were challenging researchers at the time. But those hopes and expectations soon died off over the following decade.
The decline could not be attributed to loopholes in neural networks themselves; the major reason was the technology of the time. The technology back then was not up to the standard needed to support neural networks, which require a lot of data for training and huge computational resources for building models. During that period, both data and computing power were scarce. Hence, neural networks remained largely on paper rather than taking center stage in solving real-world problems.
Later, at the beginning of the 21st century, storage techniques improved considerably, reducing the cost per gigabyte of storage, and humanity witnessed a huge rise in big data due to the Internet boom and smartphones.
Heat, like gravity, penetrates every substance of the universe, its rays occupy all parts of space.
Jean-Baptiste-Joseph Fourier
Learning Outcomes
After reading this chapter, the reader will be able to
Understand the meaning of three processes of heat flow: conduction, convection, and radiation
Know about thermal conductivity, diffusivity, and steady-state condition of a thermal conductor
Derive Fourier's one-dimensional heat flow equation and solve it in the steady state
Derive the mathematical expression for the temperature distribution in a lagged bar
Derive the amount of heat flow in a cylindrical and a spherical thermal conductor
Solve numerical problems and multiple choice questions on the process of conduction of heat
6.1 Introduction
Heat is the thermal energy transferred between substances maintained at different temperatures. This energy is always transferred from the hotter object (the one maintained at the higher temperature) to the colder one (the one maintained at the lower temperature). Heat is the energy arising from the movement of atoms and molecules, which are continuously moving around, hitting each other and other objects. This motion is faster for molecules with a larger amount of energy than for those with a smaller amount, which causes the former to carry more heat. The transfer of heat continues until both objects attain the same temperature, and hence the same average molecular energy. How readily heat is transferred depends upon a material property known as the thermal conductivity, or coefficient of thermal conduction. This parameter helps us to understand the transfer of thermal energy from a hotter to a colder body, to differentiate objects in terms of this thermal property, and to determine the amount of heat conducted from the hotter to the colder region of an object. The transfer of thermal energy occurs in several situations:
When there exists a difference in temperature between an object and its surroundings,
When there exists a difference in temperature between two objects in contact with each other, and
When there exists a temperature gradient within the same object.
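As a brief, standard illustration (this chapter goes on to derive the full equation; the notation here may differ from the book's), Fourier's law for steady one-dimensional conduction makes the role of thermal conductivity quantitative: the rate of heat flow through a slab of cross-sectional area A is dQ/dt = −kA(dT/dx), where k is the thermal conductivity and dT/dx is the temperature gradient along the direction of flow. The minus sign expresses the fact that heat flows from the hotter to the colder region.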
This chapter outlines how the Principle of Multispecies Legality (PML) offers solutions to the barriers to legal inclusion facing animals in both criminal and civil law contexts: by enabling animals to take legal action; by ensuring that, in civil suits, harms to animals are taken seriously and benefits are awarded to the animals themselves; and by ensuring that defences of ‘necessity’ in animal welfare laws apply only when the otherwise harmful action is taken for the ultimate benefit of the animal him- or herself. The chapter then explores four institutional safeguards needed to ensure the PML is effective: that legislation is developed under the principle of anticipatory accommodation; that independent offices of animal welfare are established; that dedicated animal crime units and public prosecutors are established; and that there is equal access to legal services, so that all humans who seek to assist animals in taking legal action can do so regardless of their financial circumstances. Finally, the chapter considers how we need to learn to recognise more expansive conceptions of (political) communication and to become more receptive to them.
• To implement the k-means clustering algorithm in Python.
• To determine the ideal number of clusters by implementing the corresponding code.
• To understand how to visualize clusters using plots.
• To create the dendrogram and find the optimal number of clusters for agglomerative hierarchical clustering.
• To compare results of k-means clustering with agglomerative hierarchical clustering.
• To implement clustering through various case studies.
13.1 Implementation of k-means Clustering and Hierarchical Clustering
In the previous chapter, we discussed various clustering algorithms. We learned that clustering algorithms are broadly classified into partitioning methods, hierarchical methods, and density-based methods. The k-means clustering algorithm follows the partitioning method; agglomerative and divisive algorithms follow the hierarchical method, while DBSCAN is based on the density-based method.
In this chapter, we will implement each of these algorithms through various case studies, following a step-by-step approach. You are advised to perform all these steps on your own using the datasets mentioned in this chapter.
The k-means algorithm is a partitioning method and an unsupervised machine learning (ML) algorithm used to identify clusters of data items in a dataset. It is one of the most prominent ML algorithms, and its implementation in Python is quite straightforward. This chapter considers three case studies: the mall customers dataset, the U.S. arrests dataset, and the popular Iris dataset. Through these case studies, we will understand the significance of k-means clustering and implement it in Python. Along with the clustering of data items, we will also discuss ways to find the optimal number of clusters. To compare the results of the k-means algorithm, we will also implement hierarchical clustering for these problems.
We will kick-start the implementation of the k-means algorithm in the Spyder IDE using the following steps; a minimal code sketch follows the step list.
Step 1: Importing the libraries and the dataset—The dataset for the respective case study will be downloaded, and then the required libraries will be imported.
Step 2: Finding the optimal number of clusters—We will find the optimal number of clusters for the given dataset using the elbow method.
Step 3: Fitting k-means to the dataset—A k-means model will be built by training it on the acquired dataset.
Step 4: Visualizing the clusters—The clusters formed by the k-means model will then be visualized in the form of scatter plots.
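As a minimal, illustrative sketch of Steps 1–4 (assuming scikit-learn, pandas, and matplotlib are installed; the file name Mall_Customers.csv and the column names below are hypothetical placeholders for the mall customers case study, and the book's own code may differ):

```python
# Minimal sketch of Steps 1-4 for the mall customers case study.
# Assumptions: scikit-learn, pandas, and matplotlib are available; the CSV
# file name and column names are placeholders for the actual dataset.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Step 1: import the libraries and the dataset (two spending-related features).
dataset = pd.read_csv("Mall_Customers.csv")  # hypothetical file name
X = dataset[["Annual Income (k$)", "Spending Score (1-100)"]].values

# Step 2: elbow method -- plot WCSS (inertia) against the number of clusters
# and look for the "elbow" where the curve flattens out.
wcss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=42)
    km.fit(X)
    wcss.append(km.inertia_)
plt.plot(range(1, 11), wcss, marker="o")
plt.xlabel("Number of clusters")
plt.ylabel("WCSS")
plt.title("Elbow method")
plt.show()

# Step 3: fit k-means with the number of clusters chosen from the elbow plot.
kmeans = KMeans(n_clusters=5, init="k-means++", n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# Step 4: visualize the clusters and their centroids as a scatter plot.
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap="viridis", s=30)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            c="red", marker="x", s=100, label="Centroids")
plt.legend()
plt.show()
```

The hierarchical comparison mentioned earlier can be sketched along the same lines: a dendrogram built with Ward linkage suggests the number of clusters, after which an agglomerative model is fitted on the same feature matrix X.

```python
# Sketch of the hierarchical-clustering comparison on the same feature matrix X.
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.cluster import AgglomerativeClustering

plt.figure()
dendrogram(linkage(X, method="ward"))
plt.title("Dendrogram (Ward linkage)")
plt.xlabel("Samples")
plt.ylabel("Euclidean distance")
plt.show()

hc_labels = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(X)
```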
These motions [Brownian motion] were such as to satisfy me, after frequently repeated observation, that they arose neither from currents in the fluid, nor from its gradual evaporation, but belonged to the particle itself.
Robert Brown
Learning Outcomes
After reading this chapter, the reader will be able to
Express the meaning of sphere of influence and collision frequency
Derive the distribution function for the free paths among the molecules and demonstrate the concept of mean free path
Calculate the expression for mean free path following Clausius and Maxwell
Derive the expression for pressure exerted by a gas using the survival equation
Calculate the expressions for viscosity, thermal conductivity, and diffusion coefficient of a gaseous system
Demonstrate Brownian motion with its characteristics and calculate the mean square displacement of a particle executing Brownian motion
State the idea of a random walk problem
Solve numerical problems and multiple choice questions on the mean free path, viscosity, thermal conduction, diffusion, Brownian motion, and random walk
4.1 Introduction
Gases are distinguished from other forms of matter, not only by their power of indefinite expansion so as to fill any vessel, however large, and by the great effect heat has in dilating them, but by the uniformity and simplicity of the laws which regulate these changes.
James Clerk Maxwell
The molecules of an ideal gas are considered to be randomly moving point particles. From the kinetic theory of gases (KTG), it is well established that even at room temperature such point molecules of an ideal gas move at very high speeds. The average value of this speed can be determined by assuming that the molecules obey Maxwell's speed distribution law, and is given by the following expression.
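The expression is not reproduced in this excerpt. The standard result for the mean speed obtained from Maxwell's speed distribution (the book's own notation may differ) is v̄ = √(8kT/πm) = √(8RT/πM), where k is Boltzmann's constant, T is the absolute temperature, m is the mass of a molecule, R is the universal gas constant, and M is the molar mass.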
This chapter considers how, with animals recognised as a part of nature, legally enshrined ‘rights of nature’ could provide a basis for animals’ legal subjecthood. The chapter centres on the case of Estrellita, an Ecuadorean woolly monkey who was declared to be a subject of rights under Ecuador’s constitutionally enshrined rights of ‘pachamama’ or ‘Mother Earth’. Yet, while Estrellita’s case highlights the potential for rights of nature to serve as a source of animals’ legal subjectivity, the chapter stresses caution. First, several rights-of-nature provisions have arguably co-opted Indigenous ideas, and served to justify continued resource extraction under the guise of living in balance with nature. Second, rights-of-nature provisions maintain the ontological human/all-other-nature divide that exists in current legal systems. Finally, the rights of nature may operate as a kind of ‘eco-coverture’ by encapsulating the interests of individual animals within the sphere of nature’s interests, thereby limiting the potential scope of animals’ legal protection. The chapter concludes that we can do better than grounding animals’ legal subjecthood in the rights of nature.