After careful study of this chapter, students should be able to do the following:
LO1: Identify stress concentration in machine members.
LO2: Explain stress concentration from the theory of elasticity approach.
LO3: Calculate stress concentration due to a circular hole in a plate.
LO4: Analyze stress concentration due to an elliptical hole in a plate.
LO5: Evaluate notch sensitivity.
LO6: Create designs for reducing stress concentration.
9.1 INTRODUCTION [LO1]
Stresses given by relatively simple equations in the strength of materials for structures or machine members are based on the assumed continuity of the elastic medium. However, the presence of discontinuity destroys the assumed regularity of stress distribution in a member and a sudden increase in stresses occurs in the neighborhood of the discontinuity. In developing machines, it is impossible to avoid abrupt changes in cross-sections, holes, notches, shoulders, etc. Abrupt changes in cross-section also occur at the roots of gear teeth and threads of bolts. Some examples are shown in Figure 9.1.
Any such discontinuity acts as a stress raiser. Discontinuities within the material itself, such as non-metallic inclusions in metals, casting defects, and residual stresses from welding, may also act as stress raisers. In this chapter, however, we shall consider only the geometric discontinuities that arise from design considerations of structures or machine parts.
Many theoretical methods and experimental techniques have been developed to determine stress concentrations in different structural and mechanical systems. In order to understand the concept, we shall begin with a plate with a centrally located hole. The plate is subjected to uniformly distributed tensile loading at the ends, as shown in Figure 9.2.
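As a pointer to the results developed later in this chapter (LO3 and LO4), the classical solutions for these two geometries may be summarized as follows. This is a standard statement rather than a derivation; the symbols σ for the remote tensile stress and a, b for the semi-axes of the ellipse are introduced here only for reference.

```latex
% Circular hole in a wide plate under remote uniaxial tension \sigma (Kirsch solution):
% the maximum stress occurs at the edge of the hole.
\sigma_{\max} = 3\,\sigma, \qquad K_t = \frac{\sigma_{\max}}{\sigma} = 3

% Elliptical hole with semi-axis a normal to the load and b along it (Inglis solution):
\sigma_{\max} = \sigma\left(1 + \frac{2a}{b}\right)
```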
All metals and alloys exhibit a reduction in electrical resistance as they cool. As the temperature drops, the thermal vibrations of the atoms become less intense, and the conduction electrons are scattered less frequently. For a perfect, pure metal, where the only obstacle to an electron's travel is the thermal vibration of the lattice, the resistivity should decrease toward zero as the temperature approaches zero kelvin. This gradual approach of the resistance toward zero, which a hypothetical perfect specimen would attain only at absolute zero, should not be confused with superconductivity, in which the resistance of a real specimen vanishes abruptly at a finite critical temperature. Any real specimen of metal cannot be perfectly pure and will contain some impurities. As a result, in addition to being scattered by the thermal vibrations of the lattice atoms, the electrons are also scattered by impurities, and this impurity scattering is largely independent of temperature. Consequently, some residual resistance remains even at the lowest temperatures. The residual resistivity of a metal increases with its degree of impurity.
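This additive picture of thermal and impurity scattering is commonly summarized by Matthiessen's rule; a standard statement, in notation introduced here rather than taken from the chapter, is:

```latex
% Matthiessen's rule: the total resistivity is the sum of a temperature-independent
% residual term (impurities, defects) and a temperature-dependent lattice term.
\rho(T) = \rho_{0} + \rho_{\mathrm{ph}}(T), \qquad \rho(T) \to \rho_{0} \ \text{as} \ T \to 0
```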
The phenomenon of superconductivity was first discovered by the Dutch physicist H. Kamerlingh Onnes of Leiden University in 1911 during an investigation of the variation of the electrical resistance of mercury in the newly available range of low temperatures, in the neighborhood of the temperature of liquid helium (about 4.2 K). He observed that the resistance of mercury suddenly falls from about 0.08 ohm at about 4 K to less than 3 × 10−6 ohm over a very small temperature interval of about 0.01 K.
Nonconducting materials such as paper, wood, glass, ceramics, polymers, and so on do not have free charge carriers, that is, electrons or holes. Therefore, they prevent the flow of electrical current and heat through them.
When the main function of nonconducting materials is to provide electrical isolation then they are called insulators.
When the main function of nonconducting materials is charge storage, they are called dielectrics.
The dielectrics are polarized under the influence of an external electric field.
Dielectric Constant
Let us consider two parallel plates separated by a distance “d” connected with a dc supply of voltage V, as shown in Figure 6.1(a). Now the circuit is disconnected, and the dielectric is inserted between the plates, as shown in Figure 6.1(b).
Then, the voltage across the capacitor is reduced from V to V′. The change in voltage across the plates can be related by a factor as
Since V′ < V, the relative permittivity or dielectric constant εr > 1.
The capacitance without dielectric is given as
The capacitance with dielectric is given as
Now, putting the values of C and C′ in equation (6.1), the relative permittivity or dielectric constant is
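Because the numbered equations referred to above are not reproduced in this extract, the standard parallel-plate relations are sketched below for reference; A is the plate area, d the plate separation, and the charge Q is unchanged once the supply is disconnected.

```latex
C = \frac{\varepsilon_0 A}{d}, \qquad C' = \frac{\varepsilon_r \varepsilon_0 A}{d}

% With the supply disconnected, the charge is conserved: Q = C V = C' V', hence
\varepsilon_r = \frac{C'}{C} = \frac{V}{V'} > 1 \qquad (\text{since } V' < V)
```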
In the early days, the operational amplifier (op-amp) was the only linear integrated circuit (IC) used in the design of linear IC circuits and systems. Typical applications of op-amps were mathematical operations, such as summation, subtraction, integration, small-signal amplification, and the generation of oscillations. Over the years, other devices, such as operational transconductance amplifiers and current conveyors, have also come into common use; still, this has not reduced the importance or areas of application of op-amps. Rather, with advances in process technology and increased levels of integration, it became possible to realize many more advanced functions with linear ICs, and many applications now fall under the domain of nonlinear applications. Some of the more common nonlinear applications are precision rectifiers, voltage-level detectors, and Schmitt trigger circuits. The Schmitt trigger circuit itself is very popular for generating a variety of pulses and other waveforms, such as triangular waveforms. Some other nonlinear applications, such as log and antilog amplifiers, analog multipliers, charge amplifiers, and isolation amplifiers, are discussed in brief; the phase-locked loop and its basic function are also included.
Precision Rectifiers
Conventional rectifiers work well for converting an alternating supply into a pulsating one. Filters are normally used to remove the ripples in the pulsating voltage to obtain dc. However, these rectifiers have some limitations. One of the main limitations is that when a diode conducts during rectification, there is a voltage drop of approximately 0.7 V across its terminals. Hence, the ac voltage available for conversion to dc is reduced by that amount.
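The effect of this diode drop can be illustrated with a short numerical sketch; the constant 0.7 V diode model and the signal amplitude are assumptions chosen only for illustration, not circuit values from this chapter.

```python
import numpy as np

# One cycle of a small ac signal (1 V peak, 50 Hz) -- illustrative values only.
t = np.linspace(0.0, 0.02, 1000)
v_in = 1.0 * np.sin(2 * np.pi * 50 * t)

# Conventional half-wave rectifier: the diode conducts only above ~0.7 V,
# so the output is reduced by that drop.
v_conventional = np.maximum(v_in - 0.7, 0.0)

# Precision (op-amp) rectifier: the diode drop sits inside the feedback loop,
# so the output ideally follows the positive half-cycle exactly.
v_precision = np.maximum(v_in, 0.0)

print("Peak, conventional: %.2f V" % v_conventional.max())  # about 0.30 V
print("Peak, precision   : %.2f V" % v_precision.max())     # about 1.00 V
```

For a 1 V input the conventional circuit loses most of the signal, which is why precision rectifiers are preferred for small ac signals.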
• To define machine learning (ML) and discuss its applications.
• To learn the differences between traditional programming and ML.
• To understand the importance of labeled and unlabeled data and their various uses in ML.
• To understand the working principles of supervised, unsupervised, and reinforcement learning.
• To understand the key terms like data science, data mining, artificial intelligence, and deep learning.
1.1 Introduction
In today’s data-driven world, information flows through the digital landscape like an untapped river of potential. Within this vast data stream lies the key to unlocking a new era of discovery and innovation. Machine learning (ML), a revolutionary field, acts as the gateway to this wealth of opportunities. With its ability to uncover patterns, make predictive insights, and adapt to evolving information, ML has transformed industries, redefined technology, and opened the door to limitless possibilities. This book is your gateway to the fascinating realm of ML—a journey that empowers you to harness the power of data, enabling you to build intelligent systems, make informed decisions, and explore the boundless possibilities of the digital age.
ML has emerged as the dominant approach for solving problems in the modern world, and its wide-ranging applications have made it an integral part of our lives. From search engines to social networking sites, everything is powered by ML algorithms. Your favorite search engine uses ML algorithms to get you the appropriate search results. Smart home assistants like Alexa and Siri use ML to serve us better. The influence of ML on our day-to-day activities is so pervasive that we often do not even realize it. Online shopping sites like Amazon, Flipkart, and Myntra use ML to recommend products. Facebook uses ML to display our feed. Netflix and YouTube use ML to recommend videos based on our interests.
Data is growing exponentially with the Internet and smartphones, and ML has just made this data more usable and meaningful. Social media, entertainment, travel, mining, medicine, bioinformatics, or any field you could name uses ML in some form.
To understand the role of ML in the modern world, let us first discuss the applications of ML.
After careful study of this chapter, students should be able to do the following:
LO1: Identify the difference between engineering mechanics and the theory of elasticity approach.
LO2: Explain yielding and brittle fracture.
LO3: Describe the stress–strain behavior of common engineering materials.
LO4: Compare hardness, ductility, malleability, toughness, and creep.
LO5: Explain different hardness measurement techniques.
1.1 INTRODUCTION [LO1]
Mechanics is one of the oldest physical sciences, dating back to the times of Aristotle and Archimedes. The subject deals with force, displacement, and motion. The concepts of mechanics have been used to solve many mechanical and structural engineering problems through the ages. Because of its intriguing nature, many great scientists including Sir Isaac Newton and Albert Einstein delved into it for solving intricate problems in their own fields.
Engineering mechanics and mechanics of materials developed over centuries with a few experiment-based postulates and assumptions, particularly to solve engineering problems in designing machines and structural parts. Problems are many and varied. However, in most cases, the requirement is to ensure sufficient strength, stiffness, and stability of the components, and eventually those of the whole machine or structure. In order to do this, we first analyze the forces and stresses at different points in a member, and then select materials of known strength and deformation behavior, to withstand the stress distribution with tolerable deformation and stability limits. The methodology has now developed to the extent of coding that takes into account the whole field stress, strain, deformation behaviors, and material characteristics to predict the probability of failure of a component at the weakest point. Inputs from the theory of elasticity and plasticity, mathematical and computational techniques, material science, and many other branches of science are needed to develop such sophisticated coding.
The theory of elasticity also developed, but as a topic in applied mathematics, and engineers took little notice of it until recently, when critical analyses of components in high-speed machinery, vehicles, aerospace technology, and many other applications became necessary. The types of problems considered in both the elementary strength of materials and the theory of elasticity are similar, but the approaches are different. The strength of materials approach is generally simpler. Here the emphasis is on finding practical solutions to a problem with simplifying assumptions.
Wave optics is the branch of modern physics in which the nature of light and its propagation are studied.
Interference
When two waves of the same frequency, having a constant phase difference between them and traveling in the same medium, are allowed to superpose on each other, there is a modification in the intensity pattern. This phenomenon is known as the interference of light.
When the resultant amplitude at certain points is the sum of the amplitudes of the two waves, this interference is known as constructive interference.
When the resultant amplitude at certain points is the difference of the amplitudes of the two waves, this interference is known as destructive interference, as shown in Figure 11.1.
COHERENT SOURCES
Two sources are said to be coherent if the waves emitted from them have a constant phase difference with time.
THEORY OF INTERFERENCE
Let us consider two coherent sources S1 and S2 that are equidistant from a source S. Let a1 and a2 be the amplitudes of the waves originating from sources S1 and S2, respectively, as shown in Figure 11.2. Then the displacement y1 due to the wave from S1 is given by
where δ is the phase difference between the two waves.
Now, according to the law of superposition, the resultant wave is given by
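The displayed equations referred to above are not reproduced in this extract; the standard development of the theory of interference, stated here in the usual notation, runs as follows:

```latex
y_1 = a_1 \sin \omega t, \qquad y_2 = a_2 \sin(\omega t + \delta)

y = y_1 + y_2 = A \sin(\omega t + \phi)

A^2 = a_1^2 + a_2^2 + 2 a_1 a_2 \cos\delta, \qquad
\tan\phi = \frac{a_2 \sin\delta}{a_1 + a_2 \cos\delta}

% Constructive interference: \delta = 2n\pi        \Rightarrow A = a_1 + a_2
% Destructive interference:  \delta = (2n + 1)\pi  \Rightarrow A = |a_1 - a_2|
```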
The band theory of solids is different from the other models because the atoms are arranged so close to each other that the energy levels of the outermost orbital electrons are affected. The energy levels of the innermost electrons, however, are not affected by the neighboring atoms.
In general, if there are n atoms, there will be n discrete energy levels in each energy band. In such a system of n atoms, the molecular orbitals are called energy bands, as shown in Figure 7.1.
CLASSIFICATION OF SOLIDS ON THE BASIS OF BAND THEORY
Solids can be classified on the basis of band theory. The parameter that differentiates insulators, conductors, and semiconductors is the energy band gap, represented by Eg, as shown in Figure 7.2. When the energy band gap Eg between the conduction band and the valence band is greater than 5 eV (electron volts), the solid is classified as an insulator. When the energy band gap Eg between the conduction band and the valence band is 0 eV, that is, the bands overlap, the solid is classified as a conductor. When the energy band gap Eg between the conduction band and the valence band is approximately equal to 1 eV, the solid is classified as a semiconductor.
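A minimal sketch of this classification rule is given below; the thresholds are the ones quoted in the paragraph above, and the function name and example values are ours.

```python
def classify_solid(band_gap_ev: float) -> str:
    """Classify a solid from its energy band gap Eg (in eV), using the simplified
    thresholds above: > 5 eV insulator, ~1 eV semiconductor, 0 eV (overlap) conductor."""
    if band_gap_ev > 5.0:
        return "insulator"
    elif band_gap_ev > 0.0:
        return "semiconductor"
    else:
        return "conductor"  # bands overlap

# Example: silicon has a band gap of roughly 1.1 eV at room temperature.
print(classify_solid(1.1))   # semiconductor
print(classify_solid(6.0))   # insulator
print(classify_solid(0.0))   # conductor
```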
After careful study of this chapter, students should be able to do the following:
LO1: Describe constitutive equations.
LO2: Relate the elastic constants.
LO3: Recognize boundary value problems.
LO4: Explain St. Venant's principle.
LO5: Describe the principle of superposition.
LO6: Illustrate the uniqueness theorem.
LO7: Develop stress function approach.
4.1 CONSTITUTIVE EQUATIONS [LO1]
So far, we have discussed the strain and stress analysis in detail. In this chapter, we shall link the stress and strain by considering the material properties in order to completely describe the elastic, plastic, elasto-plastic, visco-elastic, or other such deformation characteristics of solids. These are known as constitutive equations, or in simpler terms the stress–strain relations. There are endless varieties of materials and loading conditions, and therefore development of a general form of constitutive equation may be challenging. Here we mainly consider linear elastic solids along with their mechanical properties and deformation behavior.
The fundamental relation between stress and strain was first given by Robert Hooke in 1676 in the most simplified manner as, “Force varies as the stretch”. This implies a load–deflection relation that was later interpreted as a stress–strain relation. Following this, we can write P = kδ, where P is the force, δ is the stretch or elongation, and k is the spring constant. This can also be written for linear elastic materials as σ = E∈, where σ is the stress, ∈ is the strain, and E is the modulus of elasticity. For nonlinear elasticity, we may write in a simplistic manner σ = E∈^n, where n ≠ 1.
Hooke's Law based on this fundamental relation is given as the stress–strain relation, and in its most general form, stresses are functions of all the strain components as shown in equation (4.1.1).
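Equation (4.1.1) is not reproduced in this extract; its standard form, in which each stress component is a linear function of all the strain components, may be written in the usual index notation as:

```latex
% Generalized Hooke's law: C_{ijkl} is the fourth-order elasticity tensor.
\sigma_{ij} = C_{ijkl}\,\epsilon_{kl}

% For a homogeneous, isotropic, linear elastic solid this reduces to the
% two Lame constants \lambda and \mu:
\sigma_{ij} = \lambda\,\delta_{ij}\,\epsilon_{kk} + 2\mu\,\epsilon_{ij}
```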
• To understand the concept of artificial neural network (ANN).
• To comprehend the working of the human brain as an inspiration for the development of neural network.
• To understand the mapping of human brain neurons to an ANN.
• To understand the working of ANN with case studies.
• To understand the role of weights in building ANN.
• To perform forward and backward propagation to train the neural networks.
• To understand different activation functions like threshold function, sigmoid function, rectifier linear unit function, and hyperbolic tangent function.
• To find the optimized value of weights for minimizing the cost function by using the gradient descent approach and stochastic gradient descent algorithm.
• To understand the concept of the mini-batch method.
16.1 Introduction to Artificial Neural Network
Neural networks and deep learning are the buzzwords in modern-day computer science. And, if you think that these are the latest entrants in this field, you probably have a misconception. Neural networks have been around for quite some time; they have only started picking up now, making a huge positive impact on computer science.
The artificial neural network (ANN) was invented in the 1960s and 1970s. It became a part of common tech talks, and people started thinking that this machine learning (ML) technique would solve all the complex problems that were challenging researchers at that time. But soon, the hopes and expectations died off over the next decade.
The decline could not be attributed to loopholes in neural networks themselves; the major reason was the technology of the time. The technology back then was not up to the standard needed to support neural networks, which require a lot of data for training and huge computational resources for building the model. During that period, both data and computing power were scarce. Hence, the resulting neural networks remained only on paper rather than taking center stage in machines solving real-world problems.
Later on, at the beginning of the 21st century, we saw a lot of improvements in storage techniques resulting in reduced cost per gigabyte of storage. Humanity witnessed a huge rise in big data due to the Internet boom and smartphones.
In various applications of computer vision and image processing, it is required to detect points in an image which characterize the visual content of the scene in their neighborhood and are distinguishable even in other imaging instances of the same scene. These points are called key points of an image, and they are characterized by functional distributions, such as the distribution of brightness values or color values, around their neighborhood in an image. For example, in monocular and stereo camera geometries, various analyses involve the computation of transformation matrices, such as the homography between two scenes, the fundamental matrix between two images of the same scene in a stereo imaging setup, etc. These transformation matrices are computed using key points of the same scene point in a pair of images. The image points of the same scene point in different images of the scene are called points of correspondence or corresponding points. Key points of images are good candidates to form such pairs of corresponding points between two images of the same scene. Hence, detection and matching of key points in a pair of images are fundamental tasks for such geometric analysis.
Consider Fig. 4.1, where images of the same scene are captured from two different views. Though the regions of structures in the images visually correspond to each other, it is difficult to precisely define points of correspondence between them. Even an image of a two-dimensional (2-D) scene, such as 2-D objects on a plane, may go through various kinds of transformations, like rotation, scale, shear, etc. It may be required to compute this transformation among such a pair of images. This is also a common problem of image registration.
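A brief sketch of this pipeline using OpenCV is given below; ORB is only one of several possible key-point detectors, and the image file names are placeholders, so this is an illustration of the idea rather than the method developed in this chapter.

```python
import cv2
import numpy as np

# Load two views of the same scene (placeholder file names).
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect key points and compute descriptors of their neighborhoods.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors to form pairs of corresponding points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Use the correspondences to estimate a transformation, e.g. a homography.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("Estimated homography:\n", H)
```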
Heat, like gravity, penetrates every substance of the universe, its rays occupy all parts of space.
Jean-Baptiste-Joseph Fourier
Learning Outcomes
After reading this chapter, the reader will be able to
Understand the meaning of three processes of heat flow: conduction, convection, and radiation
Know about thermal conductivity, diffusivity, and steady-state condition of a thermal conductor
Derive Fourier's one-dimensional heat flow equation and solve it in the steady state
Derive the mathematical expression for the temperature distribution in a lagged bar
Derive the amount of heat flow in a cylindrical and a spherical thermal conductor
Solve numerical problems and multiple choice questions on the process of conduction of heat
6.1 Introduction
Heat is the thermal energy transferred between different substances that are maintained at different temperatures. This energy is always transferred from the hotter object (which is maintained at a higher temperature) to the colder one (which is maintained at a lower temperature). Heat is the energy arising from the movement of atoms and molecules, which are continuously moving around, hitting each other and other objects. This motion is faster for molecules with a larger amount of energy than for molecules with a smaller amount of energy, which causes the former to have more heat. Transfer of heat continues until both objects attain the same temperature, that is, until their molecules reach the same average speed. This transfer of heat depends upon the nature of the material, a property determined by a parameter known as the thermal conductivity or coefficient of thermal conduction. This parameter helps us to understand the concept of transfer of thermal energy from a hotter to a colder body, to differentiate various objects in terms of their thermal properties, and to determine the amount of heat conducted from the hotter to the colder region of an object. The transfer of thermal energy occurs in several situations:
When there exists a difference in temperature between an object and its surroundings,
When there exists a difference in temperature between two objects in contact with each other, and
When there exists a temperature gradient within the same object.
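In each of these situations the rate of conduction is governed by Fourier's law. The one-dimensional forms that this chapter goes on to derive can be summarized in standard notation (k the thermal conductivity, A the cross-sectional area, ρ the density, and c the specific heat) as:

```latex
% Fourier's law of heat conduction (one-dimensional form):
Q = -kA\,\frac{dT}{dx}

% Unsteady one-dimensional heat flow, with thermal diffusivity \alpha = k/(\rho c):
\frac{\partial T}{\partial t} = \alpha\,\frac{\partial^2 T}{\partial x^2}
```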
Statistical mechanics bridges the gap between the laws of thermodynamics and the internal structure of matter. Some examples are as follows:
1. Assembly of atoms in gaseous or liquid helium.
2. Assembly of water molecules in solid, liquid, or vapor state.
3. Assembly of free electrons in metal.
The behavior of all these abovementioned assemblies is totally different in different phases. Therefore, it is most significant to relate the macroscopic behavior of the system to its microscopic structure.
In statistical mechanics, the most probable behavior of an assembly is studied instead of the interactions or behavior of individual particles.
The behavior of an assembly that is repeated the maximum number of times is known as its most probable behavior.
Phase Space
Six coordinates can fully characterize the state of any system:
1. Three coordinates describe the position (x, y, z) and three describe the momentum (Px, Py, Pz).
2. The combined position and momentum space (x, y, z, Px, Py, Pz) is called phase space.
3. The momentum space represents the energy of the state.
For a system of N particles, there exists 3N position coordinates and 3N momentum coordinates. A single particle in phase space is known as a phase point, and the space occupied by it is known as µ-space.
Volume Element of µ-Space
Consider a particle having position and momentum coordinates in the ranges x to x + dx, y to y + dy, z to z + dz, and Px to Px + dPx, Py to Py + dPy, Pz to Pz + dPz.
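The corresponding displayed expression is not reproduced in this extract; the standard volume element of µ-space, with h Planck's constant when the element is divided into quantum cells, is:

```latex
d\tau = dx\,dy\,dz\,dp_x\,dp_y\,dp_z

% Number of elementary cells contained in this volume element:
n = \frac{dx\,dy\,dz\,dp_x\,dp_y\,dp_z}{h^3}
```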
• To implement the k-means clustering algorithm in Python.
• To determine the optimal number of clusters by implementing the corresponding code.
• To understand how to visualize clusters using plots.
• To create the dendrogram and find the optimal number of clusters for agglomerative hierarchical clustering.
• To compare results of k-means clustering with agglomerative hierarchical clustering.
• To implement clustering through various case studies.
13.1 Implementation of k-means Clustering and Hierarchical Clustering
In the previous chapter, we discussed various clustering algorithms. We learned that clustering algorithms are broadly classified into partitioning methods, hierarchical methods, and density-based methods. The k-means clustering algorithm follows partitioning method; agglomerative and divisive algorithms follow the hierarchical method, while DBSCAN is based on density-based clustering methods.
In this chapter, we will implement each of these algorithms through various case studies, following a step-by-step approach. You are advised to perform all these steps on your own on the datasets mentioned in this chapter.
The k-means algorithm is a partitioning method and an unsupervised machine learning (ML) algorithm used to identify clusters of data items in a dataset. It is one of the most prominent ML algorithms, and its implementation in Python is quite straightforward. This chapter will consider three case studies, i.e., the mall customers shopping dataset, the U.S. arrests dataset, and the popular Iris dataset. Through these case studies we will understand the significance of k-means clustering techniques and implement them in Python. Along with the clustering of data items, we will also discuss ways to find the optimal number of clusters. To compare the results of the k-means algorithm, we will also implement hierarchical clustering for these problems.
We will kick-start the implementation of the k-means algorithm in Spyder IDE using the following steps.
Step 1: Importing the libraries and the dataset—The dataset for the respective case study would be downloaded, and then the required libraries would be imported.
Step 2: Finding the optimal number of clusters—We will find the optimal number of clusters by the elbow method for the given dataset.
Step 3: Fitting k-means to the dataset—A k-means model will be prepared by training the model over the acquired dataset.
Step 4: Visualizing the clusters—The clusters formed by the k-means model would then be visualized in the form of scatter plots.
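A minimal sketch of these four steps for the mall customers case study, using scikit-learn, is shown below; the file name, the choice of columns, and the final cluster count of 5 are assumptions for illustration, with the elbow plot guiding the actual choice of k.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Step 1: import the dataset (file name assumed; income and spending-score columns used).
dataset = pd.read_csv("Mall_Customers.csv")
X = dataset.iloc[:, [3, 4]].values

# Step 2: elbow method -- plot the within-cluster sum of squares (inertia) against k.
wcss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=42)
    km.fit(X)
    wcss.append(km.inertia_)
plt.plot(range(1, 11), wcss, marker="o")
plt.xlabel("Number of clusters k")
plt.ylabel("WCSS")
plt.show()

# Step 3: fit k-means with the chosen number of clusters (5 assumed here).
kmeans = KMeans(n_clusters=5, init="k-means++", n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# Step 4: visualize the clusters and their centroids as a scatter plot.
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap="viridis", s=30)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            c="red", s=200, marker="X", label="Centroids")
plt.xlabel("Annual income")
plt.ylabel("Spending score")
plt.legend()
plt.show()
```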
Humans have had a lengthy history of understanding electricity and magnetism. The tangible characteristics of light have also been studied. But in contrast to optics, electricity and magnetism (now known collectively as electromagnetics) were believed to be governed by different physical laws. This makes sense because optical physics as it was previously understood differs significantly from the physics of electricity and magnetism. For instance, the ancient Greeks and Asians were aware of the lodestone between 600 and 400 BC. Since 200 BC, China has been using the compass. The Greeks described static electricity as early as 400 BC. But these oddities had no real effect until the invention of telegraphy. The voltaic cell, or galvanic cell, was created by Luigi Galvani and Alessandro Volta in the late 1700s, which led to the development of telegraphy. It quickly became clear that information could be transmitted using just two wires attached to a voltaic cell. This potential prompted the development of telegraphy by the early 1800s. To learn more about the characteristics of electricity and magnetism, Andre-Marie Ampere (1823) and Michael Faraday (1838) conducted experiments. Ampere's law and Faraday's law are consequently named after them. In order to better comprehend telegraphy, Kirchhoff's voltage and current laws were also established in 1845. Despite these laws, the data transmission mechanism was not well understood. The cause of the distortion of the data transmission signal was unknown. The ideal signal would alternate between ones and zeros, but the digital signal quickly lost its shape along a data transmission line.