Wave optics is the branch of modern physics in which the nature of light and its propagation are studied.
Interference
When two waves of the same frequency, having a constant phase difference between them and traveling in the same medium, are allowed to superpose on one another, there is a modification in the intensity pattern. This phenomenon is known as interference of light.
When the resultant amplitude at certain points is the sum of the amplitudes of the two waves, this interference is known as constructive interference.
When the resultant amplitude at certain points is the difference of the amplitudes of the two waves, this interference is known as destructive interference, as shown in Figure 11.1.
COHERENT SOURCES
Two sources are said to be coherent if the waves emitted from them have a constant phase difference with time.
THEORY OF INTERFERENCE
Let us consider two coherent sources S1 and S2 that are equidistant from a source S. Let a1 and a2 be the amplitudes of the waves originating from sources S1 and S2, respectively, as shown in Figure 11.2. Then the displacement y1 of the wave from source S1 at the point of superposition is given by y1 = a1 sin ωt, and the displacement y2 of the wave from source S2 is given by y2 = a2 sin(ωt + δ),
where δ is the phase difference between the two waves.
Now, according to the principle of superposition, the resultant displacement is given by y = y1 + y2 = a1 sin ωt + a2 sin(ωt + δ).
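As a sketch of where this superposition leads, assuming the sinusoidal forms written above, the sum can be rearranged into a single sinusoid whose amplitude depends on the phase difference δ:

```latex
% Resultant of the two superposed waves (standard rearrangement, assuming y1 and y2 as above)
y = (a_1 + a_2\cos\delta)\sin\omega t + a_2\sin\delta\,\cos\omega t = A\sin(\omega t + \phi),
\qquad A^2 = a_1^2 + a_2^2 + 2 a_1 a_2 \cos\delta,
\qquad \tan\phi = \frac{a_2\sin\delta}{a_1 + a_2\cos\delta}.
```

The amplitude is maximum, A = a1 + a2, when δ = 2nπ (constructive interference), and minimum, A = |a1 − a2|, when δ = (2n + 1)π (destructive interference), consistent with Figure 11.1.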
The band theory of solids differs from other models in that the atoms are arranged very close to each other, so that the energy levels of the outermost orbital electrons are affected; the energy levels of the innermost electrons, however, are not affected by the neighboring atoms.
In general, if there are n atoms, there will be n discrete energy levels in each energy band. In such a system of n atoms, the molecular orbitals are called energy bands, as shown in Figure 7.1.
CLASSIFICATION OF SOLIDS ON THE BASIS OF BAND THEORY
Solids can be classified on the basis of band theory. The parameter that distinguishes insulators, conductors, and semiconductors is known as the energy band gap, represented by Eg, as shown in Figure 7.2. When the energy band gap Eg between the conduction band and the valence band is greater than 5 eV (electron-volt), the solid is classified as an insulator. When Eg is 0 eV, that is, when the two bands overlap, the solid is classified as a conductor. When Eg is approximately equal to 1 eV, the solid is classified as a semiconductor.
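As a purely illustrative restatement of these thresholds in code (the function name and cut-off values below simply mirror the numbers quoted above and are not from the text), a classifier might look like:

```python
def classify_solid(energy_gap_ev: float) -> str:
    """Classify a solid from its band gap Eg in eV, using the thresholds
    quoted in the text (illustrative cut-offs, not universal constants)."""
    if energy_gap_ev <= 0.0:       # bands overlap: no forbidden gap
        return "conductor"
    if energy_gap_ev > 5.0:        # large forbidden gap
        return "insulator"
    return "semiconductor"         # gap of roughly 1 eV, e.g. silicon (~1.1 eV)


print(classify_solid(1.1))   # semiconductor
print(classify_solid(9.0))   # insulator
print(classify_solid(0.0))   # conductor
```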
After careful study of this chapter, students should be able to do the following:
LO1: Describe constitutive equations.
LO2: Relate the elastic constants.
LO3: Recognize boundary value problems.
LO4: Explain St. Venant's principle.
LO5: Describe the principle of superposition.
LO6: Illustrate the uniqueness theorem.
LO7: Develop stress function approach.
4.1 CONSTITUTIVE EQUATIONS [LO1]
So far, we have discussed the strain and stress analysis in detail. In this chapter, we shall link the stress and strain by considering the material properties in order to completely describe the elastic, plastic, elasto-plastic, visco-elastic, or other such deformation characteristics of solids. These are known as constitutive equations, or in simpler terms the stress–strain relations. There are endless varieties of materials and loading conditions, and therefore development of a general form of constitutive equation may be challenging. Here we mainly consider linear elastic solids along with their mechanical properties and deformation behavior.
The fundamental relation between stress and strain was first given by Robert Hooke in 1676 in the most simplified manner as, “Force varies as the stretch”. This implies a load–deflection relation that was later interpreted as a stress–strain relation. Following this, we can write P = kδ, where P is the force, δ is the stretch or elongation, and k is the spring constant. For linear elastic materials this can also be written as σ = Eε, where σ is the stress, ε is the strain, and E is the modulus of elasticity. For nonlinear elasticity, we may write in a simplistic manner σ = Eεⁿ, where n ≠ 1.
Hooke's law, based on this fundamental relation, is given as the stress–strain relation; in its most general form, each stress component is a function of all the strain components, as shown in equation (4.1.1).
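For orientation only, and not as a reproduction of the chapter's equation (4.1.1), this general linear relation is commonly written in index notation with a fourth-order stiffness tensor:

```latex
% General linear (anisotropic) Hooke's law in index notation; a common textbook form
\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}, \qquad i, j, k, l = 1, 2, 3,
```

where C_ijkl is the elastic stiffness tensor. The symmetry of the stress and strain tensors, together with the existence of a strain-energy function, reduces its 81 components to 21 independent constants for a general anisotropic solid and to only 2 for an isotropic one.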
• To understand the concept of artificial neural network (ANN).
• To comprehend the working of the human brain as an inspiration for the development of neural networks.
• To understand the mapping of human brain neurons to an ANN.
• To understand the working of ANN with case studies.
• To understand the role of weights in building ANN.
• To perform forward and backward propagation to train the neural networks.
• To understand different activation functions such as the threshold function, sigmoid function, rectified linear unit (ReLU) function, and hyperbolic tangent function.
• To find the optimized value of weights for minimizing the cost function by using the gradient descent approach and stochastic gradient descent algorithm.
• To understand the concept of the mini-batch method.
16.1 Introduction to Artificial Neural Network
Neural networks and deep learning are the buzzwords of modern-day computer science. If you think they are the latest entrants in this field, that is a misconception: neural networks have been around for quite some time, and they have only recently picked up momentum, making a huge positive impact on computer science.
Artificial neural networks (ANNs) were invented in the 1960s and 1970s. They became a part of common tech talks, and people started thinking that this machine learning (ML) technique would solve all the complex problems that were challenging researchers at that time. But soon, over the next decade, those hopes and expectations died off.
The decline could not be attributed to shortcomings in neural networks themselves; the major reason was the technology of the time. The technology back then was not up to the standard needed to support neural networks, which require a lot of data for training and huge computational resources for building the model. During that period, both data and computing power were scarce. Hence, neural networks remained largely on paper rather than taking center stage to solve real-world problems.
Later on, at the beginning of the 21st century, we saw a lot of improvements in storage techniques resulting in reduced cost per gigabyte of storage. Humanity witnessed a huge rise in big data due to the Internet boom and smartphones.
In various applications of computer vision and image processing, it is required to detect points in an image that characterize the visual content of the scene in their neighborhood and are distinguishable even in other imaging instances of the same scene. These points are called key points of an image, and they are characterized by functional distributions around their neighborhood, such as the distribution of brightness values or color values. For example, in monocular and stereo camera geometries, various analyses involve computation of transformation matrices, such as the homography between two scenes, the fundamental matrix between two images of the same scene in a stereo imaging setup, etc. These transformation matrices are computed using key points of the same scene point in a pair of images. The image points of the same scene point in different images of the scene are called points of correspondence or corresponding points. Key points of images are good candidates to form such pairs of corresponding points between two images of the same scene. Hence, detection and matching of key points in a pair of images are fundamental tasks for such geometric analysis.
Consider Fig. 4.1, where images of the same scene are captured from two different views. Though the regions of structures in the images visually correspond to each other, it is difficult to precisely define points of correspondence between them. Even an image of a two-dimensional (2-D) scene, such as 2-D objects on a plane, may go through various kinds of transformations, like rotation, scale, shear, etc. It may be required to compute this transformation among such a pair of images. This is also a common problem of image registration.
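A minimal sketch of this detect-and-match pipeline, assuming OpenCV is installed and using hypothetical file names img1.png and img2.png for two views of the same scene:

```python
# Sketch: ORB key points -> descriptor matching -> homography between two views.
# Assumes OpenCV (cv2) is available; file names are hypothetical placeholders.
import cv2
import numpy as np

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)            # key-point detector and descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors to obtain candidate pairs of corresponding points
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly estimate the homography relating the two views; RANSAC rejects bad pairs
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)
```

The same matched key points can be fed to cv2.findFundamentalMat for the stereo case mentioned above.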
Heat, like gravity, penetrates every substance of the universe, its rays occupy all parts of space.
Jean-Baptiste-Joseph Fourier
Learning Outcomes
After reading this chapter, the reader will be able to
Understand the meaning of three processes of heat flow: conduction, convection, and radiation
Know about thermal conductivity, diffusivity, and steady-state condition of a thermal conductor
Derive Fourier's one-dimensional heat flow equation and solve it in the steady state
Derive the mathematical expression for the temperature distribution in a lagged bar
Derive the amount of heat flow in a cylindrical and a spherical thermal conductor
Solve numerical problems and multiple choice questions on the process of conduction of heat
6.1 Introduction
Heat is the thermal energy transferred between substances that are maintained at different temperatures. This energy is always transferred from the hotter object (at the higher temperature) to the colder one (at the lower temperature). Heat is the energy arising from the movement of atoms and molecules, which are continuously moving around, hitting each other and other objects. This motion is faster for molecules with a larger amount of energy than for molecules with a smaller amount of energy, which causes the former to carry more heat. Transfer of heat continues until both objects attain the same temperature, that is, the same average molecular speed. This transfer of heat depends upon the nature of the material, a property determined by a parameter known as the thermal conductivity or coefficient of thermal conduction. This parameter helps us to understand the concept of transfer of thermal energy from a hotter to a colder body, to differentiate various objects in terms of this thermal property, and to determine the amount of heat conducted from the hotter to the colder region of an object (a minimal statement of the conduction law is sketched after the list below). The transfer of thermal energy occurs in several situations:
When there exists a difference in temperature between an object and its surroundings,
When there exists a difference in temperature between two objects in contact with each other, and
When there exists a temperature gradient within the same object.
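A minimal statement of the one-dimensional conduction law that this chapter builds towards (the standard form of Fourier's law, quoted here for orientation) is:

```latex
% Fourier's law of heat conduction in one dimension (steady state)
\frac{dQ}{dt} = -\,k\,A\,\frac{dT}{dx},
```

where dQ/dt is the rate of heat flow across a cross-sectional area A, dT/dx is the temperature gradient, and k is the thermal conductivity of the material (SI unit W m⁻¹ K⁻¹); the negative sign records that heat flows from the higher to the lower temperature.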
Statistical mechanics bridges the gap between the laws of thermodynamics and the internal structure of matter. Some examples are as follows:
1. Assembly of atoms in gaseous or liquid helium.
2. Assembly of water molecules in solid, liquid, or vapor state.
3. Assembly of free electrons in metal.
The behavior of all these abovementioned assemblies is totally different in different phases. Therefore, it is most significant to relate the macroscopic behavior of the system to its microscopic structure.
In statistical mechanics, the most probable behavior of an assembly is studied instead of the interactions or behavior of individual particles.
The behavior of an assembly that is repeated the maximum number of times is known as the most probable behavior.
Phase Space
Six coordinates can fully characterize the state of a single particle:
1. Three for describing the position x, y, z and three for the momentum Px, Py, Pz.
2. The combined position and momentum space (x, y, z, Px, Py, Pz) is called phase space.
3. The momentum space represents the energy of the state.
For a system of N particles, there exist 3N position coordinates and 3N momentum coordinates. A single particle in phase space is known as a phase point, and the space occupied by it is known as µ-space.
Volume Element of µ-Space
4. Consider a particle having position and momentum coordinates in the ranges x to x + dx, y to y + dy, z to z + dz and Px to Px + dPx, Py to Py + dPy, Pz to Pz + dPz.
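A worked statement of the corresponding volume element of µ-space, assuming the intervals just quoted, is:

```latex
% Elementary volume of mu-space spanned by the position and momentum intervals above
d\tau = dx\,dy\,dz\,dP_x\,dP_y\,dP_z,
```

so each phase point lies inside an elementary cell dτ built from the three position and the three momentum intervals.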
• To implement the k-means clustering algorithm in Python.
• To determine the ideal number of clusters by implementing the corresponding code.
• To understand how to visualize clusters using plots.
• To create the dendrogram and find the optimal number of clusters for agglomerative hierarchical clustering.
• To compare results of k-means clustering with agglomerative hierarchical clustering.
• To implement clustering through various case studies.
13.1 Implementation of k-means Clustering and Hierarchical Clustering
In the previous chapter, we discussed various clustering algorithms. We learned that clustering algorithms are broadly classified into partitioning methods, hierarchical methods, and density-based methods. The k-means clustering algorithm follows the partitioning method; agglomerative and divisive algorithms follow the hierarchical method, while DBSCAN is based on the density-based clustering method.
In this chapter, we will implement each of these algorithms through various case studies, following a step-by-step approach. You are advised to perform all these steps on your own on the datasets mentioned in this chapter.
The k-means algorithm is a partitioning method and an unsupervised machine learning (ML) algorithm used to identify clusters of data items in a dataset. It is one of the most prominent ML algorithms, and its implementation in Python is quite straightforward. This chapter will consider three case studies, i.e., the mall customers shopping dataset, the U.S. arrests dataset, and the popular Iris dataset. Through these case studies, we will understand the significance of the k-means clustering technique and implement it in Python. Along with the clustering of data items, we will also discuss ways to find the optimal number of clusters. To compare the results of the k-means algorithm, we will also implement hierarchical clustering for these problems.
We will kick-start the implementation of the k-means algorithm in Spyder IDE using the following steps.
Step 1: Importing the libraries and the dataset—The dataset for the respective case study would be downloaded, and then the required libraries would be imported.
Step 2: Finding the optimal number of clusters—We will find the optimal number of clusters by the elbow method for the given dataset.
Step 3: Fitting k-means to the dataset—A k-means model will be prepared by training the model over the acquired dataset.
Step 4: Visualizing the clusters—The clusters formed by the k-means model would then be visualized in the form of scatter plots.
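A minimal sketch of Steps 1-4, using scikit-learn's bundled Iris dataset rather than a downloaded file (the plotted columns and the final choice k = 3 are illustrative assumptions, not the chapter's exact listing):

```python
# Sketch of Steps 1-4 with scikit-learn; dataset and plotted columns are illustrative choices.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data                                        # Step 1: dataset

wcss = []                                                   # Step 2: elbow method
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(km.inertia_)                                # within-cluster sum of squares
plt.plot(range(1, 11), wcss, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("WCSS")
plt.show()                                                  # the 'elbow' suggests a suitable k

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)   # Step 3: fit k-means
labels = kmeans.fit_predict(X)

plt.scatter(X[:, 0], X[:, 1], c=labels)                     # Step 4: visualize two features
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            c="red", marker="x", label="centroids")
plt.xlabel("sepal length (cm)")
plt.ylabel("sepal width (cm)")
plt.legend()
plt.show()

# For comparison with agglomerative hierarchical clustering, a Ward dendrogram:
from scipy.cluster.hierarchy import dendrogram, linkage
dendrogram(linkage(X, method="ward"))
plt.xlabel("samples")
plt.ylabel("merge distance")
plt.show()
```

The dendrogram's longest vertical gaps suggest where to cut for the number of clusters, which can then be compared with the elbow-method choice used for k-means.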
Humans have had a lengthy history of understanding electricity and magnetism. The tangible characteristics of light have also been studied. But in contrast to optics, electricity and magnetism, now known collectively as electromagnetics, were believed to be governed by different physical laws. This makes sense because optical physics as it was previously understood differs significantly from the physics of electricity and magnetism. For instance, the ancient Greeks and Asians were aware of lodestone between 600 and 400 BC. The compass has been used in China since 200 BC. The Greeks described static electricity as early as 400 BC. But these curiosities had no real practical effect until the invention of telegraphy. The voltaic cell, or galvanic cell, was created by Luigi Galvani and Alessandro Volta in the late 1700s, which led to the development of telegraphy. It quickly became clear that information could be transmitted using just two wires attached to a voltaic cell, and this possibility prompted the development of telegraphy by the early 1800s. To learn more about the characteristics of electricity and magnetism, Andre-Marie Ampere (1823) and Michael Faraday (1838) conducted experiments; Ampere's law and Faraday's law are consequently named after them. To better understand telegraphy, Kirchhoff's voltage and current laws were also established in 1845. Despite these laws, the data transmission mechanism was not well understood, and the cause of the distortion of the transmitted signal was unknown. The ideal signal would alternate between ones and zeros, but the digital signal quickly lost its shape along a data transmission line.
CHANCE PERMEATES OUR physical and mental universe. While the role of chance in human lives has had a longer history, starting with the more authoritative influence of the nobility, the more rationally sound theory of probability and statistics has come into practice in diverse areas of science and engineering starting from the early to mid-twentieth century. Practical applications of statistical theories proliferated to such an extent in the previous century that the American government-sponsored RAND corporation published a 600-page book that wholly consisted of a random number table and a table of standard normal deviates. One of the primary objectives of this book was to enable a computer-simulated approximate solution of an exact but unsolvable problem by a procedure known as the Monte Carlo method devised by Fermi, von Neumann, and Ulam in the 1930s–40s.
Statistical methods are the mainstay of conducting modern scientific experiments. One such experimental paradigm is known as a randomized control trial, which is widely used in a variety of fields such as psychology, drug verification, testing the efficacy of vaccines, agricultural sciences, and demography. These statistical experiments require sophisticated sampling techniques in order to nullify experimental biases. With the explosion of information in the modern era, the need to develop advanced and accurate predictive capabilities has grown manifold. This has led to the emergence of modern artificial intelligence (AI) technologies. Further, climate change has become a reality of modern civilization. Accurate prediction of weather and climatic patterns relies on sophisticated AI and statistical techniques. It is impossible to think of a modern economy and social life without the influence and role of chance, and hence without the influence of technological interventions based on statistical principles. We must begin this journey by learning the foundational tenets of probability and statistics.
EMPIRICAL TECHNIQUES rely on abstracting meaning from observable phenomena by constructing relationships between different observations. This process of abstraction is facilitated by appropriate measurements (experiments), suitable organization of data generated by measurements, and, finally, rigorous analysis of the data. The latter is a functional exercise that synthesizes information (data) and theory (model) and enables prediction of hitherto unobserved phenomena.1 It is important to underscore that a good theory (model) that explains a certain phenomenon well by appealing to a set of laws and conditions is expected to be a good candidate for predicting the same using reliable data. For example, a good model for the weight of a normal human being is w = m * h, where w and h refer to weight and height of the person, and m can be set to unity if appropriate units are chosen. A rational explanation of such a formula for weight based on anatomical considerations is perhaps very reasonable. From an empirical standpoint, if we collect height and weight data of normal humans, we will notice that a linear model of the form w = m * h represents the data reasonably well and may be used to predict the weight of the person based on the height of the person. This fact ascertains a functional symmetry between explanation and prediction. Therefore, a good predictive model must automatically be able to explain the data (and related events) well.
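As an illustrative sketch only, with synthetic data generated around the model rather than real measurements, fitting w = m * h by least squares could look like:

```python
# Fit w = m * h by least squares on synthetic (generated, not real) height/weight data.
import numpy as np

rng = np.random.default_rng(0)
h = rng.uniform(1.5, 1.9, size=50)            # synthetic 'heights'
w = 1.0 * h + rng.normal(0.0, 0.05, size=50)  # synthetic 'weights' scattered around w = h

m = np.dot(h, w) / np.dot(h, h)               # least-squares slope for a line through the origin
print(f"fitted m = {m:.3f}")                  # close to 1, recovering the generating model
print(f"predicted w at h = 1.75: {m * 1.75:.3f}")
```

The fitted slope recovers the unit value used to generate the data, which is the sense in which such a model both explains the observations and predicts new ones.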