In this chapter, the theory and properties of single-view camera geometry are discussed. We consider the principle of image formation in optical cameras and apply it to relate the three-dimensional (3-D) world with the image points on a two-dimensional (2-D) plane.
11.1 Pinhole camera
A mapping of a point in a 3-D coordinate space to a point on a 2-D plane has already been discussed in the previous chapter while explaining the canonical configuration of a 2-D projective space. We relate these concepts to a pinhole-camera-based imaging system. Consider a 3-D scene point, 𝑷, as shown in Fig. 11.1. The corresponding image point, 𝒑′, is the point of intersection of the image plane and the straight line from 𝑷 that passes through the center of the lens, 𝑂. By the same analogy, consider the formation of an image in front of the camera center, where the corresponding image plane is placed at the same distance in front of the lens as the sensor is placed behind it. In this case, the images obtained on this image plane are of the same size as on the sensor, and there is a simple transformation of coordinates from point 𝒑′ to point 𝒑. Thus we may directly relate the scene point 𝑷 with the image point 𝒑. Placing the image plane in front of the camera, on the same side as the viewed objects, is a convenient way of handling the coordinate system of image points.
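To make this mapping concrete, here is a minimal sketch in Python of the pinhole projection onto an image plane placed at a focal distance f in front of the camera center; the focal length and the sample scene point are illustrative assumptions, not values taken from Fig. 11.1.

```python
import numpy as np

def project_pinhole(P, f):
    """Project a 3-D point P = (X, Y, Z) onto the image plane placed
    at distance f in front of the camera center (pinhole model)."""
    X, Y, Z = P
    # Perspective division: the ray from P through the center O
    # meets the plane z = f at (f*X/Z, f*Y/Z, f).
    return np.array([f * X / Z, f * Y / Z])

# Example: a scene point 4 units in front of a camera with f = 1
P = np.array([2.0, 1.0, 4.0])
p = project_pinhole(P, f=1.0)
print(p)  # [0.5  0.25]
```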
Very often, the term “chemical potential” is not well understood by students. Even after studying thermal physics and statistical mechanics several times, students remain confused about the meaning of the term “chemical potential”. This quantity is represented by the letter 𝜇. Typically, students learn the definition of 𝜇, its properties, its derivation in some simple cases, and its consequences, and work out numerical problems on it. Still, students ask the questions: “What is the chemical potential?” and “What does it actually mean?” Attempts are made in this appendix to clarify the meaning of this physical quantity 𝜇 with some simple examples.
The concept of chemical potential first appeared in the classical works of J. W. Gibbs. Since then, it has remained a subtle concept in thermodynamics and statistical mechanics. The meaning and significance of the chemical potential 𝜇 are not as easy to grasp as those of thermodynamic concepts such as temperature 𝑇, internal energy 𝐸, or even entropy 𝑆. In fact, the chemical potential 𝜇 has acquired a reputation as a concept that is not easy to grasp even for the experienced physicist. The chemical potential was introduced by Gibbs within the context of an extensive exposition on the foundations of statistical mechanics. In this exposition, Gibbs considered a grand canonical ensemble of systems in which the exchange of particles occurs with the surroundings. In this description, the chemical potential 𝜇 appears as a constant required for a necessary closure of the corresponding set of equations. Thus, a fundamental connection with thermodynamics is achieved by observing that the unknown constant 𝜇 is indeed related to standard thermodynamic functions such as the Helmholtz free energy 𝐹 = 𝐸 − 𝑇𝑆 or the Gibbs thermodynamic potential 𝐺 = 𝐹 + 𝑝𝑉 through their first derivatives; 𝜇, in fact, appears as the variable conjugate to the number of particles 𝑁.
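For reference, the first-derivative relations alluded to above can be written explicitly; this is the standard textbook form, stated here as a supplement rather than quoted from Gibbs:

```latex
\mu \;=\; \left(\frac{\partial F}{\partial N}\right)_{T,V}
     \;=\; \left(\frac{\partial G}{\partial N}\right)_{T,p},
\qquad F = E - TS, \quad G = F + pV .
```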
4A.1 Comments about chemical potential
We are familiar with the term potential as used in mechanical and electrical systems. A capacity factor is associated with each potential term. For example, in a mechanical system, mass is the capacity factor associated with the gravitational potential 𝑔(ℎ₂ − ℎ₁), where ℎ₁ and ℎ₂ are the corresponding heights, and the gravitational work done is given by 𝑚𝑔(ℎ₂ − ℎ₁).
After careful study of this chapter, students should be able to do the following:
LO1: Describe strain energy in different loading conditions.
LO2: Explain the principle of superposition and reciprocal relations.
LO3: Apply the first theorem of Castigliano.
LO4: Analyze the theorem of virtual work.
LO5: Apply the dummy load method.
12.1 INTRODUCTION [LO1]
There are, in general, two approaches to solving equilibrium problems in solid mechanics: Eulerian and Lagrangian. The first approach deals with vectors such as forces and moments, and uses the static equilibrium and compatibility equations to solve problems. In the second approach, scalars such as work and energy are used, and solutions are based on the principle of conservation of energy. There are many situations where the second approach is more advantageous, and some powerful methods based on it, such as the method of virtual work, are used here.
A rigorous treatment of the Eulerian and Lagrangian approaches to solid mechanics problems is much more involved. However, here we have chosen to describe them in a simplified manner, suitable as a prologue to the present discussion on energy methods.
In mechanics, energy is defined as the capacity to do work, and this may exist in different forms. We are concerned here with elastic strain energy, which is a form of potential energy stored in a body on which some work is done by externally applied forces. Here it is assumed that the material remains elastic while the work is done, so that all the energy is recoverable and no permanent deformation occurs. This means that strain energy U = work done. If the load is applied gradually in straining the material, the load–extension graph is as shown in Figure 12.1, and we may write U = ½ Pδ.
The hatched portion of the load–extension graph represents the strain energy and the unhatched portion ABD represents the complementary energy that is utilized in some advanced energy methods of solution.
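As a short worked example (a standard result under the stated linear-elastic assumption, not taken from Figure 12.1): for a bar of length L, cross-sectional area A, and elastic modulus E carrying a gradually applied axial load P, the extension is δ = PL/AE, and hence

```latex
U \;=\; \int_{0}^{\delta} P \,\mathrm{d}\delta
  \;=\; \tfrac{1}{2} P \delta
  \;=\; \frac{P^{2} L}{2 A E}.
```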
It is a remarkable fact that the second law of thermodynamics has played in the history of science a fundamental role far beyond its original scope. Suffice it to mention Boltzmann's work on kinetic theory, Planck's discovery of quantum theory or Einstein's theory of spontaneous emission, which were all based on the second law of thermodynamics.
Ilya Prigogine
Learning Outcomes
After reading this chapter, the reader will be able to
Demonstrate the meaning of reversible, irreversible, and quasi-static processes used in thermodynamics
Explain heat engines, and their efficiency and indicator diagram
Formulate the second law of thermodynamics and apply it to various thermodynamic processes
Develop an idea of entropy and its variation in various thermodynamic processes
State and compare various statements of the second law of thermodynamics
Elucidate the thermodynamic scale of temperature and its equivalence to the perfect gas scale
Explain the principle of increase of entropy
Understand the third law of thermodynamics and explain the significance of unattainability of absolute zero
Solve numerical problems and multiple choice questions on the second law of thermodynamics
9.1 Introduction
The first law of thermodynamics states that only those processes can occur in nature in which the law of conservation of energy holds good. But our daily experience shows that this cannot be the only restriction imposed by nature, because there are many possible thermodynamic processes that conserve energy but do not occur in nature. For example, when two objects are in thermal contact with each other, the heat never flows from the colder object to the warmer one, even though this is not forbidden by the first law of thermodynamics. This simple example indicates that there are some other basic principles in thermodynamics that must be responsible for controlling the behavior of natural processes. One such basic principle is contained in the formulation of the second law of thermodynamics.
This principle limits the availability of energy within a source and makes clear that energy cannot be passed arbitrarily from one object to another, just as heat cannot be transferred from a colder object to a hotter one without external work being done. Similarly, cream cannot be separated from coffee without a chemical process that changes the physical characteristics of the system or its surroundings. Further, the internal energy stored in the air cannot be used to propel a car, nor can the energy of the ocean be used to run a ship, without disturbing something (the surroundings) around that object.
Classification, characteristics, and basic design methods of certain types of networks that perform filtering on the basis of signal frequency are briefly discussed in this chapter. Filters that use only passive elements, known as passive filters, were the only kind of filters in earlier days. Passive filters are still used in many specific cases but have been replaced by active filters (using at least one active device) in the majority of applications. One essential reason for the changeover from passive to active filters was the inability to realize practically feasible inductors in integrated circuit (IC) form over a large frequency range of operation. Hence, structures that replaced (simulated) inductors using resistance, capacitance, and op-amps became synonymous with active filters, and these were called active RC filters. The use of op-amps is still dominant, but other active devices are also used in a big way.
Another important approach to analog filter realization has emerged in the form of switched-capacitor (SC) circuits. An important feature of SC circuits is that they use only capacitors, op-amps, and electronic switches. Consequently, the performance parameters of the circuit depend on capacitor ratios and the switching frequency. It is to be noted that very small capacitances can be used, consuming less chip area and giving better practical results, as capacitors in ratio form can be fabricated with much tighter tolerance.
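To make the dependence on capacitor ratio and switching frequency concrete, the sketch below uses the standard first-order approximation in which a capacitor switched at frequency fs emulates a resistor; all component values are illustrative assumptions, not taken from this chapter.

```python
import math

# A capacitor C1 toggled at switching frequency fs transfers charge
# q = C1 * V every cycle, giving an average current i = C1 * V * fs,
# so the switch-capacitor pair behaves like a resistor R_eq = 1/(fs*C1).
fs = 100e3    # switching frequency in Hz (assumed value)
C1 = 1e-12    # switched capacitor in F (assumed value)
C2 = 10e-12   # integrating capacitor in F (assumed value)

R_eq = 1.0 / (fs * C1)                   # emulated resistance: 10 Mohm
f_c = 1.0 / (2 * math.pi * R_eq * C2)    # first-order RC low-pass cutoff

# Equivalently f_c = (fs / (2*pi)) * (C1 / C2): the cutoff depends only
# on the capacitor ratio C1/C2 and fs, which is why SC filters can be
# fabricated with precise, repeatable responses.
print(f"R_eq = {R_eq:.2e} ohm, f_c = {f_c:.1f} Hz")
```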
• To comprehend the concept of multiple linear regression.
• To understand the process of handling nominal or categorical attributes and the concept of label encoding, one-hot encoding, and dummy variables.
• To build the multiple regression model.
• To understand the need for the P-value, its concept, and the process to calculate it.
• To comprehend various variable selection methods.
• To comprehend the concept of polynomial linear regression.
• To understand the importance of the degree of independent variables.
7.1 Introduction to Multiple Linear Regression
In simple linear regression, we have one dependent variable and only one independent variable. As we discussed in the previous chapter, the stipend of a research scholar is dependent on his years of research experience.
But most of the time, the dependent variable is influenced by more than one independent variable. When the dependent variable depends on more than one independent variable, it is known as multiple linear regression.
Figure 7.1 indicates the difference between simple and multiple regressions mathematically.
In Figure 7.1(b), b0 is the constant while x1, x2, …, xn are the independent variables on which the dependent variable y depends. Note that, mathematically, multiple linear regression is derived similarly to simple linear regression. The major difference is that in multiple linear regression we have multiple independent variables, x1 to xn, instead of the single independent variable x1 of simple linear regression. It is also important to note that we have b1 to bn as the coefficients of these independent variables, respectively.
The price prediction of a house can be viewed as a multiple linear regression problem, where factors such as plot size, number of bedrooms, and location play a significant role in determining the price. Unlike simple linear regression, which relies solely on plot size, multiple linear regression considers various features to accurately estimate the house price.
Let us understand this concept further with a real-life example. Consider a case where a venture capitalist has hired a data scientist to analyze different companies’ data to predict their profit. Identifying the company with the maximum profit will help the venture capitalist select the company in which he could invest in the near future to earn the maximum profit.
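As a minimal sketch of how such a model might be fitted in Python with scikit-learn (the feature columns and numbers here are invented placeholders, not the venture-capital dataset itself):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: each row describes a company by several
# independent variables, e.g., [R&D spend, marketing spend, admin cost].
X = np.array([
    [165_000, 471_000, 136_000],
    [162_000, 443_000, 151_000],
    [153_000, 407_000, 101_000],
    [144_000, 383_000, 118_000],
    [142_000, 366_000,  91_000],
])
y = np.array([192_000, 191_000, 190_000, 182_000, 166_000])  # profit

model = LinearRegression().fit(X, y)   # fits y = b0 + b1*x1 + ... + bn*xn
print(model.intercept_, model.coef_)   # b0 and [b1, b2, b3]
print(model.predict([[150_000, 400_000, 120_000]]))  # profit estimate
```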
This chapter provides an insight into some of the general image transforms that offer an alternative representation of images and videos. A few of their properties and applications related to image compression and reconstruction are also discussed. Other forms of representation that depend on data, like principal component analysis and sparse representation, are provided as an extension to these representations. Techniques of computing basis functions and dictionary learning are introduced in this chapter.
2.1 Image transforms
Consider a continuous function, 𝑓(𝑥), in one-dimensional (1-D) space, where 𝑥 ∈ ℝ. Consider a set, B, of 1-D basis functions, whose functional values may be either in the real or in the complex domain. This is represented as in Eq. 2.1.
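As an illustration of expanding a signal over a basis set, the discrete sketch below uses the orthonormal DCT-II basis as an assumed example (it is not the specific basis of Eq. 2.1):

```python
import numpy as np

N = 8
n = np.arange(N)

# Orthonormal DCT-II basis: row k holds the k-th basis function
# b_k[n] = a_k * cos(pi*(2n+1)k / (2N)), a_0 = sqrt(1/N), a_k = sqrt(2/N).
B = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
B *= np.sqrt(2.0 / N)
B[0] *= np.sqrt(0.5)

f = np.sin(2 * np.pi * n / N)   # a sample 1-D signal
c = B @ f                        # transform coefficients <f, b_k>
f_rec = B.T @ c                  # reconstruction from the basis

print(np.allclose(B @ B.T, np.eye(N)))  # True: basis is orthonormal
print(np.allclose(f_rec, f))            # True: perfect reconstruction
```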
The term “nano” is derived from a Greek word that means “dwarf” (small) and is represented by the symbol “n.” As a unit prefix, it signifies “one billionth,” denoting a factor of 10⁻⁹ or 0.000000001. It is primarily used with the metric system, as illustrated in Figures 8.1 and 8.2. For example, one nanometer is equal to 1 × 10⁻⁹ m, and one nanosecond is equal to 1 × 10⁻⁹ s. It is frequently encountered in science and electronics, particularly for prefixing units of time and length.
HISTORY OF NANOTECHNOLOGY
The origin of nanotechnology is often attributed to American physicist Richard Feynman's speech, “There's Plenty of Room at the Bottom,” delivered on December 29, 1959, at an American Physical Society meeting at Caltech; this lecture served as the intellectual inspiration for the field. The term “nanotechnology” was first used at a conference in 1974 by the Japanese scientist Norio Taniguchi of Tokyo University of Science, to describe semiconductor techniques with characteristic control on the order of a nanometer, such as thin-film deposition and ion-beam milling. According to his definition, “nanotechnology” is primarily the processing, separation, consolidation, and deformation of materials by a single atom or molecule.
Clustering is a task of organizing objects into groups whose members are similar in some way. A cluster is a collection of objects that are similar to each other, but dissimilar to the objects belonging to other clusters. In other words, a cluster is a group of objects with loosely defined similarity among them, which may have the potential to form a class. A class is a known group of objects that are described by similar characteristics, and classification is the task of assigning a defined class to an object. Image segmentation is also a problem that is similar to clustering, where the clusters are formed by groups of pixels that are similar in some context. In image segmentation, homogeneous regions in an image may be clustered to derive segments in the image. These segments represent clusters. An example of image segmentation is shown in Fig. 6.1, where the foreground is represented by a mushroom and the background is represented by the humus substance around it. In this case, the image is primarily clustered into two regions, which are shown by a white solid contour (foreground) and a white dashed contour (background).
The main motivations of clustering techniques are as follows; a brief illustrative sketch follows this list.
• To find representative samples of homogeneous groups in the given data, which would reduce the data transmission and storage requirements in certain applications. Here, the data is represented by a smaller set of representative samples that capture the characteristics of the total data.
• To discover natural groups or categories in the data, which may be used to describe the data samples by their unknown properties.
• To find relevant groups in the data, which helps draw attention toward the major groups in the distribution. These groups form the major clusters in a given context, like segments in an image.
• To detect unusual data objects, i.e., the outliers in the data that deviate from the collective characteristics of groups of data in a given context.
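As a minimal, hypothetical illustration of finding representative samples (here with k-means, one common clustering technique; the data is synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic homogeneous groups of points in 2-D
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(4, 4), scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)  # representative sample of each group
print(kmeans.labels_[:5])       # cluster membership of first few points
```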
• To understand the conceptual framework for the implementation of recurrent neural network (RNN) using the long short-term memory model.
• To perform data pre-processing on the time series data.
• To install and import TensorFlow, Keras, and other desired packages.
• To build the architecture of RNN.
• To learn the procedure for compiling RNN.
• To perform a fit operation on the compiled RNN model.
• To prepare the test dataset in the desired data structure.
• To perform visualization and analysis of the results.
21.1 Introduction
In recent years, the recurrent neural network (RNN) has become one of the most prominent neural networks for predicting values from time series data. A time series is a collection of data points ordered at even intervals in time. Hence, time series analysis is useful when we want to study how some parameter changes over time. In an RNN, the output from the previous step is fed into the input of the current state. Thus, an RNN can predict the next letter of a word, the next word of a sentence, or diesel prices, stock prices, or the weather. In all these tasks, there is a need to remember previous values; consequently, the output at time t+1 depends on the output at time t. In this chapter, we will implement an RNN to predict the trend of diesel prices.
We will implement the RNN through long short-term memory (LSTM) units. We will use stacked layers of LSTM because plain RNNs suffer from the issues of vanishing and exploding gradients, whereas LSTMs contain gates through which they can regulate the flow of information. Because of this added advantage, LSTMs are commonly used for implementing RNNs.
21.2 Implementation of RNN in Python
Problem Statement: In this experiment, our primary objective is to predict the trend of diesel prices in Delhi, i.e., we wish to predict whether diesel prices in Delhi will witness an upward or downward trend in the near future. It is difficult to predict the exact future price of diesel on a particular day, as our dataset for this problem is quite small. Hence, using an RNN model (implemented through stacked layers of LSTM), we will predict an approximate value of future diesel prices in Delhi, which helps to predict the trend (upward or downward) accurately.
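A minimal sketch of the stacked-LSTM architecture this chapter builds is shown below; the layer sizes, window length, placeholder tensors, and training settings are illustrative assumptions, not the chapter's exact configuration.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

window = 60  # assumed: each sample holds the previous 60 days of prices

# Placeholder training tensors shaped (samples, timesteps, features);
# in the chapter these would come from the pre-processed price series.
X_train = np.random.rand(500, window, 1)
y_train = np.random.rand(500)

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(window, 1)),
    Dropout(0.2),
    LSTM(50),                 # last LSTM layer returns only the final state
    Dropout(0.2),
    Dense(1),                 # regression output: next-day price
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)
```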
1A Calculation of the number of states accessible to an ideal gas
We consider an ideal gas enclosed in a container of volume 𝑉 at a temperature 𝑇. The gas consists of 𝑁 molecules, each of mass 𝑚. Suppose the total energy of the system lies in a narrow range from (𝐸 − 𝛿𝐸) to 𝐸. Any molecule of the ideal gas lying within this energy range is described by a state having an elementary volume 𝑑³𝑟 𝑑³𝑝 in phase space, where 𝑑³𝑟 and 𝑑³𝑝 are, respectively, the elementary volumes in the position and momentum coordinates of the molecules of the gaseous system.
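For orientation, this counting leads to the standard ideal-gas result, quoted here with constant factors suppressed (a textbook fact; the detailed derivation occupies the rest of the appendix):

```latex
% Number of states accessible to N molecules in volume V with
% total energy in the narrow range (E - \delta E, E):
\Omega(E) \;\propto\; V^{N} E^{3N/2}.
```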
• To understand the different steps to perform for implementing classifier algorithms in Python.
• To import the desired libraries and dataset.
• To split the dataset into training and testing datasets.
• To perform feature scaling on the data.
• To build the different classification models and predict the test set results.
• To make the confusion matrix for the result analysis.
• To visualize the training and testing set results.
• To implement important classification algorithms like decision tree, random forest, naive Bayes, k-NN, logistic regression, and support vector machine.
11.1 Introduction to Classification Algorithms and Steps for Its Implementation
In this chapter, we will implement several classification algorithms of machine learning (ML) in Python. For the same, we will consider the Purchase Alexa Dataset, which contains multiple users’ information and their decision to buy Alexa in terms of “YES” or “NO”. The dataset features several independent columns and a labeled class attribute. Our main objective is to predict whether a user will buy Alexa or not (class attribute). The class predictions are based on the input attributes composed of several independent columns.
To better understand, we will use the same dataset, i.e., Purchase Alexa Dataset, to implement all the classification algorithms such as decision tree, random forest, naive Bayes, k-NN, logistic regression, and support vector machine (SVM). This will allow us to compare the results and the working of different classification algorithms. We will start by creating a pre-processing data template for our dataset, which shall remain common for all the classification models. Later, we will implement a classification model for each algorithm mentioned above. Then the performance of the model under study would be analyzed using the confusion matrix and the performance metrics. Finally, the training and testing results would be visualized graphically to better understand the working of classifiers.
A stepwise approach to implementing a classification model in Python is as follows; a condensed code sketch of these initial steps appears after the list:
Step 1: Importing libraries—Importing of libraries is required to perform various functions and steps related to data pre-processing and building the classifiers.
Step 2: Loading dataset—Download and load the desired dataset to Spyder.
Step 3: Splitting the dataset into training and testing datasets—Splitting the loaded dataset into training and testing subsets.
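A condensed, hypothetical sketch of Steps 1–3 plus the feature scaling mentioned earlier (the file name and column layout are placeholders, not the actual Purchase Alexa Dataset):

```python
# Step 1: importing libraries
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Step 2: loading the dataset (placeholder file name)
dataset = pd.read_csv("purchase_alexa.csv")
X = dataset.iloc[:, :-1].values   # independent columns
y = dataset.iloc[:, -1].values    # class attribute ("YES"/"NO")

# Step 3: splitting into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Feature scaling (standardization), fitted on the training data only
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```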
Classical mechanics is mainly based on Newton's laws of motion and gravitation. Initially, it was thought that Newton's second law of motion was valid and applicable at all speeds. But new experimental evidence showed that Newton's second law of motion is valid at low speeds and breaks down when an object moves at high speeds comparable to the velocity of light. This failure of classical mechanics led to the development of the special theory of relativity by the young physicist Albert Einstein in 1905, which showed that everything in the universe is relative and nothing is absolute. Relativity connects space and time, matter and energy, and electricity and magnetism, connections that are remarkable and central to our understanding of the physical universe.
The special theory of relativity is applicable to all branches of modern physics: high-energy physics, optics, quantum mechanics, semiconductor devices, atomic theory, nanotechnology, and many other branches of science and technology.
The theory of relativity has two parts: the special theory of relativity and the general theory of relativity. The special theory of relativity deals with inertial frames of reference, while the general theory of relativity deals with accelerated frames of reference. Some common technical terms that are frequently used in relativistic mechanics are as follows:
1. Particle: A particle is a tiny bit of matter with almost no linear dimensions, considered to be located at a single point. It is defined by its mass and charge. Examples include the electron, proton, and photon, among others.