Our lived experiences are punctuated by events that are sometimes the result of our purposeful intentions and at other times outcomes of pure chance. Even at an abstract level, it is a very human endeavor to deduce meaning from seemingly random observations, an exercise whose primary objective is to derive a causal structure in observed phenomena. In fact, the whole intellectual pursuit that differentiates us from other beings can be understood through our inner urge to discover the very purpose of our existence and the conditions that make it possible. This eternal play between chance episodes and purposeful volition manifests in diverse situations that I have labored to recreate through computer simulations of realistic events. This play has a dual role: first, it binds together the flow of our varied experiences and, second, it offers us a perspective from which to assimilate our understanding of the events happening around us that affect us. To appreciate this play of chance and purpose, it is essential that students and readers have a conceptual grounding in probability, statistics, and stochastic processes. Therefore, several playful computer simulations and projects are interlaced with theoretical foundations and numerical examples, both solved problems and exercises. In this way, the presentation in this book remains true to its spirit of inviting thoughtful readers into the various aspects of this area of study.
Historical remark
The advent of a rigorous framework for studying probability and statistics dates back to the eighth century AD and is documented in the works of Al-Khalil, an Arab philologist. This branch of mathematics has continued to develop, with major contributions from the Soviet mathematician Andrey N. Kolmogorov, who established the modern measure-theoretic foundations of probability and statistical theory in the twentieth century.
This chapter provides an insight into some of the general image transforms that offer an alternative representation of images and videos. A few of their properties and applications related to image compression and reconstruction are also discussed. Other forms of representation that depend on the data, such as principal component analysis and sparse representation, are presented as an extension of these representations. Techniques for computing basis functions and dictionary learning are introduced in this chapter.
2.1 Image transforms
Consider a continuous function, 𝑓(𝑥), in one-dimensional (1-D) space, where 𝑥 ∈ ℝ. Consider a set, B, of 1-D basis functions, whose functional values may lie either in the real or in the complex domain. This is represented as in Eq. 2.1.
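As a numerical aside, the idea of expanding a function over a set of basis functions can be sketched as follows. The orthonormal cosine (DCT-II) basis used here is one illustrative choice, not necessarily the basis set B of Eq. 2.1, and the sampled signal is arbitrary:

```python
import numpy as np

# A minimal sketch: expand a sampled 1-D signal in an orthonormal
# cosine (DCT-II) basis and reconstruct it, illustrating the idea of
# representing f(x) as a weighted sum of basis functions.
N = 64
x = np.arange(N)
f = np.sin(2 * np.pi * x / N) + 0.5 * np.cos(6 * np.pi * x / N)

# Build the N x N orthonormal DCT-II matrix B (each row is a basis vector).
k = np.arange(N).reshape(-1, 1)
B = np.cos(np.pi * (2 * x + 1) * k / (2 * N))
B[0] *= np.sqrt(1 / N)
B[1:] *= np.sqrt(2 / N)

coeffs = B @ f          # forward transform: projections onto the basis
f_rec = B.T @ coeffs    # inverse transform: weighted sum of basis vectors

print(np.allclose(f, f_rec))  # orthonormality gives perfect reconstruction
```

Because the basis is orthonormal, the analysis (projection) and synthesis (weighted sum) steps are exact inverses of each other, which is the property transforms of this kind exploit for compression.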
The term “nano” is derived from a Greek word that means “dwarf” (small) and is represented by the symbol “n.” As a unit prefix, it signifies “one billionth,” denoting a factor of 10⁻⁹ or 0.000000001. It is primarily used with the metric system, as illustrated in Figures 8.1 and 8.2. For example, one nanometer is equal to 1 × 10⁻⁹ m, and one nanosecond is equal to 1 × 10⁻⁹ s. It is frequently encountered in science and electronics, particularly for prefixing units of time and length.
HISTORY OF NANOTECHNOLOGY
The origin of nanotechnology is often attributed to the American physicist Richard Feynman's speech, “There's Plenty of Room at the Bottom,” which he gave on December 29, 1959, at an American Physical Society meeting at Caltech; this lecture served as the intellectual inspiration for the field. The term “nanotechnology” was first used at a conference in 1974 by the Japanese scientist Norio Taniguchi of Tokyo University of Science to describe semiconductor techniques with characteristic control on the order of a nanometer, such as thin-film deposition and ion-beam milling. According to his definition, “nanotechnology” mainly consists of the processing, separation, consolidation, and deformation of materials by one atom or one molecule.
Clustering is the task of organizing objects into groups whose members are similar in some way. A cluster is a collection of objects that are similar to each other but dissimilar to the objects belonging to other clusters. In other words, a cluster is a group of objects with a loosely defined similarity among them, which may have the potential to form a class. A class is a known group of objects that are described by similar characteristics, and classification is the task of assigning a defined class to an object. Image segmentation is a problem similar to clustering, where the clusters are formed by groups of pixels that are similar in some context. In image segmentation, homogeneous regions in an image may be clustered to derive segments in the image; these segments represent clusters. An example of image segmentation is shown in Fig. 6.1, where the foreground is a mushroom and the background is the humus substance around it. In this case, the image is primarily clustered into two regions, shown by a white solid contour (foreground) and a white dashed contour (background).
The main motivations of clustering techniques are as follows.
• To find representative samples of homogeneous groups in the given data, which reduces the data transmission and storage requirements in certain applications. Here, the data is represented by a smaller set of representative samples that capture the characteristics of the total data.
• To discover natural groups or categories in the data, which may be used to describe the data samples by their unknown properties.
• To find relevant groups in the data, which helps draw attention toward the major groups in the data distribution. These groups form the major clusters in a given context, like segments in an image.
• To detect unusual data objects, the outliers in the data, that deviate from the collective characteristics of groups of data in a given context.
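A minimal clustering sketch makes these ideas concrete. The k-means procedure below, applied to illustrative one-dimensional "pixel intensity" data (the values and cluster count are assumptions for the example), alternates an assignment step and a centroid-update step, the same idea used to cluster pixel values into foreground and background segments:

```python
import numpy as np

# Illustrative data: two intensity populations, standing in for
# "background" and "foreground" pixel values of an image.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.2, 0.05, (100, 1)),   # darker pixels
                  rng.normal(0.8, 0.05, (100, 1))])  # brighter pixels

k = 2
centroids = data[[0, 100]]  # simple deterministic initialization (one seed per half)
for _ in range(20):
    # assignment step: each sample joins its nearest centroid's cluster
    labels = np.argmin(np.abs(data - centroids.T), axis=1)
    # update step: each centroid moves to the mean of its assigned samples
    centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print(np.sort(centroids.ravel()))  # approaches the two intensity modes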
The effectiveness of comparative studies resides in the breadth and suitability of the cases used in pursuit of a research question. We have selected thirteen societies to develop a comparative understanding of how premodern economies were organized and operated. These span a broad range of societies in terms of organization, complexity, and their place in time and space. They include societies from around the world: six from the Americas and seven from Eurasia and Africa (Figure 2.1). They are diverse in adaptation and scale, and include horticultural, foraging, pastoral, mixed economy, and sedentary agricultural groups. Examples include tribal, chiefdom, and ancient state-level societies. Despite this diversity and the historical independence of the Americas and Eurasian/African examples, commonalities exist in economic structures because of the cumulative and shared nature of economic behaviors that we hope to capture.
1A Calculation of the number of states accessible to an ideal gas
We consider an ideal gas enclosed in a container of volume 𝑉 at a temperature 𝑇. The gas consists of 𝑁 molecules, each of mass 𝑚. Suppose the total energy of the system lies in a narrow range from (𝐸 − δ𝐸) to 𝐸. Any molecule of the ideal gas lying within this energy range is described by a state occupying an elementary volume in phase space spanned by 𝑞′ᵢ and 𝑝′ᵢ, which are, respectively, the position and momentum coordinates of the molecules of the gaseous system.
Classical mechanics is mainly based on Newton's laws of motion and gravitation. Initially, it was thought that Newton's second law of motion was valid and applicable at all speeds. However, experimental evidence showed that Newton's second law holds at low speeds but breaks down when an object moves at high speeds comparable to the velocity of light. This failure of classical mechanics led to the development of the special theory of relativity by the young physicist Albert Einstein in 1905, which showed that measurements of space and time are not absolute but depend on the observer's frame of reference. Relativity connects space and time, matter and energy, and electricity and magnetism, connections that are remarkable and useful to our understanding of the physical universe.
The special theory of relativity is applicable to all branches of modern physics, including high-energy physics, optics, quantum mechanics, semiconductor devices, atomic theory, nanotechnology, and many other branches of science and technology.
The theory of relativity has two parts: the special theory of relativity and the general theory of relativity. The special theory of relativity deals with inertial frames of reference, while the general theory of relativity deals with accelerated frames of reference. Some common technical terms that are frequently used in relativistic mechanics are as follows:
1. Particle: A particle is a tiny bit of matter with almost no linear dimensions and is considered to be located at a single point. It is defined by its mass and charge. Examples include the electron, proton, and photon, among others.
Remote sensing involves measurements on a target without coming into contact with it, and it comprises techniques for collecting, storing, and processing georeferenced and geospatial data to extract valuable information. In this context, data refer to representations stored in computer memory, which can be manipulated using computers to derive meaningful insights. Remote sensing imaging systems primarily work with georeferenced images, capturing Earth's surface, environment, atmosphere, etc. These imaging systems may be carried by satellites or airborne platforms like airplanes or drones. For satellite-based imaging, the revolution of the satellite around the continuously rotating Earth allows periodic capture of images over the same area. There are two main types of imaging systems: passive and active. In passive systems, sensors detect reflected and emitted electromagnetic (EM) waves from Earth's surface, from mainly two types of energy sources, namely sunlight during the day and terrestrial heat at night. These sensors operate within specific spectral bands, converting energy into electrical signals stored as two-dimensional (2-D) images. The principle is similar to that of optical cameras. Additionally, energy from Earth's thermal emission contributes to nighttime imaging, particularly in the thermal infrared (IR) or far-IR bands. Passive remote sensing involves capturing images across various spectral bands, resulting in multispectral and hyperspectral images of specific regions on Earth.
In active imaging systems, microwave radar (RAdio Detection And Ranging) technology is utilized. A radar transmitter emits a pulse of an EM wave with a specific wavelength (in the microwave band). When this pulse strikes a target, some of its energy reflects back to the radar antenna to which the radar receiver is connected. The receiver captures information about the location and geometry of the target by recording the phase and amplitude of the returned signal. By scanning the radar beam over an area, an image of that region is formed. One of the limitations of radar imaging is that the size of the transmitting antenna must be large to obtain images of high spatial resolution.
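The antenna-size limitation can be illustrated with a small calculation. For a real-aperture radar, the azimuth resolution is roughly range × wavelength / antenna length; the wavelength and slant range below are assumed typical values, not figures from the text:

```python
# Illustrative calculation (assumed values): real-aperture azimuth
# resolution ~ slant_range * wavelength / antenna_length, which shows
# why high spatial resolution demands a large transmitting antenna.
wavelength = 0.03     # m, X-band microwave (assumed)
slant_range = 800e3   # m, typical satellite slant range (assumed)

for antenna_len in (1.0, 10.0, 100.0):  # antenna lengths in meters
    resolution = slant_range * wavelength / antenna_len
    print(f"antenna {antenna_len:6.1f} m -> azimuth resolution {resolution:9.1f} m")
```

Even a 100 m antenna, far larger than anything practical to fly, yields only a few hundred meters of resolution at satellite ranges, which is why synthetic-aperture techniques are used in practice.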
An artificial neural network (ANN) is a network of neural nodes or perceptual nodes.1 In a feed-forward neural network, each node is fed with a weighted input vector, and the net sum of the weighted vectors from several such nodes is passed through a nonlinear function, whose response is the output of that node. The layers formed by the input nodes and the output nodes are known as the input layer and the output layer, respectively. The layers formed by other nodes are known as hidden layers. A network of several such layers (along with the input and output layers) forms an ANN, as shown in Fig. 9.1.2 Conventional ANNs have very few hidden layers, usually not more than three; using only one or two hidden layers is also common in many applications. In contrast, deep neural architectures have many more layers, and even more than 100 hidden layers is not uncommon. This is one of the distinguishing characteristics of deep neural networks (DNNs) compared with an ordinary ANN.
The concepts used in deep neural computations are decades old; in fact, they involve the same neural network computations as a simple ANN model. These concepts were introduced in the 1980s, and their basic principles still remain the same. However, the boom in using deep architectures almost three decades after their proposition is mainly attributed to advancements in technology and science.
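The weighted-sum-plus-nonlinearity computation described above can be sketched in a few lines. The layer sizes, random weights, and ReLU nonlinearity here are illustrative choices, not specifics from the text:

```python
import numpy as np

# A minimal sketch of a feed-forward pass: each layer forms weighted
# sums of its inputs and passes them through a nonlinear function.
def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(1)
sizes = [4, 8, 8, 2]  # input layer, two hidden layers, output layer
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    for W, b in zip(weights, biases):
        x = relu(W @ x + b)  # weighted sum, then nonlinear response
    return x

out = forward(np.ones(4))
print(out.shape)  # a 2-element output vector
```

Making the network "deep" in this sketch is just a matter of lengthening the `sizes` list; the per-layer computation is unchanged, which is the point made above.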
• Electricity scenario in India and the need for transition to green energy in the country
• Indian pledge at CoP-26 at Glasgow and the targets of 2030
• National solar mission and major initiatives that led to exponential growth in solar energy installation in India
• Net-zero target of India and road map for achieving it
• Major solar power projects in India
• Various policies and government organizations involved in achieving the target of solar PV deployment in India
• Changes required in the grid in view of massive deployment of variable and uncertain sources of electricity
Introduction
India is now the most populous country in the world, with almost 18% of the global population. Being a developing country and one of the major evolving economies, its electricity demand is also growing. In 2018, India ranked fifth in the world in terms of installed capacity and third in terms of electricity produced. India's annual per capita electricity consumption, at about 1122 kWh, is much lower than the world average of 2674 kWh per year, but it is also one of the fastest-growing. In the last decade, the installed capacity of the Indian grid has increased by more than 200 GW. At this rate, India is set to become the biggest electric load centre in the world by 2030, with about 1.5 billion people.
Since India is the most populous developing country, its response to the climate crisis is key to the success of sustainable development and the climate protection mission. Historically, India has accounted for less than 5% of global emissions.
Just like a computer, we must remember things in the order in which entropy increases. This makes the second law of thermodynamics almost trivial. Disorder increases with time because we measure time in the direction in which disorder increases. You can't have a safer bet than that!
Stephen Hawking
Learning Outcomes
After reading this chapter, the reader will be able to
Understand various thermodynamic potentials such as internal energy, enthalpy, Helmholtz free energy, and Gibbs free energy and their applications
Calculate the magnetic work done by a paramagnetic system and understand the process of creating low temperatures using the principle of adiabatic demagnetization
Apprehend the idea of first- and second-order phase transitions and the Clausius–Clapeyron and Ehrenfest equations related to these transitions, respectively
Derive Maxwell's thermodynamic relations
Apply Maxwell's thermodynamic relations to derive the energy equations, the TdS equations, and other thermodynamic relations connecting C_P and C_V
Derive the Joule–Kelvin coefficient for ideal and real gases such as the Van der Waals gas
Describe Joule's experiment in the case of adiabatic expansion of ideal and real gases
Understand the Joule–Thomson effect for real and Van der Waals gases through the porous plug experiment and the temperature of inversion
Solve numerical problems and multiple choice questions on thermodynamic potentials, Maxwell's thermodynamic relations, and Joule–Kelvin coefficient
10.1 Introduction
The term thermodynamic potentials refers to specific measures of the capacity of a thermodynamic system to perform work. It is a key concept in thermodynamics and encompasses four variables: internal energy U, Helmholtz free energy F, enthalpy H, and Gibbs free energy G. The choice of the suitable thermodynamic potential depends upon the specific conditions of the system, that is, whether it is isolated, closed, or open. This means each of these four potentials has its own usage scenario and interpretation. These potentials are paramount in describing the energy changes within systems. They are extensive state variables with the dimensions of energy and are introduced to account for specific constraints such as isothermal, adiabatic, isochoric, and isobaric processes in a thermodynamic system. Their purpose is to allow a simple treatment of equilibrium for systems interacting with the environment. Starting from the first and second laws of thermodynamics, the differential forms of the four thermodynamic potentials are derived; these are called the fundamental equations.
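For a simple hydrostatic (P–V–T) system, these fundamental equations take the standard differential forms:

```latex
\begin{aligned}
dU &= T\,dS - P\,dV \\
dH &= T\,dS + V\,dP \\
dF &= -S\,dT - P\,dV \\
dG &= -S\,dT + V\,dP
\end{aligned}
```

Each form follows from dU = TdS − PdV (the combined first and second laws) together with the definitions H = U + PV, F = U − TS, and G = H − TS; the natural variables of each potential, (S, V), (S, P), (T, V), and (T, P) respectively, indicate the constraint under which that potential gives the simplest description of equilibrium.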
We suppose … that the constituent molecules of any simple gas whatever (i.e., the molecules which are at such a distance from each other that they cannot exercise their mutual action) are not formed of a solitary elementary molecule, but are made up of a certain number of these molecules united by attraction to form a single one.
Count of Quaregna Amedeo Avogadro
Learning Outcomes
After reading this chapter, the reader will be able to:
List the differences between an ideal and a real gas
List the experiments that depicted the behavior of real gases over a large range of pressures and temperatures
Demonstrate the meaning of liquid–gas interface, critical volume, critical pressure, and critical temperature
Derive the equation of state of a real gas considering the effect of pressure and volume
Obtain the reduced equation of state, the law of corresponding state, and the compressibility factor
Compare and contrast the Van der Waals equation of state with experimental results on CO2 due to Andrews
Solve numerical problems and multiple choice questions on the Van der Waals equation of state, the reduced equation of state, and the critical constants of a gas
5.1 Introduction
The foundation of the kinetic theory of gases (KTG) is based on two important assumptions: (i) the volume occupied by the molecules of the gas is negligible compared to the total volume of the container, and (ii) no appreciable intermolecular attractive or repulsive forces are present among the molecules. A gas is said to be ideal when it conforms exactly to these tenets of the KTG. According to the KTG, such an ideal gas of n moles obeys the equation of state PV = nRT. It is the task of experimental physicists to test the validity of this equation of state over the whole range of physical parameters such as pressure and temperature. A large number of direct and indirect pieces of experimental evidence clearly indicate that, in reality, gases do not behave ideally; that is, the equation PV = nRT is not satisfied by real gases over the entire range of the above-mentioned physical parameters. Real gases deviate from ideal behavior, especially at high pressures and low temperatures.
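To make the deviation concrete, the sketch below compares the ideal-gas pressure with that of the Van der Waals equation, P = nRT/(V − nb) − an²/V², for CO2; the constants a and b are standard literature values, and the chosen temperature and volume are illustrative:

```python
# Compare ideal-gas and Van der Waals pressures for 1 mol of CO2 in
# a 1 L container at 300 K (illustrative conditions).
R = 8.314        # J mol^-1 K^-1, universal gas constant
a = 0.364        # Pa m^6 mol^-2, Van der Waals constant for CO2
b = 4.27e-5      # m^3 mol^-1, Van der Waals constant for CO2

n, T, V = 1.0, 300.0, 1.0e-3   # mol, K, m^3

p_ideal = n * R * T / V
p_vdw = n * R * T / (V - n * b) - a * n**2 / V**2

# At this density the attraction term a*n^2/V^2 dominates the
# excluded-volume correction, so the Van der Waals pressure is lower.
print(f"ideal: {p_ideal:.3e} Pa, Van der Waals: {p_vdw:.3e} Pa")
```

Repeating the calculation at a much larger volume (lower pressure) makes the two results nearly coincide, in line with the statement that deviations are largest at high pressures and low temperatures.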