My futile attempts to fit the elementary quantum of action somehow into the classical theory continued for a number of years and they cost me a great deal of effort. Many of my colleagues saw in this something bordering on tragedy. But I feel differently about it. For the thorough enlightenment I thus received was all the more valuable. I now knew for a fact that the elementary quantum of action played a far more significant part in physics than I had originally been inclined to suspect and this recognition made me see clearly the need for the introduction of totally new methods of analysis and reasoning in the treatment of atomic problems.
Max Planck
Learning Outcomes
After reading this chapter, the reader will be able to
Understand the distribution of energy density in black body radiation as a function of wavelength and temperature
Derive classical laws of black body radiation such as the Wien distribution law and the Rayleigh–Jeans law
Get an idea about the development of the quantum theory of radiation
Understand Planck's quantum postulates and explain the black body radiation spectrum
Derive Planck's law of black body radiation
Verify Planck's law of black body radiation experimentally
Derive the Wien distribution law and the Rayleigh–Jeans law from Planck's law of black body radiation and explain the ultraviolet catastrophe
Determine the temperature of the cosmic microwave background radiation using Planck's law of black body radiation
Solve numerical problems and multiple choice questions on black body radiation
11.1 Introduction
Figure 11.1 The whole electromagnetic spectrum. Thermal radiation ranges from the low-frequency infrared rays through the visible-light spectrum to the high-frequency ultraviolet rays.
Radiation emitted from the surface of a heated source is known as thermal radiation. In this process, thermal energy is spread out in all directions in the form of electromagnetic radiation and travels directly to its point of absorption at the speed of light. It does not require an intervening medium for its propagation. The wavelength of thermal radiation ranges from the longest infrared rays through the visible-light spectrum to the shortest ultraviolet rays. This electromagnetic spectrum is shown in Figure 11.1 as a function of frequency. The distribution of radiant energy, and the corresponding intensities within various ranges of wavelengths, is governed by the temperature of the emitting surface.
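The temperature dependence described above is made quantitative by Planck's law, which this chapter derives later. As a numerical preview, the following Python sketch (the constants and the solar temperature are standard illustrative values, not taken from this chapter) evaluates the spectral radiance and the Wien displacement peak:

```python
import math

# Physical constants (SI units, CODATA values)
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def planck_spectral_radiance(lam, T):
    """Planck's law in the wavelength form:
    B(lam, T) = (2 h c^2 / lam^5) / (exp(h c / (lam kB T)) - 1)."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def wien_peak_wavelength(T):
    """Wien's displacement law: the spectrum peaks at lam_max = b / T."""
    b = 2.897771955e-3   # Wien displacement constant, m K
    return b / T

# Example: the solar surface (~5778 K) peaks near 502 nm, in the visible band
print(wien_peak_wavelength(5778.0))  # approx 5.015e-07 m
```

Checking that the radiance at the Wien wavelength exceeds the radiance at half and at twice that wavelength is a quick consistency test of the two formulas.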
STATISTICAL EXPERIMENTS ENABLE us to make inferences from data about parameters that characterize a population. Generally speaking, inferences may be of two types, namely, deductive inference and inductive inference. Deductive inference pertains to conclusions based on a set of premises (propositions) and their synthesis. Deductive reasoning has a definitive character. For example, all men are mortal (first proposition); Socrates is a man (second proposition); hence, Socrates is mortal (deductive conclusion). On the other hand, inductive inference has a probabilistic character. One conducts an experiment and collects data. Based on these data, certain conclusions are drawn that may have a broader applicability beyond the contours of the particular experiment performed by the researcher. This generalization of the conclusions drawn from the particular experiment constitutes the framework of inductive reasoning. For example, measurement of the heights of a small group of people belonging to a certain population is conducted. Based on the calculations for this small sample set, and upon finding that for this small group the average height of men is greater than the average height of women, it is inferred that the men of this population are generally taller than the women.
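The height example above can be sketched in a few lines of Python; the sample values are invented purely for illustration, not taken from any real survey:

```python
import statistics

# Hypothetical height samples (cm) from a small group of one population
men = [172.0, 168.5, 175.2, 170.1, 178.3, 169.9]
women = [161.2, 158.7, 165.0, 160.4, 163.1, 159.8]

mean_men = statistics.mean(men)
mean_women = statistics.mean(women)

# The inductive step: from this small sample we generalize, with uncertainty,
# to the wider population from which the sample was drawn.
if mean_men > mean_women:
    print("sample suggests men of this population are taller on average")
```

A rigorous version of this inductive step would attach a measure of uncertainty (for example, a significance test) to the comparison of the two sample means, which is the subject of the chapter.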
The formal practice of inductive reasoning dates back to the thesis of Gottfried Wilhelm Leibniz (see Figure 5.1). He was the first to propose that probability is a relation between hypothesis and evidence (data). His thesis was founded on three conceptual pillars: chance (probability), possibilities (realizable random events), and ideas (generalization of inferences by induction). We have encountered the first two concepts in earlier chapters of this textbook. In this chapter, we will delve into the third theme whereby we will discuss methods to draw conclusions from data derived from statistical experiments based on the principles of inductive reasoning.
Biometrics is the science and technology of uniquely identifying a person by physical, physiological, genomic, or behavioral characteristics. For example, the biometric traits or signatures for unique characterization of a person may be obtained from fingerprint, palm print, face, iris, retina, shape of ear, voice, signature, gait, vein pattern in the hand, odor, handwriting, DNA sequences, etc. Some of these traits are evidently visible and are often used in our social interactions to identify a person. But many of them may need the use of technology and computational processing for extraction of biometric characteristic signatures and for verifying them subsequently. For unique identification of a person, the biometric data, also referred to as the biometric signature, should have the properties of uniqueness and permanence. Uniqueness is the characteristic that uniquely identifies an individual person, and permanence implies that it should remain unchanged throughout the life of the individual. However, permanence in the absolute sense is seldom true in practice. In view of that, it is pragmatic to use those biometric traits that are expected to remain mostly unaltered for a significant period of time. During this period, there may be some marginal deviations that can be largely tolerated for a practical solution.
17.1 | Biometric system
A biometric system is primarily designed for managing the identity of a person. Identity management is required in almost every sphere of social interaction and activity, such as border control, access control to certain resources, availing conditional facilities (food, LPG connection, etc.), financial transactions, admission to examinations, certifying qualification and competence, etc. There may also be various related tasks other than identifying a person, such as determining the age and gender of an individual, establishing kinship between two persons, etc. Three main generic tasks are involved in such a system of identity management.
Classification is a task of assigning a known category or class to an object. A class is a well-studied group of objects identified by their common properties or characteristics. For example, consider the image in Fig. 7.1, where an instance of a region in the image is denoted by a rectangular bounding box. Here, the task is to classify different regions of the image, by considering various patches as illustrated by a few bounding boxes, into two classes, "human" or "non-human". Likewise, a few other examples of image classification problems are detection of pedestrians in an image patch, recognition of a letter given a two-dimensional (2-D) image pattern, assigning a pixel of an image to its foreground or background, finding whether an image was captured indoors or outdoors, etc. We may observe here the diversified nature of classification problems, and by solving them, different types of tasks are performed. Mostly, the classification problem falls under the supervised learning framework, where training samples with appropriate features and class labels are used to learn a model that is suitable for predicting the classes of given data. There are various approaches to the classification problem, such as the probabilistic approach, the distance-based approach, the discriminant analysis based approach, the artificial neural network (ANN) based approach, etc. This chapter introduces four specific techniques from these approaches, namely, the Bayesian classification technique (particularly, the naive Bayesian classifier), the 𝐾-nearest neighbor (𝐾-NN) classifier, the use of linear discriminant functions, and the artificial neural network, respectively.
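As a minimal illustration of one of the techniques named above, here is a sketch of a 𝐾-nearest neighbor classifier on invented 2-D feature data. The two class labels echo the "human"/"non-human" example of Fig. 7.1, but the feature values are assumptions made up for this sketch:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D training set with two classes
train = [((1.0, 1.0), "human"), ((1.2, 0.8), "human"),
         ((0.9, 1.1), "human"),
         ((5.0, 5.0), "non-human"), ((5.5, 4.8), "non-human")]

print(knn_classify(train, (1.1, 1.0)))   # human
print(knn_classify(train, (5.2, 5.0)))   # non-human
```

In a real system the 2-D points would be replaced by feature vectors extracted from image patches, and k would be chosen by validation; the chapter develops these points in detail.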
Computer vision is the science of facilitating a machine or a computer with the human-like capability of seeing and understanding the environment. It is a field of artificial intelligence (AI) which deals with the theory, algorithmic basis, and computation for automatic understanding of visual data acquired from an environment. With the rapid advancement of digital and computing technology, it is possible to capture images and videos of a scene and store the data in the memory of a computer. Computer vision is primarily concerned with the automatic extraction, analysis, and understanding of useful information from a single image, a set of images, or a video, which is a sequence of images. It has a wide range of applications across society and various industries, such as in autonomous vehicles, health care, surveillance, augmented reality, robotics, remote sensing, document processing, biometrics, and more. Some of the key tasks of computer vision are acquisition and processing of images and videos, extracting information, and finally, deriving knowledge and description about the scene. In this introductory chapter, we briefly review some of the fundamental aspects of image and video processing, which should be sufficient to follow the content of the rest of the book. However, readers are advised to go through first-level image and video processing textbooks for more details.
1.1 Image representation
To understand how images are represented in a computer, consider the image shown in Fig. 1.1. A small portion of this image, marked by a white rectangle, is zoomed to reveal the details of that portion of the image. We observe that, within this zoomed portion, although the details are better visible, the edges appear jagged.
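The jagged appearance under magnification can be reproduced with a toy example. Replicating each pixel of a small invented "image" by nearest-neighbour zooming produces blocks of identical values, which is exactly what the eye perceives as jagged edges. The pixel values below are assumptions made up for illustration:

```python
# A tiny 4x4 grayscale "image": each pixel is an 8-bit intensity (0-255)
img = [[  0,  64, 128, 255],
       [ 32,  96, 160, 224],
       [ 16,  80, 144, 208],
       [  8,  72, 136, 200]]

def zoom_nearest(image, factor):
    """Magnify by pixel replication: every pixel becomes a factor x factor
    block of identical values, hence the jagged look when zoomed in."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]   # stretch columns
        out.extend([wide[:] for _ in range(factor)])       # stretch rows
    return out

zoomed = zoom_nearest(img, 4)
print(len(zoomed), len(zoomed[0]))  # 16 16
```

Smoother interpolation schemes (bilinear, bicubic) reduce the blockiness, at the cost of blurring; the pixel grid itself is the fundamental representation either way.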
As human societies formed multi-scalar organizations assembling household units, labor and resources were needed to support supra-family activities. Perhaps most important was the way that labor was mobilized in reciprocal relationships between households and in support of community and political institutions. In colloquial parlance, ‘work’ and ‘labor’ are interchangeable, the essential human actions in all economic activities involving subsistence procurement, manufacture, building, transport, warfare, and ritual. Though in many respects isomorphic, we will speak mostly of labor. One difference is that work applies to expenditure of energy in individual and group tasks. Labor is social work engaged between parties (including for supernaturals); the social connections activated in labor parties could be the key motivator for people to work at all (Weiss and Rupp 2011:91). Labor contrasts with organic work (breathing, masticating, pumping blood) or habitual work (tying shoes, brushing teeth). Lucassen (2021:2) quotes Charles and Chris Tilly’s definition of work: “human effort adding use value to goods and services.” Weiss (2014:39) defines work as “agentic activity for changing the environment and creating artifacts,” a definition pleasing to archaeologists. Weiss and Rupp recommend a person-centric approach, finding out what it is like to be working – the lived experience (2011:83, 87). To Lucassen, empirical study of labor should focus on descriptions of men’s and women’s daily practice in their own words (2021:xvii). Lucassen concluded that the “satisfaction, pride, pleasure and the propensity for cooperation and the pursuit of equality in remuneration for effort” characterize labor (2021:45). All that tallies with George Cowgill’s admonition that archaeology should be eliciting human “lived experience” (2013:132–133).
The amplifiers studied so far are small signal amplifiers, where the magnitude of the input signal is small, and the main aim is to amplify either voltage or current with minimum distortion. However, in many applications like control, communication, and power conversion, a large amount of power, sometimes exceeding tens of kW, is to be handled by transistors and other semiconductor devices. In that case, the employed amplifiers are called power amplifiers or large signal amplifiers, where output signals, voltage and current, are large in magnitude.
Based on the type of circuit configuration (CE, CB, or CC) and the location of the quiescent point on the output characteristics, power amplifiers are classified as class A, class B, class AB, class C, and classes D, E, and F. Each class has its advantages and limitations, which will be discussed along with their circuits and operation. Class D is used very little, and classes E and F are rarely used, so only the class A, B, and C amplifiers form part of this study; their classification criterion is mentioned next.
Class A Amplifier: In class A operation, an amplifier is so biased that its operating point is almost in the middle of the output characteristics. The magnitude of the input signal is such that the amplifier operates over the full linear region of the characteristics, without any clipping of the input signal. So, the output is an amplified replica of the input signal with minimal distortion. However, class A operation suffers from poor power conversion efficiency; the theoretical maximum power conversion efficiency from DC input to AC output ranges from 25% to 50%, depending on the circuit configuration.
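The 25% figure for the series-fed (resistive-load) class A case can be sanity-checked numerically. The sketch below assumes a sinusoidal output, biasing at Vcc/2, and a load normalised to 1 Ω (the load resistance cancels in the ratio); the function name and numbers are illustrative, not from this chapter:

```python
def class_a_efficiency(v_peak, v_cc):
    """Power-conversion efficiency of a series-fed class A stage for a sine
    output of peak amplitude v_peak, biased at v_cc/2 (load normalised to
    1 ohm, so the 1/R factors cancel in the ratio)."""
    p_ac = v_peak ** 2 / 2.0       # average AC output power, Vp^2 / (2R)
    p_dc = (v_cc / 2.0) * v_cc     # supply power, I_Q * Vcc with I_Q = Vcc/(2R)
    return p_ac / p_dc

# Maximum undistorted swing, v_peak = Vcc/2, gives the theoretical 25% limit
print(class_a_efficiency(6.0, 12.0))  # 0.25
```

Transformer coupling removes the DC dissipation in the load and doubles the theoretical limit to 50%, which is the upper end of the range quoted above.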
The formation of central places in human societies involved the development of multi-scalar institutions, in which central places played key roles in the economy, politics, social stratification, and religion. With the development of cities, we see a clear linkage to a multiplicity of hierarchical relationships that increasingly dominated ancient and modern societies. The term city has been applied variously to large, populous settlements, depending on the theoretical orientation of scholars, different cultural and geographical areas where they occur, and phases of urbanization through which they pass (Marcus and Sabloff 2008). As seen from cases considered by our group, not all societies had large cities. Pueblo IV in the American Southwest, the Nordic Bronze Age (BA) chiefdoms, and the South Pare people of East Africa lived in settlements without having anything approaching a city. Cities were dynamic and diversified communities that changed according to the social, environmental, and political conditions that shaped their political and economic roles within their territories. They arose for different reasons, and their formation requires understanding the economies and environmental conditions that supported them. But what is a city and what is urban? Those are important distinctions to make before comparing the economies of early urban societies.
• Decarbonization pyramid and the importance of energy conservation in sustainable development
• Concept of energy management for optimal utilization of electricity
• Demand-side management
• Role of energy-efficient appliances in decarbonization
• Energy Conservation Act of India
• Major schemes on energy conservation by the BEE in India
• Concept and types of energy audit, energy managers, and energy auditors
• Power factor and energy conservation
• Importance of awareness campaigns, and participation of stakeholders in energy conservation
Introduction
Decarbonizing the electricity infrastructure is of prime importance for achieving climate protection and the SDGs. Switching over to carbon-free generation of electricity, like solar and wind, is a mandatory requirement for it. But this energy shifting alone is not sufficient for decarbonization. Conservation of energy, in addition to energy shifting, needs to be pursued and implemented simultaneously. Energy conservation means using less energy by avoiding unnecessary uses of energy. The idea of energy conservation, in fact, is in the true spirit of sustainable development as well. As defined earlier, development that meets the needs of the present without compromising the ability of future generations to meet their own needs is sustainable development.
‘One unit saved is equal to two units generated’ has been a famous saying of electrical engineering for a long time.
The objective behind this principle, however, was originally financial savings. But in the changed scenario, this principle needs aggressive reiteration, as it now involves environmental as well as financial savings. In addition, energy conservation leads to a reduction in peak demand and in the requirement for new infrastructure.
In a feedback system, a signal proportional to the output is fed back to the input. This may happen unintentionally or be done intentionally. When the feedback signal adds to the input signal, it is called positive feedback; when the feedback signal subtracts from the input signal, it is called negative feedback.
Positive feedback is mostly used for the realization of oscillators, whereas negative feedback is used to stabilize the gain of amplifiers against variations in transistor parameters, supply voltage, temperature, etc. The study in this chapter is limited to negative feedback only, which is primarily used to improve any one of the four types of amplifiers given in the next section, such that the amplifier becomes as close to ideal as possible. However, certain conditions are required to achieve this objective. For example, the primary amplifier needs to have a very high gain in the forward direction and minimum reverse transmission, which normally holds as a property of the transistors used. An appropriate negative feedback connection and a minimum loading effect of the feedback network on the main amplifier circuit are also very important.
The above-mentioned term appropriate negative feedback needs a bit of explanation. In voltage and current amplifiers, the variables at the input and output are the same; hence, there is no problem as such while feeding a part of the output back to the input.
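The stabilizing effect of negative feedback can be illustrated with the standard closed-loop gain relation Af = A/(1 + Aβ); the open-loop gains and feedback factor below are illustrative assumptions:

```python
def closed_loop_gain(A, beta):
    """Gain of an amplifier with negative feedback: Af = A / (1 + A*beta)."""
    return A / (1.0 + A * beta)

# With a large loop gain A*beta, Af approaches 1/beta and becomes
# insensitive to variations in the open-loop gain A:
beta = 0.01
for A in (50_000, 100_000, 200_000):    # A varies by a factor of 4 ...
    print(round(closed_loop_gain(A, beta), 2))
# ... but Af stays close to 1/beta = 100 (prints 99.8, 99.9, 99.95)
```

This is the sense in which negative feedback trades raw gain for stability: a fourfold change in A moves Af by less than 0.2%.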
In this chapter, the theory and properties of single-view camera geometry are discussed. We consider the principle of image formation in optical cameras and apply it to relate the three-dimensional (3-D) world with image points on a two-dimensional (2-D) plane.
11.1 | Pinhole camera
A mapping of a point in a 3-D coordinate space to a point on a 2-D plane has already been discussed in the previous chapter while explaining the canonical configuration of a 2-D projective space. We relate these concepts to a pinhole-camera-based imaging system. Consider a 3-D scene point 𝑷, as shown in Fig. 11.1. The corresponding image point, 𝒑′, is the point of intersection of the image plane and the straight line from 𝑷 that passes through the center of the lens, 𝑂. By the same analogy, consider the formation of an image in front of the camera center, where the corresponding image plane is placed at the same distance as the sensor is placed behind the lens. In this case, the image obtained on the image plane placed in front of the lens is of the same size as that on the sensor, and there is a logical transformation of coordinates from point 𝒑′ to point 𝒑. Thus we may directly relate the scene point 𝑷 with the image point 𝒑. This is a convenient way of handling the coordinate system of image points, by placing it in front of the camera on the same side as the viewed objects.
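The mapping from scene point 𝑷 to image point 𝒑 described above can be sketched as the standard perspective projection x = fX/Z, y = fY/Z, assuming the camera centre at the origin with the optical axis along +Z; the focal length and point coordinates below are illustrative assumptions:

```python
def project_pinhole(P, f):
    """Perspective projection of a 3-D scene point P = (X, Y, Z) onto an
    image plane at distance f in front of the camera centre:
        x = f * X / Z,   y = f * Y / Z
    (camera centre at the origin, optical axis along +Z)."""
    X, Y, Z = P
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)

# Doubling the depth of a point halves the size of its image:
print(project_pinhole((1.0, 2.0, 10.0), 0.05))  # approx (0.005, 0.01)
print(project_pinhole((1.0, 2.0, 20.0), 0.05))  # approx (0.0025, 0.005)
```

The division by Z is what makes the mapping non-linear in Euclidean coordinates and motivates the homogeneous-coordinate treatment of the previous chapter.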
Very often, the term "chemical potential" is not well understood by students. Even after studying thermal physics and statistical mechanics several times, students remain confused about the meaning of the term "chemical potential". This quantity is represented by the letter μ. Typically, students learn the definition of μ, its properties, its derivation in some simple cases, and its consequences, and work out numerical problems on it. Still, students ask the questions: "What is the chemical potential?" and "What does it actually mean?" An attempt is made in this appendix to clarify the meaning of this physical quantity μ with some simple examples.
The concept of chemical potential first appeared in the classical works of J. W. Gibbs. Since then, it has remained a subtle concept in thermodynamics and statistical mechanics. It is not easy to grasp the meaning and significance of the chemical potential μ, unlike thermodynamic concepts such as temperature T, internal energy E, or even entropy S. In fact, the chemical potential μ has acquired a reputation as a concept not easy to grasp even for the experienced physicist. The chemical potential was introduced by Gibbs within the context of an extensive exposition on the foundations of statistical mechanics. In his exposition, Gibbs considered a grand canonical ensemble of systems in which the exchange of particles occurs with the surroundings. In this description, the chemical potential μ appears as a constant required for a necessary closure to the corresponding set of equations. Thus, a fundamental connection with thermodynamics is achieved by observing that the unknown constant μ is indeed related to standard thermodynamic functions such as the Helmholtz free energy F = E − TS or the Gibbs thermodynamic potential G = F + PV through their first derivatives. μ, in fact, appears as the variable conjugate to the particle number N.
4A.1 Comments about chemical potential
We are familiar with the term potential as used in mechanical and electrical systems. A capacity factor is associated with each potential term. For example, in a mechanical system, mass is the capacity factor associated with the gravitational potential g(h₂ − h₁), where h₁ and h₂ are the corresponding heights, and the gravitational work done is given by mg(h₂ − h₁).
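The gravitational example works out numerically as follows; the mass and heights are illustrative assumptions made up for this sketch:

```python
# Work against gravity, W = m * g * (h2 - h1),
# where the mass m is the capacity factor for the potential g*(h2 - h1).
g = 9.81            # gravitational acceleration, m/s^2
m = 2.0             # mass (capacity factor), kg
h1, h2 = 0.0, 5.0   # heights, m

W = m * g * (h2 - h1)
print(round(W, 2))  # 98.1 J
```

The analogous statement for the chemical potential is that particle number N is the capacity factor associated with μ, so the work-like term in the energy is μ dN.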
After careful study of this chapter, students should be able to do the following:
LO1: Describe strain energy in different loading conditions.
LO2: Explain the principle of superposition and reciprocal relations.
LO3: Apply the first theorem of Castigliano.
LO4: Analyze the theorem of virtual work.
LO5: Apply the dummy load method.
LO6: Analyze the theorem of virtual work.
12.1 INTRODUCTION [LO1]
There are in general two approaches to solving equilibrium problems in solid mechanics: Eulerian and Lagrangian. The first approach deals with vectors such as force and moments, and considers the static equilibrium and compatibility equations to solve the problems. In the second approach, scalars such as work and energy are used, and here solutions to problems are based on the principle of conservation of energy. There are many situations where the second approach is more advantageous, and here some powerful methods, such as the method of virtual work, based on this approach, are used.
A full treatment of the Eulerian and Lagrangian approaches to solid mechanics problems is much more involved. Here, however, we describe them in a simplified manner, suitable as a prologue to the present discussion on energy methods.
In mechanics, energy is defined as the capacity to do work, and it may exist in different forms. We are concerned here with elastic strain energy, which is a form of potential energy stored in a body on which some work is done by externally applied forces. Here it is assumed that the material remains elastic while work is being done, so that all the energy is recoverable and no permanent deformation occurs. This means that strain energy U = work done. If the load is applied gradually in straining the material, the load–extension graph is as shown in Figure 12.1, and we may write U = ½Pδ.
The hatched portion of the load–extension graph represents the strain energy and the unhatched portion ABD represents the complementary energy that is utilized in some advanced energy methods of solution.
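The relation U = ½Pδ is straightforward to evaluate; a small sketch with illustrative numbers (a gradually applied 10 kN load producing a 2 mm extension):

```python
def strain_energy(P, delta):
    """Elastic strain energy stored under a gradually applied load:
    U = (1/2) * P * delta, the area under a linear load-extension graph."""
    return 0.5 * P * delta

# A 10 kN load producing a 2 mm extension stores U = 0.5 * 10e3 * 2e-3 = 10 J
print(strain_energy(10e3, 2e-3))  # 10.0
```

For a linear material the complementary energy happens to equal the strain energy, which is why the hatched and unhatched areas in Figure 12.1 are equal triangles in the linear case.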
It is a remarkable fact that the second law of thermodynamics has played in the history of science a fundamental role far beyond its original scope. Suffice it to mention Boltzmann's work on kinetic theory, Planck's discovery of quantum theory or Einstein's theory of spontaneous emission, which were all based on the second law of thermodynamics.
Ilya Prigogine
Learning Outcomes
After reading this chapter, the reader will be able to
Demonstrate the meaning of reversible, irreversible, and quasi-static processes used in thermodynamics
Explain heat engines, and their efficiency and indicator diagram
Formulate the second law of thermodynamics and apply it to various thermodynamic processes
Demonstrate an idea about entropy and its variation in various thermodynamic processes
State and compare various statements of the second law of thermodynamics
Elucidate the thermodynamic scale of temperature and its equivalence to the perfect gas scale
Explain the principle of increase of entropy
Understand the third law of thermodynamics and explain the significance of unattainability of absolute zero
Solve numerical problems and multiple choice questions on the second law of thermodynamics
9.1 Introduction
The first law of thermodynamics states that only those processes can occur in nature in which the law of conservation of energy holds good. But our daily experience shows that this cannot be the only restriction imposed by nature, because there are many possible thermodynamic processes that conserve energy but do not occur in nature. For example, when two objects are in thermal contact with each other, the heat never flows from the colder object to the warmer one, even though this is not forbidden by the first law of thermodynamics. This simple example indicates that there are some other basic principles in thermodynamics that must be responsible for controlling the behavior of natural processes. One such basic principle is contained in the formulation of the second law of thermodynamics.
This principle limits the availability of energy from a source and elucidates that energy cannot be arbitrarily passed from one object to another: heat cannot be transferred from a colder object to a hotter one without external work being done. Similarly, cream cannot be separated from coffee without a process that changes the physical characteristics of the system or its surroundings. Further, the internal energy stored in the air cannot be used to propel a car, nor can the energy of the ocean be used to run a ship, without disturbing something (the surroundings) around that object.
This chapter briefly discusses the classification, characteristics, and basic design methods of certain types of networks that perform a filtering action on the basis of the frequency of signals. Filters that use only passive elements, known as passive filters, were the only kind of filters in earlier days. Passive filters are still in use in many specific cases but have been replaced by active filters (using at least one active device) in the majority of applications. One essential reason for the changeover from passive to active filters was the inability to realize practically feasible inductors in integrated circuit (IC) form over a large frequency range of operation. Hence, structures that replaced (simulated) inductors using resistances, capacitances, and op-amps became synonymous with active filters, and these were called active RC filters. The usage of op-amps is still dominant, but other active devices are also used in a big way.
Another important approach to analog filter realization has emerged in the form of switched capacitor (SC) circuits. An important feature of SC circuits is that they use only capacitors, op-amps, and electronic switches. Consequently, the performance parameters of the circuit depend on capacitor ratios and the switching frequency. It is to be noted that very small capacitance values can be used, consuming less chip area and giving better practical results, since capacitors in ratio form can be fabricated with much tighter tolerance.