CHANCE PERMEATES OUR physical and mental universe. While chance has influenced human affairs throughout history, the rationally grounded theory of probability and statistics came into practice in diverse areas of science and engineering only from the early to mid-twentieth century. Practical applications of statistical theory proliferated to such an extent in the previous century that the American government-sponsored RAND Corporation published a 600-page book consisting wholly of a random number table and a table of standard normal deviates. One of the primary objectives of this book was to enable computer-simulated approximate solutions of exact but unsolvable problems by a procedure known as the Monte Carlo method, devised by Fermi, von Neumann, and Ulam in the 1930s–40s.
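The Monte Carlo idea can be conveyed with a minimal sketch, not tied to the RAND tables: estimating π by sampling random points in the unit square. The function name and sample count below are illustrative only.

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by drawing points uniformly in the unit square and
    counting how many fall inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # (area of quarter circle) / (area of unit square) = pi / 4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # approaches pi as n_samples grows
```

The estimate's error shrinks roughly as 1/√n, which is why the method trades an exact but intractable computation for a controllable approximation.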
Statistical methods are the mainstay of modern scientific experimentation. One such experimental paradigm is the randomized controlled trial, which is widely used in fields such as psychology, drug testing, vaccine efficacy studies, agricultural science, and demography. These statistical experiments require sophisticated sampling techniques in order to nullify experimental biases. With the explosion of information in the modern era, the need for advanced and accurate predictive capabilities has grown manifold, leading to the emergence of modern artificial intelligence (AI) technologies. Further, climate change has become a reality of modern civilization, and accurate prediction of weather and climatic patterns relies on sophisticated AI and statistical techniques. It is impossible to think of a modern economy and social life without the influence and role of chance, and hence without technological interventions based on statistical principles. We must begin this journey by learning the foundational tenets of probability and statistics.
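The random assignment at the heart of a randomized controlled trial can be sketched in a few lines; the function name and fixed seed below are illustrative, not taken from any particular study.

```python
import random

def randomize(subject_ids, seed=0):
    """Randomly split subjects into treatment and control groups of
    (near-)equal size. Random assignment makes group membership
    independent of every subject characteristic, which is what
    nullifies selection bias in a randomized controlled trial."""
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomize(range(10))
```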
EMPIRICAL TECHNIQUES rely on abstracting meaning from observable phenomena by constructing relationships between different observations. This process of abstraction is facilitated by appropriate measurements (experiments), suitable organization of the data generated by measurements, and, finally, rigorous analysis of the data. The latter is a functional exercise that synthesizes information (data) and theory (model) and enables prediction of hitherto unobserved phenomena.1 It is important to underscore that a good theory (model) that explains a certain phenomenon well by appealing to a set of laws and conditions is expected to be a good candidate for predicting the same phenomenon using reliable data. For example, a simple model for the weight of a normal human being is w = m * h, where w and h refer to the weight and height of the person, and m can be set to unity if appropriate units are chosen. A rational explanation of such a formula for weight based on anatomical considerations is perhaps quite reasonable. From an empirical standpoint, if we collect height and weight data of normal humans, we will notice that a linear model of the form w = m * h represents the data reasonably well and may be used to predict a person's weight from their height. This suggests a functional symmetry between explanation and prediction. Therefore, a good predictive model should also be able to explain the data (and related events) well.
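The linear model w = m * h described above can be fitted by least squares. A minimal sketch follows; the (height, weight) numbers are hypothetical values chosen for illustration only.

```python
def fit_proportional(heights, weights):
    """Least-squares estimate of m in the no-intercept model w = m * h:
    m = sum(h_i * w_i) / sum(h_i ** 2)."""
    num = sum(h * w for h, w in zip(heights, weights))
    den = sum(h * h for h in heights)
    return num / den

# Hypothetical data following w = 0.4 * h (heights in cm, weights in kg)
heights = [150, 160, 170, 180, 190]
weights = [60.0, 64.0, 68.0, 72.0, 76.0]

m = fit_proportional(heights, weights)   # fitted slope
predicted = m * 175                      # predicted weight for a 175 cm person
```

The same fitted model thus serves both roles discussed above: it summarizes (explains) the collected data and predicts the weight of an unseen individual.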
These motions [Brownian motion] were such as to satisfy me, after frequently repeated observation, that they arose neither from currents in the fluid, nor from its gradual evaporation, but belonged to the particle itself.
Robert Brown
Learning Outcomes
After reading this chapter, the reader will be able to
Express the meaning of sphere of influence and collision frequency
Derive the distribution function for the free paths among the molecules and demonstrate the concept of mean free path
Calculate the expression for mean free path following Clausius and Maxwell
Derive the expression for pressure exerted by a gas using the survival equation
Calculate the expressions for viscosity, thermal conductivity, and diffusion coefficient of a gaseous system
Demonstrate Brownian motion with its characteristics and calculate the mean square displacement of a particle executing Brownian motion
State the idea of a random walk problem
Solve numerical problems and multiple choice questions on the mean free path, viscosity, thermal conduction, diffusion, Brownian motion, and random walk
4.1 Introduction
Gases are distinguished from other forms of matter, not only by their power of indefinite expansion so as to fill any vessel, however large, and by the great effect heat has in dilating them, but by the uniformity and simplicity of the laws which regulate these changes.
James Clerk Maxwell
The molecules of an ideal gas are considered to be randomly moving point particles. From the kinetic theory of gases (KTG), it is well established that even at room temperature such point molecules move at very high speeds. The average value of this speed can be determined by assuming that the molecules obey Maxwell's speed distribution law, and is given by the following expression
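For reference, the mean speed that follows from Maxwell's speed distribution is the standard result

```latex
\bar{v} = \sqrt{\frac{8\,k_B T}{\pi m}},
```

where k_B is the Boltzmann constant, T the absolute temperature, and m the mass of one molecule.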
Color is a psycho-physiological property of human visual experience when the eyes look at objects and light. Color is not a physical property of those objects or light; rather, it is the result of an interaction between physical light in the environment and the human visual system (Palmer, 1999). For processing color images, it is required to develop an understanding of how colors are represented following human perception.
3.1 Light sources
A broad range of the electromagnetic spectrum, shown in Fig. 3.1, consists of electromagnetic waves ranging from the very long wavelengths of radio waves to the very high frequencies of gamma rays. A very narrow interval in this spectrum, toward the higher end of spectral frequencies, accounts for the visible rays and is called the visible spectrum. The light and colors that a human eye perceives relate to the frequencies of waves that fall within the visible spectrum. A pictorial representation of the correspondence of wavelengths in the visible range of the spectrum to different perceived colors is shown in Fig. 3.1. There are seven distinguishable colors in the figure, violet, indigo, blue, green, yellow, orange, and red, usually known in order of their increasing wavelengths by the acronym VIBGYOR. The luminance sensitivity function, shown as a curve in Fig. 3.1, is a function of wavelength. It is empirically observed that the sensitivity of the human visual system is maximum in the green zone of the visible spectrum. The luminance sensitivity function gradually decays from the green zone toward violet (higher frequencies) and red (lower frequencies), as shown in the figure by the white curve.
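The wavelength-to-band correspondence can be sketched as a simple lookup. The band boundaries below are approximate and vary between sources; they are illustrative, not figures taken from Fig. 3.1.

```python
def perceived_color(wavelength_nm):
    """Map a wavelength in nanometers to its VIBGYOR band.
    Boundaries are approximate; sources differ, especially for indigo."""
    bands = [
        (380, 425, "violet"),
        (425, 450, "indigo"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for lo, hi, name in bands:
        if lo <= wavelength_nm < hi:
            return name
    return None  # outside the visible spectrum

print(perceived_color(550))  # near the peak of luminance sensitivity
```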
An operational amplifier (op-amp) is a very prominent active device in analog integrated circuit (IC) design. This prominence is due to the widespread and diverse applications of op-amps, since their parameters are very close to ideal over a certain range of operating frequencies. Apart from basic mathematical operations such as addition, multiplication, and integration, op-amps are widely employed as amplifiers, wave-shaping circuits, active filters, log/antilog amplifiers, nonlinear function generators, and in analog-to-digital and digital-to-analog conversion, and so on.
Figure 8.1(a) shows the pin connection diagram of the most commonly used type-741 op-amp. It needs a dual power supply and has two terminals for the inverting and non-inverting inputs, one terminal for the output, and three terminals that are left unconnected in simple applications. Dual and quad op-amp ICs with matching characteristics are also available.
An op-amp is essentially a high-gain differential amplifier (DA), shown in its simplest form in Figure 8.1(b). The output voltage of the op-amp is the difference between the two input voltages multiplied by the high gain factor A, so the output voltage is expressed as:
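In standard notation this relation reads

```latex
v_o = A\,(v_+ - v_-),
```

where v_+ and v_- denote the voltages at the non-inverting and inverting inputs, respectively.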
The differential gain A is frequency dependent in a practical op-amp. Therefore, as a first approximation, it is represented by a single-pole roll-off model given below.
• The growing share of electricity in the energy sector
• The connection of electricity and global warming
• Important terms related to electricity
• Conventional sources of electricity generation
• Green and renewable sources of electricity generation
• Smart grid
Introduction
Electricity is the fundamental driver of growth in modern society. The availability of a reliable electric supply is a priority for any residential, industrial, or commercial setup. With the rapid proliferation of digital appliances and the critical role they play in our daily lives, the dependence on a high-quality electric power supply has further increased manifold.
Electricity started as a source of energy for lighting, replacing oil- and gas-based lamps. At that time, very few people would have realized that this new source of energy would slowly 'capture' the whole residential, industrial, and workplace setup. It is difficult to imagine our lives without electricity now – from heating our meals, washing and drying our clothes, and heating water, to keeping the house or office cool or warm and running all kinds of entertainment and communication appliances. This source of energy has become omnipresent in our lives. Electricity is also the main driver behind Internet and communication technologies. A major part of the railways already runs on electricity, and the transition of road transport is imminent in the near future.
• Role of education, training, research and development in successful transition to green energy
Introduction
The rapid transition of the energy system, with growing utilization of green and renewable sources, has brought a number of challenges and opportunities. This transition will continue and will completely alter the whole energy network. These developments have come at a time when a number of new and established technologies are available that need to be used and integrated in this changed network. Artificial intelligence (AI), machine learning (ML), Big Data, cloud computing, and blockchain are some of these important technologies.
Operation and maintenance of solar and wind plants and the role of AI, ML, Big Data and so on; peer-to-peer energy transactions and the role of blockchain in them; grid integration challenges and their solutions; off-grid applications with and without battery storage; handling of PV waste; and solar energy derivatives such as green hydrogen are the areas which are set to play very important roles in the successful transition to the green and distributed energy network.
Apart from these technologies, other important developments are underway, such as solar PV modules with higher efficiency, new technologies and materials, new shapes, lower sensitivity to ambient temperature, reduced water requirements for cleaning, and so on.
A semiconductor diode is a two-terminal device. Ideally, the diode behaves as a short circuit for current flow in one direction, called the forward direction, and as an open circuit for current flow in the opposite direction, called the reverse direction.
Ideal Silicon p–n Junction Diode
One of the most important characteristics of an ideal diode is that it behaves as an ideal switch, and the switching action is controlled by the direction of current flow, which is possible in one direction only. Figure 1.1(a) shows the symbol of an ideal diode as a switch, where the arrow points in the forward direction; it also shows the positive directions of the current through and the voltage drop across the diode. Figure 1.1(b) shows the v–i characteristics of the ideal diode, which conducts in only one direction. The diode terminals are named the anode and the cathode. The forward current in the diode flows from the anode to the cathode inside the diode.
Fabricating an ideal diode in practice is not possible. Still, its idealized model in Figure 1.1(b) serves as a good approximation of a practical diode for basic analysis purposes.
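The switch-like behavior of the idealized model can be sketched in a few lines, for example as a half-wave rectifier built from an ideal diode; the function name is illustrative.

```python
import math

def half_wave(v_in):
    """Ideal-diode half-wave rectifier: the diode conducts (short circuit,
    zero voltage drop) only when v_in > 0; otherwise it blocks (open
    circuit) and the output is zero."""
    return v_in if v_in > 0 else 0.0

# One cycle of a sine input: only positive half-cycles pass through
vout = [half_wave(math.sin(2 * math.pi * t / 20)) for t in range(20)]
```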
At an early stage, diodes were realized using vacuum tubes with filament inside. However, now solid-state diodes are fabricated using semiconductors.
Figure 10.1 shows a simple block diagram of a typical analog signal processing system. The first step in developing such a system is to divide the given specifications into analog and digital parts. Owing to advances in VLSI technology, a variety of signal processing systems have been developed, and will continue to be developed, in the format of Figure 10.1. One such example is the advent of sampled-data techniques and MOS technology, which enabled the fabrication of general-purpose signal processors. Analog sampled-data techniques subsequently moved from MOS technology toward CMOS technology, which is highly suitable for combining analog and digital systems.
In most practical cases, input signals are analog, such as speech signals, sensor outputs, radar signals, and so on. The first block in Figure 10.1 is a pre-processing block, which usually consists of analog filters, a sample-and-hold stage, and an analog-to-digital converter (ADC). Depending on the nature of the input signal, the input block may require signal processing with strict speed and accuracy specifications. After the conversion of analog signals to digital signals, the next block is mostly a microprocessor. The advantage of using a microprocessor is that its function can easily be controlled and modified. Post-processing is done in the final stage, wherein the signal is in most cases converted back to analog form; this stage uses a digital-to-analog converter (DAC) and some filtering. Proper interfacing is required between the three building blocks of Figure 10.1; the placement of the interfacing is indicated symbolically by the arrowheads.
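The ADC and DAC steps in this chain can be sketched with a uniform quantizer. This is a simplified model under stated assumptions (a unipolar input range [0, v_ref), an ideal sample-and-hold, and mid-step reconstruction), not a model of any particular converter.

```python
def quantize(samples, n_bits, v_ref):
    """Uniform ADC model: map each held sample in [0, v_ref) to one of
    2**n_bits integer codes, clamping out-of-range inputs."""
    levels = 2 ** n_bits
    codes = []
    for v in samples:
        code = int(v / v_ref * levels)
        codes.append(max(0, min(levels - 1, code)))  # clamp to valid codes
    return codes

def reconstruct(codes, n_bits, v_ref):
    """Ideal DAC model: map each code back to the mid-step voltage."""
    levels = 2 ** n_bits
    return [(c + 0.5) * v_ref / levels for c in codes]

codes = quantize([0.1, 0.5, 0.9], n_bits=3, v_ref=1.0)
analog_out = reconstruct(codes, n_bits=3, v_ref=1.0)
```

The gap between `analog_out` and the original samples is the quantization error, which shrinks as `n_bits` grows; this is one reason the pre-processing block may carry strict accuracy specifications.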
The climate change challenge, mainly reflected as global warming, has emerged as an existential crisis not only for humanity but for the planet itself. This challenge and the need for sustainable development are, therefore, the most talked about issues of recent times. Ensuring development without causing harm to nature is the basic idea behind sustainable development. In line with this principle, there is a need to review and reset the energy sector and make it more environment friendly.
The need for electrical energy is basic to modern society. But meeting this need has also contributed significantly to climate change: conventional methods of generating electricity have been major contributors to CO2 emissions. To rectify this, the required energy must be generated in an environment-friendly way. The decarbonization of the electric energy system is therefore an integral part of reworking the electrical energy sector in the spirit of sustainable development.
India, over the years, has shown unwavering commitment to contributing towards attaining the sustainable development goals. The country has taken excellent steps in this area under the leadership of Hon’ble Prime Minister Shri Narendra Modi. A major boost to these efforts was announced in terms of the Panchamrit promises declared at the 26th Conference of Parties at Glasgow, UK. Making 50% of the total installed capacity based on non-fossil fuel sources, reducing the carbon emission intensity in its GDP by 45%, and the installation of 500 GW of green energy plants by 2030 are indicative targets of the national resolve.
The development of commerce and integrated market exchange is perhaps one of the most dramatic factors determining the nature and evolution of human economies. Among other things, these developments became closely linked to urban communities and other central places as points for assembling and distributing labor and goods. These places, when they developed as part of the broader process of commercialization, were transformative, increasing the ease of day-to-day interactions, specialization, and freedom of movement.
In Chapter 1, on the introduction to image processing, image formation in a camera was briefly described. Consider the image in Fig. 10.1. As a basic rule of projection, for a given scene point, 𝑷, a ray from 𝑷 that passes through the center of projection, 𝑶, intersects the image plane at its image point, 𝒑. This is a mapping, 𝑷 → 𝒑, of a three-dimensional (3-D) scene point to its two-dimensional (2-D) image point. This rule of perspective projection is applied for obtaining the image point of any scene point in general. This particular geometry is the basis of projective geometry in our context.
10.0.1 | Real and projective spaces
Consider a 2-D space, where a point, 𝒑, is denoted by a pair of coordinates, (𝑥, 𝑦), as shown in Fig. 10.2. Since this space is the Cartesian product of two real axes, it is denoted ℝ2, and the point 𝒑 belongs to this 2-D coordinate space. Following the usual conventions, these coordinates are defined with respect to an origin, 𝑶, and two perpendicular axes meeting at the origin, namely, the 𝑥-axis and the 𝑦-axis. The projective space considered here, although defined in a 2-D space, implicitly involves a 3-D space in its definition. For example, though all the points in an image lie in a 2-D plane, they are related to the 3-D points of a scene lying on the corresponding rays of projection. This is the abstraction of a 2-D projective space. Consider a 3-D space, as shown in Fig. 10.2. If a ray passes through the origin, 𝑶, and the considered point, 𝒑, then 𝒑 is said to be a representative of that ray. Every point in the projected plane represents a ray. In this case, the set of projection points, each representing a ray or straight line passing through the origin, is known as the 2-D projective space, ℙ2.
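The correspondence between points of ℝ2 and ray representatives in ℙ2 (homogeneous coordinates) can be sketched as follows; the function names are illustrative.

```python
def to_homogeneous(x, y):
    """Lift a point of R^2 to a representative of the corresponding ray
    in P^2; (x, y, 1) is the canonical representative."""
    return (x, y, 1.0)

def from_homogeneous(p):
    """Map a ray representative (x, y, w), w != 0, back to the point of
    R^2 where the ray pierces the plane w = 1. All nonzero scalar
    multiples of p represent the same projective point."""
    x, y, w = p
    if w == 0:
        raise ValueError("a point at infinity has no affine image")
    return (x / w, y / w)

# (4, 6, 2) and (2, 3, 1) lie on the same ray through the origin,
# so both represent the image point (2, 3).
print(from_homogeneous((4.0, 6.0, 2.0)))
```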
Some amplifier circuits employing either BJTs or FETs were analyzed in Chapter 2. Almost all the circuits contained blocking and bypass capacitors in addition to the biasing components, load resistance, and so on. However, these external capacitors and the internal parasitic capacitances of the transistor were ignored in the analysis, either because dc analysis was being done or for simplicity. There was no mention of the practical limits on the device parameters or on the components used. This does not mean that the obtained expressions for the voltage gain, current gain, and input and output resistances were wrong. These expressions are very important and relevant, and for most amplifier operations they are the ones to be employed. However, there is something more, also very important, that remains to be studied.
When the internal device capacitances, intentionally connected capacitances, or load impedances having reactive components are considered, the amplifier gain becomes a complex number, A∠θ, instead of a real number. A significant point is that both the gain magnitude A and the phase angle θ depend on the input signal frequency: the gain magnitude decreases at low and at comparatively high signal frequencies, while amplification remains almost constant in the mid-frequency range. The frequency response characteristics of an amplifier are the plots of gain and phase against frequency.
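As an illustration of the high-frequency behavior alone, a single-pole gain model can be evaluated numerically. The dc gain and pole frequency below are arbitrary illustrative values, not parameters of any specific amplifier.

```python
import math

def single_pole_gain(f, a0=1e5, f_p=10.0):
    """Magnitude and phase (degrees) of A(jf) = a0 / (1 + j f/f_p),
    a single-pole roll-off model: a0 is the dc gain, f_p the pole
    frequency in Hz (both illustrative)."""
    ratio = f / f_p
    mag = a0 / math.sqrt(1.0 + ratio * ratio)
    phase = -math.degrees(math.atan(ratio))
    return mag, phase

mag_low, ph_low = single_pole_gain(0.1)   # far below the pole: mag ~ a0, phase ~ 0
mag_p, ph_p = single_pole_gain(10.0)      # at the pole: mag = a0/sqrt(2), phase = -45 deg
```

Evaluating the model at several frequencies and plotting magnitude and phase gives exactly the frequency response characteristics described above (for the high-frequency side; the low-frequency droop comes from blocking and bypass capacitors and needs additional poles and zeros).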
I think a strong claim can be made that the process of scientific discovery may be regarded as a form of art. This is best seen in the theoretical aspects of Physical Science. The mathematical theorist builds up on certain assumptions and according to well understood logical rules, step by step, a stately edifice, while his imaginative power brings out clearly the hidden relations between its parts. A well-constructed theory is in some respects undoubtedly an artistic production. A fine example is the famous Kinetic Theory of Maxwell, …. The theory of relativity by Einstein, quite apart from any question of its validity, cannot but be regarded as a magnificent work of art.
Sir Ernest Rutherford
Learning Outcomes
After reading this chapter, the reader will be able to
State the assumptions of kinetic theory of gases (KTG)
Explain the concept of pressure and calculate the expression for it
Demonstrate mathematically the gas laws using the expression for pressure derived from KTG
Present the kinetic interpretation of temperature
Derive the expression for specific heat at constant volume C_V and constant pressure C_P
Explain the concept of degree of freedom
Solve numerical problems and multiple choice questions on KTG
2.1 Introduction
The kinetic theory of gases (KTG) is a theoretical model that describes the physical properties of a gaseous system in terms of a large number of submicroscopic particles, such as atoms and molecules. These constituent particles are in random motion and collide constantly with each other and with the walls of the container. Considering the molecular composition and the characteristic features of such random motion, various macroscopic properties of the gaseous system, such as pressure, temperature, viscosity, thermal conductivity, and mass diffusivity, can be explained with the help of KTG. The theory postulates that the pressure exerted by a gas is due to the collisions of atoms or molecules, moving at different velocities, with the walls of the container. It thus attempts to explain macroscopic properties in terms of microscopic phenomena. The physical properties of solids and liquids, in general, are described by their shape, size, mass, volume, etc. Gases, however, have no definite shape or size, and their mass and volume are not directly measurable. In such cases, the KTG can be successfully applied to extract the physical properties of the gaseous system.
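The kinetic-theory account of pressure can be checked numerically: with the standard result P = (1/3) n m ⟨v²⟩ and v_rms = √(3k_BT/m), the pressure reduces to the ideal gas law P = n k_B T. The molecular mass and number density below are rough illustrative values, not measured data.

```python
def ktg_pressure(n_density, mass, v_rms):
    """Kinetic-theory pressure P = (1/3) * n * m * <v^2>, with n the
    number density (molecules per m^3), m the molecular mass (kg),
    and v_rms the root-mean-square speed (m/s)."""
    return n_density * mass * v_rms ** 2 / 3.0

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # temperature, K (room temperature)
m = 4.65e-26          # kg, roughly the mass of one N2 molecule
n = 2.5e25            # molecules per m^3, roughly atmospheric density

v_rms = (3 * k_B * T / m) ** 0.5       # rms speed from equipartition
p = ktg_pressure(n, m, v_rms)          # agrees with n * k_B * T, ~1e5 Pa
```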
A document is an object which is primarily meant for human reading. It is not limited to text alone. Besides text, which is primarily for reading, it may also contain figures, diagrams, photographs, tables, charts, etc. Many of these auxiliary components facilitate the reading experience. Document image processing involves processing of images of documents. Examples of documents include scanned images of printed or handwritten pages, photographs of documents, etc. In general, images that contain different kinds of reading material are considered images of documents. In this context, the image of a text displayed in the environment is also an example of a document, e.g., an image of a signboard. Such kinds of texts are referred to as scene texts.
A few examples of images of different kinds of documents are shown in Fig. 16.1. The document in Fig. 16.1(a) contains printed text and graphics. Fig. 16.1(b) is an example of an official document, a typical format of an official purchase order. Every organization usually has a standard template or format for each official form used in regular administrative routines; such forms belong to this particular category of documents.
A page of a magazine is shown in Fig. 16.1(c), where the graphics and text are overlaid in a specific layout. In Fig. 16.1(d), an example of scene text is shown, a photograph of a stone tablet describing a historical monument. The image shown in Fig. 16.1(e) is a scanned document of a handwritten page, an example of the writing of the famous Bengali poet and Nobel laureate Rabindranath Tagore (1861–1941).