This topic examines government policy and regulation, starting with the objectives of government regulation, the nature of market failure, and the implications for managerial decision making. The different aspects of market failure are then discussed in turn: externalities, public goods, imperfect information and transaction costs, along with their policy implications. Monopoly, the nature of market power and its consequences are examined, together with strategic behaviour such as collusion, predatory pricing and exclusive dealing. The distinction between structural and strategic barriers to entry and their different policy implications is explained. Aspects of new technology are discussed, such as network effects and the ‘winner-takes-most’ phenomenon, followed by an examination of various policy approaches and their costs and benefits. Case studies involve two situations in the UK where governments may have made errors of policy; a final case study relates to the global phenomenon of increasing concentration and its consequences.
Chapter 4 presents the basic formulation of the structural reliability problem. It starts with the so-called R-S problem with R denoting a capacity (resistance, supply, strength, etc.) value and S denoting a measure of the corresponding demand (load, stress, etc.), both modeled as random variables. Solutions in integral form are presented for the failure probability by conditioning on R or S, or using formulations in terms of the safety margin or safety factor that lead to the introduction of the concept of reliability index. Exact solutions are presented for specific distributions of R or S. This allows examination of the so-called tail-sensitivity problem, i.e., the sensitivity of the failure probability to the selected probability distributions. It is shown that small failure probabilities are sensitive to the shape of the selected distributions in the tail. The formulation of the structural reliability problem is then generalized and presented in terms of a limit-state function of basic random variables. Using this formulation, the probability of failure is expressed as a multifold integral over the outcome space of the basic random variables. Descriptions of several example applications of the generalized structural reliability formulation conclude the chapter.
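For the special case where R and S are independent normal random variables, the exact solution mentioned above can be sketched in a few lines of Python. The means and standard deviations below are hypothetical, chosen only for illustration, and a crude Monte Carlo run is included as a sanity check:

```python
import math
import random

def normal_cdf(x):
    # Standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def failure_probability_normal(mu_r, sig_r, mu_s, sig_s):
    # For independent normal R and S the safety margin M = R - S is normal,
    # so p_f = P(M < 0) = Phi(-beta), with reliability index
    # beta = mean(M) / std(M).
    beta = (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)
    return beta, normal_cdf(-beta)

def failure_probability_mc(mu_r, sig_r, mu_s, sig_s, n=200_000, seed=1):
    # Crude Monte Carlo estimate of the same probability, as a check
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sig_r) < rng.gauss(mu_s, sig_s)
                for _ in range(n))
    return fails / n

# Hypothetical capacity/demand parameters, for illustration only
beta, pf = failure_probability_normal(mu_r=20.0, sig_r=3.0, mu_s=10.0, sig_s=4.0)
pf_mc = failure_probability_mc(20.0, 3.0, 10.0, 4.0)
print(beta, pf, pf_mc)  # beta = 2.0, pf = Phi(-2) ~ 0.0228
```

Varying the assumed distributions of R and S while holding the first two moments fixed is a direct way to observe the tail-sensitivity problem the chapter describes.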
Cost estimation – this topic parallels demand estimation in many ways, in terms of the nature of the estimation process involved. Different types of cost scenario are described, explaining the differences between short-run, long-run and learning curve situations. The implications for appropriate model specification are explained, along with the interpretation of different mathematical forms. Cost elasticities and their relationship to returns to scale are discussed. For each scenario the nature of empirical studies is described, the method of estimation using regression analysis is explained, and the problems of estimation and their implications for managerial decision making are discussed. As with other topics, case studies are important in illustrating the application of principles to real-life situations. Three case studies are presented, all involving recent data from major industries where digital applications are important: banking, airlines and electricity generation.
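As a sketch of the regression approach, the snippet below fits a log-linear (power-law) cost function to synthetic data generated from an assumed elasticity. All numbers are hypothetical; the point is only that the OLS slope on logged data recovers the cost elasticity, whose reciprocal measures returns to scale:

```python
import math
import random

# Generate hypothetical data from an assumed cost function C = k * Q**b
# with multiplicative noise, then recover b by OLS on the logs.
rng = random.Random(0)
true_b = 0.8                              # assumed cost elasticity < 1
outputs = [10 * (i + 1) for i in range(40)]
costs = [5.0 * q**true_b * math.exp(rng.gauss(0, 0.05)) for q in outputs]

x = [math.log(q) for q in outputs]
y = [math.log(c) for c in costs]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
# OLS slope and intercept for log C = a + b * log Q
b_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
a_hat = my - b_hat * mx

# Cost elasticity below one implies economies of scale;
# returns to scale equal the reciprocal of the cost elasticity.
scale = 1.0 / b_hat
print(b_hat, scale)
```

An estimated elasticity near 0.8 implies returns to scale of about 1.25, i.e., total cost rises less than proportionately with output.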
Managerial economics provides a toolbox for solving problems that managers frequently face. It addresses issues relating to any aspect of decision making that ultimately affects the profit of a firm. Although the general methodology of managerial economics has not changed over the decades, there have been rapid and significant changes in the business environment in the last ten years or so, and three new themes have become increasingly important: digitization; behavioural aspects; and globalization. The first of these developments involves aspects of big data and advanced data analytics, the human-machine interface and the interconnectedness of electronic devices. The second relates to psychological aspects of decision making that cause both consumers and managers to engage in behaviour normally referred to as ‘irrational’. The third development is that improvements in technology relating to digitization have made the business world more interconnected. The text makes heavy use of recent case studies involving these three themes, for example on tech firms, Covid-19 and climate change, so students can see how the tools of managerial economics can be applied in real-life situations.
Another useful transform related to the Fourier and Laplace transforms is the Z-transform, which, like the Laplace transform, converts a time-domain function into a frequency-domain function of a generalized complex frequency parameter. But whereas the Laplace transform operates on continuous-time functions, the Z-transform operates on sampled (or “discrete-time”) functions, often called “sequences”. Thus the relationship between the Z-transform and the Laplace transform parallels the relationship between the discrete-time Fourier transform and the continuous-time Fourier transform. Understanding the concepts and mathematics of discrete-time transforms such as the Z-transform is especially important for solving problems and designing devices and systems using digital computers, in which differential equations become difference equations and signals are represented by sequences of data values.
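The defining sum of the one-sided Z-transform can be evaluated numerically in a few lines. The sequence and evaluation point below are hypothetical, chosen so the result can be checked against the closed-form geometric partial sum:

```python
import cmath

def z_transform(seq, z):
    # One-sided Z-transform of a finite sequence:
    # X(z) = sum_n x[n] * z**(-n)
    return sum(x * z ** (-n) for n, x in enumerate(seq))

# Hypothetical check: for x[n] = a**n, n = 0..N-1, the Z-transform is
# the geometric partial sum (1 - (a/z)**N) / (1 - a/z).
a, N = 0.5, 50
seq = [a**n for n in range(N)]
z = 1.2 * cmath.exp(0.3j)                 # any point with |z| > |a|
numeric = z_transform(seq, z)
closed_form = (1 - (a / z) ** N) / (1 - a / z)
print(numeric, closed_form)

# Evaluating X(z) on the unit circle, z = e^{j*omega}, yields the
# discrete-time Fourier transform of the sequence.
dtft_at_omega = z_transform(seq, cmath.exp(0.5j))
```

The last line illustrates the parallel drawn above: restricting z to the unit circle recovers the discrete-time Fourier transform, just as restricting the Laplace variable to the imaginary axis recovers the continuous-time Fourier transform.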
Chapter 15 describes the use of the Bayesian network (BN) methodology for reliability assessment and updating of structural and infrastructure systems. A brief review of the BN as a graphical representation of random variables and an efficient framework for encoding their joint distribution and its updating upon observations is presented. D-separation rules describing the flow of information within the network upon observation of random variables are described and methods are presented for discretizing continuous random variables, thus allowing the use of efficient algorithms applicable to BNs with discrete nodes. Efficient BN models for components, systems, random fields, and seismic hazard are developed. For time- or space-variant problems, the dynamic Bayesian network is introduced. This model is used in conjunction with structural reliability methods (FORM, SORM, simulation) to develop enhanced BNs to solve reliability problems for structures under time-varying loads. Detailed examples are presented, including post-earthquake risk assessment of a spatially distributed infrastructure system and reliability assessment of a deteriorating structure under stochastic loads. The chapter concludes with a discussion of the potential of the BN as a tool for near-real-time risk assessment and decision support for constructed facilities, and the need for further research and development to realize this potential.
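A minimal sketch of the updating step underlying a BN, reduced to a single parent-child pair with hypothetical deterioration and inspection probabilities (exact inference by enumeration; real BN models and software handle far larger networks with many interconnected nodes):

```python
def bn_update(prior, likelihood, observed):
    # Posterior over a discrete parent node given one observed child,
    # computed by enumeration (Bayes' rule) for this two-node network.
    unnorm = {s: prior[s] * likelihood[s][observed] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Hypothetical condition node: the structure is 'damaged' or 'intact';
# an inspection detects damage imperfectly.
prior = {"damaged": 0.1, "intact": 0.9}
likelihood = {
    "damaged": {"detect": 0.8, "no_detect": 0.2},
    "intact":  {"detect": 0.1, "no_detect": 0.9},
}
posterior = bn_update(prior, likelihood, "detect")
print(posterior["damaged"])  # 0.08 / 0.17, about 0.471
```

Observing the inspection outcome raises the damage probability from 0.1 to roughly 0.47; in a full BN the same evidence would propagate through the network according to the d-separation rules described above.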
Chapter 5 presents methods for assessing structural reliability under incomplete probability information, i.e., when complete distributional information on the basic random variables is not available. First, second-moment methods are presented where the available information is limited to the means, variances, and covariances of the basic random variables. These include the mean-centered first-order second-moment (MCFOSM) method, the first-order second-moment (FOSM) method, and the generalized second-moment method. These methods lead to approximate computations of the reliability index as a measure of safety. Lack of invariance of the MCFOSM method relative to the formulation of the limit-state function is demonstrated. The FOSM method requires finding the “design point,” which is the point in a transformed standard outcome space that has minimum distance from the origin. An algorithm for finding this point is presented. Next, methods are presented that incorporate probabilistic information beyond the second moments, including knowledge of higher moments and marginal distributions. Last, a method is presented that employs the upper Chebyshev bound for any given state of probability information. The chapter ends with a discussion of the historical significance of the above methods as well as their shortcomings and argues that they should no longer be used in practice.
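For a linear limit-state function the second-moment reliability index reduces to the ratio of the mean to the standard deviation of g(X), computable from means and covariances alone. A minimal sketch, with hypothetical R-S numbers:

```python
import math

def second_moment_beta(a0, a, mean, cov):
    # Reliability index for a LINEAR limit-state function
    # g(X) = a0 + a^T X, using only first and second moments:
    # beta = mean(g) / std(g).
    mu_g = a0 + sum(ai * mi for ai, mi in zip(a, mean))
    var_g = sum(a[i] * cov[i][j] * a[j]
                for i in range(len(a)) for j in range(len(a)))
    return mu_g / math.sqrt(var_g)

# Hypothetical margin g = R - S with independent R and S
beta = second_moment_beta(
    a0=0.0,
    a=[1.0, -1.0],
    mean=[20.0, 10.0],
    cov=[[9.0, 0.0], [0.0, 16.0]],
)
print(beta)  # 2.0
```

For nonlinear limit-state functions this simple ratio is no longer well defined, which is where the invariance problem of the MCFOSM method and the design-point search of the FOSM method enter.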
This chapter examines two fundamental issues regarding the nature of firms. First, why are they necessary in the business environment? Second, what are their objectives? Addressing these issues involves various aspects of theory which are not always associated with economics: transaction cost theory, property rights theory, motivation theory, information theory and agency theory. Regarding the first issue, the necessity for the existence of firms may appear to be self-evident, but on closer examination we can see that many transactions can be performed between individuals without firms existing at all. The problem is that with complex activities the transaction cost of engagement as individuals can be high, whereas internalizing transactions within firms can reduce this cost. The second issue regarding objectives begins with the concept of profit maximization, and then examines the various assumptions underlying it. Various problem areas related to these assumptions are identified, in particular: the existence of agency problems, the measurement of profit, risk and uncertainty, and multi-product firms. The impact of these problems on firms’ objectives is discussed.
This topic examines the nature of game theory, why it is relevant for managerial decision making, and how it determines decisions. The starting point is an explanation of the nature of game theory in terms of the inter-dependence of decision making, and its wide range of applications in real life. Different types of game and their elements are described. The prisoner’s dilemma illustrates some of the counterintuitive aspects of game theory. Static and dynamic games are analysed, and the different types of equilibrium: dominant strategy equilibrium, iterated dominant strategy equilibrium, Nash equilibrium, subgame perfect Nash equilibrium and mixed strategy equilibrium. Cournot, Bertrand and Stackelberg types of oligopoly and their strategy implications are analysed, and comparisons are drawn between them and with perfect competition and monopoly. Games with uncertain outcomes and repeated games are discussed, along with commitment strategies and credibility. Limitations of standard game theory are discussed, such as the existence of bounded rationality and social preferences. Aspects of behavioural game theory are introduced to account for these factors.
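Pure-strategy Nash equilibria of a small bimatrix game can be found by brute-force enumeration. The payoffs below are one conventional prisoner's-dilemma parameterization (hypothetical numbers; higher is better):

```python
def pure_nash(payoff_a, payoff_b):
    # A strategy pair (i, j) is a pure Nash equilibrium if neither
    # player can gain by deviating unilaterally.
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            best_a = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            best_b = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
            if best_a and best_b:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
print(pure_nash(A, B))  # [(1, 1)]: mutual defection
```

The output illustrates the counterintuitive result cited above: mutual defection is the unique equilibrium even though mutual cooperation would pay both players more.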
Chapter 7 describes the second-order reliability method (SORM), which employs a second-order approximation of the limit-state surface fitted at the design point in the standard normal space. Three distinct SORM approximations are presented. The classical SORM fits the second-order approximating surface to the principal curvatures of the limit-state surface at the design point. This approach requires computing the Hessian (second-derivative matrix) of the limit-state function at the design point and its eigenvalues as the principal curvatures. The second approach computes the principal curvatures iteratively in the process of finding the design point. This approach requires only first-order derivatives of the limit-state function but repeated solutions of the optimization problem for finding the design point. One advantage is that the principal curvatures are found in decreasing order of magnitude and, hence, the computations can be stopped when the curvature found is sufficiently small. The third approach fits the approximating second-order surface to fitting points in the neighborhood of the design point. This approach also avoids computing the Hessian. Furthermore, it corrects for situations where the curvature is zero but the surface is curved, e.g., when the design point is an inflection point of the surface. Results from the three methods are compared numerically.
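One widely used curvature correction of this kind is Breitung's asymptotic formula, which adjusts the first-order estimate Φ(−β) by the principal curvatures at the design point; the β and curvature values below are hypothetical, and this sketch stands in for, rather than reproduces, the chapter's three fitting approaches:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def sorm_breitung(beta, curvatures):
    # Breitung's asymptotic SORM approximation:
    # p_f ~= Phi(-beta) * prod_i (1 + beta * kappa_i)**(-1/2),
    # where kappa_i are the principal curvatures of the limit-state
    # surface at the design point.
    pf = normal_cdf(-beta)
    for kappa in curvatures:
        pf /= math.sqrt(1.0 + beta * kappa)
    return pf

# Hypothetical example: beta = 3 with two mildly convex curvatures
pf_form = normal_cdf(-3.0)
pf_sorm = sorm_breitung(3.0, [0.1, 0.05])
print(pf_form, pf_sorm)
```

With positive (convex toward the origin) curvatures, the second-order estimate is smaller than the first-order one, reflecting the limit-state surface curving away from the origin.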
In human perception, the role of sparse representation has been studied extensively. As we have alluded to in the Introduction, Chapter 1, investigators in neuroscience have revealed that in both low-level and mid-level human vision, many neurons in the visual pathway are selective for recognizing a variety of specific stimuli, such as color, texture, orientation, scale, and even view-tuned object images [OF97, Ser06]. Considering these neurons to form an overcomplete dictionary of base signal elements at each visual stage, the firing of the neurons with respect to a given input image is typically highly sparse.
Chapter 11 addresses time- and/or space-variant structural reliability problems. It begins by classifying problems as encroaching or outcrossing, depending on the nature of their dependence on the time or space variable. A brief review of essentials of random process theory is presented, including second-moment characterization of a process in terms of its mean and auto-covariance functions and its power spectral density. Special attention is given to Gaussian and Poisson processes as building blocks for stochastic load modeling. Bounds on the failure probability are developed in terms of mean crossing rates or using a series-system representation through parameter discretization. A Poisson-based approximation for rare failure events is also presented. Next, the Poisson process is used to build idealized stochastic load models that describe macro-level load changes or intermittent occurrences with random magnitudes and durations. The chapter concludes with the development of the load-coincidence method for combination of stochastic loads. The probability distribution of the maximum combined load effect is derived and used to estimate the failure probability.
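The mean-crossing-rate approach can be sketched for a stationary Gaussian process, combining Rice's formula for the mean upcrossing rate with the Poisson approximation for rare first-passage failures; the threshold and process parameters below are hypothetical:

```python
import math

def rice_upcrossing_rate(threshold, mu, sigma, sigma_dot):
    # Rice's formula: mean rate of upcrossings of a level by a
    # stationary Gaussian process with mean mu, standard deviation
    # sigma, and derivative standard deviation sigma_dot.
    return (sigma_dot / (2.0 * math.pi * sigma)) * math.exp(
        -((threshold - mu) ** 2) / (2.0 * sigma ** 2))

def poisson_failure_prob(nu, T):
    # Poisson approximation for a rare first-passage failure event:
    # p_f(0, T) ~= 1 - exp(-nu * T)
    return 1.0 - math.exp(-nu * T)

# Hypothetical standardized process crossing a high threshold
nu = rice_upcrossing_rate(threshold=4.0, mu=0.0, sigma=1.0, sigma_dot=2.0)
pf = poisson_failure_prob(nu, T=50.0)
print(nu, pf)
```

The approximation treats upcrossings of a high threshold as independent rare events, which is reasonable when the crossing rate is low relative to the duration considered; the chapter's bounds and load-coincidence method refine this idea for combined loads.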