In this paper, we study the almost sure convergence for sequences of asymptotically negative associated (ANA) random variables. As a result, we extend the classical Khintchine–Kolmogorov convergence theorem, Marcinkiewicz strong law of large numbers, and the three series theorem for sequences of independent random variables to sequences of ANA random variables without necessarily adding any extra conditions.
In many scheduling applications, minimizing delays is of high importance. One adverse effect of such delays is that the reward for completion of a job may decay over time. Indeed in healthcare settings, delays in access to care can result in worse outcomes, such as an increase in mortality risk. Motivated by managing hospital operations in disaster scenarios, as well as other applications in perishable inventory control and information services, we consider non-preemptive scheduling of jobs whose internal value decays over time. Because solving for the optimal scheduling policy is computationally intractable, we focus our attention on the performance of three intuitive heuristics: (1) a policy which maximizes the expected immediate reward, (2) a policy which maximizes the expected immediate reward rate, and (3) a policy which prioritizes jobs with imminent deadlines. We provide performance guarantees for all three policies and show that many of these performance bounds are tight. In addition, we provide numerical experiments and simulations to compare how the policies perform in a variety of scenarios. Our theoretical and numerical results allow us to establish rules-of-thumb for applying these heuristics in a variety of situations, including patient scheduling scenarios.
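The three heuristics can be sketched as simple selection rules over a pool of waiting jobs. The `Job` fields and all numerical values below are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

# Hypothetical job model: a reward that decays over time until a deadline.
@dataclass
class Job:
    reward: float    # expected reward if the job is served now
    duration: int    # non-preemptive service time, in periods
    deadline: int    # periods remaining until the job expires

def highest_reward(jobs):
    # Policy (1): serve the job with the largest expected immediate reward.
    return max(jobs, key=lambda j: j.reward)

def highest_reward_rate(jobs):
    # Policy (2): serve the job with the largest reward per unit service time.
    return max(jobs, key=lambda j: j.reward / j.duration)

def earliest_deadline(jobs):
    # Policy (3): prioritize the job whose deadline is most imminent.
    return min(jobs, key=lambda j: j.deadline)

jobs = [Job(10.0, 5, 8), Job(6.0, 2, 3), Job(9.0, 2, 12)]
print(highest_reward(jobs).reward)       # 10.0
print(highest_reward_rate(jobs).reward)  # 9.0 (rate 4.5 beats 2.0 and 3.0)
print(earliest_deadline(jobs).deadline)  # 3
```

Note that the three rules can disagree on the same job pool, which is exactly why the paper's performance guarantees and rules-of-thumb are needed to choose among them.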
Sketches are an essential tool for designers. They allow designers to externalize ideas and are therefore cognitively economical. Sketches also provide the designer with new insights, which play an important role in the emergence of ideas. However, some studies tend to show that sketching does not systematically have a positive effect on idea generation. Our research thus aims to analyze the generative effects of sketches by studying the way sketches support the design strategy of designers. We especially focus on the role of knowledge in comparison with concepts. Three sequences of sketches are analyzed employing C–K design theory; we show that drawings refer to both concepts and knowledge, but mostly to knowledge. In particular, sketching helps the architect mobilize knowledge distant from the initial topic. Moreover, through sketching the designer carries out substantial knowledge-structuring work that we call ‘knowledge preordering’; by carefully selecting, testing and, if necessary, removing knowledge, the designer organizes a strategically built knowledge space. In particular, all elements involving modularity or determinism in the knowledge basis are abandoned. Such knowledge preordering thus allows the building of a splitting knowledge structure, which offers new rules for concept generation and enhances the production of original and disruptive ideas.
In this paper, we establish a fluid limit for a two-sided Markov order book model. The main result states that in a certain asymptotic regime, a pair of measure-valued processes representing the “sell-side shape” and “buy-side shape” of an order book converges to a pair of deterministic measure-valued processes in a certain sense. We also test the fluid approximation on data. The empirical results suggest that the approximation is reasonably good for liquidly traded stocks in certain time periods.
We propose a novel class of unsupervised learning-based algorithms that extend the conditional restricted Boltzmann machine to predict, in real-time, a lower limb exoskeleton wearer's intended movement type and future trajectory. During training, our algorithm automatically clusters unlabeled exoskeletal measurement data into movement types. Our predictor then takes as input a short time series of measurements, and outputs in real-time both the movement type and the forward trajectory time series. Physical experiments with a prototype exoskeleton demonstrate that our method more accurately and stably predicts both movement type and the forward trajectory compared to existing methods.
Given a graph F, let st(F) be the number of subdivisions of F, each with a different vertex set, which one can guarantee in a graph G in which every edge lies in at least t copies of F. In 1990, Tuza asked for which graphs F and large t, one has that st(F) is exponential in a power of t. We show that, somewhat surprisingly, the only such F are complete graphs, and for every F which is not complete, st(F) is polynomial in t. Further, for a natural strengthening of the local condition above, we also characterize those F for which st(F) is exponential in a power of t.
Ultrasonic Power Transfer (UPT) has been developed as an alternative solution for achieving wireless power transfer. This paper proposes a new model describing UPT systems with tightly coupled piezoelectric transducers firmly bound to solid media. The model is derived from the short-circuit admittance of the system measured from the primary transducer. The mechanical characteristics of the system are modeled with parallel LCR branches, which reveal the fundamental relationships between the power transfer characteristics of the tightly coupled UPT system and system parameters. The loading conditions for achieving the maximum power transfer are identified, and the operating frequencies corresponding to the peak power transfer points for variable loads are determined. A practical UPT system is built with two 28 kHz Langevin-type piezoelectric transducers connected to a 5 mm-thick aluminum plate, and the practical results have verified the accuracy of the proposed model.
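The parallel-LCR representation of a mechanical resonance can be illustrated with a single series R-L-C branch, whose admittance peaks at the mechanical resonance frequency. The component values below are made-up placeholders for illustration, not measured parameters of the 28 kHz Langevin system:

```python
import math

# Admittance of one series R-L-C branch (one mechanical resonance):
# Y(w) = 1 / (R + jwL + 1/(jwC)); several such branches in parallel
# model a transducer with multiple resonances.
def branch_admittance(omega, L, C, R):
    return 1.0 / (R + 1j * omega * L + 1.0 / (1j * omega * C))

# Hypothetical branch values (H, F, ohms) chosen for easy arithmetic.
L, C, R = 0.05, 2e-7, 10.0

# Series resonance: the reactive terms cancel, so |Y| peaks at 1/R.
f_res = 1.0 / (2 * math.pi * math.sqrt(L * C))
Y = branch_admittance(2 * math.pi * f_res, L, C, R)
print(f_res)    # ~1591.5 Hz for these values
print(abs(Y))   # ~0.1 S, i.e., 1/R at resonance
```

This is the basic mechanism by which the branch parameters determine the peak power-transfer frequencies for a given load.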
We discuss in this chapter the socio-economic aspects of cities, and we start by revisiting the classical models of urban economics such as the Alonso-Muth-Mills and Beckmann models. We include all the details of the derivations for these models, allowing non-economists to follow and understand the basic assumptions and results used in urban economics. We then discuss spatial income segregation in cities, from both an empirical and a modeling point of view. We propose a discussion of the Schelling model and its relation to statistical physics. We end this chapter by presenting scaling ideas and theoretical approaches for computing the exponents.
Most of the world's people are now living in cities and urbanization is expected to keep increasing in the near future. The resulting challenges are complex, difficult to handle, and range from increasing dependence on energy, to the emergence of socio-spatial inequalities, to serious environmental and sustainability issues. Understanding and modeling the structure and evolution of cities is then more important than ever as policy makers are actively looking for new paradigms in urban and transport planning.
The recent advances obtained in the understanding of cities have generated increased attention to the potential implications of new theoretical models in agreement with data. Questions such as urban sprawl, effects of congestion, dominant mechanisms governing the spatial distribution of activities and residences, and the effect of new transportation infrastructures are fundamental questions that we need to understand if we want a harmonious development of cities in the future, from both social and economic points of view.
Cities have long been the subject of numerous studies in a large number of fields. Discussion of the ideal city can be traced back at least to the Renaissance, and more recently scientists have tried to describe quantitatively the formation and evolution of cities. Regional science and then quantitative geography addressed various problems such as the spatial organization of cities, the impact of infrastructures, and transport. It is remarkable to note that as early as the 1970s quantitative geographers realized the crucial importance of networks in these systems, and produced visionary studies about networks, their evolution, and the complexity of cities (Haggett et al. 1977).
These studies were further developed mathematically by economists who discussed the interplay between space and economic aspects in cities. Many important models find their origin in the seminal paper of von Thünen and describe isolated, monocentric cities in terms of utility maximization subject to budget constraints. These models allowed spatial economics to get a grasp of the relations between space, income, and transportation. For example, the Japanese economists Fujita and Ogawa discussed the impact of agglomeration effects between firms in a general model that deals with the location choices of individuals and companies.
The modules of parallel tool heads with 2R1T degrees of freedom (DOFs), i.e., two rotational DOFs and one translational DOF, have become so important in the field of machine tools that corresponding research studies have attracted extensive attention from both academia and industry. A 3-PUU (P represents a prismatic joint, U represents a universal joint) parallel mechanism with 2R1T DOFs is proposed in this paper, and a detailed discussion about its architecture, geometrical constraints, and mobility characteristics is presented. Furthermore, on the basis of its special geometrical constraint, we derive and explicitly express the parasitic motion of the 3-PUU mechanism. Then, the inverse kinematics problem, the Jacobian matrix calculation and the forward kinematics problem are also investigated. Finally, with a simplified dynamics model, the inverse dynamics analysis for the mechanism is carried out with the principle of virtual work, and the corresponding results are compared with those of the 3-PRS mechanism. The above analyses illustrate that the 3-PUU parallel mechanism has good dynamics features, which validates the feasibility of applying this mechanism as a tool head module.
In the previous chapter, we discussed the analysis and modeling of mobility patterns in cities. However, as cities expand, their transportation networks are also growing, with increasing interconnections between different transportation modes. In large cities, we can now choose the transportation mode to travel from one point to another, and this trip can even involve several different modes. This multimodality is a new aspect of large cities and brings new questions and problems. From the users' point of view, it becomes difficult to deal with the huge amount of information needed for describing the different transportation networks and their interconnections. From the transport agencies' point of view, the management task becomes harder because the different modes are usually run by separate agencies; this renders optimization difficult owing to the large number of aspects that have to be taken into account (Guo and Wilson 2011).
In particular, an important problem concerning multimodality is the synchronization between different modes. For example, on average in the UK, 23% of travel time is lost in connections for trips with more than one mode (Gallotti and Barthelemy, 2014). This lack of synchronization between modes induces differences between the theoretical quickest trip and the “time-respecting” path, which takes into account waiting times at interconnection nodes.
In order to address these problems and more generally to understand the impact of the coupling between modes, we need new tools in order to identify the main factors that govern their efficiency. The multilayer network approach seems to be the most convenient framework for studying these systems (Kivelä et al. 2014; Boccaletti et al. 2014). In this framework, each layer represents a mode and intermodal connections are represented by inter-layer links. In this chapter we discuss some aspects of multimodality and present tools for measuring and characterizing these coupled networks and their efficiency as a whole.
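The layer/inter-layer structure can be made concrete with a minimal sketch: nodes are (layer, station) pairs, intra-layer edges carry travel times, and inter-layer links carry transfer (waiting) times, so a single shortest-path computation accounts for the coupling between modes. The layer names, stations, and times below are illustrative, not real data:

```python
import heapq

# Nodes are (mode, station) pairs; edge weights are minutes.
# The ("bus", "A") -> ("rail", "A") link is an inter-layer transfer.
edges = {
    ("bus", "A"):  [(("bus", "B"), 10), (("rail", "A"), 5)],
    ("bus", "B"):  [(("bus", "C"), 10)],
    ("rail", "A"): [(("rail", "C"), 12)],
    ("rail", "C"): [(("bus", "C"), 4)],
    ("bus", "C"):  [],
}

def quickest_path(source, target):
    # Dijkstra on the layered graph: transfers are just edges, so the
    # "time-respecting" quickest trip across modes falls out directly.
    dist, heap = {source: 0}, [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

print(quickest_path(("bus", "A"), ("bus", "C")))  # 20: staying on the bus
                                                  # beats transferring (21)
```

Changing the transfer weight at ("bus", "A") is enough to flip the optimal route between modes, which illustrates why synchronization between layers matters so much for overall efficiency.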
A multilayer network view of urban navigation
Empirical observations of multimodality
We first describe empirical results obtained by Gallotti and Barthelemy (2014) from timetables for the whole UK and for all transport modes. We note that these results were not obtained from traffic data (which are usually difficult to get). Instead, we used these timetable data together with the assumption of uniform demand.
As we discussed in Chapter 1, “understanding” has many definitions, with a variable amount of quantitative input. For a physicist, understanding does not only mean having a story consistent with reality, but also having mathematical tools and models able to describe real phenomena and to predict the outcome of experiments. Even if a qualitative description of processes is somewhat satisfying, it is not enough for constructing a science of cities. Indeed, we would like to identify the most important parameters, not only to understand the past, but also to be able to construct a model that gives, with a reasonable confidence, the future evolution of a city and to test the impact of various policies.
At this point, we certainly have a number of pieces of the puzzle, and we have discussed some of them in this book. This does not, however, mean that we have solved the full puzzle. New data sources and large datasets allow us to get a precise idea of what is happening in cities. We are currently experiencing an exciting time during which we can challenge the purely theoretical developments made these last decades. In many empirical studies, the identification of relevant factors was essentially done statistically, and we can now hope to go beyond and to have a more mechanistic approach, where a model based on simple processes is able to reproduce empirical observations.
Concerning the spatial structure of cities, new data sources give us a real-time, high-resolution picture of mobility. The structure of mobility flows that come out of these datasets departs from the usual image of a monocentric city where flows converge towards the central business district. Instead, for large cities, the main flows appear to be far from the simple pattern, linking centers of residence to centers of activity, that we could have naïvely expected. This massive amount of data also allows us to quantitatively assess the degree of polycentricity of an urban system. A simple model showed that congestion is a crucial factor in understanding the evolution of polycentricity and mobility patterns with the population size.
The beginning of statistical physics can be traced back to thermodynamics in the nineteenth century. The field is still very active today, with modern problems occurring in out-of-equilibrium systems. The first problems (up to c. 1850) were to describe the exchange of heat through work and to define concepts such as temperature and entropy. A little later many studies were devoted to understanding the link between a microscopic description of a system (in terms of atoms and molecules) and a macroscopic observation (e.g., the pressure or the volume of a system). The concepts of energy and entropy could then be made more precise, leading to an important formalization of the dynamics of systems and their equilibrium properties.
More recently, during the twentieth century, statistical physicists invested much time in understanding phase transitions. The typical example is a liquid that undergoes a liquid-to-solid transition when the temperature is lowered. This very common phenomenon turned out, however, to be quite complex to understand and to describe theoretically. Indeed, this type of “emergent behavior” is not easily predictable from the properties of the elementary constituents and as Anderson (1972) put it: “… the whole becomes not only more than but very different from the sum of its parts.” In these studies, physicists understood that interactions play a critical role: without interactions there is usually no emergent behavior, since the new properties that appear at large scales result from the interactions between constituents. Even if the interaction is “simple,” the emergent behavior might be hard to predict or describe. In addition, the emergent behavior depends, not on all the details describing the system, but rather on a small number of parameters that are actually relevant at large scales (see for example Goldenfeld 1992).
Statistical physics thus primarily deals with the link between microscopic rules and macroscopic emergent behavior and many techniques and concepts have been developed in order to understand this translation – among them the notion of relevant parameters, but also the idea that at each level of description of a system there is a specifically adapted set of tools and concepts.
We discuss here modeling approaches for explaining the population distribution characterized by the famous Zipf's law. We start with the classical models (Gibrat and Gabaix) and discuss their derivation, results, and limits. We then propose a discussion of a new approach based on stochastic diffusion. We also revisit central place theory from a quantitative point of view and show that most of Christaller's results can be understood in terms of spatial fluctuations.
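The core of Gibrat's model can be sketched in a few lines: every city grows by a random multiplicative factor each period, independent of its size. All parameter values below are illustrative. Note that pure proportional growth of this kind yields a lognormal size distribution; Gabaix's contribution was to show that adding a lower barrier (a minimal city size) turns it into a Zipf law:

```python
import random
import statistics

# Toy Gibrat-style simulation: proportional (size-independent) random growth.
random.seed(42)
cities = [100.0] * 1000          # 1000 cities, equal initial population

for _ in range(200):             # 200 "years" of multiplicative growth
    cities = [s * random.lognormvariate(0.0, 0.1) for s in cities]

cities.sort(reverse=True)
# Log-sizes are now approximately normal, so the largest city ends up far
# above the median even though all cities started identical.
print(cities[0] / statistics.median(cities))
```

Even this crude sketch reproduces the qualitative point of the chapter: heavy inequality in city sizes emerges from growth rules that treat all cities identically, with no intrinsic advantage for any of them.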
Mobility is obviously a crucial phenomenon in cities. In fact, it is probably one of the most important mechanisms that govern the structure and dynamics of cities. Indeed, individuals go to cities to buy, sell or exchange goods, to work, or to meet with other individuals and for this they need different transportation means. This is where technology enters the problem through the (average) velocity of transportation modes. This average velocity increased during the evolution of technology and modified the structure and organization of cities. For example, we see in Fig. 5.1 that the “horizon” of an individual depends strongly on her transportation mode. For a walker, the horizon is essentially isotropic and small, while the car allows for a wider exploration but one which is anisotropic and follows transportation infrastructures. This correlation between the spatial structure of the city and the available technology at the moment of its creation is clearly illustrated by Anas et al. (1998) for US cities. Many major cities, such as Denver or Oklahoma City, developed around rail terminals that triggered the formation of central business districts. In contrast, automobile-era cities that developed later, such as Dallas or Houston, have a spatial organization that is essentially determined by the highway system.
In terms of mobility, the city center is also the location that minimizes the average distance to all other locations in the city. Very naturally, it is then the main attraction for businesses and residences, which leads to competition for space between individuals or firms, giving rise to the real-estate market. There is also a well-known relation between land-use and accessibility, as was discussed some time ago by Hansen (1959), and new, extensive datasets will certainly enable us in the future to characterize precisely the relation between these important factors.
It is of course very difficult to make an exhaustive review about all studies on mobility and we will focus in this chapter on several specific points. We will mostly describe the general features of mobility and will leave the discussion about multimodal aspects for Chapter 6.