As we discussed in Chapter 1, “understanding” has many definitions, with a variable amount of quantitative input. For a physicist, understanding does not only mean having a story consistent with reality, but also having mathematical tools and models able to describe real phenomena and to predict the outcome of experiments. Even if a qualitative description of processes is somewhat satisfying, it is not enough for constructing a science of cities. Indeed, we would like to identify the most important parameters, not only to understand the past, but also to be able to construct a model that gives, with reasonable confidence, the future evolution of a city and to test the impact of various policies.
At this point, we certainly have a number of pieces of the puzzle, and we have discussed some of them in this book. This does not, however, mean that we have solved the full puzzle. New data sources and large datasets allow us to get a precise idea of what is happening in cities. We are currently experiencing an exciting time during which we can challenge the purely theoretical developments made in recent decades. In many empirical studies, the identification of relevant factors was done essentially statistically, and we can now hope to go beyond this and adopt a more mechanistic approach, in which a model based on simple processes is able to reproduce empirical observations.
Concerning the spatial structure of cities, new data sources give us a real-time, high-resolution picture of mobility. The structure of mobility flows that emerges from these datasets departs from the usual image of a monocentric city where flows converge towards the central business district. Instead, for large cities, the main flows are far from being localized between well-defined centers of residence and activity, as we might naïvely have expected. This massive amount of data also allows us to quantitatively assess the degree of polycentricity of an urban system. A simple model showed that congestion is a crucial factor in understanding the evolution of polycentricity and mobility patterns with population size.
The beginning of statistical physics can be traced back to thermodynamics in the nineteenth century. The field is still very active today, with modern problems occurring in out-of-equilibrium systems. The first problems (up to c. 1850) were to describe the conversion between heat and work and to define concepts such as temperature and entropy. A little later, many studies were devoted to understanding the link between a microscopic description of a system (in terms of atoms and molecules) and a macroscopic observation (e.g., the pressure or the volume of a system). The concepts of energy and entropy could then be made more precise, leading to an important formalization of the dynamics of systems and their equilibrium properties.
More recently, during the twentieth century, statistical physicists invested much time in understanding phase transitions. The typical example is a liquid that undergoes a liquid-to-solid transition when the temperature is lowered. This very common phenomenon turned out, however, to be quite complex to understand and to describe theoretically. Indeed, this type of “emergent behavior” is not easily predictable from the properties of the elementary constituents, and, as Anderson (1972) put it, “… the whole becomes not only more than but very different from the sum of its parts.” In these studies, physicists understood that interactions play a critical role: without interactions there is usually no emergent behavior, since the new properties that appear at large scales result from the interactions between constituents. Even if the interaction is “simple,” the emergent behavior might be hard to predict or describe. In addition, the emergent behavior depends, not on all the details describing the system, but rather on a small number of parameters that are actually relevant at large scales (see for example Goldenfeld 1992).
Statistical physics thus primarily deals with the link between microscopic rules and macroscopic emergent behavior, and many techniques and concepts have been developed in order to understand this translation – among them the notion of relevant parameters, but also the idea that at each level of description of a system there is a specifically adapted set of tools and concepts.
We discuss here modeling approaches for explaining the population distribution characterized by the famous Zipf's law. We start with the classical models of Gibrat and Gabaix, and discuss their derivations, results, and limits. We then propose a discussion of a new approach based on stochastic diffusion. We also revisit central place theory from a quantitative point of view and show that most of Christaller's results can be understood in terms of spatial fluctuations.
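To make this mechanism concrete, here is a minimal simulation sketch (in Python; all parameter values are illustrative assumptions of ours, not figures from the text) of Gibrat-style multiplicative growth with a reflecting lower barrier, the ingredient identified by Gabaix as producing Zipf's law:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_steps = 10_000, 5_000
s_min = 1.0                              # reflecting lower barrier (assumed)

sizes = np.full(n_cities, 2.0)
for _ in range(n_steps):
    # i.i.d. multiplicative shocks with a slightly negative log-drift
    shocks = rng.lognormal(mean=-0.005, sigma=0.1, size=n_cities)
    sizes = np.maximum(sizes * shocks, s_min)

# Rank-size relation: Zipf's law corresponds to a log-log slope of -1
ranked = np.sort(sizes)[::-1]
ranks = np.arange(1, n_cities + 1)
top = slice(0, 1000)                     # fit on the largest cities only
slope, _ = np.polyfit(np.log(ranked[top]), np.log(ranks[top]), 1)
print(f"rank-size exponent: {slope:.2f} (Zipf predicts -1)")
```

For such a reflected multiplicative process with log-shocks of mean μ < 0 and variance σ², the stationary tail exponent is −2μ/σ², so the values above are tuned to give an exponent close to 1, i.e. Zipf's law.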
Mobility is obviously a crucial phenomenon in cities. In fact, it is probably one of the most important mechanisms that govern the structure and dynamics of cities. Indeed, individuals go to cities to buy, sell or exchange goods, to work, or to meet with other individuals, and for this they need various means of transportation. This is where technology enters the problem, through the (average) velocity of transportation modes. This average velocity has increased as technology evolved, modifying the structure and organization of cities. For example, we see in Fig. 5.1 that the “horizon” of an individual depends strongly on her transportation mode. For a walker, the horizon is essentially isotropic and small, while the car allows for a wider exploration, one that is anisotropic and follows transportation infrastructures. This correlation between the spatial structure of the city and the technology available at the moment of its creation is clearly illustrated by Anas et al. (1998) for US cities. Many major cities, such as Denver or Oklahoma City, developed around rail terminals that triggered the formation of central business districts. In contrast, automobile-era cities that developed later, such as Dallas or Houston, have a spatial organization that is essentially determined by the highway system.
In terms of mobility, the city center is also the location that minimizes the average distance to all other locations in the city. Very naturally, it is then the main attraction for businesses and residences, which leads to competition for space between individuals or firms, giving rise to the real-estate market. There is also a well-known relation between land use and accessibility, as was discussed some time ago by Hansen (1959), and new, extensive datasets will certainly enable us in the future to characterize precisely the relation between these important factors.
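As a toy illustration of this accessibility argument (entirely schematic, and not taken from the book), one can find the most accessible of a set of random locations by brute force:

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0.0, 10.0, size=(500, 2))   # locations in a 10x10 "city"

# Average Euclidean distance from each location to all the others
dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
avg_dist = dists.mean(axis=1)

center = points[avg_dist.argmin()]
print("most accessible location:", center)       # lies close to the centroid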
It is of course very difficult to make an exhaustive review of all studies on mobility, and we will focus in this chapter on several specific points. We will mostly describe the general features of mobility and will leave the discussion of multimodal aspects for Chapter 6.
The locations of homes, activities, and businesses shape a city, and identifying the mechanisms that govern these spatial distributions is crucial for our understanding of these systems. We present here some recently discussed aspects, which may provide a basis for further insights. We will begin with a discussion of the location of stores and facilities, which are very likely governed by considerations of optimality.
We will then discuss the polycentric aspects of cities, starting with their identification and measurement. We will describe how to characterize and measure an activity center – a “hotspot” – defined as a local maximum of the activity density. The important empirical result is that the number of these hotspots scales sublinearly with population size.
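As a rough sketch of how such hotspots can be extracted from gridded data (we use a synthetic density here; the empirical studies rely on mobile-phone activity data and a more careful thresholding criterion):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(2)
density = gaussian_filter(rng.random((200, 200)), sigma=8)   # toy activity map

local_max = density == maximum_filter(density, size=15)      # local maxima
significant = density > density.mean() + density.std()       # crude cutoff
hotspots = np.argwhere(local_max & significant)
print(f"{len(hotspots)} hotspots found")
```

Counting the hotspots obtained in this way for cities of different population sizes P then allows one to test the sublinear scaling of their number with P.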
We continue by describing two classical theoretical models for polycentricity: the Fujita–Ogawa model, proposed in the 1980s, which relies on the idea that agglomeration effects are responsible for polycentricity, and the edge-city model proposed by Krugman. As we shall see, these models cannot, however, explain the scaling of the number of hotspots with population and this leads us to reconsider the classical Fujita–Ogawa model in order to derive a result in agreement with empirical observations.
Optimal locations
Distribution of public facilities
Public facilities such as airports, post offices, and hospitals have to be distributed according to the local population density in order to optimize their efficiency. These facilities constitute an important part of the urban structure and help to shape the spatial distribution of population. It is therefore important to understand the organization of these particular places.
We can measure these spatial distributions, and the natural null model to compare against these empirical observations is the optimal case where the average distance from an individual to the nearest facility is minimized (Gastner and Newman 2006), and we follow here the derivation given by Gusein-Zade (1982).
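The heart of that derivation fits in a few lines. If D(r) denotes the local density of facilities, the typical distance from an individual at r to the nearest facility scales as D(r)^{−1/2}, and minimizing the population-weighted average of this distance at a fixed total number n of facilities gives:

```latex
% Mean distance to the nearest facility, to be minimized over D(r):
\[
\bar{d} \;\propto\; \int \rho(\mathbf{r})\, D(\mathbf{r})^{-1/2}\, d^2r ,
\qquad \text{with} \quad \int D(\mathbf{r})\, d^2r = n .
\]
% Introducing a Lagrange multiplier \lambda and setting the functional
% derivative with respect to D(r) to zero:
\[
-\tfrac{1}{2}\,\rho(\mathbf{r})\, D(\mathbf{r})^{-3/2} + \lambda = 0
\quad\Longrightarrow\quad
D(\mathbf{r}) \;\propto\; \rho(\mathbf{r})^{2/3} .
\]
```

The optimal facility density thus scales as the 2/3 power of the population density, a sublinear relation that can be compared directly with the measured spatial distributions.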
Infrastructure such as transportation networks for individuals and freight, power grids or distribution systems is crucial for our societies and the good functioning of cities. All these networks are embedded in space: nodes have a position and links have a certain length, and hence a cost associated with their formation and maintenance. These “spatial networks” (Barthelemy 2011) necessitate specific tools for their characterization and in this chapter we first review some of the most important ones. We then focus on transportation networks, starting with the road and street network, followed by subway networks. We end this chapter with a digression on the railroad network and we discuss the importance of the spatial scale and the main differences between subways and railroads.
Roads and streets: patterns
An important component of cities is their street network (in the following, we will not distinguish between streets and roads). In these networks, nodes represent intersections and links are the road segments between consecutive intersections. These networks can be thought of as a simplified view of cities that captures a large part of their structure and organization (Southworth and Ben-Joseph 2003), and they contain a large amount of information about the underlying and universal mechanisms at play in their formation and evolution. Identifying the main mechanisms in these systems is not a new task (Haggett and Chorley 1969; Xie and Levinson 2011), but the recent availability of digitized maps, historical or contemporary (see Chapter 1), allows us to test ideas and models on large-scale cross-sectional and historical data.
Street networks are approximately planar graphs and are now fairly well characterized (Jiang and Claramunt 2004; Rosvall et al. 2005; Porta et al. 2006a,b; Lämmer et al. 2006; Crucitti et al. 2006; Cardillo et al. 2006; Xie and Levinson 2007; Jiang 2007; Masucci et al. 2009; Chan et al. 2011; Courtat et al. 2011).
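As a schematic example of this graph representation (with a toy grid standing in for a real street network; tools such as OSMnx can construct the same kind of length-annotated graph from actual map data):

```python
import networkx as nx

G = nx.grid_2d_graph(20, 20)            # toy "gridiron" street pattern
for u, v in G.edges:
    G.edges[u, v]["length"] = 1.0       # block length, arbitrary units

n, m = G.number_of_nodes(), G.number_of_edges()
total = sum(d["length"] for _, _, d in G.edges(data=True))
print(f"average degree <k> = {2 * m / n:.2f}")   # planarity forces <k> < 6
print(f"total network length = {total:.0f}")
```

Planarity already constrains such networks strongly: a simple planar graph satisfies m ≤ 3n − 6, so its average degree is necessarily below 6, consistent with the values measured for real street networks.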
In this first chapter, we propose a rough, synthetic view of cities, retaining what we believe to be some salient features from a quantitative point of view. There are many books and reviews giving countless details and figures about cities (in particular, the reader can consult the updated versions of reports produced by the UN – see for example “World Urbanization Prospects: The 2014 Revision”), and instead of offering a long list of various properties of cities (which can be found in different books and reliable sources such as the Census Bureau for the US, the UN, the OECD, or the World Bank), we focus here on a small set of key figures and discuss the important scales that manifest themselves in cities.
Cities are complex objects with many different temporal and spatial scales, related to a large number of processes. While a small set of numbers is certainly not enough to describe the full complexity of cities, such numbers can nevertheless allow for quantitative studies and for a large-scale characterization of urban systems. There is much variety among cities in terms of morphology, population, density distribution, and also functions, yet despite these differences, we observe statistical regularities for some observables. Indeed, we can expect that large systems composed of a large number of constituents lead to collective behaviors characterized by statistical regularities. Another reason for this “universality” is the existence of fundamental processes common to all cities: spatial organization of activities and residences, mobility of individuals, and so on. One of the most challenging problems of a science of cities is then to identify the minimal set of mechanisms that describe the evolution of cities.
A science of cities
The nature of the problem
A central issue in understanding urbanization is the large number of entangled, time-varying processes that generate cities. Many disciplines such as quantitative geography or urban economics have addressed some aspects of this problem and produced either very abstract models, or, at the other extreme, simulations with very large numbers of parameters designed for specific locales.
Although nanoparticles have been shown to have clear technological advantages, their use in some consumer products remains controversial, particularly where these products come in direct contact with our bodies. There has been much discussion about using metal oxide nanoparticles in sunscreens, and numerous technology assessments aimed at predicting the type, size, and concentration of nanoparticles and surface treatments that will be best for consumers. Yet the optimal configuration is ultimately the one that people actually want and are willing to pay for, and until now consumer preferences have not been included in model predictions. We describe and discuss a proof-of-concept study in which we design and implement a hypothetical sunscreen product configurator to predict how people trade off sun protection factor (SPF), product transparency, and potential toxicity from reactive oxygen species (ROS) when configuring their most preferred sunscreen. We also show that preferred nanoparticle sizes and concentrations vary across demographic groups. Our results suggest that while consumers choose to reduce or eliminate potential toxicity when possible, they do not automatically sacrifice high SPF and product transparency to avoid the possibility of toxicity from ROS. We discuss some advantages of using product configurators to study potential product designs and suggest some future research possibilities.
Many students complete PhDs in functional programming each year. As a service to the community, the Journal of Functional Programming publishes the abstracts from PhD dissertations completed during the previous year.
In light of the increased interest in e-mobility, comfortable and safe charging systems, such as inductive charging systems, are gaining importance. Several standardization bodies are developing guidelines and specifications for inductive power transfer systems in order to ensure good interoperability between the different coil architectures from the various car manufacturers, wireless power transfer suppliers, and infrastructure companies. A combination of a bipolar magnetic coil design on the primary side with a secondary solenoidal coil promises good magnetic coupling and high transmitted power with small dimensions. In order to gain a profound understanding of the influence and behavior of the main variables of the coil system, a detailed parameter study is conducted in this paper. Based on these findings, a solenoid was designed for a specific application, and this design was then optimized. The dimensions of the system could be reduced by 50% while keeping the coupling factor constant. Besides the reduction of the dimensions, and subsequently the cost of the system, the stray field could also be reduced significantly.
Adding active toe joints to a humanoid robot structure poses several difficulties, such as mounting a small motor and an encoder on the robot's feet. Conversely, adding passive toe joints is simple, since they consist only of a spring and a damper. Given the many benefits of passive toe joints reported in the literature, the goal of this study is to add passive toe joints to the SURENA III humanoid robot, which was designed and fabricated at the Center of Advanced Systems and Technologies (CAST), University of Tehran. To this end, a simple passive toe joint is first designed and fabricated. Then, its stiffness and damping coefficients are calculated using a vision-based measurement. Afterwards, a gait planning routine for humanoid robots equipped with passive toe joints is implemented. The tip-over stability of the gait is studied, considering the vibration of the passive toe joints in swing phases. The multi-body dynamics of the robot equipped with passive toe joints are derived using the Lagrange approach. Furthermore, a system identification routine is adopted to model the dynamic behavior of the power transmission system. By adding the calculated actuating torques for these two models, the whole dynamic model of the robot is computed. Finally, the performance of the proposed approach is evaluated through several simulations and experimental results. The results show that using passive toe joints reduces the energy consumption of the ankle and knee joints by 15.3% and 9.0%, respectively. Moreover, with relatively large stiffness coefficients, the required torque and power of the knee and hip joints during heel-off motion decrease, while the torque and power of the ankle joint increase.
This paper addresses the question: given some theory T that we accept, is there some natural, generally applicable way of extending T to a theory S that can prove a range of things about what it itself (i.e., S) can prove, including a range of things about what it cannot prove, such as claims to the effect that it cannot prove certain particular sentences (e.g., 0 = 1), or the claim that it is consistent? Typical characterizations of Gödel’s second incompleteness theorem, and its significance, would lead us to believe that the answer is ‘no’. But the present paper explores a positive answer. The general approach is to follow the lead of recent (and not so recent) approaches to truth and the Liar paradox.
Subexponential logic is a variant of linear logic with a family of exponential connectives – called subexponentials – that are indexed and arranged in a pre-order. Each subexponential has or lacks associated structural properties of weakening and contraction. We show that a classical propositional multiplicative subexponential logic (MSEL) with one unrestricted and two linear subexponentials can encode the halting problem for two register Minsky machines, and is hence undecidable. We then show how the additive connectives can be directly simulated by giving an encoding of propositional multiplicative additive linear logic (MALL) in an MSEL with one unrestricted and four linear subexponentials.
We sketch the history of spectral ranking – a general umbrella name for techniques that apply the theory of linear maps (in particular, eigenvalues and eigenvectors) to matrices that do not represent geometric transformations, but rather some kind of relationship between entities. Albeit recently made famous by the ample press coverage of Google's PageRank algorithm, spectral ranking was devised more than 60 years ago, in almost exactly the same terms, and has been studied in psychology, the social sciences, bibliometrics, economics, and choice theory. We describe the contributions of previous scholars in precise and modern mathematical terms; along the way, we show how to express damped rankings, such as Katz's index, in a general way as dominant eigenvectors of perturbed matrices, and then use results on the Drazin inverse to go back to the dominant eigenvectors by a limit process. The result suggests a regularized definition of spectral ranking that yields, for a general matrix, a unique vector depending on a boundary condition.
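For concreteness, here is a small numerical sketch (ours, not from the article) of Katz's index in this damped spectral form, x = (I − αAᵀ)⁻¹·1, where the damping α must stay below the inverse of the spectral radius of A:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)       # toy adjacency matrix

lam_max = max(abs(np.linalg.eigvals(A)))        # spectral radius of A
alpha = 0.85 / lam_max                          # damping below 1/lambda_max
katz = np.linalg.solve(np.eye(4) - alpha * A.T, np.ones(4))
print(katz / katz.sum())                        # normalized ranking scores
```

As α approaches 1/λ_max, the solution aligns with the dominant eigenvector direction, which is the limit process the article refers to.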
We analyse the reduction of differential interaction nets from the point of view of so-called ‘true concurrency,’ that is, employing a non-interleaving model of parallelism. More precisely, we associate with each differential interaction net an event structure describing its reduction. We show how differential interaction nets are only able to generate confusion-free event structures, and we argue that this is a serious limitation in terms of the concurrent behaviours they may express. In fact, confusion is an extremely elementary phenomenon in concurrency (for example, it already appears in CCS with just prefixing and parallel composition) and we show how its presence is preserved by any encoding respecting the degree of distribution and the reduction semantics. We thus infer that no reasonably expressive process calculus may be satisfactorily encoded in differential interaction nets. We conclude with an analysis of one such encoding proposed by Ehrhard and Laurent, and argue that it does not contradict our claims, but rather supports them.
From Part I – Basics of Wireless Energy Harvesting and Transfer Technology
By Dusit Niyato (Nanyang Technological University, Singapore), Ekram Hossain (University of Manitoba, Winnipeg, MB, Canada), and Xiao Lu (University of Alberta, AB, Canada)
Energy harvesting is an important aspect of green communication that provides self-sustainable operation of wireless communication systems and networks. Energy harvesting has been adopted in low-power communication devices and sensors, and different forms of energy harvesting are suitable for different applications. Table 1.1 summarizes the different energy harvesting technologies.
• Photovoltaic technology has been developed over decades and is one of the most commonly used energy harvesting techniques. A solar panel, which is composed of multiple solar cells, converts sunlight into a flow of electrons based on the photovoltaic effect: light excites electrons into a higher energy state, and the electrons can then act as charge carriers for an electric current. A solar cell contains a photovoltaic material, e.g., monocrystalline silicon, polycrystalline silicon, amorphous silicon, or copper indium gallium selenide/sulfide. The efficiency of a solar cell can be up to 43.5%, while the average efficiency of a commercial solar cell is 12%–18%. Photovoltaic technology has been adopted in many applications, including rooftop and building-integrated systems, power stations, rural electrification, and telecommunication. However, photovoltaic systems need a large area and cannot supply energy during the night. Moreover, their efficiency depends on the orientation of the solar panel, which can be complicated to optimize. Photovoltaic systems are therefore suitable for static data communication units, e.g., base stations and access points, while their applicability to mobile units, e.g., user equipment, is limited.
• Thermal energy, or heat, can be converted to electricity using a thermoelectric generator based on the Seebeck effect or the Thomson effect, which describe the conversion between a temperature difference and electricity in thermoelectric devices. While thermoelectric devices are typically used for measuring temperature, they have recently been developed to serve as energy sources. The devices can produce 16–20 μW/cm² with the human body as a heat source at room temperature. The benefit of thermoelectric devices is their capability of generating energy as long as there is a temperature difference or a heat flow. Additionally, since they do not have any moving parts, they have high reliability. However, the devices are bulky and heavy, and they can supply only a small amount of energy, with typical efficiencies of approximately 5%–8%.
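To put the figures quoted in the two items above in perspective, here is a back-of-envelope comparison (the device areas are our own assumptions, not from the text):

```python
# Photovoltaic: harvested power = irradiance * efficiency * area
pv_irradiance_mw_cm2 = 100.0   # standard test-condition irradiance, ~100 mW/cm^2
pv_efficiency = 0.15           # mid-range commercial cell (12%-18% quoted above)
pv_area_cm2 = 10.0             # assumed small sensor-node panel
print(f"PV panel: ~{pv_irradiance_mw_cm2 * pv_efficiency * pv_area_cm2:.0f} mW")

# Thermoelectric: harvested power = areal power density * area
teg_density_uw_cm2 = 20.0      # upper figure quoted for body-heat harvesting
teg_area_cm2 = 10.0            # assumed wearable patch
print(f"thermoelectric patch: ~{teg_density_uw_cm2 * teg_area_cm2:.0f} uW")
```

Even with generous assumptions, body-heat thermoelectric harvesting remains roughly three orders of magnitude below photovoltaics, consistent with the caveat that these devices supply only small amounts of energy.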
From Part II – Architectures, Protocols, and Performance Analysis
By Derrick Wing Kwan Ng (University of New South Wales, Sydney, Australia), Shiyang Leng (The Pennsylvania State University, USA), and Robert Schober (Friedrich–Alexander–Universität Erlangen–Nürnberg, Erlangen, Germany)
The development of wireless communication networks worldwide has triggered an exponential growth in the number of wireless communication devices and sensors for applications such as e-health, automated control, environmental monitoring, energy management, and safety management. It is expected that, by 2020, the number of interconnected devices on the planet may reach 50 billion. Recent efforts in next-generation communication system development aim at providing secure, ubiquitous, and high-speed communication with guaranteed quality of service (QoS). However, the related tremendous increase in the number of transmitters and receivers has also led to a huge demand for energy.
A relevant technique for reducing the energy consumption of wireless devices is multiple-input multiple-output (MIMO), since it offers extra degrees of freedom for more efficient resource allocation. In particular, multiuser MIMO, where a transmitter equipped with multiple antennas serves multiple single-antenna receivers, is considered an effective solution for realizing the potential performance gains offered by multiple antennas: it improves the system's spectral efficiency and reduces the transmit power.
On the other hand, battery-powered mobile devices such as wireless sensors have been widely deployed and have become critical components of many wireless communication networks over the past decades. However, batteries have limited energy storage capacity, and their replacement can be costly or even impossible, which creates a performance bottleneck in wireless networks. As a result, energy harvesting technology is foreseen as a viable solution to remove the last wires of wireless devices. The integration of energy harvesting (EH) capabilities into communication devices facilitates the self-sustainability of energy-limited communication systems. Solar, wind, hydroelectric, and piezoelectric sources are the major conventional energy sources for EH. For instance, energy harvesters for wind and solar energy have been successfully integrated into base station transmitters to provide communication services in remote areas [1]. However, the availability of these natural energy sources is usually limited by location, climate, and time of day. Besides, the implementation of conventional energy harvesters may be problematic, and renewable energy from natural sources may not be available in indoor environments. Thus, a new form of controllable energy source for portable wireless devices is needed in order to extend the lifetime of communication networks.