By Riccardo Gallotti, Thomas Louail, Rémi Louf, and Marc Barthelemy, Institut de Physique Théorique, CEA, France
The recent availability of large amounts of data for urban systems opens the exciting possibility of a new science of cities. These datasets can roughly be divided into three large categories according to their time scale, and we illustrate each category with an example bearing on a particular aspect of cities. At small time scales (of the order of a day or less), mobility data provided by cell phones and GPS reveal urban mobility patterns and also provide information about the spatial organization of urban systems. At very large scales, the digitalization of historical maps allows us to study the evolution of infrastructure such as road networks, and permits us to distinguish, on a quantitative basis, self-organized growth from top-down central planning. Finally, at intermediate time scales, we show how socio-economic series provide a useful test for modeling and for identifying fundamental mechanisms governing the structure and evolution of urban systems. All these examples illustrate, to varying degrees, how the empirical analysis of data can help in constructing a theoretically solid approach to urban systems, and in understanding the elementary mechanisms that govern urbanization, leaving aside specific historical, geographical, social, or cultural factors. In this period of rapid urban expansion, such a scientific approach appears more important than ever for understanding the impact of current urban planning decisions on the future evolution of cities.
Big data and urban systems
A common trait shared by all complex systems – including cities – is the existence of a large variety of processes occurring over a wide range of temporal and spatial scales. The main obstacle to the understanding of these systems therefore lies in uncovering the hierarchy of processes and in singling out the few that govern their dynamics. Albeit difficult, the hierarchization of processes is of prime importance. A failure to do so leads either to models which are too complex to give any real insight into the phenomenon or to be validated, or to models too simple to provide a satisfactory framework which can be built upon. As a matter of fact, despite numerous attempts [1–6], a theoretical understanding of many observed empirical regularities in cities is still missing. This situation is, however, changing with the recent availability of an unprecedented amount of data about cities and their inhabitants.
This chapter proposes the use of modern first-order large-scale optimization techniques to manage a cloud-based, densely deployed next-generation wireless network. In the first part of the chapter we survey a few popular first-order methods for large-scale optimization, including the block coordinate descent (BCD) method, the block successive upper-bound minimization (BSUM) method, and the alternating direction method of multipliers (ADMM). In the second part of the chapter, we show that many difficult problems in managing large wireless networks can be solved efficiently and in parallel by modern first-order optimization methods. Extensive numerical results are provided to demonstrate the benefit of the proposed approach.
Introduction
Motivation
The ever-increasing demand for rapid access to large amounts of data anywhere, anytime has been the driving force behind the current development of next-generation wireless network infrastructure. It is projected that within 10 years, wireless cellular networks will offer up to 1000× the throughput of current 4G technology [1]. By that time the network should also be able to deliver a fiber-like user experience, boasting a 10 Gb/s individual transmission rate for data-intensive cloud-based applications.
Achieving this lofty goal requires a revolutionary infrastructure and highly sophisticated resource management solutions. A promising network architecture to meet this requirement is the so-called cloud-based radio access network (RAN), in which a large number of networked base stations (BSs) are deployed for wireless access, while powerful cloud centers at the back end perform centralized network management [1–4]. Intuitively, a large number of networked access nodes, when intelligently provisioned, will offer significantly improved spectrum efficiency, real-time load balancing, and hotspot coverage. In practice, optimal network provisioning is extremely challenging, and its success depends on smart joint backhaul provisioning, physical-layer transmit/receive schemes, BS/user cooperation, and so on.
This chapter proposes the use of modern first-order large-scale optimization techniques to manage a cloud-based, densely deployed next-generation wireless network. We show that many difficult problems in this domain can be solved efficiently and in parallel by advanced optimization algorithms such as the block successive upper-bound minimization (BSUM) method and the alternating direction method of multipliers (ADMM).
The organization of the chapter
To begin with, we introduce a few well-known first-order optimization algorithms. Our focus is on algorithms suited to problems with a certain block structure, in which the optimization variables can be divided into (possibly overlapping) blocks.
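As a concrete illustration of the block structure just described, the following is a minimal sketch of block coordinate descent on a toy least-squares problem; the matrix, block sizes, and iteration count are illustrative assumptions, not values taken from the chapter.

```python
import numpy as np

# Minimal sketch of block coordinate descent (BCD): cycle over variable
# blocks, exactly minimizing the objective over one block while the
# others are held fixed. The toy objective ||A x - b||^2 and all
# dimensions below are illustrative assumptions.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
b = rng.standard_normal(20)

idx = [np.arange(0, 3), np.arange(3, 6)]   # two variable blocks
x = np.zeros(6)

for _ in range(200):                       # cycles over the blocks
    for i, blk in enumerate(idx):
        other = idx[1 - i]
        # Residual with the other block held fixed, then exact
        # minimization over this block (a small least-squares solve).
        r = b - A[:, other] @ x[other]
        x[blk] = np.linalg.lstsq(A[:, blk], r, rcond=None)[0]

# For this strictly convex problem, BCD converges to the unique
# least-squares solution of the full problem.
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

Each inner step touches only one block of variables, which is what makes BCD-type methods attractive for the large, decomposable problems arising in network management.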
In this chapter, the problem of sparsity-aware distributed learning is studied. In particular, we consider the setup of an ad hoc network, the nodes of which are tasked with estimating, in a collaborative way, a sparse parameter vector of interest. Both batch and online algorithms will be discussed. In the batch learning context, the distributed LASSO algorithm and a distributed greedy technique will be presented. Furthermore, an LMS-based sparsity-promoting algorithm revolving around the l1 norm, as well as a greedy distributed LMS, will be discussed. Moreover, a set-theoretic sparsity-promoting distributed technique will be examined. Finally, the performance of the presented algorithms will be validated in several scenarios.
Introduction
The volume of data captured worldwide is growing at an exponential rate, posing challenges for its processing and analysis. Data mining, regression, and prediction/forecasting have played a leading role in extracting insights and useful information from raw data. Such techniques cover a wide range of applications in areas such as biomedicine, econometrics, sales forecasting, and content preference. The massive amount of data produced, together with its increased complexity (new types of data emerge) and its involvement in the Internet of Things [1] paradigm, calls for further advances in established machine learning techniques in order to cope with the new challenges.
Even though data tend to live in high-dimensional spaces, they often exhibit a high degree of redundancy; that is, their useful information can be represented using far fewer attributes than their original dimensionality. Often, this redundancy can be effectively exploited by treating the data in a transformed domain, in which they can be represented by sparse models; that is, models comprising a few nonzero parameters. Moreover, sparsity is an attribute met in a plethora of models of natural signals, since nature tends to be parsimonious. Such sparse structures can be effectively exploited in big data applications in order to reduce processing demands. The advent of compressed sensing led to novel theoretical as well as algorithmic tools, which can be efficiently employed for sparsity-aware learning, e.g. [2–7].
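To make the sparse-modeling idea concrete, the sketch below recovers a sparse vector with the iterative soft-thresholding algorithm (ISTA), one standard solver for the LASSO problem mentioned above; the dimensions, data, and regularization weight are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Minimal ISTA sketch for the LASSO problem
#   minimize 0.5 * ||A x - b||^2 + lam * ||x||_1,
# alternating a gradient step on the smooth term with the l1 proximal
# operator (entrywise soft thresholding). All data are illustrative.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]       # a 3-sparse ground truth
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz const. of grad

x = np.zeros(20)
for _ in range(500):
    grad = A.T @ (A @ x - b)                # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)

# The recovered x is sparse; its support should match that of x_true.
print(np.nonzero(np.abs(x) > 1e-3)[0])
```

The soft-thresholding step is what promotes sparsity: coordinates whose gradient update stays below the threshold are set exactly to zero, so only a few attributes survive, mirroring the redundancy argument above.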
In many cases, the processing of large amounts of data is not only cumbersome but may prove infeasible due to a lack of processing power and/or storage capabilities.
Writing items for reading comprehension assessment is time-consuming. Automating part of the process can help test-designers to develop assessments more efficiently and consistently. This paper presents an approach to automatically generating short answer questions for reading comprehension assessment. Our major contribution is to introduce Lexical Functional Grammar (LFG) as the linguistic framework for question generation, which enables systematic utilization of semantic and syntactic information. The approach can efficiently generate questions of better quality than previous high-performing question generation systems, and uses paraphrasing and sentence selection to improve the cognitive complexity and effectiveness of questions.
In reality, clearances in joints are inevitable owing to tolerances and to defects arising from design and manufacturing. Impacts between mating parts in a clearance joint therefore result in poor dynamic performance, reduced component lifetimes, and the generation of undesirable vibrations. In this study, the dynamic behavior of a planar mechanism with revolute joints in the presence of clearances is investigated. A continuous contact force model, based on elastic Hertz theory together with a dissipative term, is used to evaluate the contact forces. Moreover, using this contact model, the effects of working speed and clearance size on the dynamic characteristics of a planar mechanical system are analyzed and compared. Furthermore, numerical results for a 3RRR planar parallel manipulator with six revolute clearance joints are presented.
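A widely used instance of a continuous contact force model based on elastic Hertz theory together with a dissipative term is the Lankarani–Nikravesh force law, sketched below for reference; the symbols (generalized stiffness $K$, exponent $n$, restitution coefficient $c_e$, initial impact velocity $\dot{\delta}^{(-)}$) are the standard ones from the contact-mechanics literature and are assumptions here, not parameters reported by this study.

```latex
F_N \;=\; K\,\delta^{n}\left[\,1 \;+\; \frac{3\,(1 - c_e^{2})}{4}\,
          \frac{\dot{\delta}}{\dot{\delta}^{(-)}}\right],
\qquad n = \tfrac{3}{2} \ \text{for Hertzian (metallic) contact},
```

where $\delta$ is the penetration depth between the mating parts, $\dot{\delta}$ its rate, and $K$ depends on the radii and material properties of the contacting surfaces; the bracketed term is the hysteresis damping factor that accounts for the energy dissipated during impact.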
Research on data-driven learning (DDL), or teaching and learning languages with the help of electronic corpora, has shown that it is both effective and efficient. Nevertheless, DDL is still far from common pedagogical practice, not least because the empirical research on it is still limited and narrowly focused. This study addresses some gaps in that research by exploring the effectiveness of DDL for teaching low-proficiency learners lexico-grammatical constructions (verb-preposition collocations) in German, a morphologically rich language. The study employed a pretest-posttest design with intact third- and fourth-semester classes for German as a foreign language at a US university. The same collocations were taught to each group during one class period, with one group at each course level taking a paper-based DDL lesson with concordance lines from a native-speaker corpus and the other one taking a traditional rule-based lesson with textbook exercises. These constructions were new to third-semester students, whereas fourth-semester students had been exposed to them in the previous semester. The results show that, whereas the DDL method and the traditional method were both effective and resulted in lexical and grammatical gains, DDL was more effective for teaching new collocations. The study thus argues in favor of using paper-based DDL in the classroom at lower proficiency levels and for languages other than English.
Introduction. There is a fascinating interplay and overlap between recursion theory and descriptive set theory. A particularly beautiful source of such interaction has been Martin's conjecture on Turing invariant functions. This longstanding open problem in recursion theory is connected to many problems in descriptive set theory, particularly in the theory of countable Borel equivalence relations.
In this paper, we shall give an overview of some work that has been done on Martin's conjecture, and applications that it has had in descriptive set theory. We will present a long unpublished result of Slaman and Steel that arithmetic equivalence is a universal countable Borel equivalence relation. This theorem has interesting corollaries for the theory of universal countable Borel equivalence relations in general. We end with some open problems, and directions for future research.
Martin's conjecture. Martin's conjecture on Turing invariant functions is one of the oldest and deepest open problems on the global structure of the Turing degrees. Inspired by Sacks’ question on the existence of a degree-invariant solution to Post's problem [Sac66], Martin made a sweeping conjecture that says, in essence, that the only nontrivial definable Turing invariant functions are the Turing jump and its iterates through the transfinite.
Our basic references for descriptive set theory and effective descriptive set theory are the books of Kechris [Kec95] and Sacks [Sac90]. Let ≤T be Turing reducibility on the Cantor space ω2, and let ≡T be Turing equivalence. Given x ∈ ω2, let x′ be the Turing jump of x. The Turing degree of a real x ∈ ω2 is the ≡T-equivalence class of x. A Turing invariant function is a function f : ω2 → ω2 such that for all reals x, y ∈ ω2, if x ≡T y, then f(x) ≡T f(y). The Turing invariant functions are those which induce functions on the Turing degrees.
With the axiom of choice, we can construct many pathological Turing invariant functions. Martin's conjecture is set in the context of ZF+DC+AD, where AD is the axiom of determinacy. We assume ZF+DC+AD for the rest of this section. The results we will discuss all “localize” so that the assumption of AD essentially amounts to studying definable functions assuming definable determinacy, for instance, Borel functions using Borel determinacy.