In this chapter, we consider the problem of modeling and representing trajectories in the context of database systems. Since about 1995 there has been research on moving objects databases (MODs), also termed spatio-temporal databases. The general goal has been to allow one to represent moving entities in databases and to enable a user to ask all kinds of questions about such movements. This requires extensions of the DBMS data model and query language. Further, DBMS implementation needs to be extended at all levels, for example, by providing data structures for representation of moving objects, efficient algorithms for query operations, indexing and join techniques, extensions of the query optimizer, and extensions of the user interface to visualize and animate moving objects.
Moving objects databases come in two types. The first represents a set of currently moving objects. One is interested in maintaining the current locations and asking queries about current and expected near future locations. The second type maintains complete histories of movement. These are sometimes called trajectory databases and are the topic of this chapter.
Although spatio-temporal databases had been around for much longer, they supported only discrete changes of geometries over time. The emphasis in the new field of moving objects databases is on continuously changing geometries. Neither the position of a car on a road nor the shape and location of a hurricane changes in discrete steps; these are clearly continuous phenomena.
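The common way to represent such a continuously moving point in a trajectory database is as a sequence of timestamped sample positions, with the position at any in-between instant obtained by interpolation. A minimal sketch of this idea follows; the class and method names are illustrative, not taken from any particular MOD system:

```python
from bisect import bisect_right

class Trajectory:
    """A moving point stored as timestamped (t, x, y) samples.

    Positions between samples are linearly interpolated, modeling
    continuous movement rather than discrete change.
    """

    def __init__(self, samples):
        # samples: iterable of (t, x, y); kept sorted by time t
        self.samples = sorted(samples)

    def position_at(self, t):
        """Interpolated (x, y) at time t, or None outside the lifespan."""
        ts = [s[0] for s in self.samples]
        if t < ts[0] or t > ts[-1]:
            return None                      # outside the object's lifespan
        i = bisect_right(ts, t)
        if i == len(ts):                     # exactly at the last sample
            return self.samples[-1][1:]
        t0, x0, y0 = self.samples[i - 1]
        t1, x1, y1 = self.samples[i]
        if t1 == t0:
            return (x0, y0)
        f = (t - t0) / (t1 - t0)             # fraction of the way through the unit
        return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
```

A query operation such as "where was the object at time t?" then reduces to a lookup plus interpolation, e.g. `Trajectory([(0, 0.0, 0.0), (10, 10.0, 0.0)]).position_at(5)` yields the midpoint `(5.0, 0.0)`.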
The first part of this book established the general validity of our ranking methods, and used them to illustrate grand themes and processes of history: canonization in textbooks, evaluating the precision of selection processes, measuring the flow of time, and quantifying changes in the perception of gender.
Now we will reduce our focus to the particular. We rank the significance of the world's historical figures in terms of the different niches they occupy: politicians, scientists, religious leaders, artists, actors, outlaws, and even dentists. It is instructive to see who rises to the top of each individual heap, both to refresh our memory on old historical friends and to make new ones. You have our blessing to skim through any group that you are not interested in, but sneak a peek at the ranking tables before you move on.
Some may question how we decide which figures belong in a particular group. Defining exactly who is an actor, an outlaw, or a dentist turns out to be very difficult to do in a precise way. We used the following methodology. We would start from a roster assembled in some book or Wikipedia category, and then amend the lists based on general knowledge and our sense of the nature of the category. No doubt certain omissions remain, although we believe that we have captured most of the usual suspects.
This first chapter concerns America's political leaders, from our presidents down to the mayors of our greatest cities.
Previous work has demonstrated that categories are useful and expressive models for databases. In the current paper we build on that model, showing that certain queries and constraints correspond to lifting problems, as found in modern approaches to algebraic topology. In our formulation, each SPARQL graph pattern query corresponds to a category-theoretic lifting problem, whereby the set of solutions to the query is precisely the set of lifts. We interpret constraints within the same formalism, and then investigate some basic properties of queries and constraints. In particular, to any database π, we can associate a certain derived database Qry(π) of queries on π. As an application, we explain how giving users access to certain parts of Qry(π), rather than direct access to π, improves the ability to manage the impact of schema evolution.
Modern Standard Arabic texts are typically written without diacritical markings. The diacritics are important for clarifying the sense and meaning of words, and their absence may lead to ambiguity even for native speakers. Native speakers often disambiguate the meaning successfully through context; however, many Arabic applications, such as machine translation, text-to-speech, and information retrieval, are vulnerable to the lack of diacritics. The process of automatically restoring diacritical marks is called diacritization or diacritic restoration. In this paper we discuss the properties of the Arabic language and the issues related to the lack of diacritical marking, followed by a survey of recent algorithms developed to solve the diacritization problem. We also look into future trends for researchers working in this area.
Due to the growing use of Atomic Force Microscope (AFM) nanorobots for moving and manipulating cylindrical nanoparticles (carbon nanotubes and nanowires), and because these processes cannot be observed while they take place, computer simulation of the involved forces for predicting the outcome of the process becomes highly important. So far, no dynamic 3D model has been presented that shows how these forces change during the process. In this paper, an algorithm is used to show in 3D how the dynamic forces vary in this process. The presented model can simulate the forces exerted on the probe tip during manipulation in three directions. Because the presented dynamic model is nonlinear, the effective parameters have also been studied. To evaluate the results, the parameters of the 3D case (cylindrical model) are gradually reduced until it is transformed into a 2D model (disk model), and we observe good agreement between the results of the two simulations. Next, the simulation results are compared with experimental results, indicating changes in lateral force. With the help of the offered dynamic model, the cantilever deformation and the forces interacting between the probe tip and the particle can be determined from the moment the probe tip contacts the nanoparticle to when the nanoparticle dislodges from the substrate surface.
Aliases play an important role in online environments by facilitating anonymity, but they can also be used to hide the identity of cybercriminals. Previous studies have investigated this alias matching problem in an attempt to identify whether two aliases are shared by an author, which can assist with identifying users. Those studies create their training data by randomly splitting the documents associated with an alias into two sub-aliases. Models have been built that regularly achieve over 90% accuracy in recovering the linkage between these ‘random sub-aliases’. In this paper, we show that it is random sub-alias generation that enables these high accuracies, and that it therefore does not adequately model the real-world problem. In contrast, creating sub-aliases using topic-based splitting drastically reduces the accuracy of all authorship methods tested. We then present a methodology that can be applied to non-topic-controlled datasets to produce topic-based sub-aliases that are more difficult to match. Finally, we present an experimental comparison of many authorship methods under these conditions, finding that local n-gram methods perform better than others.
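The contrast between random and topic-based sub-alias generation can be sketched as follows. The splitting heuristics and names below are illustrative assumptions, not the paper's exact procedure; the key property is that a topic-based split leaves the two sub-aliases with no topic in common, so a matcher cannot rely on shared topical vocabulary:

```python
import random
from collections import defaultdict

def random_split(docs, seed=0):
    """Random sub-alias generation: shuffle the alias's documents
    and halve them, so both halves share topics and vocabulary."""
    rng = random.Random(seed)
    shuffled = list(docs)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def topic_split(docs, topic_of):
    """Topic-based splitting: assign whole topics to one side or the
    other, so the two sub-aliases share no topic.  topic_of is any
    function mapping a document to its topic label."""
    by_topic = defaultdict(list)
    for d in docs:
        by_topic[topic_of(d)].append(d)
    left, right = [], []
    for i, topic in enumerate(sorted(by_topic)):
        (left if i % 2 == 0 else right).extend(by_topic[topic])
    return left, right
```

Under the random split, topical cues leak across the two sub-aliases and inflate matching accuracy; the topic split removes that leakage, leaving only stylistic signals such as local n-grams.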
We give a constructive proof showing that every finitely generated polynomial ideal has a Gröbner basis, provided the ring of coefficients is Noetherian in the sense of Richman and Seidenberg. That is, we give a constructive termination proof for a variant of the well-known algorithm for computing the Gröbner basis. In combination with a purely order-theoretic result we have proved in a separate paper, this yields a unified constructive proof of the Hilbert basis theorem for all Noether classes: if a ring belongs to a Noether class, then so does the polynomial ring. Our proof can be seen as a constructive reworking of one of the classical proofs, in the spirit of the partial realisation of Hilbert's programme in algebra put forward by Coquand and Lombardi. The rings under consideration need not be commutative, but are assumed to be coherent and strongly discrete: that is, they admit a membership test for every finitely generated ideal. As a complement to the proof, we provide a prime decomposition for commutative rings possessing the finite-depth property.
Program transformation is an appealing technique that makes it possible to improve run-time efficiency and space consumption and, more generally, to optimize a given program. Essentially, it consists of a sequence of syntactic program manipulations that preserve some kind of semantic equivalence. Unfolding, one of the basic operations used by most program transformation systems, consists of replacing a procedure call by its definition. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. This paper defines an unfolding system for Constraint Handling Rules programs. We define an unfolding rule, show its correctness, and discuss some conditions under which an unfolded rule can be deleted while preserving the program meaning. We also prove that, under suitable conditions, confluence and termination are preserved by this transformation.
We consider structured specifications built from flat specifications using union, translation and hiding with their standard model-class semantics in the context of an arbitrary institution. We examine the alternative of sound property-oriented semantics for such specifications, and study their relationship to model-class semantics. An exact correspondence between the two (completeness) is not achievable in general. We show through general results on property-oriented semantics that the semantics arising from the standard proof system is the strongest sound and compositional property-oriented semantics in a wide class of such semantics. We also sharpen one of the conditions that does guarantee completeness and show that it is a necessary condition.
Let G be a string graph (an intersection graph of continuous arcs in the plane) with m edges. Fox and Pach proved that G has a separator consisting of $O(m^{3/4}\sqrt{\log m})$ vertices, and they conjectured that the bound of $O(\sqrt m)$ actually holds. We obtain separators with $O(\sqrt m \,\log m)$ vertices.
Thank you for the support in getting Network Science up and running. We are deeply appreciative of the work our associate editors, authors, and reviewers have put into realizing this vision of an interdisciplinary journal for network science. And of course, the journal would not be possible without the hard work of the editors of Network Science, who act as action editors for the submitted articles.
This paper presents a novel global path planning method for mobile robots. An improved grid map, called a three-dimensional-like map, is developed to represent the global workspace. The new environment model includes not only contour information about obstacles but also artificial height information. Based on this model, a simple but efficient obstacle avoidance algorithm is developed to solve robot path planning problems in a static environment. The proposed algorithm requires only simple distance calculations and several comparison operations. In addition, unlike other algorithms, it needs to consider only a subset of the obstacles rather than all of them. The research results show that this method is computationally efficient and can be used to find an optimal or near-optimal path.
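The abstract does not give the construction details, but the general idea of steering a robot over a grid by "artificial height", using only distance computations and comparisons, can be sketched like this. The height function, its weights, and all names here are assumptions for illustration, not the paper's algorithm:

```python
import math

def artificial_height(cell, goal, obstacles, penalty=10.0):
    """Height of a free cell: distance to the goal plus a penalty that
    grows near obstacles.  Only nearby obstacles contribute, so only a
    subset of obstacles needs to be considered per cell."""
    x, y = cell
    gx, gy = goal
    h = math.hypot(x - gx, y - gy)
    for ox, oy in obstacles:
        d = math.hypot(x - ox, y - oy)
        if d < 2.0:                       # distant obstacles are ignored
            h += penalty / (d + 0.1)
    return h

def greedy_path(start, goal, obstacles, size, max_steps=200):
    """Repeatedly move to the lowest-height free neighbour; each step
    needs only distance calculations and comparisons."""
    path = [start]
    cur = start
    for _ in range(max_steps):
        if cur == goal:
            return path
        x, y = cur
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)
                      and 0 <= x + dx < size and 0 <= y + dy < size
                      and (x + dx, y + dy) not in obstacles]
        if not neighbours:                # boxed in: give up
            return path
        cur = min(neighbours,
                  key=lambda c: artificial_height(c, goal, obstacles))
        path.append(cur)
    return path
```

A purely greedy descent like this can stall in local minima near concave obstacles; the point of the sketch is only to show how a height-augmented grid turns path selection into per-cell comparisons.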
Is Hitler bigger than Napoleon? Washington bigger than Lincoln? Picasso bigger than Einstein? Quantitative analysts are rapidly finding homes in social and cultural domains, from finance to politics. What about history? In this fascinating book, Steve Skiena and Charles Ward bring quantitative analysis to bear on ranking and comparing historical reputations. They evaluate each person by aggregating the traces of millions of opinions, just as Google ranks webpages. The book includes a technical discussion for readers interested in the details of the methods, but no mathematical or computational background is necessary to understand the rankings or conclusions. Along the way, the authors present the rankings of more than one thousand of history's most significant people in science, politics, entertainment, and all areas of human endeavor. Anyone interested in history or biography can see where their favorite figures place in the grand scheme of things.
Mobility of people and goods is essential in the global economy. The ability to track the routes and patterns associated with this mobility offers unprecedented opportunities for developing new, smarter applications in different domains. Much of the current research is devoted to developing concepts, models, and tools to comprehend mobility data and make it manageable for these applications. This book surveys the myriad facets of mobility data, from spatio-temporal data modeling, to data aggregation and warehousing, to data analysis, with a specific focus on monitoring people in motion (drivers, airplane passengers, crowds, and even animals in the wild). Written by a renowned group of worldwide experts, it presents a consistent framework that facilitates understanding of all these different facets, from basic definitions to state-of-the-art concepts and techniques, offering both researchers and professionals a thorough understanding of the applications and opportunities made possible by the development of mobility data.
Numerical algorithms, modern programming techniques, and parallel computing are often taught serially across different courses and different textbooks. The need to integrate concepts and tools usually comes only in employment or in research - after the courses are concluded - forcing the student to synthesise what is perceived to be three independent subfields into one. This book provides a seamless approach to stimulate the student simultaneously through the eyes of multiple disciplines, leading to enhanced understanding of scientific computing as a whole. The book includes both basic as well as advanced topics and places equal emphasis on the discretization of partial differential equations and on solvers. Some of the advanced topics include wavelets, high-order methods, non-symmetric systems, and parallelization of sparse systems. The material covered is suited to students from engineering, computer science, physics and mathematics.
The Seismic Analysis Code (SAC) is one of the most widely used analysis packages for regional and teleseismic seismic data. For the first time, this book provides users at introductory and advanced levels with a complete guide to SAC. It leads new users of SAC through the steps of learning basic commands, describes the SAC processing philosophy, and presents its macro language in full, supported throughout with example inputs and outputs from SAC. For more experienced practitioners, the book describes SAC's many hidden features, including advanced graphics aspects, its file structure, how to write independent programs to access and create files, and much more. Tutorial exercises engage users with newly acquired skills, providing data and code to implement the standard methods of teleseismic shear-wave splitting and receiver function analysis. Methodical and authoritative, this is a key resource for researchers and graduate students in global seismology, earthquake seismology and geophysics.
The idea of interfacing minds with machines has long captured the human imagination. Recent advances in neuroscience and engineering are making this a reality, opening the door to restoration and augmentation of human physical and mental capabilities. Medical applications such as cochlear implants for the deaf and neurally controlled prosthetic limbs for the paralyzed are becoming almost commonplace. Brain-computer interfaces (BCIs) are also increasingly being used in security, lie detection, alertness monitoring, telepresence, gaming, education, art, and human augmentation. This introduction to the field is designed as a textbook for upper-level undergraduate and first-year graduate courses in neural engineering or brain-computer interfacing for students from a wide range of disciplines. It can also be used for self-study and as a reference by neuroscientists, computer scientists, engineers, and medical practitioners. Key features include questions and exercises in each chapter and a supporting website.