Air traffic management (ATM) under its current paradigm is reaching its structural limits in the face of continuously growing demand. The need to reduce traffic workload opens numerous optimization problems, from capacity balancing to conflict solving, using many different degrees of freedom, such as re-routing, flight-level changes, or ground-holding schemes. These problems are usually of large dimension (there were some 30 000 daily flights in Europe in 2012) and highly combinatorial, and hence challenging for current problem-solving technologies. We give brief tutorials on ATM and constraint programming (CP), and survey the literature on deploying CP technology for modelling and solving combinatorial problems that occur in an ATM context.
As acknowledged by the SESAR (Single European Sky ATM (Air Traffic Management) Research) program, current Air Traffic Control (ATC) systems must be drastically improved to accommodate the predicted traffic growth in Europe. In this context, the Episode 3 project aims at assessing the performance of new ATM concepts, like 4D-trajectory planning and strategic deconfliction.
One of the bottlenecks impeding ATC performance is the set of hourly capacity constraints defined on each en-route ATC sector to limit the flow of aircraft. Previous work mainly focused on optimizing the current ground-holding slot allocation process devised to satisfy these constraints. We propose to estimate the cost of directly solving all conflicts in the upper airspace with ground holding, provided that aircraft are able to follow their trajectories accurately.
We present a Constraint Programming model of this large-scale combinatorial optimization problem and the results obtained with the FaCiLe (Functional Constraint Library). We study the effect of uncertainties on the departure time and estimate the cost of improving the robustness of our solutions with the Complete Air Traffic Simulator (CATS). Encouraging results were obtained without uncertainty but the costs of robust solutions are prohibitive. Our approach may however be improved, for example, with a prior flight level allocation and the dynamic resolution of remaining conflicts with one of CATS’ modules.
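The flavour of such a model can be illustrated with a small, hypothetical sketch (this is not the authors' FaCiLe model): each flight gets a bounded integer ground-holding delay variable, and a constraint forbids pairs of conflicting flights from reaching a shared conflict point within a separation window. A naive exhaustive search stands in for a real CP solver; all flight data are invented.

```python
from itertools import product

# Hypothetical data: each flight's nominal time (minutes) at a shared
# conflict point; pairs listed in CONFLICTS must be separated by SEP minutes.
TIMES = {"AF1": 10, "BA2": 11, "LH3": 12}
CONFLICTS = [("AF1", "BA2"), ("BA2", "LH3"), ("AF1", "LH3")]
SEP = 3            # required separation at the conflict point (minutes)
MAX_DELAY = 10     # ground-holding delay domain: 0..MAX_DELAY minutes

def feasible(delays):
    """Check the pairwise separation constraints for a delay assignment."""
    return all(
        abs((TIMES[a] + delays[a]) - (TIMES[b] + delays[b])) >= SEP
        for a, b in CONFLICTS
    )

def min_total_delay():
    """Exhaustive search over the delay domains, minimising total delay.
    A real CP solver would prune the search with constraint propagation
    instead of enumerating every combination."""
    flights = list(TIMES)
    best = None
    for combo in product(range(MAX_DELAY + 1), repeat=len(flights)):
        delays = dict(zip(flights, combo))
        if feasible(delays) and (best is None or sum(combo) < sum(best.values())):
            best = delays
    return best
```

Even this toy instance shows the combinatorial structure: the delay domains explode multiplicatively with the number of flights, which is why propagation and good search heuristics matter at the scale of European traffic.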
This study addresses the kinematics of a six-degrees-of-freedom parallel manipulator whose moving platform is a regular triangular prism. The moving and fixed platforms are connected to each other by means of two identical parallel manipulators. Simple forward kinematics and reduced singular regions are the main benefits offered by the proposed parallel manipulator. The input–output equations of velocity and acceleration are systematically obtained by resorting to reciprocal-screw theory. A case study, verified with the aid of commercially available software, is included to exemplify the application of the method of kinematic analysis.
Morphological analysis and disambiguation are crucial stages in a variety of natural language processing applications, especially when languages with complex morphology are concerned. We present a system which disambiguates the output of a morphological analyzer for Hebrew. It consists of several simple classifiers and a module that combines them under the constraints imposed by the analyzer. We explore several approaches to classifier combination, as well as a back-off mechanism that relies on a large unannotated corpus. Our best result, around 83 percent accuracy, compares favorably with the state of the art on this task.
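The combine-under-constraints idea can be caricatured in a few lines (the actual system's classifiers and combination module are far richer): several weak classifiers vote, the vote is restricted to the analyses the morphological analyzer licenses, and a corpus-frequency back-off breaks ties. All names and data here are invented.

```python
from collections import Counter

def combine(classifier_outputs, licensed, corpus_freq):
    """Pick the licensed analysis with the most classifier votes,
    backing off to corpus frequency on ties (or when no vote is licensed)."""
    votes = Counter(tag for tag in classifier_outputs if tag in licensed)
    if votes:
        top = max(votes.values())
        tied = [tag for tag, v in votes.items() if v == top]
    else:
        # No classifier proposed a licensed analysis: back off entirely
        # to the unannotated-corpus frequencies.
        tied = list(licensed)
    return max(tied, key=lambda tag: corpus_freq.get(tag, 0))
```

The constraint imposed by the analyzer is what keeps the combination sound: however confident a classifier is, an analysis the analyzer does not license can never be selected.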
This paper proposes a precise outdoor localization algorithm integrating the Global Positioning System (GPS) and an Inertial Navigation System (INS). To achieve precise outdoor localization, two schemes are proposed: de-noising the INS signals, and fusing the GPS and INS data. To reduce the noise from the internal INS sensors, the discrete wavelet transform and a variable threshold method are utilized; to fuse the GPS and INS data while filtering out the noise caused by acceleration, deceleration, and unexpected slips, the Unscented Particle Filter (UPF) is adopted. Conventional de-noising methods mainly employ a combination of low-pass and high-pass filters, which results in signal distortion. The proposed system also utilizes the vibration information of the actuator, according to fluctuations of the velocity, to minimize signal distortion. The UPF resolves the nonlinearities of the actuator and the non-normal distributions of the noise more effectively than the conventional particle filter (PF) or the Extended Kalman Filter–PF. The superiority of the proposed algorithm was verified through experiments, and the results are reported.
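As a rough illustration of the wavelet de-noising step (the paper's exact transform depth and variable-threshold rule are not reproduced here), a single-level Haar transform with soft thresholding can be sketched as follows; the threshold value and signal are illustrative choices only.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet de-noising with soft thresholding.
    Illustrative only: real INS pipelines use deeper decompositions
    and data-driven (variable) thresholds."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                                # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)     # low-frequency content
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)     # high-frequency, noise-rich
    # Soft thresholding: shrink detail coefficients toward zero.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse Haar transform.
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out[: len(signal)]
```

Thresholding only the detail coefficients suppresses high-frequency sensor noise while leaving the approximation untouched, which is why wavelet de-noising distorts the underlying signal less than a plain low-pass/high-pass filter cascade.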
Safety-critical software must be thoroughly verified before being exploited in commercial applications. In particular, any TCAS (Traffic Alert and Collision Avoidance System) implementation must be verified against safety properties extracted from the anti-collision theory that regulates the controlled airspace. This verification step is currently realized with manual code reviews and testing. In our work, we explore the capabilities of Constraint Programming for automated software verification and testing. We built a dedicated constraint solving procedure that combines constraint propagation with Linear Programming to solve conditional disjunctive constraint systems over bounded integers extracted from computer programs and safety properties. An experiment on verifying a publicly available TCAS component implementation against a set of safety-critical properties showed that this approach is viable and efficient.
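The flavour of the constraint-propagation half of such a solver can be hinted at with a toy bounds-propagation step (nothing like a full conditional-disjunctive solver with an LP relaxation): narrow the integer intervals of x, y, z under the constraint x + y == z until a fixpoint or an empty domain is reached.

```python
def propagate_sum(x, y, z):
    """Bounds propagation for the constraint x + y == z, where each
    variable is an integer interval (lo, hi). Returns the narrowed
    intervals at fixpoint, or None if a domain becomes empty."""
    while True:
        nx = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
        ny = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
        nz = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
        if any(lo > hi for lo, hi in (nx, ny, nz)):
            return None            # empty domain: constraint unsatisfiable
        if (nx, ny, nz) == (x, y, z):
            return x, y, z         # fixpoint reached, nothing more to prune
        x, y, z = nx, ny, nz
```

An inconsistency detected here (a `None` result) corresponds to proving that no program input can violate the encoded property along that path, which is exactly the pruning that makes propagation useful for verification.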
This paper considers the problem of generating conflict-free movement schedules for a set of vehicles that are operating simultaneously in a common airspace. In both civilian air traffic management and military air campaign planning contexts, it is crucial that the movements of different vehicles be coordinated so as to avoid collisions and near misses. Our approach starts from a view of airspace management as a 4D resource allocation problem, where the space in which vehicles must maneuver is itself managed as a capacitated resource. We introduce a linear octree representation of airspace capacity to index vector-based vehicle routes and efficiently detect regions of potential conflict. Generalizing the notion of contention-based search heuristics, we next define a scheduling algorithm that proceeds by first solving a relaxed version of the problem to construct a spatial capacity profile (represented as an octree), and then using spatio-temporal regions where demand exceeds capacity to make conflict-avoiding vehicle routing and scheduling decisions. We illustrate the utility of this basic representation and search algorithm in two ways. First, to demonstrate the overall viability of the approach, we present experimental results using data representing a realistically sized air campaign planning domain. Second, we define a more abstract notion of ‘encounter set’, which tolerates some amount of conflict on the assumption that on-board deconfliction processes can take appropriate avoidance maneuvers at execution time, and show that generation of this more abstract form of predictive guidance can be obtained without loss in computational efficiency.
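The idea of indexing routes into a capacity-managed space can be sketched, in a much-simplified 3D form with invented flight data, by binning trajectory sample points into fixed-depth linear-octree cells via Morton codes and flagging cells whose occupancy exceeds capacity:

```python
from collections import defaultdict

DEPTH = 3          # octree depth: 8**DEPTH = 512 leaf cells
WORLD = 8.0        # airspace modelled as a WORLD x WORLD x WORLD cube

def morton_key(x, y, z, depth=DEPTH):
    """Linear-octree key: interleave the bits of the leaf-cell coordinates."""
    n = 2 ** depth
    ix, iy, iz = (min(int(c / WORLD * n), n - 1) for c in (x, y, z))
    key = 0
    for b in range(depth):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def contended_cells(routes, capacity):
    """Map each route's sample points to leaf cells and report cells
    where demand (number of distinct vehicles) exceeds capacity."""
    occupancy = defaultdict(set)
    for vehicle, points in routes.items():
        for p in points:
            occupancy[morton_key(*p)].add(vehicle)
    return {cell: users for cell, users in occupancy.items()
            if len(users) > capacity}
```

The over-capacity cells returned here play the role of the spatio-temporal contention regions in the paper: they tell the scheduler where routing or timing decisions must change, without comparing every pair of routes directly.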
Spelling errors in digital documents are often caused by operational and cognitive mistakes, or by the lack of full knowledge about the language of the written documents. Computer-assisted solutions can help to detect and suggest replacements. In this paper, we present a new string distance metric for the Persian language to rank respelling suggestions of a misspelled Persian word by considering the effects of keyboard layout on typographical spelling errors as well as the homomorphic and homophonic aspects of words for orthographical misspellings. We also consider the misspellings caused by disregarded diacritics. Since the proposed string distance metric is custom-designed for the Persian language, we present the spelling aspects of the Persian language such as homomorphs, homophones, and diacritics. We then present our statistical analysis of a set of large Persian corpora to identify the causes and the types of Persian spelling errors. We show that the proposed string distance metric has a higher mean average precision and a higher mean reciprocal rank in ranking respelling candidates of Persian misspellings in comparison with other metrics such as the Hamming, Levenshtein, Damerau–Levenshtein, Wagner–Fischer, and Jaro–Winkler metrics.
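The general mechanism (though not the paper's actual Persian-specific metric) can be sketched as a Damerau–Levenshtein distance whose substitution cost is lowered for keys that are adjacent on the keyboard; the adjacency map below covers a few Latin-layout keys purely for illustration, where a real implementation would encode the Persian layout.

```python
# Toy adjacency map for a few QWERTY keys; a real spelling corrector for
# Persian would use the standard Persian keyboard layout instead.
ADJACENT = {("q", "w"), ("w", "e"), ("e", "r"), ("a", "s"), ("s", "d")}

def sub_cost(a, b):
    if a == b:
        return 0.0
    if (a, b) in ADJACENT or (b, a) in ADJACENT:
        return 0.5        # adjacent-key typo: cheaper substitution
    return 1.0

def weighted_damerau_levenshtein(s, t):
    """Damerau-Levenshtein distance with keyboard-aware substitution costs."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1.0,                               # deletion
                d[i][j - 1] + 1.0,                               # insertion
                d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),  # substitution
            )
            if (i > 1 and j > 1 and s[i - 1] == t[j - 2]
                    and s[i - 2] == t[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1.0)    # transposition
    return d[m][n]
```

Ranking respelling candidates by such a weighted distance pushes plausible slips of the finger above equally "distant" but less likely substitutions, which is the intuition behind the precision gains the paper reports.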
As mentioned previously, experimental methods can be a matter of dispute: there can be as many views of the “correct” way to run an experiment as there are experimenters. Such disagreements are most obvious in the approach taken to statistical analysis of data: everyone has their own favourite method, there can be many different valid ways to analyse data, and even statisticians do not always agree on the best approach.
This chapter is not intended to be a statistics primer: it simply describes the statistics tests that I find most useful in analysing data and shows examples of their application. It does not discuss any theoretical aspects of these tests or why they “work.” Rather, it is a practical guide that will enable an experimenter to make considerable headway with some simple analyses, and to be able to consult a statistics text for more information with confidence.
In most cases, these tests will be sufficient for answering the type of research questions discussed so far. Other analyses may require reference to a good statistics book or guidance from a statistics consultant.
This book describes the process that takes a researcher from identifying a human–computer interaction (HCI) research idea that needs to be tested, to designing and conducting a test, and then analysing and reporting the results. This first chapter introduces the notion of an “HCI idea” and different approaches to testing.
Assessing the worth of an HCI idea
Imagine that you have an HCI idea, for example, a novel interaction method, a new way of visualising data, an innovative device for moving a cursor, or a new interactive system for building games. You can implement it, demonstrate it to a wide range of people, and even deploy it for use – but is it a “good” idea? Will the interaction method assist users with their tasks? Will the visualisation make it easier to spot data trends? Will the new device make cursor movement quicker? Will users like the new game building system?
It is your idea, so of course you believe that it is wonderful; however, your subjective judgement (or even the views of your friends in the research laboratory) is not sufficient to prove its general worth. An objective evaluation of the idea (using people not involved in the research) is required. As Zhai (2003) says in his controversial article, “Evaluation is the worst form of HCI research except all those other forms that have been tried,” the true value of the idea cannot be determined simply by “subjective opinion, authority, intimidation, fashion or fad.”
The definition of the conditions, tasks, and experimental objects is the initial focus of the experimental design, and must be carefully related to the research question, as described in Chapter 2. The experiment itself could be described simply as presenting the stimuli to human participants and asking them to perform the tasks. There are, however, still many other decisions to be made about the experimental process, as well as additional supporting materials and processes to be considered.
This chapter focuses on the nature of the participant experience, that is, what each participant will do between the start and end times of the experiment – a lot more happens than simply presenting the trials.
Allocating participants to conditions
As highlighted in Chapter 1, the key issue when running experiments is the comparison of performance between the conditions: does one condition produce better or worse performance than another? To determine “performance with a condition,” human participants will need to perform tasks associated with the HCI idea being investigated, and measurements of the overall performance for each condition will be taken. Recall that we want to produce data like that in Figure 2.1, which summarises performance according to each experimental condition, with no explicit reference to tasks or experimental objects.
To illustrate the two approaches to factor analysis, consider a within-participant experiment that aims to answer the research question, “Which visual form of an image best supports visual search?” The independent variable is the visual form of an image with three conditions: Black and White (BW), Colour (C), and Grey-scale (GS).
Each screen presents forty items, and there is only one task – identify the largest image. To ensure generalisability of the results, there are three experimental objects, each using a different type of image: images of the environment (photographs, P), paintings (photographs of paintings, PP), and graphics (images created using a digital imaging tool, G). Error and response time data are collected, but only error data are analysed here. Data for this experiment (fabricated for the purposes of illustration) are shown in Table A3.1.
The primary independent variable is visual form (BW, C, GS) because this is directly related to the research question. A secondary independent variable is image type (with three secondary conditions, P, PP, G).
To conclude, this chapter summarises the contents of this book by presenting a model and six key principles for designing and conducting experiments.
A model of the experimental process
The model, presented in Figure 8.1, shows the main stages of the experimental process and the important considerations that need to be addressed at each stage.
Six key principles for conducting experiments
This book presents specific advice to guide the researcher through the experimental process, and, subsequently, six key general principles emerge. These are listed as follows:
Principle 1: Define a clear research question and answer it. Doing so will provide a useful focus throughout the process and will ensure that a good “story” can be told at the end. Many decisions need to be made, and making them within the context of a clearly phrased research question will make them easier to decide on and justify.
Principle 2: Plan, prepare, and pilot. Participant time is a scarce resource: insufficient preparation will simply result in wasting the participants’ time. You cannot do too much preparation!
Principle 3: Only collect, analyse, and present data that are meaningful to the research question. Experimenter time is also a scarce resource. Like Principle 1, this principle ensures that your efforts are focussed, that you are not sidetracked into addressing interesting (but irrelevant) issues, and that your own time is not wasted.
Principle 4: Apply the planned analysis method on fabricated data before running the experiment. Collecting data that are not sufficient for answering your research question wastes your time and the participants’ time. Identify the form of data required for answering the research question before you start the experiment.
Principle 5: Collect and use both quantitative and qualitative data. The temptation is to focus on the numbers, whereas “softer” data are often much more revealing. Qualitative data are also useful when the numbers do not tell you what you wanted to hear.
Principle 6: Acknowledge the limitations of the experiment. Doing so is not only honest, but ensures that you do not overstate the conclusions. It also helps preempt the criticisms of reviewers.
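The dry-run advice in Principle 4 can be made concrete with a small sketch: fabricate plausible per-participant scores for two conditions and run the intended analysis (here a paired t-statistic, computed by hand) before any participant is recruited. The condition names and error counts below are invented for illustration.

```python
import math

def paired_t(xs, ys):
    """Paired t-statistic for two within-participant conditions."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Fabricated error counts for eight participants under two conditions.
condition_a = [3, 5, 4, 6, 2, 5, 4, 3]
condition_b = [5, 6, 6, 7, 4, 6, 5, 5]

t = paired_t(condition_a, condition_b)
```

If the fabricated data cannot even be fed through the analysis (wrong shape, wrong scale, missing counterbalancing), that is discovered here at the cost of minutes, not at the cost of participants' time.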
The first step in running an experiment is defining what you want to discover and how you will do so. This chapter presents an approach to experiments that begins by first defining a research question, and then basing the definition of the conditions, experimental objects, and tasks on that question. These elements will ultimately define the form of the experiment.
Several key concepts used throughout the book are introduced and defined in this chapter:
The research question: a clear question that succinctly states the aim of the research;
Conditions: the ideas of interest – these will be compared against each other;
The independent variable: the set of conditions to be used in the experiment – there will always be more than one condition;
The population: all the people who might use the idea; the sample: the set of people who will take part in the experiment;
Generalisability: the extent to which experimental results can apply to situations not explicitly included in the experiment itself;
Experimental objects: the way in which the ideas are presented to the participants – experimental objects embody the conditions so that they can be perceived;
Experimental stimulus: the combination of an experimental object and a condition;
Experimental tasks: what the participants will actually do with the experimental objects;
Experimental trial: the combination of a condition, an experimental object, and a task.
Designing an experiment is more than creating stimuli and tasks and deciding on the participant experience. Before conducting the experiment, the exact form of data to be collected needs to be decided, and importantly, it needs to be confirmed as sufficient for answering the research question.
This chapter focusses primarily on data collection. It describes the different types of data that can be collected for different purposes and the means of collecting them.
We make the traditional distinction between quantitative data (represented by numbers; e.g., the number of errors, a preference ranking) and qualitative data (not represented by numbers; e.g., a verbal description of problems encountered in performing the task, a video showing interaction with an interface).
In practice, there are two distinct decisions to be made about data:
What data to collect (a decision made in advance of the experiment), and
How to analyse the data (a decision made after the experiment has been run).
These two decisions are inextricably linked because the potential means of analysis will influence the decision on what data to collect. Any discussion about data collection therefore necessarily entails discussion on how it will be analysed.