This paper presents recent work in engaging both students and working professionals from a variety of disciplines and backgrounds with the practice of collective and site-specific electroacoustic music creation. The emphasis is placed on embodied, deep listening in tandem with a manual approach to sonic art creation that bridges an understanding of the interplay between digital sound manipulation, larger composed structures and the physical presentation of a work in a given space. Through a practice-oriented approach, participants gain insights into areas such as the abstract world of digital sound recording and representation, the profound influence that a given sound delivery system and a given space exert on this content, and the subjective experience of listening to sounds from a variety of orientations and postures and with varying levels of understanding of the original source recordings. Finally, through a group approach to composing larger structures, participants begin to understand the often mysterious and unspoken processes involved in the normally solitary act of composing electroacoustic music.
This article presents a framework for describing, understanding and evaluating the experience of voice in acousmatic electroacoustic music and related genres through the maximal–minimal model. This model, which is inspired by literary theory, theories of radiophonic voice as well as theories of electroacoustic music, presents maximal and minimal voice as loosely defined poles constituting end points on a continuum on which experienced voices can be localised. Here, maximal voice, which parallels the informative and clearly articulated speaking voice dominant in the radio medium, is described as the converging fulfilment of seven premises. These premises are seen as partly interconnected conditions related to particular aspects or features of the experience of voice. At the other end of the continuum, minimal voice is defined as a boundary zone between voice and non-voice, a zone which is related to the negative fulfilment of the seven premises. The two poles are presented as centre and periphery, respectively, with the seven premises constituting multiple axes spreading out from the centre. These features, it is argued, parallel Lakoff's cluster model of categorisation. Lastly, the article briefly discusses the use of the framework in analysis of electroacoustic works with voice, and it demonstrates two ways in which the evaluations according to the framework can be visualised.
Let $\mathcal{H}$ be a set of connected graphs. A graph G is said to be $\mathcal{H}$-free if G does not contain any element of $\mathcal{H}$ as an induced subgraph. Let $\mathcal{F}_{k}(\mathcal{H})$ be the set of k-connected $\mathcal{H}$-free graphs. When we study the relationship between forbidden subgraphs and a certain graph property, we often allow a finite exceptional set of graphs. But if the symmetric difference of $\mathcal{F}_{k}(\mathcal{H}_{1})$ and $\mathcal{F}_{k}(\mathcal{H}_{2})$ is finite and we allow a finite number of exceptions, no graph property can distinguish them. Motivated by this observation, we study when the symmetric difference is finite. Our main result is the following: if $|\mathcal{H}|\leq 3$ and the symmetric difference of $\mathcal{F}_{1}(\{H\})$ and $\mathcal{F}_{1}(\mathcal{H})$ is finite, then either $H\in \mathcal{H}$ or $|\mathcal{H}|=3$ and $H=C_{3}$. Furthermore, we prove that if the symmetric difference of $\mathcal{F}_{k}(\{H_{1}\})$ and $\mathcal{F}_{k}(\{H_{2}\})$ is finite, then $H_{1}=H_{2}$.
This article introduces research on the influence of teaching on the change in inexperienced listeners’ appreciation of electroacoustic music. A curriculum was developed to make Key Stage 3 students (11–14 years old) familiar with electroacoustic music. The curriculum introduced the music through concepts such as music with real-world sounds and music with generated sounds. Presented in an online environment and accompanied by a teachers’ handbook, the curriculum can be used online or as a classroom-based teaching resource.
The online environment was developed with the help of user-centred design. Following this, the curriculum was tested in a large-scale study including four Key Stage 3 classes within three schools in Leicester, UK. Data were collected using questionnaires, a listening response test and a summary of the teaching (a letter written by participants). Qualitative content analysis was used for the data analysis.
Results include the change in the participants’ appreciation of electroacoustic music during the study. Successful learning and a decrease in alienation towards electroacoustic music could be measured. The study shows that the appreciation of electroacoustic music can be enhanced through the acquisition of conceptual knowledge. Especially important were the enhancement of listening skills through listening training and the broadening of the participants’ vocabulary, which enabled them to describe their listening experience.
A perfect matching M in an edge-coloured complete bipartite graph $K_{n,n}$ is rainbow if no pair of edges in M have the same colour. We obtain asymptotic enumeration results for the number of rainbow perfect matchings in terms of the maximum number of occurrences of each colour. We also consider two natural models of random edge-colourings of $K_{n,n}$ and show that if the number of colours is at least n, then there is with high probability a rainbow perfect matching. This in particular shows that almost every square matrix of order n in which every entry appears n times has a Latin transversal.
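To make the last correspondence concrete: a square matrix can be read as an edge-colouring of $K_{n,n}$ (entry $A_{ij}$ is the colour of the edge between left-vertex i and right-vertex j), and a Latin transversal is then exactly a rainbow perfect matching. The following brute-force Python sketch is illustrative only (it is exponential in n and not part of the paper); the function name and the small example matrix are invented for the illustration.

from itertools import permutations

def has_latin_transversal(A):
    n = len(A)
    for sigma in permutations(range(n)):            # brute force over all perfect matchings
        entries = [A[i][sigma[i]] for i in range(n)]
        if len(set(entries)) == n:                  # all colours distinct: rainbow
            return True
    return False

# A 3 x 3 matrix in which every entry appears 3 times; it has a Latin transversal.
print(has_latin_transversal([[1, 2, 3], [2, 3, 1], [3, 1, 2]]))   # True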
Alloy is a declarative language for lightweight modelling and analysis of software. The core of the language is based on first-order relational logic, which offers an attractive balance between analysability and expressiveness. The logic is expressive enough to capture the intricacies of real systems, but is also simple enough to support fully automated analysis with the Alloy Analyzer. The Analyzer is built on a SAT-based constraint solver and provides automated simulation, checking and debugging of Alloy specifications. Because of its automated analysis and expressive logic, Alloy has been applied in a wide variety of domains. These applications have motivated a number of extensions both to the Alloy language and to its SAT-based analysis. This paper provides an overview of Alloy in the context of its three largest application domains (lightweight modelling, bounded code verification and test-case generation) and three recent application-driven extensions (an imperative extension to the language, a compiler to executable code and a proof-capable analyser based on SMT).
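As a generic illustration of the bounded, counterexample-driven style of analysis described above, the Python sketch below enumerates every binary relation over a small scope of atoms and reports a counterexample if an assertion fails. This is a naive brute-force stand-in written for exposition only: the Alloy Analyzer itself translates such questions to SAT rather than enumerating instances, and none of the names below belong to Alloy.

from itertools import product

def symmetric(r):
    return all((b, a) in r for (a, b) in r)

def transitive(r):
    return all((a, d) in r for (a, b) in r for (c, d) in r if b == c)

def check(assertion_holds, scope=3):
    atoms = range(scope)
    pairs = list(product(atoms, repeat=2))
    for bits in product([False, True], repeat=len(pairs)):   # every relation in scope
        rel = {p for p, keep in zip(pairs, bits) if keep}
        if not assertion_holds(rel):
            return rel            # counterexample found within the scope
    return None                   # assertion holds for all instances in the scope

# 'Every symmetric relation is transitive' is false; a 2-atom counterexample is found.
print(check(lambda r: not symmetric(r) or transitive(r), scope=2))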
We propose a design and verification methodology supporting the early phases of system design for cooperative driver assistance systems, focusing on the realisability of new automotive functions. Specifically, we focus on applications where drivers are supported in complex driving tasks by safe strategies involving the coordinated movements of multiple vehicles to complete the driving task successfully. We propose a divide-and-conquer approach for formally verifying timed probabilistic requirements on successful completion of the driving task and on collision freedom, based on formal specifications of a set of given manoeuvring and communication capabilities of the car. In particular, this allows an assessment of whether these capabilities are sufficient to implement strategies for successful completion of the driving task.
The papers included in this special issue of Mathematical Structures in Computer Science were selected from a larger set we solicited from leading research groups on both sides of the Atlantic. They cover a wide spectrum of tutorials, recent results and surveys in the area of lightweight and practical formal methods in the design and analysis of safety-critical systems. All the papers we received were submitted to a rigorous process of review and revision, based on which we made our final selection.
Recent accounts of actual causation are stated in terms of extended causal models. These extended causal models contain two elements representing two seemingly distinct modalities. The first element is a set of structural equations, which represent the “(causal) laws” or mechanisms of the model, just as in ordinary causal models. The second element is a set of ranking functions, which represent normality or typicality. The aim of this paper is to show that these two modalities can be unified. I do so by formulating two constraints under which extended causal models with their two modalities can be subsumed under so-called “counterfactual models”, which contain just one modality. These two constraints will be formally precise versions of Lewis’ (1979) familiar “system of weights or priorities” governing overall similarity between possible worlds.
Ptolemy is an open-source and extensible modelling and simulation framework. It offers heterogeneous modelling capabilities by allowing different models of computation, both untimed and timed, to be composed hierarchically in an arbitrary fashion. This paper proposes a formal semantics for Ptolemy that is modular in the sense that atomic actors and their compositions are treated in a unified way. In particular, all actors conform to an executable interface that contains four functions: fire (produce outputs given current state and inputs); postfire (update state instantaneously); deadline (how much time the actor is willing to let elapse); and time-update (update the state with the passage of time). Composite actors are obtained using composition operators that in Ptolemy are called directors. Different directors realise different models of computation. In this paper, we formally define the directors for the following models of computation: synchronous-reactive, discrete event, continuous time, process networks and modal models.
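To make the shape of that executable interface concrete, here is a minimal Python sketch of an actor exposing the four functions named above. The class and method names are illustrative only and are not Ptolemy's actual API; the point is simply that a director can drive any actor through these four calls.

from abc import ABC, abstractmethod

class Actor(ABC):
    @abstractmethod
    def fire(self, state, inputs):
        """Produce outputs given the current state and inputs."""

    @abstractmethod
    def postfire(self, state, inputs):
        """Return the instantaneously updated state."""

    @abstractmethod
    def deadline(self, state, inputs):
        """Return how much time the actor is willing to let elapse."""

    @abstractmethod
    def time_update(self, state, dt):
        """Return the state updated by the passage of dt time units."""

# A director (composition operator) realises a model of computation by
# scheduling calls to these four functions on its component actors.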
The effective use of model-based formal methods in the development of complex embedded systems requires the integration of discrete-event models of controllers with continuous-time models of their environments. This paper proposes a new approach to the development of such combined models (co-models), in which an initial discrete-event model may include approximations of continuous-time behaviour that can subsequently be replaced by couplings to continuous-time models. An operational semantics of co-simulation allows the discrete and continuous models to run on their respective simulators while being managed by a coordinating co-simulation engine. This permits the exploration of the composite co-model's behaviour in a range of operational scenarios. The approach has been realised using the Vienna Development Method (VDM) as the discrete-event formalism, and 20-sim as the continuous-time framework, and has been applied successfully to a case study based on the distributed controller for a personal transporter device.
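The coordination idea can be sketched as a simple lock-step loop. The Python fragment below is only a schematic illustration under an assumed simulator interface (next_event_time, next_step_time, advance, set_inputs); it does not reflect the actual VDM/20-sim tooling described in the paper.

def co_simulate(de_sim, ct_sim, end_time):
    """Advance both simulators in lock-step and exchange shared variables."""
    t = 0.0
    while t < end_time:
        # Advance only as far as both simulators can safely go.
        t_next = min(de_sim.next_event_time(), ct_sim.next_step_time(), end_time)
        ct_outputs = ct_sim.advance(t_next)   # integrate the plant model to t_next
        de_outputs = de_sim.advance(t_next)   # run controller events up to t_next
        de_sim.set_inputs(ct_outputs)         # exchange shared variables
        ct_sim.set_inputs(de_outputs)
        t = t_next

class StubSim:
    """Placeholder simulator exposing the assumed coordination interface."""
    def __init__(self, step):
        self.step, self.t, self.inputs = step, 0.0, None
    def next_event_time(self):
        return self.t + self.step
    next_step_time = next_event_time
    def advance(self, t_next):
        self.t = t_next
        return {"t": t_next}
    def set_inputs(self, values):
        self.inputs = values

co_simulate(StubSim(0.1), StubSim(0.05), end_time=1.0)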
This paper gives a bird's-eye view of the various ingredients that make up a modern, model-checking-based approach to performability evaluation: Markov reward models, temporal logics and continuous stochastic logic, model-checking algorithms, bisimulation and the handling of non-determinism. A short historical account as well as a large case study complete this picture. In this way, we show convincingly that the smart combination of performability evaluation with stochastic model-checking techniques, developed over the last decade, provides a powerful and unified method of performability evaluation, thereby combining the advantages of earlier approaches.
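As a flavour of the kind of property such a combination supports, consider an illustrative query in the style of reward-bounded CSL; the atomic proposition, the probability bound and the bounds t and r are invented for the example.

$$\mathcal{P}_{\geq 0.98}\bigl(\lozenge^{\leq t}_{\leq r}\ \mathit{operational}\bigr)$$

This asks whether, with probability at least 0.98, a state satisfying operational is reached within t time units while accumulating at most r units of reward (for example, energy or repair cost).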
Have formal methods in computer science come of age? While the contributions to this special issue of Mathematical Structures in Computer Science attest to their importance in the design and analysis of particular software systems, their relevance to the field as a whole is far wider. In recent years, formal methods have become more accessible and easier to use, more directly related to practical problems and more adaptable to imperfect and/or approximate specifications in real-life applications. As a result, they are now a central component of computer-science education and research.
The correct and efficient implementation of general real-time applications remains very much an open problem. A key issue is meeting timing constraints whose satisfaction depends on features of the execution platform, in particular its speed. Existing rigorous implementation techniques are applicable to specific classes of systems, for example, with periodic tasks or time-deterministic systems.
We present a general model-based implementation method for real-time systems based on the use of two models:
• An abstract model representing the behaviour of real-time software as a timed automaton, which describes user-defined platform-independent timing constraints. Its transitions are timeless and correspond to the execution of statements of the real-time software.
• A physical model representing the behaviour of the real-time software running on a given platform. It is obtained by assigning execution times to the transitions of the abstract model.
A necessary condition for implementability is time-safety, that is, any (timed) execution sequence of the physical model is also an execution sequence of the abstract model. Time-safety simply means that the platform is fast enough to meet the timing requirements. As the execution times of actions are not known exactly, time-safety is checked for the worst-case execution times of actions under the assumption of time-robustness: time-safety is preserved when the speed of the execution platform increases.
We show that, as a rule, physical models are not time-robust, and that time-determinism is a sufficient condition for time-robustness. For a given piece of real-time software and an execution platform corresponding to a time-robust model, we define an execution engine that coordinates the execution of the application software so that it meets its timing constraints. Furthermore, in the case of non-robustness, the execution engine can detect violations of time-safety and stop execution.
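The following Python fragment sketches, under an assumed and much simplified interface, how such an engine can enforce deadlines taken from the abstract model and stop as soon as a time-safety violation is detected. It is illustrative only and is not the BIP-based engine discussed in the next paragraph.

import time

def run(actions, clock=time.monotonic):
    """actions: a sequence of (execute, deadline) pairs, where deadline is
    relative to the start of the run in seconds (a simplified, assumed
    interface; the real engine executes BIP models, not Python callables)."""
    start = clock()
    for execute, deadline in actions:
        execute()                              # execute one step of the software
        elapsed = clock() - start
        if elapsed > deadline:                 # platform too slow: time-safety is
            raise RuntimeError(                # violated, so stop execution
                f"time-safety violation: {elapsed:.3f}s > {deadline:.3f}s")

run([(lambda: None, 0.5)])                     # trivially meets its deadline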
We have implemented the execution engine for BIP programs with real-time constraints and validated the implementation method for two case studies. The experimental results for a module of a robotic application show that the CPU utilisation and the size of the model are reduced compared with existing implementations. The experimental results for an adaptive video encoder also show that a lack of time-robustness may seriously degrade performance as the platform execution speed increases.
Organisations with stringent security requirements, such as banks or hospitals, frequently adopt role-based access control (RBAC) principles to represent and simplify their internal permission management. Authorisation constraints are a fundamental advanced RBAC concept enabling precise restrictions on access rights, but they increase the complexity of the resulting security policies, so that tool support for convenient creation and adequate validation is required. A particular contribution of our work is a new approach to developing and analysing RBAC policies using a UML-based domain-specific language (DSL), which allows the hiding of the mathematical structures of the underlying authorisation constraints implemented in OCL. The DSL we present is highly configurable and extensible with respect to new concepts and classes of authorisation constraints, and allows the developer to validate RBAC policies in an effective way. The handling of dynamic (that is, time-dependent) constraints, their visual representation through the RBAC DSL and their analysis all form another part of our contribution. The approach is supported by a UML and OCL validation tool.
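As one classic example of the kind of authorisation constraint such policies contain, the Python sketch below checks a static separation-of-duty constraint over a role assignment. It is purely illustrative, with invented data; the approach described here expresses such constraints in a UML-based DSL backed by OCL, not in Python.

def violates_ssod(user_roles, conflicting):
    """user_roles: dict mapping each user to a set of roles; conflicting: a set
    of role pairs that no single user may hold together."""
    return [(user, r1, r2)
            for user, roles in user_roles.items()
            for (r1, r2) in conflicting
            if r1 in roles and r2 in roles]

assignment = {"alice": {"teller", "auditor"}, "bob": {"teller"}}
print(violates_ssod(assignment, {("teller", "auditor")}))
# -> [('alice', 'teller', 'auditor')]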
A classical result of Robertson and Seymour states that the set of graphs containing a fixed planar graph H as a minor has the so-called Erdős–Pósa property; namely, there exists a function f depending only on H such that, for every graph G and every positive integer k, either the graph G has k vertex-disjoint subgraphs each containing H as a minor, or there exists a subset X of vertices of G with |X| ≤ f(k) such that G − X has no H-minor (see Robertson and Seymour, J. Combin. Theory Ser. B 41 (1986) 92–114). While the best function f currently known is exponential in k, an O(k log k) bound is known in the special case where H is a forest. This is a consequence of a theorem of Bienstock, Robertson, Seymour and Thomas on the pathwidth of graphs with an excluded forest-minor. In this paper we show that the function f can be taken to be linear when H is a forest. This is best possible in the sense that no linear bound is possible if H has a cycle.
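Written out, the improvement states that for every forest H there is a constant $c_H$, depending only on H, such that one may take $f(k)=c_H\,k$; that is, for every graph G and every positive integer k,

$$G \text{ has } k \text{ vertex-disjoint subgraphs each containing } H \text{ as a minor,}\quad\text{or}\quad \exists\, X\subseteq V(G),\ |X|\leq c_H\,k,\ \text{such that } G-X \text{ has no } H\text{-minor}.$$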
This volume contains nine survey articles based on the invited lectures given at the 24th British Combinatorial Conference, held at Royal Holloway, University of London in July 2013. This biennial conference is a well-established international event, with speakers from around the world. The volume provides an up-to-date overview of current research in several areas of combinatorics, including graph theory, matroid theory and automatic counting, as well as connections to coding theory and Bent functions. Each article is clearly written and assumes little prior knowledge on the part of the reader. The authors are some of the world's foremost researchers in their fields, and here they summarise existing results and give a unique preview of cutting-edge developments. The book provides a valuable survey of the present state of knowledge in combinatorics, and will be useful to researchers and advanced graduate students, primarily in mathematics but also in computer science and statistics.