A definition of a property P is impredicative if it quantifies over a domain to which P belongs. Due to influential arguments by Ramsey and Gödel, impredicative mathematics is often thought to possess special metaphysical commitments. The reason is that an impredicative definition of a property P does not have its intended meaning unless P exists, suggesting that the existence of P cannot depend on its explicit definition. Carnap (1937 [1934], p. 164) argues, however, that accepting impredicative definitions amounts to choosing a “form of language” and is free from metaphysical implications. This article explains this view in its historical context. I discuss the development of Carnap’s thought on the foundations of mathematics from the mid-1920s to the mid-1930s, concluding with an account of Carnap’s (1937 [1934]) non-Platonistic defense of impredicativity. This discussion is also important for understanding Carnap’s influential views on ontology more generally, since Carnap’s (1937 [1934]) view, according to which accepting impredicative definitions amounts to choosing a “form of language”, is an early precursor of the view that Carnap presents in “Empiricism, Semantics and Ontology” (1956 [1950]), according to which referring to abstract entities amounts to accepting a “linguistic framework”.
In this note, we use the Perron–Frobenius theorem to obtain the Rényi entropy rate for a time-inhomogeneous Markov chain whose transition matrices converge to a primitive matrix. As direct corollaries, we also obtain the Rényi entropy rate for an asymptotic circular Markov chain and the Rényi divergence rate between two time-inhomogeneous Markov chains.
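The time-inhomogeneous result above extends the classical homogeneous-chain formula, in which the order-$\alpha$ Rényi entropy rate equals $\log \lambda_\alpha / (1-\alpha)$, where $\lambda_\alpha$ is the Perron–Frobenius eigenvalue of the entrywise power $[p_{ij}^\alpha]$. A minimal numerical sketch of that homogeneous case (the example chain is illustrative, not from the paper):

```python
import numpy as np

def renyi_entropy_rate(P, alpha):
    """Renyi entropy rate of order alpha (alpha != 1) for an irreducible,
    homogeneous Markov chain with transition matrix P, computed from the
    Perron-Frobenius eigenvalue of the entrywise power P ** alpha."""
    lam = np.max(np.linalg.eigvals(P ** alpha).real)  # Perron root
    return np.log(lam) / (1.0 - alpha)

# Two-state example chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution (left eigenvector of P for eigenvalue 1).
w, V = np.linalg.eig(P.T)
pi = V[:, np.argmin(np.abs(w - 1))].real
pi /= pi.sum()

# Shannon entropy rate: sum_i pi_i * H(P[i, :]).
shannon = -np.sum(pi[:, None] * P * np.log(P))

# As alpha -> 1 the Renyi rate recovers the Shannon entropy rate.
print(renyi_entropy_rate(P, 1.001), shannon)
```

The limit $\alpha \to 1$ recovering the Shannon entropy rate is a useful sanity check on any implementation of this formula.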
The general problem under investigation is to understand how the complexity of a system that has been adapted to its random environment affects the level of randomness of its output (which is a function of its random input). In this paper, we consider a specific instance of this problem in which a deterministic finite-state decision system operates in a random environment modeled by a binary Markov chain. The system interacts with the environment by trying to match states of inactivity (represented by 0): it selects the (t + 1)th bit from the Markov chain whenever it predicts at time t that the environment will take a 0 value. The actual value at time t + 1 may be 0 or 1; thus the selected sequence of bits (which forms the system's output) may contain both binary values. To predict well, the system's decision function is inferred from a sample of the random environment.
We are interested in assessing how non-random the output sequence may be. To do that, we apply the adapted system to a second random sample of the environment and derive an upper bound on the deviation between the average number of 1 bits in the output sequence and the probability of a 1. The bound shows that the complexity of the system has a direct effect on this deviation and hence on how non-random the output sequence may be. The bound takes the form $O(\sqrt{2^k/n})$, where $2^k$ is the complexity of the system and n is the length of the second sample.
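A toy simulation of the selection mechanism described above (the chain parameters, the predictor order k, and the majority-vote inference rule are all assumptions made here for illustration; the paper's bound itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary Markov chain environment (parameters assumed for illustration).
P0 = {0: 0.8, 1: 0.3}  # P(next bit = 0 | current bit)

def sample_chain(n, rng):
    bits = [0]
    for _ in range(n - 1):
        bits.append(0 if rng.random() < P0[bits[-1]] else 1)
    return np.array(bits)

k, n = 3, 20000          # 2**k contexts; sample length

# Infer the decision function from a first sample: for each length-k
# context, predict 0 iff 0 was the majority successor in the sample.
train = sample_chain(n, rng)
counts = np.zeros((2 ** k, 2))
for t in range(k, n):
    ctx = int("".join(map(str, train[t - k:t])), 2)
    counts[ctx, train[t]] += 1
predict_zero = counts[:, 0] >= counts[:, 1]

# Apply the adapted system to a second, independent sample: whenever the
# system predicts 0, the next bit is selected into the output sequence.
test = sample_chain(n, rng)
selected = [test[t] for t in range(k, n)
            if predict_zero[int("".join(map(str, test[t - k:t])), 2)]]

# Fraction of 1s among selected bits vs. the overall fraction of 1s.
print(np.mean(selected), np.mean(test))
```

In this toy setting the selected sequence carries noticeably fewer 1s than the raw environment, which is exactly the kind of non-randomness of the output that the bound quantifies.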
Recent discussions on Fregean and neo-Fregean foundations for arithmetic and real analysis pay much attention to what is called either ‘Application Constraint’ ($AC$) or ‘Frege Constraint’ ($FC$), the requirement that a mathematical theory be so outlined that it immediately allows explaining its applicability. We distinguish between two constraints, which we denote, respectively, by these two names, by showing how $AC$ generalizes Frege’s views while $FC$ comes closer to his original conceptions. Different authors diverge on the interpretation of $FC$ and on whether it applies to definitions of both natural and real numbers. Our aim is to trace the origins of $FC$ and to explore how different understandings of it can be faithful to Frege’s views about such definitions and to his foundational program. After rehearsing the essential elements of the relevant debate (§1), we appropriately distinguish $AC$ from $FC$ (§2). We discuss six rationales which may motivate the adoption of different instances of $AC$ and $FC$ (§3). We turn to the possible interpretations of $FC$ (§4), and advance a Semantic $FC$ (§4.1), arguing that while it suits Frege’s definition of natural numbers (§4.1.1), it cannot reasonably be imposed on definitions of real numbers (§4.1.2), for reasons only partly similar to those offered by Crispin Wright (§4.1.3). We then rehearse a recent exchange between Bob Hale and Vadim Batitzky to shed light on Frege’s conception of real numbers and magnitudes (§4.2). We argue that an Architectonic version of $FC$ is indeed faithful to Frege’s definition of real numbers, and compatible with his views on natural ones. Finally, we consider how attributing different instances of $FC$ to Frege and appreciating the role of the Architectonic $FC$ can provide a more perspicuous understanding of his foundational program, by questioning common pictures of his logicism (§5).
Let $\{ \mathbf{X}_k = (X_{1,k}, X_{2,k})^{\top},\ k \ge 1 \}$ be a sequence of independent and identically distributed random vectors whose components are allowed to be generally dependent, with marginal distributions being from the class of extended regular variation, and let $\{ \boldsymbol{\Theta}_k = (\Theta_{1,k}, \Theta_{2,k})^{\top},\ k \ge 1 \}$ be a sequence of nonnegative random vectors that is independent of $\{ \mathbf{X}_k,\ k \ge 1 \}$. Under several mild assumptions, some simple asymptotic formulae of the tail probabilities for the bidimensional randomly weighted sums $\left( \sum_{k=1}^n \Theta_{1,k} X_{1,k},\ \sum_{k=1}^n \Theta_{2,k} X_{2,k} \right)^{\top}$ and their maxima $\left( \max_{1 \le i \le n} \sum_{k=1}^i \Theta_{1,k} X_{1,k},\ \max_{1 \le i \le n} \sum_{k=1}^i \Theta_{2,k} X_{2,k} \right)^{\top}$ are established. Moreover, uniformity of the estimate can be achieved under some technical moment conditions on $\{ \boldsymbol{\Theta}_k,\ k \ge 1 \}$. Direct applications of the results to risk analysis are proposed, with two types of ruin probability for a discrete-time bidimensional risk model being evaluated.
Suppose there are n players, with player i having value vi > 0, and suppose that a game between i and j is won by i with probability vi/(vi + vj). In the winner-plays random knockout tournament, we suppose that the players are lined up in a random order; the first two play, and in each subsequent game the winner of the last game plays the next player in line. Whoever wins the game involving the last player in line is the tournament winner. We give bounds on players’ tournament win probabilities and make some conjectures. We also discuss how simulation can be efficiently employed to estimate the win probabilities.
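The tournament format is simple enough that a Monte Carlo estimate of the win probabilities can be sketched directly from the description above (a generic sketch, not the paper's own simulation scheme; player values are arbitrary examples):

```python
import random

def play(i, j, v, rng):
    """Game between players i and j; i wins with probability v[i]/(v[i]+v[j])."""
    return i if rng.random() < v[i] / (v[i] + v[j]) else j

def winner_plays_tournament(v, rng):
    """One winner-plays random knockout tournament among players 0..n-1."""
    order = list(range(len(v)))
    rng.shuffle(order)                 # random line-up
    champ = order[0]
    for nxt in order[1:]:              # winner of the last game plays next in line
        champ = play(champ, nxt, v, rng)
    return champ                       # winner of the game with the last in line

def estimate_win_probs(v, reps=20000, seed=1):
    """Monte Carlo estimate of each player's tournament win probability."""
    rng = random.Random(seed)
    wins = [0] * len(v)
    for _ in range(reps):
        wins[winner_plays_tournament(v, rng)] += 1
    return [w / reps for w in wins]

# Three players with values 1, 1, 2: the strongest should win most often.
probs = estimate_win_probs([1.0, 1.0, 2.0])
print(probs)
```

Each replication costs only n − 1 simulated games, so even crude Monte Carlo is cheap; the paper discusses how such simulation can be made more efficient.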
The majority of multi-agent reinforcement learning (MARL) implementations aim to optimize systems with respect to a single objective, despite the fact that many real-world problems are inherently multi-objective in nature. Research into multi-objective MARL is still in its infancy, and few studies to date have dealt with the issue of credit assignment. Reward shaping has been proposed as a means to address the credit assignment problem in single-objective MARL; however, if misused it has been shown to alter the intended goals of a domain, leading to unintended behaviour. Two popular shaping methods are potential-based reward shaping and difference rewards, and both have been repeatedly shown to improve learning speed and the quality of joint policies learned by agents in single-objective MARL domains. This work discusses the theoretical implications of applying these shaping approaches to cooperative multi-objective MARL problems, and evaluates their efficacy using two benchmark domains. Our results constitute the first empirical evidence that agents using these shaping methodologies can sample true Pareto optimal solutions in cooperative multi-objective stochastic games.
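The reason potential-based shaping can avoid altering a domain's goals is the classical result of Ng, Harada and Russell (1999): a shaping bonus of the form $F(s, s') = \gamma\,\Phi(s') - \Phi(s)$ telescopes along any trajectory, so it cannot change the ranking of policies. A minimal sketch (the potential values are arbitrary illustrations):

```python
# Potential-based reward shaping: F(s, s') = gamma * phi(s') - phi(s).
gamma = 1.0
phi = {"s0": 0.0, "s1": 2.0, "s2": 5.0, "goal": 0.0}  # illustrative potentials

def shaped_bonus(s, s_next):
    """Shaping term added to the environment reward on transition s -> s_next."""
    return gamma * phi[s_next] - phi[s]

trajectory = ["s0", "s1", "s2", "goal"]
total_bonus = sum(shaped_bonus(s, sn) for s, sn in zip(trajectory, trajectory[1:]))
# With gamma = 1, the sum telescopes to phi["goal"] - phi["s0"],
# regardless of which intermediate states the trajectory visits.
print(total_bonus)
```

Shaping schemes that do not have this potential-based form lack the telescoping guarantee, which is the route by which misused shaping can change the intended goals of a domain.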
We present $\mathscr{I}$-DLV+$\mathscr{MS}$, a new answer set programming (ASP) system that integrates an efficient grounder, namely $\mathscr{I}$-DLV, with an automatic selector that inductively chooses a solver: depending on some inherent features of the instantiation produced by $\mathscr{I}$-DLV, machine learning techniques guide the selection of the most appropriate solver. The system participated in the latest (7th) ASP competition, winning the regular track, category SP (i.e., one processor allowed).
This work addresses a new framework that proposes a decentralized strategy for collective and collaborative behaviours of multi-agent systems. This framework includes a new clustering behaviour that causes agents in the swarm to agree on attending a group and allocating a leader for each group, in a decentralized and local manner. The leader of each group employs a vision-based goal detection algorithm to find and acquire the goal in a cluttered environment. As soon as the leader starts moving, each member is enabled to move in the same direction by staying coordinated with the leader and maintaining the desired formation pattern. In addition, an exploration algorithm is designed and integrated into the framework so as to allow each group to explore goals in a collaborative and efficient manner. A series of comprehensive experiments are conducted in order to verify the overall performance of the proposed framework.
Reinforcement learning (RL) algorithms are often used to compute agents capable of acting in environments without prior knowledge of the environment dynamics. However, these algorithms struggle to converge in environments with large branching factors and their large resulting state-spaces. In this work, we develop an approach to compress the number of entries in a Q-value table using a deep auto-encoder. We develop a set of techniques to mitigate the large branching factor problem. We present the application of such techniques in the scenario of a real-time strategy (RTS) game, where both state space and branching factor are a problem. We empirically evaluate an implementation of the technique to control agents in an RTS game scenario where classical RL fails and provide a number of possible avenues of further work on this problem.
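The abstract does not specify the auto-encoder architecture; as a stand-in for the deep auto-encoder, here is a minimal linear auto-encoder (trained by plain gradient descent on a synthetic low-rank Q-table) that shows the compress-and-reconstruct idea behind shrinking a Q-value table:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Q-table: 200 states x 8 actions, given low-rank structure so that
# compression is actually possible (illustrative data, not from the paper).
U = rng.normal(size=(200, 3))
V = rng.normal(size=(3, 8))
Q = U @ V

d = 3                                         # size of the latent code
W_enc = rng.normal(scale=0.1, size=(8, d))    # encoder weights
W_dec = rng.normal(scale=0.1, size=(d, 8))    # decoder weights
lr = 1e-3

def reconstruction_error(Q):
    """Mean squared error between Q and its encode-decode reconstruction."""
    return np.mean((Q @ W_enc @ W_dec - Q) ** 2)

err0 = reconstruction_error(Q)
for _ in range(2000):
    Z = Q @ W_enc                 # encode each state's action-value row
    R = Z @ W_dec - Q             # reconstruction residual
    W_dec -= lr * (Z.T @ R) / len(Q)
    W_enc -= lr * (Q.T @ (R @ W_dec.T)) / len(Q)
err1 = reconstruction_error(Q)
print(err0, err1)
```

Each state's 8 action values are stored as a 3-dimensional code, and the decoder recovers them on demand; a deep nonlinear auto-encoder generalizes this by allowing the compression map to exploit nonlinear structure in the Q-values.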
We introduce unification in first-order logic. In propositional logic, unification was introduced by S. Ghilardi; see Ghilardi (1997, 1999, 2000). He successfully applied it to solve systematically the problem of admissibility of inference rules in intuitionistic and transitive modal propositional logics. Here we focus on superintuitionistic predicate logics and apply unification to some old and new problems: definability of disjunction and the existential quantifier, disjunction and the existential quantifier under implication, admissible rules, a basis for the passive rules, (almost) structural completeness, etc. To this aim we apply suitably modified notions introduced in propositional logic by Ghilardi, such as projective formulas and projective unifiers.
Unification in predicate logic seems to be harder than in the propositional case. Any definition of the key concept of substitution for predicate variables must take care of individual variables. We allow substitutions to introduce new free individual variables (contrary to Pogorzelski & Prucnal (1975)). Moreover, since predicate logic is not as close to algebra as propositional logic, a direct application of useful algebraic notions such as finitely presented algebras, projective algebras, etc., is not possible.
As more and more data is being generated by sensor networks, social media and organizations, the Web interlinking this wealth of information becomes more complex. This is particularly true for the so-called Web of Data, in which data is semantically enriched and interlinked using ontologies. In this large and uncoordinated environment, reasoning can be used to check the consistency of the data and of associated ontologies, or to infer logical consequences which, in turn, can be used to obtain new insights from the data. However, reasoning approaches need to be scalable in order to enable reasoning over the entire Web of Data. To address this problem, several high-performance reasoning systems, which mainly implement distributed or parallel algorithms, have been proposed in the last few years. These systems differ significantly, for instance in terms of reasoning expressivity, computational properties such as completeness, or reasoning objectives. In order to provide a first complete overview of the field, this paper reports a systematic review of such scalable reasoning approaches over various ontological languages, giving details about the methods and the conducted experiments. We highlight the shortcomings of these approaches and discuss some of the open problems related to performing scalable reasoning.
This research introduces different compositional techniques involving the use of sound spatialisation. These permit the incorporation of sound distortions produced by the real space, the body and the auditory system into low-, middle- and large-scale musical structures, allowing sound spatialisation to become a fundamental parameter of the three compositions presented here. An important characteristic of these pieces is the exclusive use of sine waves and other time-invariant sound signals. Even though these types of signals present no alterations in time, it is possible to perceive pitch, loudness and tone-colour variations when they move in space, due to the psychoacoustic processes involved in spatial hearing. To emphasise the perception of such differences, this research proposes dividing a tone into multiple sound units and spreading these in space using several loudspeakers arranged around the listener. In addition to the perception of sound attribute variations, it is also possible to create dynamic rhythms and textures that depend almost exclusively on how sound units are arranged in space. Such compositional procedures help to overcome, to some degree, the unnaturalness implicit in using synthetically generated sounds; through them, it is possible to establish cause–effect relationships between sound movement, on the one hand, and the perception of sound attribute, rhythm and texture variations on the other. Another important consequence is the possibility of producing diffuse sound fields independently of the levels of reverberation in the room, and of creating sound spaces of a particular spatial depth without using artificial delay or reverb.
This article shows how the theremin as a new musical medium enacted a double logic throughout its century-old techno-cultural life. On the one hand, in an attempt to be a ‘better’ instrument, the theremin imitated or remediated traditional musical instruments and in this way affirmed the musical values these instruments materialised; simultaneously, by being a new and different medium, with unprecedented flexibility for designing sound and human–machine interaction, it eroded and challenged these same values and gradually enacted change. On the other hand, the theremin inadvertently inaugurated a practice of musical instrument circulation using electronics schematics that allowed for the instrument’s reproduction, starting with the publication of schematics and tutorials in amateur electronics magazines, and which can be seen as a predecessor to today’s circulation of open source code. This circulation practice, which I call instrument-code transduction, emerged from and was amplified by the fame the theremin obtained using its touchless interface to imitate or remediate traditional musical instruments, and in turn, this circulation practice has kept the instrument alive throughout the decades. Thus remediation and instrument-code transduction are not just mutually dependent but are in fact two interdependent processes of the same media phenomenon. Drawing from early reactions to the theremin documented in the press, from new media theory, and from publications in amateur electronics, this article attempts to use episodes from the history of the theremin to understand the early and profound changes that electric technologies brought to the concept of musical instruments at large.
Western audiences have long been fascinated with music automata. Against this backdrop, it may not be surprising that art and music curators display historical examples of such mechanical instruments together with contemporary sounding art. Yet what exactly do these curators aim to accomplish when combining historical music automata with kinetic sound art? And do visitors understand the connections between the objects on display in the ways intended by the curators? To examine the curators’ ambitions, this article analyses three exhibitions: Für Augen und Ohren (West Berlin 1980), Ballet Mécanique (Maastricht 2002) and Art or Sound (Venice 2014). To unravel visitors’ responses, we focus on the Berlin exhibition, the best documented case. We argue that the curators staged the automated kinetic as a key historical link between mechanical musical instruments and contemporary sound art, and that they tried to tap into specific dimensions of public fascination with musical automata – the magical invisible, mechanical wonder and the blurring of boundaries – to open their audiences’ senses to sound art. As we will show with the help of the notion of ‘listening habitus’, visitors’ responses indeed drew on these dimensions, but more often than not displayed a preference for the historical automata over contemporary kinetic art.