The Enigma and Hagelin machines provided a much greater degree of security than any earlier systems of encipherment other than the unbreakable one-time pad. The cryptographic principles on which these two machines were based were quite simple. The Enigma provided a large number of substitution alphabets whilst the Hagelin generated a very long stream of pseudo-random key. In theory either machine could be modified to make it even more secure: the number of wheels could be increased and, in the Hagelin, the wheels could be made longer. In practice, modification of an existing cipher machine may present major difficulties of manufacture, distribution and compatibility with the original machine, which may be vital. A four-wheel Enigma was, in fact, introduced in 1942, and compatibility with the original three-wheel version was achieved by arranging that, with the new components in specified positions, the old and new versions were cryptographically identical. Several new models of the Hagelin were produced by that company in the 1950s with different-sized wheels and other features, but these were genuinely different machines and no attempt was made to provide compatibility with the original.
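By way of a rough illustration only, the two principles can be contrasted in a few lines of Haskell. This is a toy sketch, not a model of either actual machine: the function names are illustrative, it handles only the upper-case letters A–Z, and the "rotor" is reduced to a single stepping shift.

```haskell
import Data.Char (chr, ord)

fromLetter :: Char -> Int
fromLetter c = ord c - ord 'A'

toLetter :: Int -> Char
toLetter n = chr (ord 'A' + n)

-- Enigma-style principle only: a different substitution alphabet for every
-- letter position. Here the "rotor" is just a Caesar shift that steps on
-- after each letter; the real machine composed several wired rotors, a
-- reflector and a plugboard.
rotorEncipher :: String -> String
rotorEncipher = zipWith step [0 ..]
  where
    step i c = toLetter ((fromLetter c + i) `mod` 26)

-- Hagelin-style principle: add a long pseudo-random key stream to the
-- plaintext, letter by letter, modulo 26.
streamEncipher :: [Int] -> String -> String
streamEncipher key = zipWith add key
  where
    add k c = toLetter ((fromLetter c + k) `mod` 26)
```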
It might seem obvious that increasing the number of components in, or increasing the complexity of, a cipher machine will make it more secure, but this is not necessarily so. The more components there are, the more likely it becomes that operators will make errors. The greater the complexity, the greater the chance of a machine malfunction.
The first general purpose computers were built in the 1940s. They were large, filling big rooms. They used hundreds of valves and consumed many kilowatts of electricity. They performed about a thousand instructions a second, which was considered amazing at that time, and they were popularly referred to as ‘giant brains’. A few people, including Alan Turing, discussed ‘whether machines could think’ and laid bets as to whether a machine would defeat the World Chess Champion in the next 25 years. The former question remains a matter for debate; the latter was settled about 45 years later when a World Chess Champion did lose a match to a computer.
These early machines had very small direct-access memories, only a thousand or so ‘words’, based upon cathode ray tubes or mercury delay lines. They rarely functioned for more than a few minutes before breaking down. Their input and output were primitive: paper tape or punched cards and a typewriter. They also cost a great deal of money: £100 000 in 1948, which was equivalent to several million pounds 30 years later. Very few people knew how to write programs for them. There was virtually no software (as it later became known) and all programs had to be written in ‘absolute machine code’.
Even the instruction codes of these machines were very limited. The first machine at Manchester University in 1948, for example, had no division instruction [12.2], so division had to be programmed by repeated subtraction.
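As a small illustration of the idea (written in Haskell rather than 1948 machine code, and assuming a non-negative dividend and a positive divisor), repeated subtraction looks like this:

```haskell
-- Division by repeated subtraction, as it had to be programmed on a machine
-- with no divide instruction: keep subtracting the divisor and count how
-- many subtractions fit before the remainder becomes smaller than it.
divBySubtraction :: Int -> Int -> (Int, Int)   -- (quotient, remainder)
divBySubtraction n d = go 0 n
  where
    go q r
      | r < d     = (q, r)
      | otherwise = go (q + 1) (r - d)
```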
K-k-k-k-k-k. Boof. Chhhhhhh. Hiss, whirr, pfing: 12k's is an onomatopoeic music. The audio equivalent of concrete poetry perhaps, where the sign (there, a letter or a word; here, a click or thud or tone) is elephantised, blown out of proportion, simultaneously stripped of reference and fraught with multivalent meaning. (Is it any coincidence that 12k founder Taylor Deupree is a graphic designer? You can almost hear the flip of a serif in 12k's grainy whoosh, or the nuzzling of kiss-kerned type in the cool brush of two bleeps.) In the releases of this Brooklyn label, notes, pulses, textures bear no immediate relation to the world around them, to a language of melody or tonal narrative, but in their careful melding of pulse and grain, they sketch an abstracted narrative of the development of several phenomena: electronic music, desktop DIY, and, especially, minimalism.
Milch is taken from Nicolai's first solo exhibition in the UK (shown in 2001) and is a special commission for Milch's industrial space. The minimal installation is a repetition of the same groups of objects four times over. There is a CD and amplifier connected to four 9″ speaker cones by thick coils of black cable. A shallow 2 × 1 m tray containing water is placed on top of the speaker cones and sonic frequencies constantly stretch and bend the water's molecules. The surface shifts from absolute flatness through spirographic swirls to geometric grids, effects which are recorded in a series of photographs hung around the gallery.
In this article Thaemlitz posits that the borders of digital audio production in its current state are not determined by battles between academia and the marketplace, subsidy and enterprise, or high-brow and low-brow. However, such binarisms continue to frame our efforts in ways which fuel rhetoric of transformation and revolution, while diffusing our material ability to impact a cultural mainstream. Rather than attempting to resolve such divisiveness and hypocrisy in our behaviour, Thaemlitz proposes an increased awareness of the cultural processes which facilitate our simultaneous participation in such seemingly irreconcilable arenas. In other words, celebrating diversity sometimes means throwing a party for a friend you are not particularly fond of.
The Spring sunshine makes the blind a perfect glowing square, clearly much later than the twenty-three minute duration of the live recording I set in motion, as I lay on my friend's bed alone, drunk on vodka and tonic and giddy with big city kicks after the very . . . Manhattan evening I was taken on. It was dark, I was too full of it to take the subway back to Brooklyn, I remember the cab ride over the bridge, no dog to greet me as I unlocked the heavy steel door. I was laughing at Alan Vega complaining about not being allowed to smoke, there was booing . . . ‘Frankie Teardrop’ had been glitching over Brussels for hours.
This article attempts to place the dynamic of popular electronic music, evident since the late 1970s and extending to the current situation, at the forefront of developing both a strategy of critique and a medium for critical reflection. The genres covered include electronic music in and around the time of punk rock, subsequent moves to the forcing ground of pop music with the era of synth-pop, the dynamic upheavals created through the surging form of techno, up to the prevalent genres of micro-sound and electronica. The essence of both the body of critique formulated by the music, and the critical reflection of such critique (as developed in works when a genre of music is said to have ‘passed’) is not pinned to a rigid model of assessment. Instead, the strength of popular music's ability to strategise and conduct a critique is considered in the self-same music's ability to subvert the definition and ‘metric spaces’ of critique as accumulated through previous genres.
When we ask what noise is, we would do well to remember that no single definition can function timelessly - this may well be the case with many terms, but one of the arguments of this essay is that noise is that which always fails to come into definition. Generally speaking, noise is taken to be a problem: unwanted sound, unorganised sound, excessively loud sound. Metaphorically, when we hear of noise being generated, we understand it to be something extraneous. Historically, though, noise has just as often signalled music, or pleasing sound, as its opposite. In the twentieth century, the notion of a clear line between elements suitable for compositional use (i.e. notes, created on instruments) and the world of noises was broken down. Russolo's ‘noisy machines’, Varèse and Satie's use of ostensibly non-musical machines to generate sounds, musique concrète, Cage's rethinking of sound, noise, music, silence . . .
As part of the evolution of our editorial structures for Volume 6 we were delighted to be able to establish a post on our Editorial Board for a representative of the International Computer Music Association (ICMA). We hoped to support the ICMA in a fruitful way, and I am now delighted once again to be able to announce that Organised Sound will be devoting its third issue annually to an ICMA theme and the work of the association's members (from Volume 7 Number 3). We hope that this will provide an additional dissemination vehicle for the organisation, allow its conference papers to be developed and expanded, and provide an opportunity to publish work based upon the ICMA special interest groups and the association's affiliates. A representative of the ICMA will become a Guest Editor alongside the Organised Sound team of Editors and international referees for the third issue in each year.
The label Mille Plateaux focuses on concepts like virtuality, noise, machinism and digitality. In the simplest case, digital music simulates something that does not exist as a reality; it generates something new. It is the result of the teamwork of numerous authorities such as the 'musician', the programmer and the authority of the software program. Today, computer digital music can be seen as screen-based music, i.e. sounds become visible and images audible, but it is easy to forget that there is no mutual correspondence, and that this is simply a mechanism whereby a given program secretly directs the programmer towards significant ways of performing, creating apparently absolute relationships between image and sound. On the other hand, with the increasing complexity of software, the programmer loses insight into internal communication structures. Such complex programs are full of errors and can even act on their own initiative. Programmers and musicians who navigate through today's systems function as designers. But this is less a question of the design of a program's operation surfaces than of the programming of software and of navigation by its logic. One has to discuss the medial conditions of digital music: the more user-friendly the software, the less transparent the medium itself; i.e. the more transparent the functions of a computer or a synthesizer appear (say, with the use of preset sounds), the more the medium proves to be non-transparent. Digital music is more about opening up given program structures; internal ramifications and program hierarchies are to be discovered.
In higher-order abstract syntax, the variables and bindings of an object language are represented by variables and bindings of a meta-language. Let us consider the simply typed λ-calculus as object language and Haskell as meta-language. For concreteness, we also throw in integers and addition, but only in this section.
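As a rough sketch of the idea (the constructor names, the Val helper and the little evaluator below are illustrative, not taken from the article), such an object language can be embedded so that object-level binding and variables are simply Haskell's own:

```haskell
{-# LANGUAGE GADTs #-}

-- Higher-order abstract syntax: the object-language binder (Lam) takes a
-- Haskell function, so object variables are Haskell variables and
-- substitution comes for free from the meta-language.
data Exp a where
  IntLit :: Int -> Exp Int
  Add    :: Exp Int -> Exp Int -> Exp Int
  Lam    :: (Exp a -> Exp b) -> Exp (a -> b)
  App    :: Exp (a -> b) -> Exp a -> Exp b
  Val    :: a -> Exp a               -- used internally by the evaluator

eval :: Exp a -> a
eval (IntLit n) = n
eval (Add x y)  = eval x + eval y
eval (Lam f)    = \a -> eval (f (Val a))
eval (App f x)  = eval f (eval x)
eval (Val a)    = a

-- (\x -> x + 1) applied to 41 evaluates to 42.
example :: Int
example = eval (App (Lam (\x -> Add x (IntLit 1))) (IntLit 41))
```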
Launchbury and Peyton Jones came up with an ingenious idea for embedding regions of imperative programming in a pure functional language like Haskell. The key idea was based on a simple modification of Hindley-Milner's type system. Our first contribution is to propose a more natural encapsulation construct exploiting higher-order kinds, which achieves the same encapsulation effect, but avoids the ad hoc type parameter of the original proposal. The second contribution is a type safety result for encapsulation of strict state using both the original encapsulation construct and the newly introduced one. We establish this result in a more expressive context than the original proposal, namely in the context of the higher-order lambda-calculus. The third contribution is a type safety result for encapsulation of lazy state in the higher-order lambda-calculus. This result resolves an outstanding open problem on which previous proof attempts failed. In all cases, we formalize the intended implementations as simple big-step operational semantics on untyped terms, which capture interesting implementation details not captured by the reduction semantics proposed previously.
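The original construct is the one now familiar from GHC's libraries: runST has the rank-2 type (forall s. ST s a) -> a, and the phantom type parameter s is the "ad hoc type parameter" the abstract refers to. A minimal usage sketch of that original interface (not of the article's higher-order-kind alternative; sumST is a made-up helper):

```haskell
import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- Mutable state is used internally, but sumST is observably pure: the
-- rank-2 type of runST prevents the STRef from escaping the state region.
sumST :: [Int] -> Int
sumST xs = runST action
  where
    action :: ST s Int
    action = do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc
```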
This article describes the implementation of a debugger for lazy functional languages like Haskell. The key idea is to construct a declarative trace which hides the operational details of lazy evaluation. However, to avoid excessive memory consumption, the trace is constructed one piece at a time, as needed during a debugging session, by automatic re-execution of the program being debugged. The article gives a fairly detailed account of both the underlying ideas and of our implementation, and also presents performance figures which demonstrate the feasibility of the approach.
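To give a feel for what "declarative" means here, a hedged sketch of my own (not the article's actual trace representation): one node of such a trace might record a completed reduction together with the reductions performed on its behalf, and nothing about when or in what order lazy evaluation carried them out.

```haskell
-- One node of a declarative trace: a redex, the value it reduced to, and the
-- reductions demanded while producing that value. Terms are shown as the
-- user would read them; operational details (thunks, evaluation order) are gone.
data Trace = Reduction
  { redex    :: String   -- e.g. "insert 3 [1,2]"
  , result   :: String   -- e.g. "[1,2,3]"
  , children :: [Trace]  -- reductions performed on behalf of this one
  }
```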
Suppose you want to implement a structured editor for some term type, so that the user can navigate through a given term and perform edit actions on subterms. In this case you are immediately faced with the problem of how to keep track of the cursor movements and the user's edits in a reasonably efficient manner. In a previous pearl, Huet (1997) introduced a simple data structure, the Zipper, that addresses this problem – we will explain the Zipper briefly in section 2. A drawback of the Zipper is that the type of cursor locations depends on the structure of the term type, i.e. each term type gives rise to a different type of location (unless you are working in an untyped environment). In this pearl, we present an alternative data structure, the web, that serves the same purpose, but that is parametric in the underlying term type. Sections 3–6 are devoted to the new data structure. Before we unravel the Zipper and explore the web, let us first give a taste of their use.
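For readers who have not met it, here is the textbook Zipper for a plain binary tree (a standard presentation of Huet's idea, with names of my choosing; it is not the pearl's web): the cursor is the subtree in focus together with the path of contexts saved on the way down, so moving the cursor and editing at the focus are constant-time.

```haskell
-- A binary tree and a Zipper over it. The navigation functions are partial
-- for brevity: goLeft/goRight are undefined at a leaf, goUp at the root.
data Tree a = Leaf a | Node (Tree a) (Tree a)

data Ctx a
  = Top                     -- we are at the root
  | InL (Ctx a) (Tree a)    -- we went left; the right sibling is stored
  | InR (Tree a) (Ctx a)    -- we went right; the left sibling is stored

type Loc a = (Tree a, Ctx a)

goLeft, goRight, goUp :: Loc a -> Loc a
goLeft  (Node l r, c) = (l, InL c r)
goRight (Node l r, c) = (r, InR l c)
goUp    (t, InL c r)  = (Node t r, c)
goUp    (t, InR l c)  = (Node l t, c)

-- Edit the subtree in focus without rebuilding the rest of the tree.
modifyFocus :: (Tree a -> Tree a) -> Loc a -> Loc a
modifyFocus f (t, c) = (f t, c)
```

Note how Ctx and Loc are tied to this particular Tree type: every new term type needs its own context type, which is exactly the drawback the pearl's parametric alternative is meant to remove.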
How should we define physicalism or minimal physicalism? In my view, this question calls for stipulation because these are theoretical terms without a uniform use. Different views of psychophysical relations are physicalistic in different ways and to different degrees, and there is an obvious interest in clarifying and distinguishing these views and determining which are true. My aim in this chapter will be to do some of the clarifying and distinguishing. Stipulation of a unique thesis as physicalism or minimal physicalism must come with a rationale, and as I have none to offer I shall not pursue this.
Some regard physicalism as the thesis that all first-order properties instantiated in the spatiotemporal world are physical properties. I shall refer to this as type physicalism or property physicalism. It can be presented in the form of a supervenience thesis – another popular way of defining physicalism – on the assumption that a property is physical if and only if it logically supervenes on microphysical properties. This is one way, or a first approximation of a way, of characterizing a physical property in terms of microphysical properties. But as I think that the following discussion will apply on any reasonable view of physical property, I will mostly continue to talk of physical properties without getting more specific. I shall assume that it will be clear enough to think of a microphysical property as an assignment of fundamental microphysical parameters in some type of spatial or spatiotemporal region, where the fundamental microphysical parameters are those featuring in an ultimate microphysical theory.