Adding extra sensors, especially video cameras and force sensors, under the control of appropriate software makes robotic manipulators working in factories suitable for a range of new applications. This paper presents a method of developing indirect force control for a manipulator in which the force set values are specified in operational space and the manipulator carries a force sensor in its wrist. Standard control development methods require estimating the parameters of a detailed model of the manipulator and its position servos, which is a complicated and time-consuming task. Hence, this work proposes a time-efficient hybrid controller development procedure consisting of both analytical and experimental stages: proposing an approximate continuous model of the manipulator, experimentally determining and verifying its parameter values using the resonance phenomenon, developing a continuous regulator, and digitizing the regulator.
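As an illustration of the last of those stages only, the sketch below discretizes a generic PI force regulator with the trapezoidal (Tustin) rule. The paper's actual regulator structure, gains, and sampling period are not reproduced here; all names and numbers are placeholder assumptions.

```haskell
module PIDigitization where

-- Discrete PI regulator state: the running (trapezoidal) integral of the
-- force error and the previous error sample.
data PIState = PIState { integral :: Double, prevError :: Double }

-- One sampling period of the digitized regulator. ts is the sampling period
-- in seconds, e the current force error; kp and ki are the gains of the
-- continuous regulator u(t) = kp*e(t) + ki*integral(e).
piStep :: Double -> Double -> Double -> PIState -> Double -> (Double, PIState)
piStep kp ki ts (PIState i ePrev) e =
  let i' = i + ts / 2 * (e + ePrev)   -- trapezoidal (Tustin) integral update
      u  = kp * e + ki * i'
  in (u, PIState i' e)

-- Example: five control outputs for a constant 1 N error at a 1 kHz rate,
-- with placeholder gains kp = 2, ki = 50.
main :: IO ()
main = print (take 5 (outputs (PIState 0 0)))
  where
    outputs st = let (u, st') = piStep 2.0 50.0 0.001 st 1.0 in u : outputs st'
```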
Cops and robbers is a turn-based pursuit game played on a graph G. One robber is pursued by a set of cops. In each round, these agents move between vertices along the edges of the graph. The cop number c(G) denotes the minimum number of cops required to catch the robber in finite time. We study the cop number of geometric graphs. For points x1, …, xn ∈ ℝ² and r ∈ ℝ+, the geometric graph G(x1, …, xn; r) is the graph on these n points as vertices, with xi and xj adjacent when ∥xi − xj∥ ≤ r. We prove that c(G) ≤ 9 for any connected geometric graph G in ℝ² and we give an example of a connected geometric graph with c(G) = 3. We improve on our upper bound for random geometric graphs that are sufficiently dense. Let 𝒢(n, r) denote the probability space of geometric graphs with n vertices chosen uniformly and independently from [0,1]². For G ∈ 𝒢(n, r), we show that with high probability (w.h.p.), if r ≥ K1 (log n/n)^(1/4) then c(G) ≤ 2, and if r ≥ K2 (log n/n)^(1/5) then c(G) = 1, where K1, K2 > 0 are absolute constants. Finally, we provide a lower bound near the connectivity regime of 𝒢(n, r): if r ≤ K3 log n/√n then c(G) > 1 w.h.p., where K3 > 0 is an absolute constant.
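As a concrete reading of the definition above, this small sketch (not taken from the paper) builds the edge set of G(x1, …, xn; r) from a list of planar points and a radius; the sample points in main are arbitrary.

```haskell
module GeometricGraph where

type Point = (Double, Double)

dist :: Point -> Point -> Double
dist (x1, y1) (x2, y2) = sqrt ((x1 - x2) ^ 2 + (y1 - y2) ^ 2)

-- Edges of G(p1, ..., pn; r): pairs of indices i < j with ||pi - pj|| <= r.
edges :: Double -> [Point] -> [(Int, Int)]
edges r ps =
  [ (i, j)
  | (i, p) <- zip [0 ..] ps
  , (j, q) <- zip [0 ..] ps
  , i < j
  , dist p q <= r
  ]

main :: IO ()
main = print (edges 0.5 [(0, 0), (0.3, 0.2), (1, 1)])   -- prints [(0,1)]
```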
This paper proposes an image-sequence-based navigation method under the teaching-replay framework for robots travelling piecewise-linear routes. Waypoints are either positions with large heading changes or selected midway positions between junctions. The robot applies local visual homing to move between consecutive waypoints, and arrival at a waypoint is detected by minimizing the average vertical displacement of the feature correspondences. The performance of the proposed approach is supported by extensive experiments in hallway and office environments. While the homing speed of robots using other approaches is constrained by the speed of the teaching phase, our robot is not bound by this limit and can travel much faster without compromising homing accuracy.
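A minimal sketch of the arrival criterion just described, assuming feature correspondences are available as (current, stored) pixel pairs; the feature matching and the homing controller themselves are not shown, and the pixel values in the example are made up.

```haskell
module WaypointArrival where

type Pixel = (Double, Double)          -- (u, v) image coordinates

-- Average absolute vertical displacement over (current, stored) matches.
avgVerticalDisplacement :: [(Pixel, Pixel)] -> Double
avgVerticalDisplacement [] = 1 / 0     -- no matches: treat as "far away"
avgVerticalDisplacement cs =
  sum [abs (vCur - vRef) | ((_, vCur), (_, vRef)) <- cs]
    / fromIntegral (length cs)

-- Index of the frame along the approach at which the average displacement
-- is minimal, taken to be the moment of arrival at the waypoint.
arrivalIndex :: [[(Pixel, Pixel)]] -> Int
arrivalIndex frames =
  snd (minimum (zip (map avgVerticalDisplacement frames) [0 ..]))

main :: IO ()
main = print (avgVerticalDisplacement [((10, 23), (10, 20)), ((40, 8), (40, 7))])
```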
Nested data-parallelism (NDP) is a language mechanism that supports programming irregular parallel applications in a declarative style. In this paper, we describe the implementation of NDP in Parallel ML (PML), which is a part of the Manticore system. One of the main challenges of implementing NDP is managing the parallel decomposition of work. If we have too many small chunks of work, the overhead will be too high, but if we do not have enough chunks of work, processors will be idle. Recently, the technique of Lazy Binary Splitting was proposed to address this problem for nested parallel loops over flat arrays. We have adapted this technique to our implementation of NDP, which uses binary trees to represent parallel arrays. This new technique, which we call Lazy Tree Splitting (LTS), has the key advantage of performance robustness, i.e., it does not require tuning to get the best performance for each program. We describe the implementation of the standard NDP operations using LTS and present experimental data that demonstrate the scalability of LTS across a range of benchmarks.
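The following much-simplified Haskell sketch shows only the shape of the lazy-splitting decision on tree-represented arrays; it is not Manticore's PML implementation, and the hungry action stands in for the real scheduler query about idle processors.

```haskell
module LTSSketch where

-- Parallel arrays as binary trees ("ropes") of leaf chunks.
data Rope a = Leaf [a] | Cat (Rope a) (Rope a)
  deriving Show

-- Purely sequential traversal, used while no one is asking for work.
seqMap :: (a -> b) -> Rope a -> Rope b
seqMap f (Leaf xs) = Leaf (map f xs)
seqMap f (Cat l r) = Cat (seqMap f l) (seqMap f r)

-- hungry: stand-in for "is some worker idle?". When it says yes at an
-- internal node, the two subtrees become separate tasks (in Manticore the
-- second half would be pushed to the work queue for another worker);
-- otherwise the traversal stays sequential and pays no splitting overhead.
-- A real LTS traversal would re-poll between leaf chunks rather than
-- committing to a whole subtree, as this sketch does.
ltsMap :: IO Bool -> (a -> b) -> Rope a -> IO (Rope b)
ltsMap hungry f = go
  where
    go (Leaf xs) = return (Leaf (map f xs))
    go (Cat l r) = do
      h <- hungry
      if h
        then do l' <- go l            -- would run on another worker
                r' <- go r
                return (Cat l' r')
        else return (seqMap f (Cat l r))

main :: IO ()
main = ltsMap (return True) (* 2) (Cat (Leaf [1, 2]) (Leaf [3 :: Int])) >>= print
```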
Reasoning about program equivalence is one of the oldest problems in semantics. In recent years, useful techniques have been developed, based on bisimulations and logical relations, for reasoning about equivalence in the setting of increasingly realistic languages—languages nearly as complex as ML or Haskell. Much of the recent work in this direction has considered the interesting representation independence principles enabled by the use of local state, but it is also important to understand the principles that powerful features like higher-order state and control effects disable. This latter topic has been broached extensively within the framework of game semantics, resulting in what Abramsky dubbed the “semantic cube”: fully abstract game-semantic characterizations of various axes in the design space of ML-like languages. But when it comes to reasoning about many actual examples, game semantics does not yet supply a useful technique for proving equivalences.
In this paper, we marry the aspirations of the semantic cube to the powerful proof method of step-indexed Kripke logical relations. Building on recent work of Ahmed et al. (2009), we define the first fully abstract logical relation for an ML-like language with recursive types, abstract types, general references and call/cc. We then show how, under orthogonal restrictions to the expressive power of our language—namely, the restriction to first-order state and/or the removal of call/cc—we can enhance the proving power of our possible-worlds model in correspondingly orthogonal ways, and we demonstrate this proving power on a range of interesting examples. Central to our story is the use of state transition systems to model the way in which properties of local state evolve over time.
In this paper I examine metaphors of place and place making, with reference to the phenomenological tradition and in particular Edward S. Casey, in relation both to sound-based music and art concerned with environment, and to listening and environmental sound. I do so in order to consider how aspects of place-making activity might be incorporated in aurally perceived works, and elicited in listeners, so that we might perhaps achieve a greater sense of ‘connectedness’ to sound-based music and art that is itself about – in some way – our connectedness to the environment. Three works, by Feld, Monacchi and López, form the basis for investigation.
We show how the binary encoding and decoding of typed data and typed programs can be understood, programmed and verified with the help of question–answer games. The encoding of a value is determined by the yes/no answers to a sequence of questions about that value; conversely, decoding is the interpretation of binary data as answers to the same question scheme. We introduce a general framework for writing and verifying game-based codecs. We present games in Haskell for structured, recursive, polymorphic and indexed types, building up to a representation of well-typed terms in the simply-typed λ-calculus with polymorphic constants. The framework makes novel use of isomorphisms between types in the definition of games. The definition of isomorphisms together with additional simple properties make it easy to prove that codecs derived from games never encode two distinct values using the same code, never decode two codes to the same value and interpret any bit sequence as a valid code for a value or as a prefix of a valid code. Formal properties of the framework have been proved using the Coq proof assistant.
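To make the question–answer idea concrete, here is a deliberately simplified Haskell sketch, not the paper's indexed, isomorphism-based framework: a game is either a single remaining value (no bits needed) or a yes/no question that splits the possibilities into two sub-games; encoding records the answers and decoding replays them.

```haskell
module QAGames where

data Game a = Single a                              -- only one value left
            | Ask (a -> Bool) (Game a) (Game a)     -- question, "yes"/"no" games

encode :: Game a -> a -> [Bool]
encode (Single _)     _ = []
encode (Ask q yes no) x
  | q x       = True  : encode yes x
  | otherwise = False : encode no x

decode :: Game a -> [Bool] -> (a, [Bool])
decode (Single v)     bs       = (v, bs)
decode (Ask _ yes no) (b : bs) = decode (if b then yes else no) bs
decode (Ask _ _   _ ) []       = error "truncated code"

-- Example: a game for pairs of Booleans, asking about each component in turn.
pairGame :: Game (Bool, Bool)
pairGame = Ask fst (Ask snd (Single (True, True))  (Single (True, False)))
                   (Ask snd (Single (False, True)) (Single (False, False)))

main :: IO ()
main = do
  print (encode pairGame (True, False))          -- [True,False]
  print (fst (decode pairGame [True, False]))    -- (True,False)
```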
Atoms and de Bruijn indices are two well-known representation techniques for data structures that involve names and binders. However, using either technique, it is all too easy to make a programming error that causes one name to be used where another was intended. We propose an abstract interface to names and binders that rules out many of these errors. This interface is implemented as a library in Agda. It allows defining and manipulating term representations in nominal style and in de Bruijn style. The programmer is not forced to choose between these styles: on the contrary, the library allows using both styles in the same program, if desired. Whereas indexing the types of names and terms with a natural number is a well-known technique to better control the use of de Bruijn indices, we index types with worlds. Worlds are at the same time more precise and more abstract than natural numbers. Via logical relations and parametricity, we are able to demonstrate in what sense our library is safe, and to obtain theorems for free about world-polymorphic functions. For instance, we prove that a world-polymorphic term transformation function must commute with any renaming of the free variables. The proof is entirely carried out in Agda.
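For comparison, the sketch below shows the well-known natural-number-indexed de Bruijn representation that the abstract contrasts worlds with, written as a Haskell GADT rather than in Agda; the library's world-indexed interface itself is not reproduced here.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
module WellScoped where

data Nat = Z | S Nat

-- Fin n: a de Bruijn index strictly below n.
data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- Term n: a lambda-term with at most n free variables; Lam brings one more
-- variable into scope, so its body lives under 'S n.
data Term (n :: Nat) where
  Var :: Fin n -> Term n
  App :: Term n -> Term n -> Term n
  Lam :: Term ('S n) -> Term n

-- A closed term: \x. \y. x
example :: Term 'Z
example = Lam (Lam (Var (FS FZ)))
```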
We describe a derivational approach to abstract interpretation that yields novel and transparently sound static analyses when applied to well-established abstract machines for higher-order and imperative programming languages. To demonstrate the technique and support our claim, we transform the CEK machine of Felleisen and Friedman (Proc. of the 14th ACM SIGACT-SIGPLAN Symp. Prin. Program. Langs, 1987, pp. 314–325), a lazy variant of Krivine's machine (Higher-Order Symb. Comput. Vol 20, 2007, pp. 199–207), and the stack-inspecting CM machine of Clements and Felleisen (ACM Trans. Program. Lang. Syst. Vol 26, 2004, pp. 1029–1052) into abstract interpretations of themselves. The resulting analyses bound temporal ordering of program events; predict return-flow and stack-inspection behavior; and approximate the flow and evaluation of by-need parameters. For all of these machines, we find that a series of well-known concrete machine refactorings, plus a technique of store-allocated continuations, leads to machines that abstract into static analyses simply by bounding their stores. These machines are parameterized by allocation functions that tune performance and precision and substantially expand the space of analyses that this framework can represent. We demonstrate that the technique scales up uniformly to allow static analysis of realistic language features, including tail calls, conditionals, mutation, exceptions, first-class continuations, and even garbage collection. In order to close the gap between formalism and implementation, we provide translations of the mathematics as running Haskell code for the initial development of our method.
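As a reminder of the concrete starting point of such a derivation, here is a standard textbook CEK machine for the call-by-value λ-calculus in Haskell; the paper's own Haskell development, its store-allocated continuations, and the abstraction-by-bounding-the-store step are not reproduced in this sketch.

```haskell
module CEK where

import qualified Data.Map as M

type Var = String
data Expr = Ref Var | Lam Var Expr | App Expr Expr deriving Show

type Env = M.Map Var Clo
data Clo = Clo Var Expr Env deriving Show        -- closures are the values

data Kont = Halt
          | Arg Expr Env Kont                    -- evaluate the argument next
          | Fun Clo Kont                         -- then apply this closure
          deriving Show

type State = (Expr, Env, Kont)

step :: State -> Maybe State
step (Ref x, env, k) =
  case M.lookup x env of
    Just (Clo y b env') -> Just (Lam y b, env', k)
    Nothing             -> Nothing               -- stuck: unbound variable
step (App f a, env, k)              = Just (f, env, Arg a env k)
step (Lam x b, env, Arg a env' k)   = Just (a, env', Fun (Clo x b env) k)
step (Lam x b, env, Fun (Clo y b' env') k) =
  Just (b', M.insert y (Clo x b env) env', k)
step (_, _, Halt)                   = Nothing    -- final state

run :: Expr -> State
run e = go (e, M.empty, Halt)
  where go s = maybe s go (step s)

main :: IO ()
main = print (run (App (Lam "x" (Ref "x")) (Lam "y" (Ref "y"))))
```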
Existing macro systems force programmers to make a choice between clarity of specification and robustness. If they choose clarity, they must forgo validating significant parts of the specification and thus produce low-quality language extensions. If they choose robustness, they must write in a style that mingles the implementation with the specification and therefore obscures the latter. This paper introduces a new language for writing macros. With the new macro system, programmers naturally write robust language extensions using easy-to-understand specifications. The system translates these specifications into validators that detect misuses—including violations of context-sensitive constraints—and automatically synthesize appropriate feedback, eliminating the need for ad hoc validation code.
The 15th ACM SIGPLAN International Conference on Functional Programming (ICFP) took place on September 27–29, 2010 in Baltimore, Maryland. After the conference, the program committee, chaired by Stephanie Weirich, selected several outstanding papers and invited their authors to submit to this special issue of the Journal of Functional Programming. Umut A. Acar and James Cheney acted as editors for these submissions. This issue includes the seven accepted papers, each of which provides substantial new material beyond the original conference version. The selected papers reflect a consensus by the program committee that ICFP 2010 had a number of strong papers linking core functional programming ideas with other areas, such as multicore, embedded systems, and data compression.
The value of sketching in engineering design has been widely documented. This paper reviews trends in recent studies on sketching in engineering design and focuses on the encouragement of sketching. The authors present three experimental studies on sketching that look at (1) sketching assignments and their motivation, (2) the impact of a sketching lesson, and (3) the use of Smartpen technology to record sketching; overall these studies address the research question: Can sketching frequency be influenced in engineering education? Influencing sketching frequency is accomplished through motivation, learning, and use of technology for sketching, respectively. Results indicate that these three elements contribute to the encouragement of sketching in engineering design.
This paper presents a preliminary comparison between the roles of computer-aided design (CAD) and sketching in engineering through a case study of a senior design project and interviews with industry and academia. The design team, consisting of four senior-level mechanical engineering students each with less than 1 year of professional experience, was observed while completing an industry-sponsored mechanical engineering capstone design project across a 17-week semester. Factors investigated include which CAD tools are used, when in the design process they are implemented, the justification for their use from the students' perspectives, the actual knowledge gained from their use, the impact on the final designed artifact, and the contributions of any sketches generated. At each design step, comparisons are made between CAD and sketching. The students implemented CAD tools at the onset of the project, generally failing to realize gains in design efficiency or effectiveness in the early conceptual phases of the design process. As the design became more concrete, the team was able to recognize clear gains in both efficiency and effectiveness through the use of CAD programs. The study is augmented by interviews with novice and experienced industry users and academic instructors to align the trends observed in the case study with industry practice and educational emphasis. A disconnect in the perceived capability of CAD tools was found between novice and experienced user groups. Opinions on the importance of sketching skills also differed between novice educators and novice industry professionals, suggesting that opinions about the importance of sketching change when recent graduates transition from academia to industry. The results suggest a need to emphasize the importance of sketching and to develop a deeper understanding of the true utility of CAD tools at each stage of the design process.
Although many approaches to digital ink recognition have been proposed, most lack the flexibility and adaptability to provide acceptable recognition rates across a variety of problem spaces. This project uses a systematic data mining analysis to build a gesture recognizer for sketched diagrams. A wide range of algorithms was tested, and those with the best performance were chosen for further tuning and analysis. Our resulting recognizer, RATA.Gesture, is an ensemble of four algorithms. We evaluated it against four popular gesture recognizers on three data sets: one of our own and two from other projects. Except for matched recognizer–data set pairs (e.g., the PaleoSketch recognizer on the PaleoSketch data set), the results show that it outperforms the other recognizers. This demonstrates the potential of the approach to produce flexible and accurate recognizers.
The hierarchical construction of solid models in current computer-aided design systems provides little support for creating and editing the free-form surfaces commonly encountered in industrial design. In this work, we propose a new design exploration method that enables sketch-based editing of free-form surface geometries, where specific modifications can be applied at different levels of detail. This multilevel-detail approach allows the designer to work from existing models and make alterations at coarse and fine representations of the geometry, thereby providing increased conceptual flexibility during modeling. At the heart of our approach lies a multiscale representation of the geometry obtained through a spectral analysis of the discrete free-form surface. This representation is accompanied by a sketch-based surface editing algorithm that enables edits to be made at different levels. The seamless transfer of modifications across different levels of detail facilitates a fluid exploration of the geometry by eliminating the need for a manual specification of the shape hierarchy. We demonstrate our method with several design examples.
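To make the phrase "multiscale representation obtained through a spectral analysis" concrete, a standard discrete-spectral formulation is sketched below; the paper's exact Laplacian weights, level definitions, and editing algorithm are not specified here, so this is an assumed textbook formulation rather than the authors' method.

```latex
% Eigendecomposition of a discrete Laplacian L of the surface mesh:
\[
  L\,\phi_k = \lambda_k\,\phi_k, \qquad 0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n .
\]
% With orthonormal eigenvectors \phi_k and vertex coordinates X \in \mathbb{R}^{n \times 3},
% a level-of-detail-m approximation keeps the m lowest-frequency modes:
\[
  X^{(m)} \;=\; \sum_{k=1}^{m} \phi_k\,\phi_k^{\top} X ,
\]
% so coarse edits act on X^{(m)} for small m (overall form), while fine edits
% act on the residual X - X^{(m)} (surface detail).
```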