A 3-graph is said to contain a generalized 4-cycle if it contains 4 edges A, B, C, D such that A ∩ B = C ∩ D = ∅ and A ∪ B = C ∪ D. We show that a 3-graph in which every pair of vertices is contained in at least 4 edges must contain a generalized 4-cycle. When the number of vertices, n, is congruent to 1 or 5 modulo 20, this result is optimal, in the sense that for such n there are 3-graphs in which every pair of vertices is contained in 3 edges but which do not contain a generalized 4-cycle.
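As a brute-force illustration of the definition (the code and names are ours, not from the paper): a generalized 4-cycle is exactly two distinct disjoint pairs of edges with the same union, so it can be detected by grouping disjoint pairs by their union.

```python
# Illustrative sketch only: detect a generalized 4-cycle in a 3-graph given
# as a list of 3-element sets, by hashing disjoint edge pairs on their union.

from itertools import combinations

def has_generalized_4_cycle(edges):
    edges = [frozenset(e) for e in edges]
    seen = {}
    for a, b in combinations(edges, 2):
        if a & b:
            continue                      # not a disjoint pair
        union = a | b
        if union in seen and seen[union] != {a, b}:
            return True                   # a second disjoint pair with the same union
        seen.setdefault(union, {a, b})
    return False

print(has_generalized_4_cycle([{1, 2, 3}, {4, 5, 6}, {1, 2, 4}, {3, 5, 6}]))  # True
```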
Tipping over and slipping, which are related to the zero moment point (ZMP) and to frictional constraints respectively, are the two most common forms of instability in biped robotic walking. The conventional stability criterion is not sufficient in some cases, since it neglects frictional constraints or considers translational friction only. The goal of this paper is to fully address frictional constraints in biped walking and to develop corresponding stability criteria. Frictional constraints for biped locomotion are first analyzed, and then a method is presented to obtain closed-form solutions for the frictional force and moment of a biped robot with rectangular and circular feet. The maximum frictional force and moment are calculated for the case of the ZMP at the center of the contact area. Experiments with a 6-degree-of-freedom active walking biped robot are conducted to verify the effectiveness of the stability analysis.
A metamorphism is an unfold after a fold: it consumes an input with the fold and then generates an output with the unfold. It is typically useful for converting data representations, e.g., radix conversion of numbers. Bird and Gibbons (Lecture Notes in Computer Science, vol. 2638, 2003, pp. 1–26) have shown that metamorphisms can be processed incrementally in streaming style when a certain condition holds, because part of the output can then be determined before the whole input is given. However, whereas radix conversion of fractions is amenable to streaming, radix conversion of natural numbers cannot satisfy the condition, because no part of the output can be determined before the whole input has been consumed. In this paper, we present a jigsaw model in which metamorphisms can be partially processed for outputs even when the streaming condition does not hold. We start by describing the 3-to-2 radix conversion of natural numbers using our model. The jigsaw model allows us to process metamorphisms in a flexible way that includes parallel computation. We also apply our model to other examples of metamorphisms.
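A minimal non-streaming sketch of the radix-conversion metamorphism (function names and bases are illustrative, not taken from the paper): the fold interprets the input digits, the unfold re-emits digits in the target base.

```python
# Sketch of a metamorphism as "unfold after fold": radix conversion of a
# natural number given as a digit list, most significant digit first.

from functools import reduce

def fold_digits(digits, base):
    """Fold: interpret a digit list (most significant first) as a number."""
    return reduce(lambda acc, d: acc * base + d, digits, 0)

def unfold_digits(n, base):
    """Unfold: emit the digits of n in the given base, most significant first."""
    out = []
    while n > 0:
        n, d = divmod(n, base)
        out.append(d)
    return list(reversed(out)) or [0]

def convert(digits, base_in, base_out):
    """Metamorphism: an unfold after a fold."""
    return unfold_digits(fold_digits(digits, base_in), base_out)

print(convert([1, 0, 2], 3, 2))  # 1*9 + 0*3 + 2 = 11  ->  [1, 0, 1, 1]
```

This whole-input formulation is exactly the situation described above: no output digit of the natural-number conversion can be emitted before the fold has consumed the entire input, which is why the streaming condition fails here.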
In this paper, a novel optimal torque distribution method for a redundantly actuated parallel robot is proposed. A geometric analysis based on screw theory is performed to calculate the stiffness matrix of a redundantly actuated 3-RRR parallel robot. The analysis is based on statics and focuses on low-speed motions. The stiffness matrix, consisting of passive and active stiffness, is also derived by differentiating the Jacobian matrix. Comparing the two matrices, we find that the null-space vector is related to the link geometry. The optimal torque distribution is determined by taking the mean of the minimum and maximum angles as the direction angle of the null-space vector. The resulting algorithm is validated by comparing the new method with the minimum-norm method and the weighted pseudo-inverse method for two different paths and force conditions. The proposed torque distribution algorithm has the property of minimizing the maximum torque.
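For context on the two baseline schemes the paper compares against, here is a minimal numpy sketch (illustrative only, not the paper's 3-RRR model): given a linear actuation map A relating actuator torques τ to the task wrench F by F = Aτ, with more actuators than task directions, the minimum-norm and weighted pseudo-inverse distributions are:

```python
# Hedged sketch of the baseline torque-distribution schemes; the matrix A and
# all numbers below are hypothetical, standing in for a redundant mechanism's
# actuation map (often a Jacobian transpose).

import numpy as np

def min_norm_torques(A, F):
    """Minimum-norm tau solving F = A @ tau (A has more columns than rows)."""
    return np.linalg.pinv(A) @ F

def weighted_torques(A, F, W):
    """tau minimizing tau^T W tau subject to F = A @ tau (W symmetric positive definite)."""
    W_inv = np.linalg.inv(W)
    return W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, F)

# Hypothetical 2-DOF task driven by 3 actuators (one redundant actuator).
A = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.8]])
F = np.array([3.0, 1.5])
print(min_norm_torques(A, F))
print(weighted_torques(A, F, np.diag([1.0, 2.0, 4.0])))
```

The paper's method instead selects the null-space component, parametrized by a direction angle, so that the maximum actuator torque is reduced, which neither baseline does directly.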
Understanding and mimicking human gait is essential for the design and control of biped walking robots. The unique characteristics of normal human gait are described as passive dynamic walking, whereas general human gait is neither completely passive nor always dynamic. To study various walking motions, it is important to quantify the different levels of passivity and dynamicity, which have not been addressed in the current literature. In this paper, we introduce initial formulations of the Passive Gait Measure (PGM) and the Dynamic Gait Measure (DGM), which quantify the passivity and dynamicity, respectively, of a given biped walking motion, and we demonstrate the proposed formulations as a proof of concept using gait simulation and analysis. The PGM is associated with the optimality of natural human walking, where passivity weight functions are proposed and incorporated in the minimization of physiologically inspired weighted actuator torques. The PGM then measures the relative contribution of the stance ankle actuation. The DGM is associated with gait stability and quantifies the effects of inertia in terms of the Zero-Moment Point and the ground projection of the center of mass. In addition, the DGM takes into account the stance foot dimension and the relative threshold between static and dynamic walking. As examples, both human-like and robotic walking motions during the single support phase are generated for a planar biped system using the passivity weights and proper gait parameters. The calculated PGM values show the more passive nature of human-like walking compared with robotic walking. The DGM results verify the dynamic nature of normal human walking with an anthropomorphic foot dimension. In general, the DGMs for human-like walking are greater than those for robotic walking. The resulting DGMs also demonstrate their dependence on the stance foot dimension as well as on the walking motion; for a given walking motion, a smaller foot dimension results in increased dynamicity. Future work on experimental validation and demonstration will involve actual walking robots and human subjects. The proposed results will benefit human gait studies and the development of walking robots.
Solovay proved the arithmetical completeness theorem for the system GL of propositional modal logic of provability. Montagna proved that this completeness does not hold for QGL, a natural extension of GL to predicate modal logic. Let Th(QGL) be the set of all theorems of QGL, Fr(QGL) be the set of all formulas valid in all transitive and conversely well-founded Kripke frames, and PL(T) be the set of all predicate modal formulas provable in T for any arithmetical interpretation. Montagna's results can be stated as Th(QGL) ⊊ Fr(QGL), PL(PA) ⊈ Fr(QGL), and Th(QGL) ⊊ PL(PA).
In this paper, we prove the following three theorems: (1) Fr(QGL) ⊈ PL(T) for any Σ₁-sound recursively enumerable extension T of IΣ₁, (2) PL(T) ⊈ Fr(QGL) for any recursively enumerable A-theory T extending IΣ₁, and (3) Th(QGL) ⊊ Fr(QGL) ∩ PL(T) for any recursively enumerable A-theory T extending IΣ₂.
To prove these theorems, we use iterated consistency assertions and nonstandard models of arithmetic, and we improve Artemov's lemma, which is used to prove Vardanyan's theorem on the Π⁰₂-completeness of PL(T).
A constructive proof is provided for the claim that classical first-order logic admits of a natural deduction formulation featuring the subformula property.
The growing number of publicly available information sources makes it impossible for individuals to keep track of all the various opinions on one topic. The goal of our Fuzzy Believer system presented in this paper is to extract and analyze statements of opinion from newspaper articles. Beliefs are modeled using fuzzy set theory, applied after natural language processing-based information extraction. The Fuzzy Believer models a human agent, deciding which statements to believe or reject based on a range of configurable strategies.
The study of extremal problems related to independent sets in hypergraphs has generated much interest. There are various types of independent sets in hypergraphs, depending on the number of vertices from an independent set allowed in an edge. We say that a subset of vertices is j-independent if its intersection with any edge has size strictly less than j. The Kruskal–Katona theorem implies that, among r-uniform hypergraphs of a fixed size and order, the hypergraph with the most r-independent sets is the lexicographic hypergraph. In this paper, we use a hypergraph regularity lemma, along with a technique developed by Loh, Pikhurko and Sudakov, to give an asymptotically best possible upper bound on the number of j-independent sets in an r-uniform hypergraph.
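A tiny sketch of the definition above (illustrative only; the example hypergraph is ours): a set S is j-independent when |S ∩ e| < j for every edge e, and counting such sets by brute force is feasible only for small instances.

```python
# Brute-force count of j-independent sets in a small r-uniform hypergraph.

from itertools import chain, combinations

def is_j_independent(S, edges, j):
    return all(len(S & e) < j for e in edges)

def count_j_independent(vertices, edges, j):
    subsets = chain.from_iterable(combinations(vertices, k) for k in range(len(vertices) + 1))
    return sum(is_j_independent(set(S), edges, j) for S in subsets)

# 3-uniform example on 5 vertices.
edges = [frozenset(e) for e in [(1, 2, 3), (2, 3, 4), (3, 4, 5)]]
print(count_j_independent(range(1, 6), edges, j=2))
```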
Incremental programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include inheritance in object-oriented programming, advice in aspect-oriented programming, and feature-oriented programming. A characteristic of IP mechanisms is that, while individual components can be defined independently, composing them makes those components tightly coupled, sharing both control and data flows. This makes reasoning about IP mechanisms a notoriously hard problem: modular reasoning about a component becomes very difficult, and it is very hard to tell whether two tightly coupled components interfere with each other's control and data flows. This paper presents modular reasoning about interference (MRI), a purely functional model of IP embedded in Haskell. MRI models inheritance with mixins and side effects with monads. It comes with a range of powerful reasoning techniques: equational reasoning, parametricity, and reasoning with algebraic laws about effectful operations. These techniques enable modular reasoning about interference in the presence of side effects. MRI formally captures harmlessness, a hard-to-formalize notion in the interference literature, in two theorems. We prove these theorems with a non-trivial combination of all three reasoning techniques.
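The paper's development is in Haskell; purely as a hedged illustration of the mixin idea it relies on (components written with open recursion, then closed by a fixpoint), here is a small Python sketch with hypothetical names, not the paper's encoding:

```python
# Illustrative sketch of mixin-style incremental programming: each component
# receives the fully composed 'self' and its 'super_', and 'new' ties the knot.

def base(self, super_):
    # Base component: factorial, recursing through the open 'self'.
    return lambda n: 1 if n == 0 else n * self(n - 1)

def logging(self, super_):
    # Advice-like increment: observes each call, then delegates to its super.
    def fact(n):
        print(f"fact({n}) called")
        return super_(n)
    return fact

def compose(outer, inner):
    # Mixin composition: 'outer' sees 'inner' as its super; both share 'self'.
    return lambda self, super_: outer(self, inner(self, super_))

def no_super(n):
    raise NotImplementedError("the base component has no super")

def new(mixin):
    # Close the open recursion: feed the composed behaviour back in as 'self'.
    def self_(n):
        return impl(n)
    impl = mixin(self_, no_super)
    return self_

fact = new(compose(logging, base))
print(fact(3))  # logs fact(3) ... fact(0), then prints 6
```

The tight coupling the abstract describes is visible here: once composed, the logging increment sits on every recursive call of the base component, which is exactly what makes interference reasoning hard.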
This paper presents a formal definition and machine-checked soundness proof for a very expressive type-and-capability system, that is, a low-level type system that keeps precise track of ownership and side effects. The programming language has first-class functions and references. The type system's features include the following: universal, existential, and recursive types; subtyping; a distinction between affine and unrestricted data; support for strong updates; support for naming values and heap fragments via singleton and group regions; a distinction between ordinary values (which exist at runtime) and capabilities (which do not); support for dynamic reorganizations of the ownership hierarchy by disassembling and reassembling capabilities; and support for temporarily or permanently hiding a capability via frame and anti-frame rules. One contribution of the paper is the definition of the type-and-capability system itself. We present the system as modularly as possible. In particular, at the core of the system, the treatment of affinity, in the style of dual intuitionistic linear logic, is formulated in terms of an arbitrary monotonic separation algebra, a novel axiomatization of resources, ownership, and the manner in which they evolve with time. Only the peripheral layers of the system are aware that we are dealing with a specific monotonic separation algebra, whose resources are references and regions. This semi-abstract organization should facilitate further extensions of the system with new forms of resources. The other main contribution is a machine-checked proof of type soundness. The proof is carried out in Wright and Felleisen's syntactic style. This offers evidence that this relatively simple-minded proof technique can scale up to systems of this complexity, and constitutes a viable alternative to more sophisticated semantic proof techniques. We do not claim that the syntactic technique is superior: we simply illustrate how it is used and highlight its strengths and shortcomings.
Scientific literature is an important medium for disseminating scientific knowledge. However, in recent times, a dramatic increase in research output has created challenges for the research community. There is an increasing need for tools that exploit the full content of an article and provide insightful services whose value goes beyond quantitative measures such as impact factors and citation counts. However, the intricacies of language and thought, and the unstructured format of research articles, present challenges in providing such services. Identifying the contexts that encode the role of specific sentences in advancing an article's scientific argument can facilitate the development of intelligent tools for the research community. This paper describes our research work in this direction. First, we investigate the possibility of identifying contexts associated with sentences and propose a scheme of thirteen context type definitions for sentences, based on the generic rhetorical pattern found in scientific articles. We then present the results of our experiments using sequential classifiers – conditional random fields – for automatic context identification. We also describe our Semantic Web application, developed to provide citation-context-based information services for the research community. Finally, we present a comparison and analysis of our results with similar studies and explain the distinct features of our application.
In this paper, we experiment with several techniques to solve the problem of lexical substitution, in both a lexical sample and an all-words setting, and compare the benefits of combining multiple lexical resources using both unsupervised and supervised approaches. Overall, in the lexical sample setting, the results obtained through the combination of several resources exceed the current state of the art when selecting the best substitute for a given target word, and place second when selecting the top ten substitutes, thus demonstrating the usefulness of the approach. Further, we put forth a novel exploration of all-words lexical substitution and lay the groundwork for further exploration of this more general setting.
This paper introduces and studies a categorical analogue of the familiar monoid semiring construction. By introducing an axiomatisation of summation that unifies notions of summation from algebraic program semantics with various notions of summation from the theory of analysis, we demonstrate that the monoid semiring construction generalises to cases where both the monoid and the semiring are categories. This construction has many interesting and natural categorical properties, and natural computational interpretations.
It is well known that an intersecting family of subsets of an n-element set can contain at most 2ⁿ⁻¹ sets. It is natural to wonder how ‘close’ to intersecting a family of size greater than 2ⁿ⁻¹ can be. Katona, Katona and Katona introduced the idea of a ‘most probably intersecting family’. Suppose that ℱ is a family and that 0 < p < 1. Let ℱ(p) be the (random) family formed by selecting each set in ℱ independently with probability p. A family ℱ is most probably intersecting if it maximizes the probability that ℱ(p) is intersecting over all families of size |ℱ|.
Katona, Katona and Katona conjectured that there is a nested sequence consisting of most probably intersecting families of every possible size. We show that this conjecture is false for every value of p provided that n is sufficiently large.
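The quantity being maximized can be estimated directly; the following small Monte Carlo sketch (the example family, parameters, and names are ours, purely for illustration) approximates the probability that ℱ(p) is intersecting for a given family ℱ.

```python
# Estimate P[F(p) is intersecting] for a family F of subsets of {1, ..., n}:
# each set is kept independently with probability p, and a family is
# intersecting if every two of its sets share an element.

import random
from itertools import combinations

def is_intersecting(family):
    return all(a & b for a, b in combinations(family, 2))

def prob_intersecting(family, p, trials=20000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [s for s in family if rng.random() < p]
        hits += is_intersecting(sample)
    return hits / trials

# Example on n = 4: a star through element 1, plus one extra set.
family = [frozenset(s) | {1} for s in [(), (2,), (3,), (2, 3)]] + [frozenset({2, 3, 4})]
print(prob_intersecting(family, p=0.5))
```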
In this chapter, we show how to implement the simplified DLX. The implementation consists of two parts: a finite state machine, called the control, and a circuit containing registers and functional modules, called the datapath. The separation of the design into a controller and a datapath greatly simplifies the task of designing the simplified DLX.
The datapath contains all the modules needed to execute instructions. These modules include registers, a shifter, and an arithmetic logic unit. The control is the brain that uses the datapath to execute the instructions.
DATAPATH
In this section, we outline an implementation of the datapath of the simplified DLX, as depicted in Figure 22.1. We outline the implementation by specifying the inputs, outputs, and functionality of every module in the datapath. Every module is implemented using the memory modules and the combinational circuits that we have implemented throughout this book. Note that Figure 22.1 is not complete: (i) the inputs and outputs of the control FSM are not shown, and (ii) some of the input–output ports, and their corresponding wires, are not shown. In fact, only wires that are 32 bits wide are shown in Figure 22.1.
The Outside World: The Memory Controller
We begin with the outside world, that is, the (external) memory. Recall that both the executed program and the data are stored in the memory.
The memory controller is a circuit that is positioned between the DLX and the main memory.
Consider the following problem. We need a combinational circuit that controls many devices numbered 0, 1, …, 2ᵏ − 1. At every moment, the circuit instructs exactly one device to work while the others must be inactive. The input to the circuit is a k-bit string that represents the number i of the device to be active. Now, the circuit has 2ᵏ outputs, one for each device, and only the ith output should equal 1; the other outputs must equal zero. How do we design such a circuit? The circuit described previously is known as a decoder. The circuit that implements the inverse Boolean function is called an encoder.
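Before turning to the gate-level designs, a purely behavioural sketch may help fix the specification (this is software shorthand, not one of the book's circuits): a decoder maps a k-bit index to a one-hot vector of length 2ᵏ, and an encoder computes the inverse.

```python
# Behavioural specification of a decoder and an encoder.

def decode(i, k):
    """Return the 2**k output bits; only bit i equals 1."""
    return [1 if j == i else 0 for j in range(2 ** k)]

def encode(one_hot):
    """Return the index of the single 1 in a one-hot input."""
    return one_hot.index(1)

print(decode(5, 3))          # [0, 0, 0, 0, 0, 1, 0, 0]
print(encode(decode(5, 3)))  # 5
```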
In this chapter, we specify and design decoders and encoders. We also prove that these combinational circuits are correct, namely, that they satisfy the specification. Moreover, we prove that these designs are asymptotically optimal.
BUSES
We begin this section by describing what buses are. Consider a circuit that contains an adder and a register (a memory device). The output of the adder should be stored by the register. Suppose that the adder outputs 8 bits. This means that there are eight different wires that emanate from the output of the adder to the input of the register. These eight wires are distinct and must have distinct names. Instead of naming the wires a, b, c, …, we often use names such as a[0], a[1], …, a[7].
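As a toy illustration of this naming convention (a behavioural model only, not a hardware description): the adder's 8-bit output bus can be represented as a list whose entries play the role of the wires a[0], …, a[7], with a[0] the least significant bit.

```python
# Model an 8-bit bus as a list of bits a[0]..a[7] connecting an adder to a register.

def to_bus(value, width=8):
    return [(value >> i) & 1 for i in range(width)]   # a[0], ..., a[7]

def from_bus(bits):
    return sum(b << i for i, b in enumerate(bits))

def adder(x, y, width=8):
    return to_bus((x + y) % (1 << width), width)

register = adder(100, 55)             # the register stores the adder's output bus
print(register, from_bus(register))   # [1, 1, 0, 1, 1, 0, 0, 1] 155
```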