This chapter is about how to tell whether your program is correct. It discusses systematic testing, and how this can be automated, e.g. using a unit testing framework such as JUnit. It covers what tests you should have, and when you should write them, mentioning Test-Driven Development, an approach to programming in which you write the tests before you write the code that is to be tested. Finally it discusses property-based testing, commonly used in Haskell. In this approach you specify something about the relationship between the inputs and outputs of your program, and that relationship is then tested on many randomly chosen input examples.
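As a concrete illustration of an automated unit test, here is a minimal JUnit 5 sketch; the class `Maths`, its method `max3`, and the test are invented for illustration and are not taken from the chapter.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical code under test, invented for this illustration.
class Maths {
    static int max3(int a, int b, int c) {
        return Math.max(a, Math.max(b, c));
    }
}

class MathsTest {
    @Test
    void max3ReturnsLargestArgument() {
        // One assertion per position of the largest value.
        assertEquals(7, Maths.max3(7, 2, 5));
        assertEquals(7, Maths.max3(2, 7, 5));
        assertEquals(7, Maths.max3(2, 5, 7));
    }
}
```

In the property-based style, one would instead state a relationship such as "the result of `max3` is greater than or equal to each of its arguments" and have the framework check it on many randomly generated triples.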
This chapter discusses how, and why, to write your program in a way which is as easy as possible for another human to understand. For example, we discuss how to use comments, choose informative names, lay out your code clearly, and structure it so that it does not resemble spaghetti.
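As a small illustration of the effect of informative names, here is a hedged before/after sketch in Java; the example and all names in it are invented, not taken from the chapter.

```java
// The same computation written twice, for contrast.
class Readability {
    // Hard to read: what do f, d and the magic number 0.2 mean?
    double f(double d) { return d * 0.2; }

    // Easier to read: informative names make the intent plain
    // without needing a comment on every line.
    static final double TIP_RATE = 0.2;
    double tipFor(double billAmount) { return billAmount * TIP_RATE; }
}
```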
A key choice is where you will build your program: in a basic editor, a more sophisticated editor, or an integrated development environment. This chapter discusses how to make this choice and get the most out of your chosen tool.
Writing a program that does what you want is a great achievement – but this is only the first life-stage of a successful program. This chapter discusses how you can improve your program so that it will be more maintainable and more efficient, without breaking it in the process. We discuss how to improve your own skills, e.g. using katas.
Despite their profound and growing influence on our lives, algorithms remain a partial “black box.” Keeping the risks that arise from rule-based and learning systems in check is a challenging task for both society and the legal system. This chapter examines existing and adaptable legal solutions and complements them with further proposals. It designs a regulatory model in four steps along the time axis: preventive regulation instruments; accompanying risk management; ex post facto protection; and an algorithmic responsibility code. Together, these steps form a legislative blueprint to further regulate artificial intelligence applications.
Sometimes your program has some specific problem, or bug: perhaps it fails to compile, or perhaps there is a situation in which it does not do what you want. This chapter helps you approach this difficult situation systematically: localising, understanding and removing the bug, and finally, taking action to reduce your chance of introducing a similar bug again. It discusses common problems, including non-termination and null pointer exceptions, and helpful techniques such as cardboard debugging and defensive programming.
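To make "defensive programming" concrete, here is a minimal Java sketch (the class and names are invented for illustration): validating inputs at a boundary so that a bad value fails fast with a clear message, instead of surfacing later as an obscure null pointer exception.

```java
import java.util.Objects;

class Account {
    private final String owner;

    Account(String owner) {
        // Defensive check at the boundary: reject bad input immediately.
        this.owner = Objects.requireNonNull(owner, "owner must not be null");
    }

    String ownerInitial() {
        return owner.substring(0, 1); // safe: owner cannot be null here
    }
}
```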
Most students, at some stage, need help. Perhaps you are stuck on some specific point, or perhaps you feel generally confused. This chapter helps you to sort out what your problem is, and make a plan to fix it. When, and how, should you approach someone else – on your course, or on the wider internet – for help?
The legal consideration of a robot machine as a ‘product’ has led to the application of civil liability rules for producers. Nevertheless, some aspects of the relevant European regulation suggest that this field deserves special attention and review in relation to robotics. Types of defect, the meanings of the term ‘producer’, the consumer expectation test and non-pecuniary damages are some of the aspects that could give rise to future debate. The inadequacy of the current Directive 85/374/EEC for regulating damages caused by robots, particularly those with self-learning capability, is highlighted by the document ‘Follow up to the EU Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics’. Other relevant documents are the Report on “Liability for AI and other emerging digital technologies” prepared by the Expert Group on Liability and New Technologies, the “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics” [COM(2020) 64 final, 19.2.2020] and the White Paper “On Artificial Intelligence – A European approach to excellence and trust” [COM(2020) 65 final, 19.2.2020].
This paper presents an experimental study on new paradigms of haptic-based teleoperated navigation of underwater vehicles. Specifically, the work investigates the possibility of enhancing user interaction by introducing haptic cues at the user’s wrist, providing force feedback that reflects the dynamic forces acting on the remotely operated underwater vehicle. Different types of haptic controllers are conceived and integrated with a real-time simulated model of an underwater robotic vehicle. An experimental test is designed to evaluate the usability of the system and to provide information on global performance during the execution of simple tasks. Experiments are conducted with 7 participants testing 12 different controllers. Among these, the most effective strategies are identified and selected on the basis of minimising errors in the vehicle trajectory and of the quality of the user’s interaction in terms of perceived comfort during operation. Overall, the results obtained with this study underline that haptic navigation control can have a positive influence on the performance of remotely controlled underwater vehicles.
The prevalent interpretation of Gödel’s Second Theorem states that a sufficiently adequate and consistent theory does not prove its consistency. It is, however, not entirely clear how to justify this informal reading, as the formulation of the underlying mathematical theorem depends on several arbitrary formalisation choices. In this paper I examine the theorem’s dependence on the choice of Gödel numbering. I introduce deviant numberings, yielding provability predicates that satisfy Löb’s conditions but whose consistency sentences are provable. According to the main result of this paper, however, these “counterexamples” do not refute the theorem’s prevalent interpretation, since once a natural class of admissible numberings is singled out, invariance is maintained.
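For orientation, one standard textbook formulation of Löb’s derivability conditions for a provability predicate $\mathrm{Pr}_T$ (the paper’s exact formulation may differ) is:

```latex
% Loeb's derivability conditions for a theory T:
% D1: if T proves phi, then T proves that phi is provable;
% D2 and D3 internalise modus ponens and D1 itself.
\begin{align*}
\text{D1: } & \text{if } T \vdash \varphi \text{ then }
              T \vdash \mathrm{Pr}_T(\ulcorner\varphi\urcorner) \\
\text{D2: } & T \vdash \mathrm{Pr}_T(\ulcorner\varphi\rightarrow\psi\urcorner)
              \rightarrow \bigl(\mathrm{Pr}_T(\ulcorner\varphi\urcorner)
              \rightarrow \mathrm{Pr}_T(\ulcorner\psi\urcorner)\bigr) \\
\text{D3: } & T \vdash \mathrm{Pr}_T(\ulcorner\varphi\urcorner)
              \rightarrow \mathrm{Pr}_T(\ulcorner\mathrm{Pr}_T(\ulcorner\varphi\urcorner)\urcorner)
\end{align*}
% The consistency sentence is Con(T) := \neg\mathrm{Pr}_T(\ulcorner 0=1 \urcorner),
% and the Second Theorem states that a suitable consistent T does not prove Con(T).
```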
The triangle packing number $\nu(G)$ of a graph $G$ is the maximum size of a set of edge-disjoint triangles in $G$. Tuza conjectured that in any graph $G$ there exists a set of at most $2\nu(G)$ edges intersecting every triangle in $G$. We show that Tuza’s conjecture holds in the random graph $G = G(n,m)$, when $m \leqslant 0.2403n^{3/2}$ or $m \geqslant 2.1243n^{3/2}$. This is done by analysing a greedy algorithm for finding large triangle packings in random graphs.
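For reference, writing $\tau_\triangle(G)$ for the minimum number of edges meeting every triangle of $G$ (this notation is introduced here for convenience), the conjecture can be stated compactly:

```latex
% Tuza's conjecture: every graph has a triangle edge cover of size
% at most twice its triangle packing number.
\tau_\triangle(G) \;\le\; 2\,\nu(G) \qquad \text{for every graph } G.
```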
A celebrated theorem of Pippenger states that any almost regular hypergraph with small codegrees has an almost perfect matching. We show that one can find such an almost perfect matching which is ‘pseudorandom’, meaning that, for instance, the matching contains as many edges from a given set of edges as predicted by a heuristic argument.
We consider the Lambek calculus, or noncommutative multiplicative intuitionistic linear logic, extended with iteration, or Kleene star, axiomatised by means of an $\omega$-rule, and prove that the derivability problem in this calculus is $\Pi_1^0$-hard. This solves a problem left open by Buszkowski (2007), who obtained the same complexity bound for infinitary action logic, which additionally includes additive conjunction and disjunction. As a by-product, we prove that any context-free language without the empty word can be generated by a Lambek grammar with unique type assignment, without Lambek’s nonemptiness restriction imposed (cf. Safiullin, 2007).
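For orientation, one standard way to present such an $\omega$-rule for the Kleene star is as a left rule with infinitely many premises, one for each number of unfoldings (a sketch of the usual formulation, not necessarily the paper’s exact system):

```latex
% omega-rule for A*: if the sequent holds with n copies of A in place
% of A* for every n >= 0, then it holds with A* itself.
\frac{\Gamma, \Delta \vdash C \qquad \Gamma, A, \Delta \vdash C \qquad
      \Gamma, A, A, \Delta \vdash C \qquad \cdots}
     {\Gamma, A^{*}, \Delta \vdash C}
```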
This paper investigates and develops generalizations of two-dimensional modal logics to any finite dimension. These logics are natural extensions of multidimensional systems known from the literature on logics for a priori knowledge. We prove a completeness theorem for propositional n-dimensional modal logics and show them to be decidable by means of a systematic tableau construction.
This paper clarifies, revises, and extends the account of the transmission of truthmakers by core proofs that was set out in Chapter 9 of Tennant (2017). Brauer provided two kinds of example that make clear the need for this. Unlike Brouwer’s counterexamples to excluded middle, the examples of Brauer that we are dealing with here establish the need for appeals to excluded middle when applying, to the problem of truthmaker-transmission, the already classical metalinguistic theory of model-relative evaluations.
We investigate the modal logic of stepwise removal of objects, both for its intrinsic interest as a logic of quantification without replacement, and as a pilot study to better understand the complexity jumps between dynamic epistemic logics of model transformations and logics of freely chosen graph changes that get registered in a growing memory. After introducing this logic (MLSR) and its corresponding removal modality, we analyze its expressive power and prove a bisimulation characterization theorem. We then provide a complete Hilbert-style axiomatization for the logic of stepwise removal in a hybrid language enriched with nominals and public announcement operators. Next, we show that model-checking for MLSR is PSPACE-complete, while its satisfiability problem is undecidable. Lastly, we consider an issue of fine-structure: the expressive power gained by adding the stepwise removal modality to fragments of first-order logic.
Rendering rigid objects with high stiffness while guaranteeing system stability remains a major and challenging issue in haptics. As part of the haptic system, the behavior of the human operator, represented as the mechanical impedance of the arm, has an inevitable influence on system performance. This paper first verifies that human arm impedance can be modified unconsciously by imposing background forces, enabling the arm to resist unstable motions arising from external disturbance forces. A reliable impedance tuning (IT) method for improving the stability and performance of haptic systems is then proposed, which tunes human arm impedance by superimposing a position-based background force over the traditional haptic workspace. Moreover, an adaptive IT algorithm, which adjusts the maximum background force based on the velocity of the human arm, is proposed to achieve a reasonable trade-off between system stability and transparency. Using a three-degree-of-freedom haptic device, maximum achievable stiffness and transparency grading experiments are carried out with 12 subjects, verifying the efficacy and advantages of the proposed method.
A central area of current philosophical debate in the foundations of mathematics concerns whether or not there is a single, maximal universe of set theory. Universists maintain that there is such a universe, while Multiversists argue that there are many universes, no one of which is ontologically privileged. Often model-theoretic constructions that add sets to models are cited as evidence in favor of the latter. This paper informs this debate by developing a way for a Universist to interpret talk that seems to necessitate the addition of sets to V. We argue that, despite the prima facie incoherence of such talk for the Universist, she nonetheless has reason to try to provide an interpretation of this discourse. We present a method of interpreting extension-talk (V-logic), and show how it captures satisfaction in ‘ideal’ outer models and relates to impredicative class theories. We provide some reasons to regard the technique as philosophically virtuous, and argue that it opens new doors to philosophical and mathematical discussions for the Universist.
The aim of the paper is to argue that all—or almost all—logical rules have exceptions. In particular, it is argued that this is a moral that we should draw from the semantic paradoxes. The idea that we should respond to the paradoxes by revising logic in some way is familiar. But previous proposals advocate the replacement of classical logic with some alternative logic. That is, some alternative system of rules, where it is taken for granted that these hold without exception. The present proposal is quite different. According to this, there is no such alternative logic. Rather, classical logic retains the status of the ‘one true logic’, but this status must be reconceived so as to be compatible with (almost) all of its rules admitting of exceptions. This would seem to have significant repercussions for a range of widely held views about logic: e.g., that it is a priori, or that it is necessary. Indeed, if the arguments of the paper succeed, then such views must be given up.
Visual simultaneous localization and mapping (VSLAM) is a relevant solution for localizing vehicles and mapping their environments. However, it demands large computational effort, which can prevent it from running in real time. VSLAM systems that employ geometric reconstruction are based on the parallel processing paradigm developed in the Parallel Tracking and Mapping (PTAM) algorithm. This type of system was created for processors that have exactly two cores, and the various SLAM methods based on PTAM were likewise not designed to scale to all the cores of modern processors, nor to function as distributed systems. Therefore, we propose a modification to the execution pipeline of well-known VSLAM systems so that they can be scaled to all available processors during execution, thereby increasing their performance in terms of processing time. We explain the principles behind this modification via a study of the threads in SLAM systems based on PTAM. We validate our results with experiments describing the behavior of the original ORB-SLAM system and the modified version.
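As a generic illustration of the scaling idea (not the authors’ specific pipeline modification), the sketch below fans independent units of work out to all available processor cores rather than pinning the pipeline to two fixed threads as in classic PTAM-style designs; all names in it are invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPipeline {
    public static void main(String[] args) throws Exception {
        // Size the thread pool to the machine, not to a fixed count of two.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        List<Future<String>> results = new ArrayList<>();
        for (int frame = 0; frame < 8; frame++) {
            final int f = frame;
            // Each task stands in for an independent unit of work,
            // e.g. per-keyframe feature extraction.
            results.add(pool.submit(() -> "processed frame " + f));
        }
        for (Future<String> r : results) {
            System.out.println(r.get()); // collect results in submission order
        }
        pool.shutdown();
    }
}
```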