High-profile privacy breaches have trained the spotlight of public attention on data privacy. Until recently, privacy was a relative laggard within computer security: when releasing aggregate statistics on sensitive data, available technologies could enhance privacy only weakly, and until the mid-2000s definitions of privacy were merely syntactic. Contrast this with the state of affairs within cryptography, which has long offered provably strong guarantees on maintaining the secrecy of encrypted information, based on the computational limitations of attackers. Proposed as an answer to the challenge of bringing privacy onto an equal footing with cryptography, differential privacy (Dwork et al. 2006; Dwork & Roth 2014) has quickly grown in stature due to its formal nature and its guarantees against powerful attackers. This chapter continues the discussion begun in Section 3.7, including a case study on the release of trained support vector machine (SVM) classifiers while preserving training data privacy. This chapter builds on Rubinstein, Bartlett, Huang, & Taft (2012).
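As a concrete illustration of the mechanism behind these guarantees, the following minimal sketch (not from the chapter; all names, data, and parameters are our own) releases a counting query under ε-differential privacy via the Laplace mechanism of Dwork et al. (2006):

```python
import math
import random

def laplace_noise(scale):
    """Sample zero-mean Laplace noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise of scale
    1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical data: ages of five individuals; query: how many are >= 40?
ages = [34, 51, 29, 62, 45]
noisy_count = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller ε means stronger privacy but noisier answers; the released value is useful in aggregate while masking any single individual's contribution.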
Privacy Breach Case Studies
We first review several high-profile privacy breaches achieved by privacy researchers. Together these have helped shape the discourse on privacy and in particular have led to important advancements in privacy-enhancing technologies. This section concludes with a discussion of lessons learned.
Massachusetts State Employees Health Records
An early privacy breach demonstrated the difficulty of defining the concept of personally identifiable information (PII) and led to the highly influential development of k-anonymity (Sweeney 2002).
In the mid-1990s the Massachusetts Group Insurance Commission released private health records of state employees, showing individual hospital visits, for the purpose of fostering health research. To mitigate privacy risks to state employees, the Commission scrubbed all suspected PII: names, addresses, and Social Security numbers. What remained was pure medical information together with seemingly innocuous demographics: birthdate, gender, and zipcode.
Security researcher Sweeney realized that the unscrubbed demographic information was in fact partially identifying. To demonstrate her idea, Sweeney obtained readily available public voter information for the city of Cambridge, Massachusetts, which included birthdates, zipcodes, and names. She then linked this public data to the "anonymized" hospital records, thereby re-identifying many of the employees, including her target, then Governor William Weld, who had originally overseen the release of the health data.
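The attack is, at its core, a database join on quasi-identifiers. A minimal sketch with entirely fictional records (all names, dates, diagnoses, and field names are invented for illustration):

```python
# "Anonymized" hospital release: PII scrubbed, but demographics kept.
released = [
    {"dob": "1962-03-14", "sex": "F", "zip": "02139", "diagnosis": "asthma"},
    {"dob": "1945-07-31", "sex": "M", "zip": "02138", "diagnosis": "fracture"},
]
# Public voter roll: the same demographics, now alongside names.
voters = [
    {"name": "A. Smith", "dob": "1962-03-14", "sex": "F", "zip": "02139"},
    {"name": "B. Jones", "dob": "1945-07-31", "sex": "M", "zip": "02138"},
]

def link(anonymized, public, keys=("dob", "sex", "zip")):
    """Join the two datasets on quasi-identifiers, re-identifying records."""
    index = {}
    for rec in public:
        index.setdefault(tuple(rec[k] for k in keys), []).append(rec["name"])
    return [
        (name, rec["diagnosis"])
        for rec in anonymized
        for name in index.get(tuple(rec[k] for k in keys), [])
    ]
```

When a (birthdate, gender, zipcode) triple is unique in the population, as it is for a large fraction of individuals, the join re-identifies the record exactly.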
In this chapter, we explore a theoretical model for quantifying the difficulty of Exploratory attacks against a trained classifier. Unlike in the first part of this book, the classifier here has already been trained, so the adversary can no longer exploit vulnerabilities in the learning algorithm to mistrain it. Instead, the adversary must exploit vulnerabilities that the classifier accidentally acquired from training on benign data (or at least data not controlled by the adversary in question). Most nontrivial classification tasks lead to some form of vulnerability in the classifier. All known detection techniques are susceptible to blind spots (i.e., classes of miscreant activity that fail to be detected), but simply knowing that blind spots exist is insufficient. The principal question is how difficult it is for an adversary to discover a blind spot that is most advantageous to it. In this chapter, we present a framework for quantifying how difficult it is for the adversary to search for this type of vulnerability in a classifier.
At first, it may appear that the ultimate goal of these Exploratory attacks is to reverse engineer the learned parameters, internal state, or the entire boundary of a classifier to discover its blind spots. However, in this work, we adopt a more refined strategy; we demonstrate successful Exploratory attacks that only partially reverse engineer the classifier. Our techniques find blind spots using only a small number of queries and yield near-optimal strategies for the adversary. They discover data points that the classifier will classify as benign and that are close to the adversary's desired attack instance.
While learning algorithms allow the detector to adapt over time, real-world constraints on the learning algorithm typically allow an adversary to programmatically find blind spots in the classifier. We consider how an adversary can systematically discover blind spots by querying the filter to find a low-cost (for some cost function) instance that evades it. Consider, for example, a spammer who wishes to minimally modify a spam message so that it is not classified as spam (here cost is a measure of how much the spam must be modified). By observing the responses of the spam detector, the spammer can search for such a modification using few queries.
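One simple instantiation of such a query-based search, a hypothetical sketch rather than the book's own algorithm, bisects the line segment between a known benign instance and the desired attack instance, spending one classifier query per step:

```python
def binary_search_evasion(classify, x_attack, x_benign, steps=20):
    """Bisect the segment between a benign point and the attack point,
    querying `classify` (True = flagged as malicious) once per step,
    to find an instance near x_attack that evades the classifier."""
    assert classify(x_attack) and not classify(x_benign)
    blend = lambda t: [(1 - t) * b + t * a for b, a in zip(x_benign, x_attack)]
    lo, hi = 0.0, 1.0  # blend(lo) stays benign, blend(hi) stays detected
    for _ in range(steps):
        mid = (lo + hi) / 2
        if classify(blend(mid)):
            hi = mid  # detected: retreat toward the benign end
        else:
            lo = mid  # evades: push toward the attack
    return blend(lo)  # evading instance within 2**-steps of the boundary

# Toy detector: flags any instance whose first feature exceeds 0.7.
detector = lambda x: x[0] > 0.7
evading = binary_search_evasion(detector, x_attack=[1.0, 1.0], x_benign=[0.0, 0.0])
```

With 20 queries the found instance sits within 2^-20 of the decision boundary along this line, illustrating why query complexity, not full reverse engineering, is the right difficulty measure.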
An n × n partial Latin square P is called α-dense if each row and column has at most αn non-empty cells and each symbol occurs at most αn times in P. An n × n array A where each cell contains a subset of {1,…, n} is a (βn, βn, βn)-array if each symbol occurs at most βn times in each row and column and each cell contains a set of size at most βn. Combining the notions of completing partial Latin squares and avoiding arrays, we prove that there are constants α, β > 0 such that, for every positive integer n, if P is an α-dense n × n partial Latin square, A is an n × n (βn, βn, βn)-array, and no cell of P contains a symbol that appears in the corresponding cell of A, then there is a completion of P that avoids A; that is, there is a Latin square L that agrees with P on every non-empty cell of P, and, for each i, j satisfying 1 ≤ i, j ≤ n, the symbol in position (i, j) in L does not appear in the corresponding cell of A.
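The definitions above are straightforward to operationalize. A small sketch (our own; the cell conventions are assumptions) that checks whether a Latin square L is a completion of P that avoids A:

```python
def is_avoiding_completion(L, P, A):
    """Check that Latin square L completes the partial square P while
    avoiding the array A. Conventions (our own): empty cells of P are
    None; each cell of A is a (possibly empty) set of forbidden symbols."""
    n = len(L)
    symbols = set(range(1, n + 1))
    for i in range(n):
        if set(L[i]) != symbols:                    # each row is a permutation
            return False
        if {L[r][i] for r in range(n)} != symbols:  # and each column
            return False
    for i in range(n):
        for j in range(n):
            if P[i][j] is not None and L[i][j] != P[i][j]:
                return False                        # L must agree with P
            if L[i][j] in A[i][j]:
                return False                        # L must avoid A
    return True

# A 3x3 example: P fixes one cell, A forbids a few symbols per cell.
P = [[1, None, None], [None, None, None], [None, None, None]]
A = [[set(), {1}, set()], [set(), set(), {2}], [{2}, set(), set()]]
L = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]
```

The theorem asserts that for α-dense P and (βn, βn, βn)-arrays A (with compatible cells), some L passing this check always exists.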
A sound gradual type system ensures that untyped components of a program can never break the guarantees of statically typed components. This assurance relies on runtime checks, which in turn impose performance overhead in proportion to the frequency and nature of interaction between typed and untyped components. The literature on gradual typing lacks rigorous descriptions of methods for measuring the performance of gradual type systems. This gap has consequences for the implementors of gradual type systems and developers who use such systems. Without systematic evaluation of mixed-typed programs, implementors cannot precisely determine how improvements to a gradual type system affect performance. Developers cannot predict whether adding types to part of a program will significantly degrade (or improve) its performance. This paper presents the first method for evaluating the performance of sound gradual type systems. The method quantifies both the absolute performance of a gradual type system and the relative performance of two implementations of the same gradual type system. To validate the method, the paper reports on its application to 20 programs and 3 implementations of Typed Racket.
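The measurements such a method calls for can be organized as a lattice of 2^n typed/untyped configurations of an n-module program. A toy sketch with synthetic timings (the function names and timing data are our own; "D-deliverable" is a term borrowed from the gradual-typing literature):

```python
def overheads(runtimes):
    """Map each configuration (a tuple of booleans, one per module:
    True = typed) to its slowdown relative to the fully-untyped one."""
    n = len(next(iter(runtimes)))
    baseline = runtimes[(False,) * n]
    return {cfg: t / baseline for cfg, t in runtimes.items()}

def count_deliverable(runtimes, D):
    """Number of configurations whose slowdown is at most a factor D."""
    return sum(1 for o in overheads(runtimes).values() if o <= D)

# Synthetic timings for a 2-module program (2^2 = 4 configurations):
times = {
    (False, False): 1.0,  # untyped baseline
    (True, False): 1.1,
    (False, True): 5.0,   # a costly typed/untyped boundary
    (True, True): 0.9,    # full typing recovers (even improves) speed
}
```

Summarizing the lattice this way shows how performance depends on *which* boundary is crossed, not merely on how much of the program is typed.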
Robust positioning and navigation of a mobile robot in an urban environment is implemented by fusing the Global Positioning System (GPS) and Inertial Navigation System (INS) data with the aid of a motion estimator. To select and isolate malicious satellite signals and guarantee the minimum number of GPS signals for the localization, an enhanced fault detection and isolation (FDI) algorithm with a short-term memory has been developed in this research. When there are sufficient satellite signals for positioning, the horizontal dilution of precision (HDOP) has been applied for selecting the best four satellite signals to localize the mobile robot. Then, the GPS data are fused with INS data by a Kalman filter (KF) for a straight path and a curved motion estimator (CME) for a curved path. That is, the INS data are properly fused to the GPS data through the KF or CME process. To verify the effectiveness of the proposed algorithm, experiments using a mobile robot have been carried out on a university campus.
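The KF fusion step can be sketched in one dimension: the INS increment drives the prediction and the GPS fix drives the correction, weighted by their relative uncertainties. All variable names and noise values below are illustrative, not the paper's:

```python
def kalman_fuse(x, P, u, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.
    x, P -- prior position estimate and its variance
    u    -- INS-measured displacement since the last step
    z    -- GPS position fix
    q, r -- process (INS) and measurement (GPS) noise variances"""
    # Predict: dead-reckon with the INS increment.
    x_pred, P_pred = x + u, P + q
    # Update: blend in the GPS fix, weighted by relative confidence.
    K = P_pred / (P_pred + r)              # Kalman gain
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

# One step: INS says we moved 1.0 m, GPS reads 1.2 m.
x_new, P_new = kalman_fuse(x=0.0, P=1.0, u=1.0, z=1.2, q=0.01, r=0.04)
```

Because the GPS variance r is much smaller than the predicted variance here, the gain is near 1 and the estimate lands close to the GPS fix while the posterior variance shrinks.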
Stream processing has reached the mainstream in recent years, as a new generation of open-source distributed stream processing systems, designed to scale horizontally on commodity hardware, has brought the capability to process high-volume, high-velocity data streams to companies of all sizes. In this work, we propose a combination of temporal logic and property-based testing (PBT) for dealing with the challenges of testing programs that employ this programming model. We formalize our approach in a discrete-time temporal logic for finite words, with some additions to improve the expressiveness of properties, including timeouts for temporal operators and a binding operator for letters. In particular, we focus on testing Spark Streaming programs written with the Spark API for the functional language Scala, using the PBT library ScalaCheck. To that end, we add temporal logic operators to a set of new ScalaCheck generators and properties, as part of our testing library sscheck.
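While the authors' library is ScalaCheck-based, the flavor of a timeout-bounded temporal operator over a finite word can be sketched in plain Python (our own analogue, not the sscheck API), together with a property checked over randomly generated traces:

```python
import random

def always(pred, trace):
    """[]p over a finite word: pred holds at every letter."""
    return all(pred(x) for x in trace)

def eventually_within(pred, trace, timeout):
    """<>p with a timeout: pred holds at some letter among the
    first `timeout` letters of the word."""
    return any(pred(x) for x in trace[:timeout])

# Property-based check over random finite words: <>p within the whole
# word is the dual of []~p (vacuously, an empty word satisfies []~p).
random.seed(0)
for _ in range(100):
    word = [random.randint(0, 3) for _ in range(random.randint(0, 8))]
    is_zero = lambda x: x == 0
    assert eventually_within(is_zero, word, len(word)) == \
        (not always(lambda x: not is_zero(x), word))
```

The random-word loop plays the role a PBT generator plays in ScalaCheck: the temporal formula becomes a property evaluated over many generated finite words.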
Robotic swimmers are currently a subject of extensive research and development for several underwater applications. Clever design and planning must rely on simple theoretical models that account for the swimmer’s hydrodynamics in order to optimize its structure and control inputs. In this work, we study a planar snake-like multi-link swimmer by using the “perfect fluid” model that accounts for inertial hydrodynamic forces while neglecting viscous drag effects. The swimmer’s dynamic equations of motion are formulated and reduced into a first-order system due to symmetries and conservation of generalized momentum variables. Focusing on oscillatory inputs of joint angles, we study optimal gaits for 3-link and 5-link swimmers via numerical integration. For the 3-link swimmer, we also provide a small-amplitude asymptotic solution which enables obtaining closed-form approximations for optimal gaits. The theoretical results are then corroborated by experiments and motion measurement of untethered robotic prototypes with three and five links floating in a water pool, showing a reasonable agreement between the experiments and the theoretical model.
When parallel manipulators are used as machine tools, their stiffness is an important factor in the quality of the produced products. This paper presents an overall approximate stiffness model for a heavy-load parallel manipulator that considers the effects of actuator stiffness, joint clearance, joint contact deformation, and limb deformation. Based on the principle of virtual work and the introduced modified parameters, the proposed overall compliance matrix takes all four factors into a unified expression. To obtain the overall compliance matrix, approximate stiffness models of the joint clearance, joint contact deformation, and limb deformation are given. In addition, by combining statistical simulation of the random uncertainties with the proposed approximate stiffness models, which provide the basis for the magnitudes of each random variable, an approach based on the expected trajectory and external load is also proposed for stiffness defect identification, making the estimation more accurate and reliable. Finally, a numerical example of the 1PU+3UPS parallel manipulator and a discussion are presented to demonstrate the practicability of the proposed stiffness model and defect identification approach. After modifying the structural parameters of the defective components, the prototype experiences a significant stiffness improvement.
The fundamental cause for the statically indeterminate problem in the force analysis of overconstrained parallel mechanisms (PMs) is found to be the presence of the linearly dependent overconstrained wrenches. Based on the fundamental cause, a unified expression of the solution for the magnitudes of the constraint wrenches of both the limb stiffness decoupled and limb stiffness coupled overconstrained PMs is derived. When the weight of each link is considered, depending on whether additional component forces are generated along the axes of the overconstrained wrenches, two different situations should be considered. One situation is that no additional component force is generated along the axes of the overconstrained wrenches under the weight of the links in the corresponding limb. In this case, the added constraint wrenches at the limb’s end can be calculated directly, and used as a part of the generalized external wrench. The other situation is that additional component forces are generated. In this case, the elastic deformations in the axes of the overconstrained wrenches generated by those component forces should be considered, and the deformation compatibility equations between the overconstrained wrenches are reformulated.
Answer set programming (ASP) is one of the major declarative programming paradigms in the area of logic programming and non-monotonic reasoning. Although ASP features a simple syntax and an intuitive semantics, errors are common during the development of ASP programs. In this paper we propose a novel debugging approach that allows for interactive localization of bugs in non-ground programs. The new approach points the user directly to a set of non-ground rules involved in the bug, which can be refined (up to the point at which the bug is easily identified) by asking the programmer a sequence of questions about an expected answer set. The approach has been implemented on top of the ASP solver wasp. The resulting debugger has been complemented by a user-friendly graphical interface and integrated into aspide, a rich integrated development environment (IDE) for answer set programs. In addition, an empirical analysis shows that the new debugger is not affected by the grounding blowup that limits the application of previous approaches based on meta-programming.
The present paper discusses the development and implementation of a back-propagation neural network integrated with the modified DAYANI method for path control of a two-wheeled self-balancing robot in an obstacle-cluttered environment. A five-layered back-propagation neural network has been employed to determine the intensity of various weight factors, considering seven navigational parameters obtained from the modified DAYANI method. The intensity of the weight factors is determined using the neural technique with input parameters such as the number of visible intersecting obstacles along the goal direction, the minimum visible front obstacle distance as obtained from the sensors, the minimum left-side obstacle distance within the robot's visible left-side range, the average of the left-side obstacle distances, the minimum right-side obstacle distance within the robot's visible right-side range, the average of the right-side obstacle distances, and the goal distance from the robot's probable next position. A comparison between simulation and experimental exercises is carried out to verify the robustness of the proposed controller. The authenticity of the proposed controller is further verified through a comparative analysis between the results obtained by other existing techniques and the current technique in an exactly similar test scenario, and an improvement in the results is observed.
This paper presents a unified formulation for the kinematics, singularity and workspace analyses of parallel delta robots with prismatic actuation. Unlike the existing studies, the derivations presented in this paper are made by assuming variable angles and variable link lengths. Thus, the presented scheme can be used for all of the possible linear delta robot configurations including the ones with asymmetric kinematic chains. Referring to a geometry-based derivation, the paper first formulates the position and the velocity kinematics of linear delta robots with non-iterative exact solutions. Then, all of the singular configurations are identified assuming a parametric content for the Jacobian matrix derived in the velocity kinematics section. Furthermore, a benchmark study is carried out to determine the linear delta robot configuration with the maximum cubic workspace among symmetric and semi-symmetric kinematic chains. In order to show the validity of the proposed approach, two sets of experiments are made, respectively, on the horizontal and the Keops type of linear delta robots. The experiment results for the confirmation of the presented kinematic analysis and the simulation results for the determination of the maximum cubic workspace illustrate the efficacy and the flexible applicability of the proposed framework.