AI has the potential to overcome problems with existing approaches to contract drafting, management and implementation, whilst also having the potential to exacerbate those problems. To deal with this risk and to create AI which is trustworthy in relation to contracting, such systems require channelling in a new direction, termed ‘transactional responsibility’. Legal regulation must be structured around the entirety of the socio-technical system which underpins AI.
AI will disrupt the existing tort settlement. Tort law should be tech-impartial – that is, it should not encourage or discourage the adoption of new technologies where they generate the same level of risk, and victims’ rights should not be eroded by the use of new technologies in place of existing systems of work. Existing tort law is poorly suited to address some AI challenges, and a liability gap will emerge as systems replace employees, since AI does not have legal personality and cannot commit a tort. A form of AI statutory vicarious liability should apply in commercial settings to address the liability gap and as the tech-impartial solution.
AI will greatly assist in the administration of express and charitable trusts and will also be of significant benefit to trust law in acting as an adjudicator. AI should be able to act as an acceptable trustee of an express trust, and resulting trusts pose no insurmountable challenge to AI, either as trustee or adjudicator. The proposition that discretionary trusts are unsuited to AI administration can be rejected, along with the notion that the discretionary nature of remedies makes this area of law unsuited to AI adjudication. Although constructive trusts may pose some difficulties for AI, these may be solved through legal reform. Further, the difficulties that AI trustees will create are not beyond practical solution.
Law reform is needed to recognise the impact of automation and machine learning systems on the services provided by intermediaries while requiring intermediaries to minimise illicit or harmful content.
The ever-growing involvement of InsurTech in insurance operations requires us to consider how much disruption the new technologies may have caused to insurance services, and therefore to society. To answer this question, the areas of insurance services in which InsurTech has been predominantly employed are explored. The social and economic impact of InsurTech is then discussed, together with the fundamental principles that guide the business and legal operation of insurance services.
This chapter introduces the basics of AI to lawyers: it explains key concepts and the capabilities and limitations of AI, and identifies technological challenges which might require legal responses.
AI will greatly challenge product liability, since that regime is based on assumptions about physical objects distributed through organised linear value chains which do not necessarily apply in the AI context. AI systems further challenge both liability compartmentalisation based on separate risk spheres and the notion of defectiveness. The European product liability regime assumes a linear value chain, whereas AI systems may be distributed quite differently. The realities of these new value chains call for a number of adjustments to central product liability concepts, which will widen the scope of product liability rules. Further, AI may ultimately have the potential to dissolve the very notion of product liability itself.
Data is one of the most valuable resources of the twenty-first century, and property rights are a tried-and-tested legal response to regulating valuable assets. For non-personal, machine-generated data in an EU context, mainstream IP options are not available, although certain types of machine-generated data may be protected as trade secrets or under sui generis database protection. A new IP right is not needed, however: the formerly proposed EU data producer’s right is a cautionary tale for jurisdictions considering a similar model, as a new property right would both strengthen the position of de facto data holders and drive up costs. There are nonetheless valuable lessons to be learned from constructed commons models.
Whether AI should be given legal personhood should not be framed in binary terms. Instead, this issue should be analysed in terms of a sliding-scale spectrum. On one axis, there is the quantity and quality of the bundle of rights and obligations that legal personhood entails. The other axis is the level of the relevant characteristics that courts may include in conferring legal personhood.
The conferral of personhood is a choice made by legal systems, but just because it can be done does not mean that it should be. Analogies made between AI systems and corporations are superficial and flawed: for instance, the demand for asset partitioning does not apply to AI systems in the way it does to corporations, and may lead to moral hazards. Conferring personhood on AI systems would also need to be accompanied by governance structures equivalent to those that accompany corporate legal personhood. Further, the metaphorical ghost of data as property needs to be exorcised.
The venous blood test is a prevalent auxiliary medical diagnostic method. Venous blood collection equipment can improve the success rate and stability of blood collection, reduce the workload of medical staff, and improve the efficiency of diagnosis and treatment. This study proposed a rigid-flexible composite puncture (RFCP) strategy, based on which a small 7-degree-of-freedom (DOF) auxiliary venipuncture blood collection (VPBC) robot using a trocar needle was designed. The robot consists of a position and orientation adjustment mechanism and an RFCP end-effector, which performs RFCP to avoid piercing the blood vessel’s lower wall during puncture. The inverse kinematics of the robot was solved and validated using a differential evolution algorithm, after which quintic polynomial interpolation was applied for trajectory planning and control. Finally, a VPBC robot prototype was developed for experiments. The trajectory planning experiment verified the correctness of the inverse kinematics solution and trajectory planning, and the composite puncture blood collection experiment verified the feasibility of the RFCP strategy.
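To make the trajectory-planning step concrete, below is a minimal sketch of quintic polynomial interpolation for a single joint, the kind of scheme the abstract names. The boundary conditions, joint values, and time horizon are invented for the demo and are not taken from the paper.

```python
import numpy as np

def quintic_coeffs(q0, qf, v0=0.0, vf=0.0, a0=0.0, af=0.0, T=1.0):
    """Solve for the six coefficients of q(t) = c0 + c1*t + ... + c5*t**5
    satisfying position, velocity, and acceleration constraints at t=0 and t=T."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],        # q(0)  = q0
        [0, 1, 0,    0,       0,        0],        # q'(0) = v0
        [0, 0, 2,    0,       0,        0],        # q''(0) = a0
        [1, T, T**2, T**3,    T**4,     T**5],     # q(T)  = qf
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],   # q'(T) = vf
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],  # q''(T) = af
    ])
    return np.linalg.solve(A, np.array([q0, v0, a0, qf, vf, af]))

# Example: move one joint from 0 rad to 0.5 rad in 2 s, rest to rest.
c = quintic_coeffs(0.0, 0.5, T=2.0)
t = np.linspace(0.0, 2.0, 50)
q = sum(c[i] * t**i for i in range(6))  # smooth joint trajectory samples
```

A quintic is the lowest-order polynomial that lets position, velocity, and acceleration all be prescribed at both endpoints, which is why it is a common choice for smooth rest-to-rest robot motions.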
We present a data-driven emulator, a stochastic weather generator (SWG), suitable for estimating probabilities of prolonged heat waves in France and Scandinavia. The emulator is based on the method of analogs of circulation, to which we add temperature and soil moisture as predictor fields. We train the emulator on an intermediate-complexity climate model run and show that it is capable of predicting conditional probabilities (forecasting) of heat waves out of sample. Special attention is paid to evaluating this prediction with a proper score appropriate for rare events. To accelerate the computation of analogs, dimensionality reduction techniques are applied and their performance is evaluated. The probabilistic prediction achieved with the SWG is compared with that achieved with a convolutional neural network (CNN). With hundreds of years of training data available, CNNs perform better at the task of probabilistic prediction. In addition, we show that the SWG emulator trained on 80 years of data can estimate extreme return times on the order of thousands of years, for heat waves longer than several days, more precisely than a fit based on the generalized extreme value distribution. Finally, the quality of the synthetic extreme teleconnection patterns produced by the SWG is studied. We showcase two examples of such synthetic teleconnection patterns, for heat waves in France and Scandinavia, that compare favorably with the very long climate model control run.
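As an illustration of the underlying analog method, the sketch below steps a toy stochastic weather generator through a library of daily states in a reduced predictor space: each step jumps to the successor of a randomly chosen nearest analog. The dimensions, library, and parameter k are invented for the demo; the paper's actual predictors are circulation, temperature, and soil moisture fields.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_step(state, library, k=10):
    """One step of an analog stochastic weather generator: find the k
    nearest states in the library (excluding the last day, which has no
    successor) and jump to the successor of a randomly chosen analog."""
    d = np.linalg.norm(library[:-1] - state, axis=1)
    nearest = np.argsort(d)[:k]
    return library[rng.choice(nearest) + 1]

# Toy demo: 80 "years" of daily states in a 10-dimensional reduced space
# (standing in for PCA components of the predictor fields).
library = rng.standard_normal((80 * 365, 10))
state = library[0]
trajectory = [state]
for _ in range(14):  # generate a two-week synthetic sequence
    state = analog_step(state, library)
    trajectory.append(state)
```

Sampling among the k nearest analogs, rather than always taking the single closest one, is what makes the generator stochastic and lets it produce very long synthetic records from a comparatively short training run.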
To address the small good workspace, numerous singular configurations, and limited load capacity of non-redundant parallel mechanisms, a fully redundantly actuated parallel mechanism is designed and developed, and its performance evaluation, good-workspace identification, and scale optimization are studied. First, the kinematic analysis of the planar 6R parallel mechanism is completed. A motion/force transmission performance index is then established, and the singularity analysis of the mechanism is carried out. On this basis, the fully redundant actuation mode of the mechanism is determined, and the good transmission workspace of the mechanism in this mode is identified. The mapping between the performance and the scale of the mechanism is then established using space model theory, and the scale optimization of the mechanism is completed. Finally, a robot prototype is built at the optimal scale, and its performance is verified through studies of dynamics and control strategy. The results show that the fully redundantly actuated parallel mechanism obtained by design optimization has high precision and a large load capacity: the position repeatability and position accuracy are 0.053 mm and 0.635 mm, respectively, and the load-to-weight ratio reaches 15.83%. These results complement and improve the performance evaluation and scale optimization framework for redundantly actuated parallel mechanisms.
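For context on the kinematic analysis, the sketch below solves the inverse kinematics of a single RR limb of a generic planar parallel mechanism, the standard two-link geometry such analyses build on. It is a textbook construction under assumed link lengths and joint placements, not the paper's 6R formulation.

```python
import numpy as np

def rr_limb_ik(base, attach, l1, l2, elbow=1):
    """Inverse kinematics of one RR limb of a planar parallel mechanism:
    given the fixed base joint, the platform attachment point, and the
    two link lengths, return the two revolute joint angles."""
    dx, dy = attach[0] - base[0], attach[1] - base[1]
    c2 = (dx**2 + dy**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("attachment point out of reach for this limb")
    q2 = elbow * np.arccos(c2)  # elbow-up (+1) or elbow-down (-1) branch
    q1 = np.arctan2(dy, dx) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

# Example: hypothetical limb with base at the origin, platform attachment
# at (0.4, 0.3), and both links 0.3 m long.
q1, q2 = rr_limb_ik((0.0, 0.0), (0.4, 0.3), 0.3, 0.3)
```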
In order to clarify and visualize the real state of the structural performance of ships in operation and to establish a more optimal, data-driven framework for ship design, construction, and operation, an industry-academia joint R&D project on the digital twin for ship structures (DTSS) was conducted in Japan. This paper presents the major achievements of the project. The DTSS aims to capture the stress responses over the whole ship structure in waves by data assimilation that merges hull monitoring with numerical simulation. Three data assimilation methods, namely the wave spectrum method, the Kalman filter method, and the inverse finite element method, were used, and their effectiveness was examined through model-scale and full-scale ship measurements. Methods for predicting short-term extreme responses and long-term cumulative fatigue damage were developed for navigation and maintenance support using statistical approaches. Compared with conventional approaches, response predictions were significantly improved by DTSS through the use of real response data in encountered waves. Utilization scenarios for DTSS in the maritime industry are presented from the viewpoints of navigation support, maintenance support, rule improvement, and product value improvement, together with future research needs for implementation in the maritime industry.
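Of the three data assimilation methods named, the Kalman filter is the most widely known; below is a minimal, generic measurement-update sketch showing how a simulation-based state estimate is fused with a monitoring reading. The matrices and dimensions are placeholders for illustration, not the project's actual state-space model.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One measurement update of a linear Kalman filter: fuse the
    simulation-based state estimate x (covariance P) with a sensor
    reading z, where H maps states to measured quantities."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new

# Toy example: two structural response states observed by one strain sensor.
x = np.array([0.0, 0.0])       # prior state (e.g., modal amplitudes)
P = np.eye(2)                  # prior uncertainty
H = np.array([[1.0, 0.5]])     # placeholder sensor model
R = np.array([[0.1]])          # sensor noise variance
x, P = kalman_update(x, P, np.array([0.8]), H, R)
```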
Despite the growing availability of sensing and data in general, we remain unable to fully characterize many in-service engineering systems and structures from a purely data-driven approach. The vast data and resources available to capture human activity are unmatched in our engineered world, and, even in cases where data could be called “big”, they will rarely hold information across operational windows or life spans. This paper pursues the combination of machine learning technology and physics-based reasoning to enhance our ability to make predictive models with limited data. By explicitly linking the physics-based view of stochastic processes with a data-based regression approach, a derivation path for a spectrum of possible Gaussian process models is introduced and used to highlight how and where different levels of expert knowledge of a system are likely best exploited. Each of the models highlighted in the spectrum has been explored in different ways across communities; novel examples in a structural assessment context demonstrate how these approaches can significantly reduce reliance on expensive data collection. The increased interpretability of these models is another important consideration and benefit in this context.
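One end of the spectrum described here is a Gaussian process whose mean function encodes the physics, so the GP only has to learn the residual between the data and the physical model. Below is a minimal sketch of that idea with a toy linear “physics” prior; the kernel, data, and mean function are invented for illustration and are not taken from the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(X, y, Xs, mean_fn, noise=1e-2):
    """GP regression with a physics-based mean function: the GP learns
    only the residual between the observations and the physical model."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    alpha = np.linalg.solve(K, y - mean_fn(X))
    return mean_fn(Xs) + Ks @ alpha

# Toy data: a linear "physics" trend plus a small nonlinear residual.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 5.0, 20)
y = 2.0 * X + 0.3 * np.sin(3.0 * X) + 0.05 * rng.standard_normal(20)
Xs = np.linspace(0.0, 5.0, 100)
y_pred = gp_predict(X, y, Xs, mean_fn=lambda x: 2.0 * x)
```

When the physics-based mean captures most of the trend, far fewer observations are needed to pin down the residual, which is the data-efficiency argument developed here.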