This chapter considers how AI threatens to diminish the value proposition of IP rights, focusing specifically on trademarks and copyright. It discusses how the value of these intangible rights depends on a shared societal understanding of, and belief in, their existence and worth. AI, however, has the potential to undermine this shared understanding, diminishing the perceived value of IP. The chapter argues that AI challenges the traditional function of trademarks as indicators of source and quality. As AI-generated content proliferates online, it becomes increasingly difficult to distinguish authentic from artificial sources, eroding consumer trust and confidence in trademarks. This erosion is exacerbated by AI’s ability to manipulate language and imagery, creating a world in which consumers may no longer be able to rely on trademarks as signals of origin or quality. Similarly, AI may challenge the value proposition of copyright by blurring the line between human and machine creativity. As AI-generated works become more sophisticated and harder to distinguish from human-created works, assessing the originality and authorship of creative content becomes difficult, potentially diminishing the value of copyright protection.
This chapter explores key elements of AI as relevant to intellectual property law. Understanding how artificial intelligence works is crucial for applying legal regimes to it. Legal practitioners, especially IP lawyers, need a deep understanding of AI’s technical nuances. Intellectual property doctrines aim to achieve practical ends, and their application to AI is highly fact-dependent. Patent law, for example, requires technical expertise in addition to legal knowledge. This chapter tracks the development of AI from simple programming to highly sophisticated learning algorithms. It emphasizes that AI is rapidly evolving and that many of these systems are already widely adopted in society. AI is transforming fields like education, law, healthcare, and finance. While AI offers numerous benefits, it also raises concerns about bias and transparency, among numerous other ethical implications.
This introductory chapter explores the foundation of intellectual property (IP) in the United States, specifically focusing on the history and purpose of copyright, patents, trademarks, and trade secrets. It highlights how these pillars have maintained their utilitarian character despite major technological revolutions and emphasizes the disruptive potential of artificial intelligence (AI). As AI technologies increasingly influence creative processes, they raise significant questions about the nature of human contribution and the value of IP. This chapter introduces some of the legal implications of generative AI, including concerns over copyright infringement and the potential need for new IP protections for AI-generated works. It outlines how the rise of AI challenges the traditional metrics of progress and the standards by which human contributions are evaluated. The author suggests that rather than resisting these changes, society should adapt its understanding of IP in a way that reflects the evolving technological landscape. Ultimately, the author argues for a nuanced approach to IP law that recognizes the shifting boundaries of what constitutes valuable innovation, advocating for humility in navigating the complexities of this ongoing transformation. The discussion sets the stage for the rest of the book.
While the existence of a stable matching for the stable roommates problem with possibly incomplete preference lists (SRI) can be decided in polynomial time, SRI problems with certain fairness criteria are intractable. Egalitarian SRI, which seeks to maximize the total satisfaction of agents when a stable matching exists, is one such hard variant of SRI. For experimental evaluations of methods to solve these hard variants of SRI, several well-known algorithms have been used to randomly generate benchmark instances. However, these benchmark instances do not always admit a stable matching and, when they do, usually have only a small number of stable matchings. For such SRI instances, despite the NP-hardness of Egalitarian SRI, it is practical to find an egalitarian stable matching by enumerating all stable matchings. In this study, we introduce a novel algorithm to generate benchmark instances for SRI that have very large numbers of solutions, and for which it is hard to find an egalitarian stable matching by enumerating all stable matchings.
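To make the objective concrete, here is a brute-force sketch (not the paper's generation algorithm; the tiny instance and helper names are invented) that finds an egalitarian stable matching by enumerating matchings and taking the one with minimum total rank, exactly the kind of enumeration that stops being practical once instances have very large numbers of stable matchings:

```python
def rank(prefs, a, b):
    return prefs[a].index(b)

def is_stable(prefs, match):
    # (a, b) is a blocking pair if they are mutually acceptable, not matched
    # together, and each is unmatched or strictly prefers the other to their partner.
    for a in prefs:
        for b in prefs[a]:
            if a not in prefs[b] or match.get(a) == b:
                continue
            a_prefers = match.get(a) is None or rank(prefs, a, b) < rank(prefs, a, match[a])
            b_prefers = match.get(b) is None or rank(prefs, b, a) < rank(prefs, b, match[b])
            if a_prefers and b_prefers:
                return False
    return True

def egalitarian_cost(prefs, match):
    # Total rank each agent assigns to their partner (lower = more satisfied).
    return sum(rank(prefs, a, match[a]) for a in match if match[a] is not None)

def all_matchings(prefs, agents):
    # Enumerate all matchings of mutually acceptable pairs (agents may stay unmatched).
    if not agents:
        yield {}
        return
    a, rest = agents[0], agents[1:]
    for m in all_matchings(prefs, rest):                 # a stays unmatched
        yield {a: None, **m}
    for b in rest:
        if b in prefs[a] and a in prefs[b]:
            for m in all_matchings(prefs, [x for x in rest if x != b]):
                yield {a: b, b: a, **m}

def egalitarian_stable_matching(prefs):
    best = None
    for match in all_matchings(prefs, sorted(prefs)):
        if is_stable(prefs, match):
            cost = egalitarian_cost(prefs, match)
            if best is None or cost < best[0]:
                best = (cost, match)
    return best                                          # None if no stable matching exists

# Invented instance: four agents, each ranking three acceptable partners.
prefs = {1: [3, 2, 4], 2: [4, 1, 3], 3: [1, 4, 2], 4: [2, 3, 1]}
print(egalitarian_stable_matching(prefs))
```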
We investigate the expressive power of Higher-Order $Datalog^\neg$ under both the well-founded and the stable model semantics, establishing tight connections with complexity classes. We prove that under the well-founded semantics, for all $k\geq 1$, $(k+1)$-Order $Datalog^\neg$ captures $k\text{-}\textsf{EXP}$, a result that holds without explicit ordering of the input database. The proof of this fact can be performed either by using the powerful existential predicate variables of the language or by using partially applied relations and relation enumeration. Furthermore, we demonstrate that this expressive power is retained within a stratified fragment of the language. Under the stable model semantics, we show that $(k+1)$-Order $Datalog^\neg$ captures $\textsf{co}\text{-}(k\text{-}\textsf{NEXP})$ using cautious reasoning and $k\text{-}\textsf{NEXP}$ using brave reasoning, again with analogous results for the stratified fragment augmented with choice rules. Our results establish a hierarchy of expressive power, highlighting an interesting trade-off between order and non-determinism in the context of higher-order logic programming: increasing the order of programs under the well-founded semantics can surpass the expressive power of lower-order programs under the stable model semantics.
Capturing dynamic targets is particularly challenging for both rigid and soft grippers, as impact buffering must be completed in a short time to ensure the reliability of the robotic system. At collision onset, when contact forces are relatively low, adopting low stiffness and damping can effectively mitigate the rebound of the dynamic target. As the contact area and forces increase, high stiffness and damping become necessary to absorb the high energy. This paper proposes a novel robotic gripper whose stiffness and damping follow a predefined profile: low stiffness and damping for low impact, and high stiffness and damping for high impact. The variable effects of impact buffering and energy dissipation during a collision were modeled and analyzed. Then, a passive variable stiffness and damping regulator (P-VSDR) was developed in which tendons and pulleys generate a nonlinear motion from a linear spring-damper unit. The contact dynamics model of the robotic gripper equipped with the P-VSDR was established. Simulated and experimental results show that this gripper enables reliable capture of dynamic targets with different velocities.
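As a rough illustration of the stiffness/damping profile described above (a hypothetical 1-D stand-in, not the paper's P-VSDR or its contact dynamics model; all parameter values are invented), one can simulate a point mass striking a spring-damper whose stiffness and damping grow with penetration depth:

```python
import math

def simulate_impact(m=0.5, v0=1.0, k_low=50.0, k_high=2000.0,
                    c_low=2.0, c_high=60.0, x_ref=0.01, dt=1e-5, t_end=0.2):
    """Return the peak contact force and the rebound velocity of the mass."""
    x, v = 0.0, -v0              # x: position (negative = penetration), v: velocity
    peak_force, t = 0.0, 0.0
    while t < t_end:
        d = max(0.0, -x)         # penetration depth
        # Smooth low-to-high transition as penetration approaches x_ref.
        blend = 1.0 - math.exp(-d / x_ref)
        k = k_low + (k_high - k_low) * blend
        c = c_low + (c_high - c_low) * blend
        # Kelvin-Voigt style contact force, clamped so the contact cannot pull.
        f = max(0.0, k * d - c * v) if d > 0.0 else 0.0
        v += (f / m) * dt        # semi-implicit Euler: update velocity, then position
        x += v * dt
        peak_force = max(peak_force, f)
        if d == 0.0 and v > 0.0 and t > 0.0:
            break                # the target has rebounded and separated
        t += dt
    return peak_force, v

print(simulate_impact())         # compare, e.g., against a constant-stiffness baseline
```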
While LGBTQIA+ identities are already mostly invisible in the Italian education system, the current anti-gender policies proposed by right-wing and far-right politicians risk further hindering an inclusive education. However, recent Italian graphic novels pave the way for a multifaceted representation of the LGBTQIA+ community and an alternative form of education. For instance, Nicoz Balboa’s Play with Fire (2020) and Alec Trenta’s Barba (2022) are two autofictional graphic novels that depict the authors’ discovery of their trans identity and their experiences in a cis-heteronormative society. The article argues that the two works by Balboa and Trenta are not just examples of autofiction but also constitute an archive of memory and activism. First, the article traces the damaging effects of a lack of education around LGBTQIA+ themes. Then, it explores how Balboa and Trenta understand their lives by reading LGBTQIA+ stories and histories. Crucially, the article investigates how both authors become a point of reference themselves by representing their own bodies and including explanations about gender and sexuality topics. Documenting the way Balboa and Trenta build a counter-educational space in their graphic novels and chart a literary queer and trans genealogy, the article ultimately suggests that their works are a form of activist practice.
The effectiveness of robotic grippers is critical for the secure and damage-free manipulation of objects with diverse geometries and material properties. This paper presents the design, analysis, and experimental evaluation of a novel reconfigurable four-finger robotic gripper. The proposed design incorporates two stationary fingers fixed to a circular base and two movable fingers repositioned and reoriented via a face gear mechanism, enabling multiple finger configurations to enhance adaptability. A single geared motor drives the opening and closing motions of all four fingers, simplifying the actuation mechanism. The robotic gripper was fabricated using 3D printing technology, ensuring cost-effective and precise manufacturing. Experimental tests were conducted to evaluate the robotic gripper’s reconfigurability and grasping performance across a range of objects, demonstrating its effectiveness in various configurations. Additionally, a closed-loop force control system was implemented to assess the grasping performance of a soft reconfigurable variant. Grasping force measurements were performed on three distinct objects, yielding a grasping curve that confirmed successful adaptation and secure handling. While the results validate the robotic gripper’s performance, further refinement of the control algorithm is recommended to optimize its capabilities. Compared to conventional three-finger designs, the proposed robotic gripper offers superior reconfigurability and adaptability, making it suitable for a broader range of industrial and research applications. The innovative face gear mechanism and modular design expand the robotic gripper’s functionality, positioning it as a versatile tool for advanced robotic manipulation tasks.
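For readers unfamiliar with closed-loop force control, the following minimal sketch (not the authors' controller; the gains, the first-order contact stand-in, and the 5 N setpoint are assumptions) shows a discrete PI loop regulating grasp force to a target value:

```python
def pi_force_control(f_target=5.0, kp=0.8, ki=4.0, dt=0.01, steps=300):
    """Discrete PI loop driving a crude first-order stand-in for the finger/object contact."""
    f_meas, integral = 0.0, 0.0
    history = []
    for _ in range(steps):
        error = f_target - f_meas
        integral += error * dt
        u = kp * error + ki * integral          # motor command (e.g. motor current)
        # Stand-in plant: measured force lags the command with a 0.1 s time constant.
        f_meas += (u - f_meas) * dt / 0.1
        history.append(f_meas)
    return history

forces = pi_force_control()
print(f"steady-state grasp force ≈ {forces[-1]:.2f} N")
```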
We developed a cloud microphysics parameterization for the icosahedral nonhydrostatic (ICON) modeling framework based on physics-informed machine learning (ML). By training our ML model on high-resolution simulation data, we enhance the representation of cloud microphysics in Earth system models (ESMs) compared to traditional parameterization schemes, in particular by considering the influence of high-resolution dynamics that are not resolved in coarse ESMs. We run a global, kilometer-scale ICON simulation with a one-moment cloud microphysics scheme, the complex graupel scheme, to generate 12 days of training data. Our ML approach combines a microphysics trigger classifier and a regression model. The microphysics trigger classifier identifies the grid cells where changes due to the cloud microphysical parameterization are expected. In those, the workflow continues by calling the regression model and additionally includes physical constraints for mass positivity and water mass conservation to ensure physical consistency. The microphysics trigger classifier achieves an F1 score of 0.93 on classifying unseen grid cells. The regression model reaches an $ {R}^2 $ score of 0.72 averaged over all seven microphysical tendencies on simulated days used for validation only. This results in a combined offline performance of 0.78. Using explainability techniques, we explored the correlations between input and output features, finding a strong alignment with the graupel scheme and, hence, physical understanding of cloud microphysical processes. This parameterization provides the foundation to advance the representation of cloud microphysical processes in climate models with ML, leading to more accurate climate projections and improved comprehension of the Earth’s climate system.
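The two-stage workflow (trigger classifier, then regression with physical constraints) can be sketched schematically as below; the placeholder models, feature columns, and the rescaling used to enforce water mass conservation are illustrative assumptions, not the trained ICON ML models:

```python
import numpy as np

def trigger_classifier(features):
    # Placeholder gate: "microphysics is active" where column 0 (assumed to be a
    # cloud water mixing ratio) exceeds a threshold. A trained classifier goes here.
    return features[:, 0] > 1e-6

def tendency_regressor(features):
    # Placeholder regression: raw (unconstrained) tendencies, one column per species.
    rng = np.random.default_rng(0)
    return rng.normal(scale=1e-7, size=(features.shape[0], 3))

def apply_constraints(state, tend, dt):
    """Enforce mass positivity and (approximate) total-water conservation."""
    new_state = state + dt * tend
    clipped = np.clip(new_state, 0.0, None)                # positivity
    total_before = state.sum(axis=1, keepdims=True)
    total_after = clipped.sum(axis=1, keepdims=True)
    # Rescale each cell so its total water matches the input total.
    scale = np.divide(total_before, total_after,
                      out=np.ones_like(total_after), where=total_after > 0)
    return clipped * scale

def microphysics_step(state, features, dt=30.0):
    out = state.copy()
    active = trigger_classifier(features)                  # stage 1: which cells to touch
    if active.any():
        tend = tendency_regressor(features[active])        # stage 2: predict tendencies
        out[active] = apply_constraints(state[active], tend, dt)
    return out

state = np.abs(np.random.default_rng(1).normal(1e-5, 5e-6, size=(10, 3)))
features = np.hstack([state, np.random.default_rng(2).random((10, 2))])
print(microphysics_step(state, features).shape)
```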
Reasoning about dynamic systems with a fine-grained temporal and numeric resolution presents significant challenges for logic-based approaches like Answer Set Programming (ASP). To address this, we introduce and elaborate upon a novel temporal and constraint-based extension of the logic of Here-and-There and its nonmonotonic equilibrium extension, representing, to the best of our knowledge, the first approach to nonmonotonic temporal reasoning with constraints specifically tailored for ASP. This expressive system is achieved by a synergistic combination of two foundational ASP extensions: the linear-time logic of Here-and-There, providing robust nonmonotonic temporal reasoning capabilities, and the logic of Here-and-There with constraints, enabling the direct integration and manipulation of numeric constraints, among other constraint types. This work establishes the foundational logical framework for tackling complex dynamic systems with high resolution within the ASP paradigm.
This paper presents four new monolithic continuum robot designs that can be 3D printed in a single piece with TPU or similar elastic filaments for either educational or experimental applications. Similar tendon-driven continuum robots are usually made of a flexible backbone (often in NiTi alloys) and rigid vertebrae, with tens of components in a single robot segment, resulting in time-consuming manual assembly and high costs. Conversely, the proposed designs achieve equivalent functionality while avoiding these manufacturing challenges. Additionally, by removing the need for coupled features for assembly and 3D-printing backbones and vertebrae as a single part, new geometries become possible and can be explored to tailor robot performance to specific requirements. To validate the proposed designs, four sample prototypes have been manufactured and experimentally tested. The obtained results, when compared to the piecewise constant curvature model, demonstrate a 3.06% tip positioning error and a limited reduction of the workspace area of 23.07%, comparing favorably to similar but more expensive and complex tendon-driven robots.
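The piecewise constant curvature model used as the reference above has a compact closed form; the sketch below (with invented segment lengths and curvatures) chains planar constant-curvature segments to compute the tip pose:

```python
import math

def segment_transform(L, kappa):
    """Planar pose (x, y, theta) of one constant-curvature segment of arc length L."""
    if abs(kappa) < 1e-9:                       # straight-segment limit
        return 0.0, L, 0.0
    theta = kappa * L
    x = (1.0 - math.cos(theta)) / kappa
    y = math.sin(theta) / kappa
    return x, y, theta

def pcc_tip(segments):
    """Chain segments [(L, kappa), ...] and return the global tip pose (x, y, theta)."""
    X, Y, TH = 0.0, 0.0, 0.0
    for L, kappa in segments:
        x, y, th = segment_transform(L, kappa)
        # Rotate the local segment displacement into the global frame, then accumulate.
        X += x * math.cos(TH) - y * math.sin(TH)
        Y += x * math.sin(TH) + y * math.cos(TH)
        TH += th
    return X, Y, TH

print(pcc_tip([(0.10, 5.0), (0.10, -3.0)]))     # two 100 mm segments, curvatures in 1/m
```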
This paper makes a twofold contribution to the study of expressivity. First, we introduce and study the novel concept of conditional expressivity. Taking a universal logic perspective, we characterize conditional expressivity both syntactically and semantically. We show that our concept of conditional expressivity is related to, but different from, the concept of explicit definability in Beth’s definability theorem. Second, we use the concept to explore inferential relations between collective deontic admissibility statements for different groups. Negative results on conditional expressivity are stronger than standard (unconditional) inexpressivity results: we show that the well-known inexpressivity results from epistemic logic on distributed knowledge and on common knowledge only concern unconditional expressivity. By contrast, we prove negative results on conditional expressivity in the deontic logic of collective agency. In particular, we consider the full formal language of the deontic logic of collective agency, define a natural class of sublanguages of the full language, and prove that a collective deontic admissibility statement about a particular group is conditionally expressible in a sublanguage from the class if and only if that sublanguage includes a collective deontic admissibility statement about a supergroup of that group. Our negative results on conditional expressivity may serve as a proof of concept for future studies.
The control of shipborne stabilisation platforms is challenging due to the effects of platform dynamic characteristics and unpredictable wave disturbances in operational environments. This paper proposes an integrated control strategy that combines dynamic feedforward and fuzzy gain control. Based on the derived dynamic model of the shipborne stabilisation platform, a dynamic feedforward controller is designed to mitigate the effects of platform dynamics on motion accuracy. In the fuzzy gain control design, scaling modules are proposed to enhance the fuzzy controller’s adaptability to varying operating conditions and unpredictable wave disturbances. The motion of the stabilisation platform is simulated using, as input, the motion of the lower platform calculated from wave fluctuations in marine environments. A prototype experiment is conducted using a large-scale parallel mechanism to simulate wave environments. Simulation and experimental results indicate that the proposed control strategy achieves real-time disturbance compensation without precise mathematical modelling or pre-training, and demonstrates good adaptability.
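As a loose illustration of combining feedforward with fuzzy gain scheduling (not the paper's controller or its scaling modules; the memberships, gains, and the equivalent-mass feedforward term are all invented), a PD loop whose proportional gain is blended from triangular fuzzy rules over the scaled error could look like this:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_gain(error, scale=1.0, gains=(2.0, 5.0, 12.0)):
    """Blend 'small/medium/large error' rules into a single proportional gain."""
    e = min(abs(error) * scale, 1.0)          # input scaling module (illustrative)
    mu = [tri(e, -0.5, 0.0, 0.5), tri(e, 0.0, 0.5, 1.0), tri(e, 0.5, 1.0, 1.5)]
    total = sum(mu)
    return sum(m * g for m, g in zip(mu, gains)) / total if total else gains[0]

def control(ref, ref_acc, pos, vel, kd=1.0, m_eq=2.0):
    error = ref - pos
    kp = fuzzy_gain(error)                    # fuzzy-scheduled proportional gain
    feedforward = m_eq * ref_acc              # inverse-dynamics style feedforward term
    return feedforward + kp * error - kd * vel

print(control(ref=0.1, ref_acc=0.5, pos=0.0, vel=0.02))
```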
Climate change poses an existential threat, necessitating effective climate policies to enact impactful change. Decisions in this domain are highly complex, involving conflicting entities and evidence. In recent decades, policymakers have increasingly used simulations and computational methods to guide some of their decisions. Integrated Assessment Models (IAMs) are one such method, combining social, economic, and environmental simulations to forecast potential policy effects. For example, the UN uses outputs of IAMs for its recent Intergovernmental Panel on Climate Change (IPCC) reports. Traditionally, IAMs have been solved using recursive equation solvers, but these have several shortcomings, e.g. struggling with decision making under uncertainty. Recent preliminary work using Reinforcement Learning (RL) as an alternative to traditional solvers shows promising results for decision making in uncertain and noisy scenarios. We extend this work by introducing multiple interacting RL agents as a preliminary analysis of modelling the complex interplay of socio-interactions between various stakeholders or nations that drives much of the current climate crisis. Our findings show that cooperative agents in this framework can consistently chart pathways towards more desirable futures in terms of reduced carbon emissions and improved economy. However, upon introducing competition between agents, for instance by using opposing reward functions, desirable climate futures are rarely reached. Modelling competition is key to increased realism in these simulations; we therefore employ policy interpretation, visualizing which states lead to more uncertain behavior, to understand algorithm failure. Finally, we highlight the current limitations and avenues for further work to ensure future technology uptake for policy derivation.
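A toy version of the cooperative-versus-competitive comparison (not the paper's IAM environment; the abatement actions, payoffs, and independent stateless Q-learning setup are invented stand-ins) shows how opposing reward structures push agents away from high-abatement outcomes:

```python
import random

ACTIONS = [0.0, 0.5, 1.0]        # abatement effort levels (illustrative)

def reward(effort_i, effort_j, cooperative=True):
    climate_benefit = 2.0 * (effort_i + effort_j)    # shared benefit of abatement
    if cooperative:
        return climate_benefit - 1.5 * (effort_i + effort_j)   # shared cost too
    return climate_benefit - 3.0 * effort_i          # competition: private cost dominates

def train(cooperative=True, episodes=5000, eps=0.1, alpha=0.1):
    # Independent, stateless Q-learning (bandit-style update) for two agents.
    q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]
    for _ in range(episodes):
        acts = [random.choice(ACTIONS) if random.random() < eps
                else max(q[i], key=q[i].get) for i in range(2)]
        for i in range(2):
            r = reward(acts[i], acts[1 - i], cooperative)
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])
        # Repeated one-shot game, so there is no bootstrapped next-state term.
    return [max(q[i], key=q[i].get) for i in range(2)]

print("cooperative abatement levels:", train(cooperative=True))
print("competitive abatement levels:", train(cooperative=False))
```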
The development of intelligent control-oriented solutions for building energy systems is a promising research field. The development of effective systems relies on large data sets that are seldom available, or on simulation environments, for either the training or execution phases. The creation of simulation environments based on thermal models is a challenging task, requiring the use of third-party solutions and high levels of expertise in the energy engineering field, which poses significant restrictions on the development of control-oriented research.
In this work, a training workbench is presented, integrating a lightweight lumped capacitance model with proven accuracy to represent the thermal dynamics of buildings, engineering models for energy systems in buildings, and user behavior models into an overall building energy performance forecasting model. It is developed in such a way that it can be easily integrated into control-oriented applications, with no requirement to use complex third-party tools.
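The lumped capacitance approach mentioned above can be illustrated with a single-zone RC model; the sketch below uses invented parameter values and a simple explicit Euler step, not the workbench's calibrated model:

```python
def simulate_zone(t_out, q_heat, q_gains, t0=20.0, R=0.005, C=2.0e7, dt=3600.0):
    """Single-zone RC model: R in K/W, C in J/K, dt in s; inputs are equal-length lists."""
    t_in = t0
    trajectory = []
    for to, qh, qg in zip(t_out, q_heat, q_gains):
        # C * dT/dt = (T_out - T_in)/R + Q_heat + Q_gains   (explicit Euler step)
        t_in += dt / C * ((to - t_in) / R + qh + qg)
        trajectory.append(t_in)
    return trajectory

hours = 24
t_out = [5.0] * hours                                        # constant 5 °C outdoors
q_heat = [2000.0 if h < 6 else 0.0 for h in range(hours)]    # heating in the first 6 h only
q_gains = [500.0] * hours                                    # occupants and equipment
print([round(t, 1) for t in simulate_zone(t_out, q_heat, q_gains)])
```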
In this paper, we compare four different semantics for disjunction in Answer Set Programming that, unlike stable models, do not adhere to the principle of model minimality. Two of these approaches, Cabalar and Muñiz’ Justified Models and Doherty and Szalas’ Strongly Supported Models, directly provide an alternative non-minimal semantics for disjunction. The other two, Aguado et al.’s Forks and Shen and Eiter’s Determining Inference (DI) semantics, actually introduce a new disjunction connective, but are compared here as if they constituted new semantics for the standard disjunction operator. We are able to prove that three of these approaches (Forks, Justified Models and a reasonable relaxation of the DI-semantics) actually coincide, constituting a single common approach under different definitions. Moreover, this common semantics always provides a superset of the stable models of a programme (in fact, modulo any context) and is strictly stronger than the fourth approach (Strongly Supported Models), which actually treats disjunctions as in classical logic.
This paper continues an established line of research on the relations between argumentation theory, particularly assumption-based argumentation, and different kinds of logic programs. In particular, we extend known results of Bondarenko, Dung, Kowalski and Toni, and of Caminada and Schulz, by showing that assumption-based argumentation can represent not only normal logic programs, but also disjunctive logic programs under the stable model semantics. For this, we consider some inference rules for disjunction that the core logic of the argumentation frameworks should respect, and show the correspondence to the handling of disjunctions in the heads of the logic programs’ rules.
Answer Set Programming (ASP) provides a powerful declarative paradigm for knowledge representation and reasoning. Recently, counting answer sets has emerged as an important computational problem with applications in probabilistic reasoning, network reliability analysis, and other domains. This has motivated significant research into designing efficient ASP counters. While substantial progress has been made for normal logic programs, the development of practical counters for disjunctive logic programs remains challenging. We present $\mathsf{sharpASP}$-$\mathcal{SR}$, a novel framework for counting answer sets of disjunctive logic programs based on subtractive reduction to projected propositional model counting. Our approach introduces an alternative characterization of answer sets that enables efficient reduction while ensuring the intermediate representations remain polynomial in size. This allows $\mathsf{sharpASP}$-$\mathcal{SR}$ to leverage recent advances in projected model counting technology. Through extensive experimental evaluation on diverse benchmarks, we demonstrate that $\mathsf{sharpASP}$-$\mathcal{SR}$ significantly outperforms existing counters on instances with large answer set counts. Building on these results, we develop a hybrid counting approach that combines enumeration techniques with $\mathsf{sharpASP}$-$\mathcal{SR}$ to achieve state-of-the-art performance across the full spectrum of disjunctive programs. The extended version of the paper is available at: https://arxiv.org/abs/2507.11655.
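To clarify what is being counted, the sketch below enumerates the answer sets of a tiny disjunctive program directly from the Gelfond-Lifschitz reduct and the minimality check; this brute-force enumeration is exactly what $\mathsf{sharpASP}$-$\mathcal{SR}$ avoids by reducing the problem to projected propositional model counting (the example program is invented):

```python
from itertools import chain, combinations

# A rule is (head, pos_body, neg_body), each a frozenset of atom names.
PROGRAM = [
    (frozenset({"a", "b"}), frozenset(), frozenset()),        # a | b.
    (frozenset({"c"}), frozenset({"a"}), frozenset({"b"})),   # c :- a, not b.
]
ATOMS = sorted(set().union(*(h | p | n for h, p, n in PROGRAM)))

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def reduct(program, interp):
    """Gelfond-Lifschitz reduct: drop rules blocked by 'not', strip negative bodies."""
    return [(h, p) for h, p, n in program if not (n & interp)]

def is_model(positive_program, interp):
    # A positive disjunctive rule is satisfied if its body fails or its head intersects interp.
    return all(h & interp or not (p <= interp) for h, p in positive_program)

def is_answer_set(program, interp):
    red = reduct(program, interp)
    if not is_model(red, interp):
        return False
    # Minimality: no proper subset of interp is also a model of the reduct.
    return not any(is_model(red, frozenset(sub)) and frozenset(sub) != interp
                   for sub in powerset(interp))

answer_sets = [frozenset(s) for s in powerset(ATOMS)
               if is_answer_set(PROGRAM, frozenset(s))]
print(len(answer_sets), [sorted(s) for s in answer_sets])   # 2 answer sets: {a, c} and {b}
```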
VR sketching tools have matured to a practical level, enabling use across various 3D design disciplines. Studies into VR sketching in design report beneficial affordances but are based on brief testing of tools in simulated tasks. Consequently, there is a knowledge deficit in understanding how to effectively integrate VR sketching into design projects. We address this gap with a case study on the sustained use of VR sketching in 10 automotive concept design projects over 10 months. In analysing designers’ logbooks, which captured design development, and post-study reflections, we show how the affordances of VR sketching outlined in literature manifest in practice. Specifically, we show how and when designers can exploit the precedence of 3D geometry embodied in VR sketches to advance the design process in terms of several dimensions of design fidelity. We highlight where process advantages are realised through (1) increased spatial fidelity, reducing the time required to iterate 2D sketches, (2) operational fidelity supporting dynamic testing of concept functionality via animation and (3) environmental fidelity supporting contextualising components and storytelling. As such, our findings highlight how and when practitioners can realise the comparative benefits of VR sketching alongside traditional sketching and 3D modelling during the concept design process.
Can we quantify over absolutely every set? Absolutists typically affirm, while relativists typically deny, the possibility of unrestricted quantification (in set theory). In the first part of this article, I develop a novel and intermediate philosophical position in the absolutism versus relativism debate in set theory. In a nutshell, the idea is that problematic sentences related to paradoxes cannot be interpreted with unrestricted quantifier domains, while prima facie absolutist sentences (e.g., “no set is contained in the empty set”) are unproblematic in this respect and can be interpreted over a domain containing all sets. In the second part of the paper, I develop a semantic theory that can implement the intermediate position. The resulting framework allows us to distinguish between inherently absolutist and inherently relativist sentences of the language of set theory.