Allison Hoffman (University of Pennsylvania Carey Law), an expert on health care regulation, focuses on tectonic changes to health care in recent decades. She offers a bracing account of these shifts, arguing that American doctors may have overreached in their efforts at influencing health care regulation. In so doing, physicians created profit pools that corporate interests proved all too adept at capturing, leaving doctors with lower professional status than they might have otherwise enjoyed. Hoffman suggests that lawyers, and legal reformers more generally, might learn from physicians’ cautionary tale of protectionism and profit.
Philip G. Peters, Jr. (University of Missouri Law) examines whether nurse practitioners (NPs) and physician assistants (PAs) offer a promising template for limited license legal professionals. He interrogates the rise of these professions in the medical field, asking, among other things: Do they deliver quality services despite training that is significantly shorter and less expensive than the training of physicians? Do they reduce consumer costs? And do they improve access to care for underserved populations? The chapter also outlines the strategic factors underlying the remarkable success that the NP and PA professions have had, at least until recently, in statehouses across the country and then notes the arguments being made now by physicians against freeing NPs and PAs from all physician oversight. The chapter ends by identifying key lessons from this history for those seeking to create new categories of limited license legal professionals.
Genevieve Lakier (University of Chicago Law) examines Upsolve v. James, where a district court enjoined the application of New York state’s unauthorized practice of law statutes to the Justice Advocates that the nonprofit organization Upsolve planned to train to help low-income New Yorkers file for bankruptcy. The opinion represents a clear victory for the access-to-justice movement. But it also represents a potentially significant change in how courts understand the First Amendment to apply in unauthorized-practice-of-law cases. Although the decision may be overturned on appeal, the logic of the opinion makes clear the promise that what critics have sometimes described as a “Lochnerized” First Amendment holds out to access-to-justice advocates, as well as some of its perils. In this chapter, Lakier explains why the decision is significant, embeds it within a broader story of doctrinal transformation, and spells out some of the benefits and costs of using a Lochner-like First Amendment to promote access to justice.
This chapter establishes the foundation for network machine learning. We begin with network fundamentals: adjacency matrices, edge directionality, node loops, and edge weights. We then explore node-specific properties such as degree and path length, followed by network-wide metrics including density, clustering coefficients, and average path lengths. The chapter progresses to advanced matrix representations, notably degree matrices and various Laplacian forms, which are crucial for spectral analysis methods. We examine subnetworks and connected components, tools for focusing on relevant network structures. The latter half of the chapter delves into preprocessing techniques. We cover node pruning methods to manage outliers and low-degree nodes. Edge regularization techniques, including thresholding and sparsification, address issues in weighted and dense networks. Finally, we explore edge-weight rescaling methods such as z-score standardization and ranking-based approaches. Throughout, we emphasize practical applications, illustrating concepts with examples and code snippets. These preprocessing steps are vital for addressing noise, sparsity, and computational challenges in network data. By mastering these concepts and techniques, readers will be well-equipped to prepare network data for sophisticated machine learning tasks, setting the stage for the advanced methods presented in subsequent chapters.
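To make these representations concrete, here is a minimal sketch (not drawn from the chapter’s own code) that builds an adjacency matrix, degree matrix, and Laplacian for a small hypothetical weighted network, then applies simple thresholding and z-score rescaling of edge weights; the toy graph, threshold, and library choices (numpy, networkx) are illustrative assumptions.

```python
# A minimal sketch of the representations and preprocessing steps described above.
import numpy as np
import networkx as nx

# Toy weighted, undirected network with a few hypothetical nodes.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 0.9), (1, 2, 0.4), (0, 2, 0.1), (2, 3, 0.8)])

# Adjacency, degree, and Laplacian matrices.
A = nx.to_numpy_array(G)        # weighted adjacency matrix
D = np.diag(A.sum(axis=1))      # degree matrix
L = D - A                       # combinatorial (unnormalized) Laplacian

# Network-wide summaries.
density = nx.density(G)
clustering = nx.average_clustering(G)

# Edge regularization: drop edges below a weight threshold (sparsification).
tau = 0.3
A_thresh = np.where(A >= tau, A, 0.0)

# Edge-weight rescaling: z-score the surviving nonzero weights.
w = A_thresh[A_thresh > 0]
A_z = np.zeros_like(A_thresh)
A_z[A_thresh > 0] = (w - w.mean()) / w.std()
```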
This chapter explores deep learning methods for network analysis, focusing on graph neural networks (GNNs) and diffusion-based approaches. We introduce GNNs through a drug discovery case study, demonstrating how molecular structures can be analyzed as networks. The chapter covers GNN architecture, training processes, and their ability to learn complex network representations without explicit feature engineering. We then examine diffusion-based methods, which use random walks to develop network embeddings. These techniques are compared and contrasted with earlier spectral approaches, highlighting their capacity to capture nonlinear relationships and local network structures. Practical implementations using frameworks such as PyTorch Geometric illustrate the application of these methods to large-scale network datasets, showcasing their power in addressing complex network problems across various domains.
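As an illustration of the kind of GNN workflow mentioned above, the following is a minimal sketch of a two-layer graph convolutional network on a toy graph; the architecture, data, and hyperparameters are assumptions made for demonstration and are not the chapter’s drug-discovery case study.

```python
# A minimal sketch, assuming PyTorch and PyTorch Geometric are installed.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 4 nodes with 3 features each and a handful of edges.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
data = Data(x=x, edge_index=edge_index)

class TwoLayerGCN(torch.nn.Module):
    """Two graph-convolution layers producing per-node class scores."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = TwoLayerGCN(in_dim=3, hidden_dim=8, num_classes=2)
logits = model(data)  # one row of class scores per node
```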
Sam Issacharoff (NYU Law), a leading law professor and litigator, and Hon. Beverly Martin (NYU Law), formerly of the US Court of Appeals for the Eleventh Circuit, after sketching a bracing account of the origins of the current access-to-justice crisis, ask why changing legal services regulation won’t suffice to solve it. Focusing on debt collection lawsuits – currently the modal case in the entire American civil legal system – they show how much of the current crisis stems from adversarial asymmetries resulting from new species of institutional litigants that leverage scale economies and potent new technologies to assembly-line cases through the legal system. They outline a number of potential solutions to better contend with the stunning scale of the current access challenges.
This chapter explores advanced applications of network machine learning for multiple networks. We introduce anomaly detection in time series of networks, identifying significant structural changes over time. The chapter then focuses on signal subnetwork estimation for network classification tasks. We present both incoherent and coherent approaches, with incoherent methods identifying edges that best differentiate between network classes, and coherent methods leveraging additional network structure to improve classification accuracy. Practical applications, such as classifying brain networks, are emphasized throughout. These techniques apply to collections of networks, providing a toolkit for analyzing and classifying complex, multinetwork datasets. By integrating previous concepts with new methodologies, we offer a framework for extracting insights and making predictions from diverse network structures with associated attributes.
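The incoherent approach can be sketched in a few lines: score every edge by how strongly it differs between the two classes of networks and keep the top-ranked edges as the estimated signal subnetwork. The simulation below is a hypothetical toy example (random binary networks with one planted signal edge), not data or code from the chapter.

```python
# A minimal sketch of incoherent signal subnetwork estimation.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_per_class = 10, 20

# Two hypothetical classes of binary networks; class 1 has a stronger edge (2, 3).
A0 = rng.binomial(1, 0.2, size=(n_per_class, n_nodes, n_nodes))
A1 = rng.binomial(1, 0.2, size=(n_per_class, n_nodes, n_nodes))
A1[:, 2, 3] = rng.binomial(1, 0.9, size=n_per_class)

# Edge-wise statistic: absolute difference in mean edge presence between classes.
stat = np.abs(A1.mean(axis=0) - A0.mean(axis=0))

# Keep the k edges with the largest statistic (upper triangle only).
k = 5
iu = np.triu_indices(n_nodes, k=1)
order = np.argsort(stat[iu])[::-1][:k]
signal_edges = list(zip(iu[0][order], iu[1][order]))
print(signal_edges)  # candidate signal subnetwork edges
```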
The legal services marketplace sits on the cusp of a revolution. For nearly a century, American lawyers have enjoyed a monopoly over the provision of legal services. Sweeping unauthorized practice of law (UPL) laws have prohibited (and in some cases, criminalized) the practice of law by anyone other than a licensed attorney – and these rules have further mandated that lawyers work as solo practitioners or in lawyer-owned law firms.1 This one-two punch has meant that only lawyers can provide legal advice and that even lawyers can’t practice law in nonlawyer-owned entities.2
Cable-driven snake-like robots have been widely applied in various fields. However, some of these robots may suffer from poor operational precision and payload capacity. Moreover, an excessive number of motors increases the complexity of motion control, and the configuration of driving cables at the distal joints is severely limited by the drive system. Therefore, this study introduces a driving model of cable-driven rolling joints through coordinate-based analysis and proposes a novel two-degree-of-freedom planar synergy drive system to enable variable curvature at the distal joints. A prototype was designed based on the proposed system and evaluated in a precision experiment. Two quantifiable parameters – the distal joint precision and the driving model precision – were employed as indicators to quantify the performance of the prototype and demonstrate the advantages of the proposed system. The maximum mean absolute errors of the two indicators were 1.52% and 1.88$^{\circ }$, respectively, and the maximum root mean square errors were 1.66% and 2.02$^{\circ }$, respectively. The experimental results demonstrate the feasibility of our approach, which offers increased flexibility in the cable configuration at the distal joints.
Visual Simultaneous Localization and Mapping (vSLAM) is fundamentally limited by the static world assumption, which makes its application in dynamic environments challenging. This paper proposes a robust vSLAM system, RFN-SLAM, which is based on ORB-SLAM3 and does not require preset dynamic labels or weighted features to process dynamic scenes. In the feature extraction stage, an enhanced version of the efficient binary BAD descriptor is used to improve the accuracy of static feature point matching. Through an improved RT-DETR object detection network and the FAST-SAM instance segmentation network, RFN-SLAM obtains semantic information and uses a novel dynamic box detection algorithm to identify and eliminate the feature points of dynamic objects. When optimizing the pose, the static feature points are weighted according to the dynamic information, which significantly reduces mismatches and improves positioning accuracy. Meanwhile, 3D rendering with a neural radiance field is used to remove dynamic objects and reconstruct the static scene. Experiments were conducted on the TUM RGB-D dataset, the Bonn dataset, and a self-collected dataset. The results show that, in terms of positioning accuracy, RFN-SLAM significantly outperforms ORB-SLAM3 in dynamic environments. It also achieves more accurate positioning than other advanced dynamic SLAM methods and successfully realizes accurate 3D reconstruction of static scenes. In addition, the real-time performance of RFN-SLAM is effectively preserved without sacrificing accuracy.
Robotic rehabilitation requires personalized, versatile, and efficient devices to accommodate the diverse needs of patients recovering from motor impairments. In this paper, we focus on hand rehabilitation and analyse a tendon-driven, modular, and adaptable robotic glove actuated by twisted string actuators (TSAs). The proposed solution exploits flexibility in design, allowing customization based on individual patient needs while ensuring effective assistance in hand movements.
Specifically, in this paper we investigate the kinematic relationships between tendon-driven actuators and hand motion. We provide a detailed implementation of multiple functional modules within the glove, designed to accommodate various rehabilitation exercises and adapt to different degrees of motor impairment. In addition, we present experimental tests involving a user to evaluate the system’s performance, usability, and effectiveness in facilitating hand movement. The results provide insights into the potential of TSA-driven robotic gloves for enhancing rehabilitation outcomes through a combination of precise actuation and adaptability to the user’s needs.
Logical inferentialists have expected identity to be susceptible of harmonious introduction and elimination rules in natural deduction. While Read and Klev have proposed rules they argue are harmonious, Griffiths and Ahmed have criticized these rules as insufficient for harmony. These critics, moreover, suggest that no harmonious rules are forthcoming. We argue that these critics are correct: the logical inferentialist should abandon hope for harmonious rules for identity. The paper analyzes the three major uses of identity in presumed-logical languages: variable coordination, definitional substitution, and co-reference. We show that identity qua variable coordination is not logical by providing a harmonious natural-deduction system that captures this use through the quantifiers. We then argue that identity qua definitional substitution or co-reference faces a dilemma: either its rules are harmonious but they obscure its actual use in inference, or its rules are not harmonious but they make its actual use in inference plain. We conclude that the inferentialist may have harmonious rules for identity only by disrespecting its inferential use.
Upper limb motor dysfunction significantly impacts daily activities and quality of life for individuals with stroke. Existing assistive robots often struggle to balance portability, ease of use, and motion assistance. This research presents WELiBot, a novel wearable end-effector-type upper limb assistive robot, designed with a 4R-5R parallel mechanism and an arc-shaped guide rail to provide controlled assistance in lifting, reaching, and circumferential motions. The study introduces the design concept of the robot, focusing on its functional requirements and mechanical structure. The kinematic and static characteristics of WELiBot were analyzed to evaluate its feasibility and effectiveness. Based on this design, a prototype with a 1/4 arc-shaped guide rail was fabricated to test motion feasibility and assistance effects. To assess its performance, electromyography experiments were conducted with four healthy participants. The results showed a significant reduction in biceps brachii muscle activity, confirming the robot’s ability to reduce user effort. Future work will focus on attaching the guide rail to the body for improved usability and refining the control strategy to enhance motion assistance and adaptability in daily life support applications.
As social media continues to grow, understanding the impact of storytelling on stakeholder engagement becomes increasingly important for policymakers and organizations who wish to influence policymaking. While prior research has explored narrative strategies in advertising and branding, researchers have paid scant attention to the specific influence of stories on social media stakeholder engagement. This study addresses this gap by employing Narrative Transportation Theory (NTT) and leveraging Natural Language Processing (NLP) to analyze the intricate textual data generated by social media platforms. The analysis of 85,075 Facebook publications from leading Canadian manufacturing companies, using Spearman’s rank correlation coefficient, underscores that individual storytelling components—character, sequence of events, and setting—along with the composite narrative structure significantly enhance stakeholder engagement. This research contributes to a deeper understanding of storytelling dynamics in social media, emphasizing the importance of crafting compelling stories to drive meaningful stakeholder engagement in the digital realm. The results of our research can prove useful for those who wish to influence policymakers or for policymakers who want to promote new policies.
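For readers unfamiliar with the statistical step, the following is a minimal sketch of how Spearman rank correlations between storytelling components and engagement could be computed; the column names, scores, and engagement values are entirely hypothetical and are not the study’s data.

```python
# A minimal sketch of component-by-component Spearman correlations.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-post scores produced by an NLP pipeline, plus engagement.
posts = pd.DataFrame({
    "character": [0.2, 0.8, 0.5, 0.9, 0.1],
    "sequence_of_events": [0.3, 0.7, 0.6, 0.8, 0.2],
    "setting": [0.1, 0.6, 0.4, 0.9, 0.3],
    "engagement": [12, 95, 40, 120, 8],  # e.g. likes + comments + shares
})

for component in ["character", "sequence_of_events", "setting"]:
    rho, p_value = spearmanr(posts[component], posts["engagement"])
    print(f"{component}: rho={rho:.2f}, p={p_value:.3f}")
```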
The grounding bottleneck poses one of the key challenges that hinder the widespread adoption of answer set programming in industry. Hybrid grounding is a step toward alleviating the bottleneck by combining the strengths of standard bottom-up grounding with recently proposed techniques in which rule bodies are decoupled during grounding. However, it has remained unclear when hybrid grounding should use body-decoupled grounding (BDG) and when to use standard bottom-up grounding. In this paper, we address this issue by developing automated hybrid grounding: we introduce a splitting algorithm based on data-structural heuristics that detects when to use BDG and when standard grounding is beneficial. We base our heuristics on the structure of rules and on an estimation procedure that incorporates the data of the instance. The experiments conducted on our prototypical implementation demonstrate promising results, showing an improvement on hard-to-ground scenarios, whereas on hard-to-solve instances we approach state-of-the-art performance.
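To illustrate the flavor of such a splitting decision, the sketch below estimates the worst-case size of standard bottom-up grounding for a rule and routes the rule to BDG when the estimate exceeds a threshold; the estimation formula, threshold, and function names are hypothetical and are not the paper’s actual heuristics.

```python
# A rough, purely illustrative sketch of a grounding-size-based splitting heuristic.
def estimate_standard_grounding(num_body_variables: int, domain_size: int) -> int:
    # Worst case: every body variable ranges over the whole domain.
    return domain_size ** num_body_variables

def choose_grounder(num_body_variables: int, domain_size: int,
                    threshold: int = 10**7) -> str:
    if estimate_standard_grounding(num_body_variables, domain_size) > threshold:
        return "body-decoupled grounding"
    return "standard bottom-up grounding"

# A rule with 4 body variables over a domain of 1,000 constants could ground
# into up to 10^12 instances, so the heuristic routes it to BDG.
print(choose_grounder(num_body_variables=4, domain_size=1000))
```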
Accurate prediction of nondispatchable renewable energy sources is essential for grid stability and price prediction. Regional power supply forecasts are usually produced indirectly through a bottom-up aggregation of plant-level forecasts, incorporate lagged power values, and do not exploit the potential of spatially resolved data. This study presents a comprehensive methodology for predicting solar and wind power production at a country scale in France using machine learning models trained with spatially explicit weather data combined with spatial information about production sites’ capacity. A dataset is built spanning 2012 to 2023, using daily power production data from Réseau de Transport d’Electricité (the national grid operator) as the target variable, with daily weather data from ECMWF Re-Analysis v5, production site capacity and location, and electricity prices as input features. Three modeling approaches are explored to handle spatially resolved weather data: spatial averaging over the country, dimension reduction through principal component analysis, and a computer vision architecture to exploit complex spatial relationships. The study benchmarks state-of-the-art machine learning models as well as hyperparameter tuning approaches based on cross-validation methods on daily power production data. Results indicate that cross-validation tailored to time series is best suited to reach low error. We found that neural networks tend to outperform traditional tree-based models, which face challenges in extrapolation due to the increasing renewable capacity over time. Model performance ranges from 4% to 10% in normalized root-mean-squared error for the midterm horizon, achieving error metrics similar to those of local models established at the single-plant level, highlighting the potential of these methods for regional power supply forecasting.
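As a small illustration of the cross-validation point, the sketch below evaluates a regressor with time-ordered splits and reports a normalized RMSE on synthetic data; the features, model choice, and numbers are assumptions and do not reproduce the study’s setup.

```python
# A minimal sketch of time-series-aware cross-validation with normalized RMSE.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_days = 365
X = rng.normal(size=(n_days, 5))                  # e.g. daily weather features
y = 50 + 10 * X[:, 0] + rng.normal(size=n_days)   # e.g. daily power production

nrmse_scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmse = mean_squared_error(y[test_idx], pred) ** 0.5
    nrmse_scores.append(rmse / y[test_idx].mean())  # normalized RMSE

print(f"mean NRMSE over folds: {np.mean(nrmse_scores):.3f}")
```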
Despite significant advances in Building Information Modeling (BIM) and increased adoption, numerous challenges remain. Discipline-specific BIM software tools with file storage have unresolved interoperability issues and do not capture or express interdisciplinary design intent. This hobbles machines’ ability to process design information. The lack of suitable data representation hinders the application of machine learning and other data-centric applications in building design. We propose Building Information Graphs (BIGs) as an alternative modeling method. In BIGs, discipline-specific design models are compiled as subgraphs in which nodes and edges model objects and their relationships. Additional nodes and edges in a meta-graph link the building objects across subgraphs. Capturing both intradisciplinary and interdisciplinary relationships, BIGs provide a dimension of contextual data for capturing design intent and constraints. BIGs are designed for computation and applications. The explicit relationships enable advanced graph functionalities, such as across-domain change propagation and object-level version control. BIGs preserve multimodal design data (geometry, attributes, and topology) in a graph structure that can be embedded into high-dimensional vectors, in which learning algorithms can detect statistical patterns and support a wide range of downstream tasks, such as link prediction and graph generation. In this position article, we highlight three key challenges: encapsulating and formalizing object relationships, particularly design intent and constraints; designing graph learning techniques; and developing innovative domain applications that leverage graph structures and learning. BIGs represent a paradigm shift in design technologies that bridge artificial intelligence and building design to enable intelligent and generative design tools for architects, engineers, and contractors.
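To make the subgraph and meta-graph idea tangible, here is a minimal hypothetical sketch in which two discipline-specific subgraphs are linked by a meta-edge that supports across-domain change propagation; all object names, relationship labels, and the choice of networkx are illustrative assumptions rather than a proposed BIG implementation.

```python
# A minimal sketch of discipline subgraphs linked by meta-graph edges.
import networkx as nx

big = nx.MultiDiGraph()

# Architectural subgraph: a wall hosts a door.
big.add_node("arch:wall_01", discipline="architecture", type="Wall")
big.add_node("arch:door_01", discipline="architecture", type="Door")
big.add_edge("arch:wall_01", "arch:door_01", relation="hosts")

# Structural subgraph: a load-bearing wall rests on a footing.
big.add_node("struct:wall_01", discipline="structure", type="BearingWall")
big.add_node("struct:footing_01", discipline="structure", type="Footing")
big.add_edge("struct:wall_01", "struct:footing_01", relation="supported_by")

# Meta-graph edge: the architectural and structural walls represent the same
# physical object, so a change to one can be propagated to the other.
big.add_edge("arch:wall_01", "struct:wall_01", relation="corresponds_to")

# Example of across-domain change propagation via the meta-edge.
affected = [v for _, v, d in big.out_edges("arch:wall_01", data=True)
            if d["relation"] == "corresponds_to"]
print(affected)  # ['struct:wall_01']
```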