In this article, we study the complexity of weighted team definability for logics with team semantics. This problem is a natural analog of one of the most studied problems in parameterized complexity, the notion of weighted Fagin-definability, which is formulated in terms of satisfaction of first-order formulas with free relation variables. We focus on the parameterized complexity of weighted team definability for a fixed formula $\varphi$ of central team-based logics. Given a first-order structure $\mathcal{A}$ and the parameter value $k\in \mathbb N$ as input, the question is to determine whether $\mathcal{A},T\models \varphi$ for some team $T$ of size $k$. We show several results on the complexity of this problem for dependence, independence, and inclusion logic formulas. Moreover, we also relate the complexity of weighted team definability to the complexity classes in the well-known W-hierarchy as well as paraNP.
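For a fixed team-based formula $\varphi$, the decision problem described above can be summarized in the standard parameterized format (the layout below is only a restatement of the abstract's definition, not the article's own notation):
$$
\begin{array}{ll}
\textit{Input:} & \text{a first-order structure } \mathcal{A} \text{ and a value } k \in \mathbb{N},\\[2pt]
\textit{Parameter:} & k,\\[2pt]
\textit{Question:} & \text{is there a team } T \text{ with } |T| = k \text{ such that } \mathcal{A}, T \models \varphi?
\end{array}
$$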
This study aimed to identify the major topics of discussion under the #sustainability hashtag on Twitter (now known as “X”) and to understand user engagement with them. The sharp increase in social media usage, combined with a rise in climate anomalies in recent years, makes sustainability on social media a critical topic. Python was used to gather Twitter posts published between January 1, 2023, and March 1, 2023. Keyword-frequency analysis and Latent Dirichlet Allocation (LDA) were used to identify significant topics of discussion under the #sustainability hashtag, and user engagement metrics were analyzed and visualized with histograms and scatter plots. The LDA analysis was conducted with 7 topics, a number chosen after trials with different topic counts were compared to determine which best fit the dataset. The frequency analysis provided a basic overview of the discourse surrounding #sustainability, with topics including technology, business and industry, environmental awareness, and discussion of the future. The LDA model provided a more comprehensive view, including additional topics such as Environmental, Social, and Governance (ESG) and infrastructure, investing, collaboration, and education. These findings have implications for researchers, businesses, organizations, and politicians seeking to align their strategies and actions with the major topics surrounding sustainability on Twitter in order to have a greater impact on their audience. Researchers can use the results of this study to guide further research on the topic or to contextualize their own work within the existing sustainability literature.
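As a rough illustration of the analysis pipeline described above, a minimal keyword-frequency and LDA sketch in Python (using pandas and scikit-learn) could look as follows; the file name, column name, vocabulary size, and preprocessing choices are placeholders, not the study's actual settings.

# Minimal sketch of a keyword-frequency + LDA pipeline; the input file,
# column name, and preprocessing settings are illustrative placeholders.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = pd.read_csv("sustainability_tweets.csv")["text"]  # hypothetical dataset

# Keyword-frequency analysis: term counts over the whole corpus.
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(tweets)
terms = vectorizer.get_feature_names_out()
counts = doc_term.sum(axis=0).A1
top_terms = sorted(zip(terms, counts), key=lambda tc: tc[1], reverse=True)[:20]
print(top_terms)

# LDA with 7 topics, the number the study settled on after comparing topic counts.
lda = LatentDirichletAllocation(n_components=7, random_state=0)
lda.fit(doc_term)
for i, weights in enumerate(lda.components_):
    top_words = [terms[j] for j in weights.argsort()[::-1][:10]]
    print(f"Topic {i}: {', '.join(top_words)}")

Engagement metrics (e.g., likes and retweets) from the same dataframe could then be summarized with histograms and scatter plots, as the study did.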
Differentially flat under-actuated robots are characterized by more degrees of freedom (DOF) than actuators: this makes it possible to design lightweight, inexpensive robots with high dexterity. The main issue with such robots is the control of the passive joint, which requires accurate dynamic modeling of the robot.
Friction is usually neglected to simplify the models, especially in the case of low-speed trajectories. However, this simplification leads to oscillations of the end-effector about the final position, which are incompatible with fast and accurate motions.
This paper focuses on planar $n$-DOF serial robotic arms with $n-1$ actuated rotational joints plus one final passive rotational joint with stiffness and friction properties. These robots, if properly balanced, are differentially flat. When the non-actuated joint can be considered frictionless, differentially flat robots can be controlled in open loop by calculating the motor torques demanded by point-to-point motions. This paper extends the open-loop control to robots whose passive joint has viscous friction by adopting a Laplace transform method. The method can be applied by exploiting the particular structure of the equations of motion of differentially flat under-actuated robots, in which the last equations are linear. Analytical expressions of the motor torques are obtained. The work is complemented by an experimental validation on a $2$-DOF under-actuated robot and by numerical simulations of the $2$- and $4$-DOF robots showing the suppression of unwanted oscillations.
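To illustrate why the linearity of the last equation enables a Laplace-domain treatment (the symbols below are generic and chosen for illustration, not the paper's notation): if the passive joint has inertia $J$, viscous friction coefficient $c$, and stiffness $K$, and the motion of the actuated joints enters as a forcing term $f(t)$, the last equation of motion takes the linear form
$$ J\,\ddot q_n(t) + c\,\dot q_n(t) + K\,q_n(t) = f(t), $$
which, for zero initial conditions, the Laplace transform turns into the algebraic relation
$$ Q_n(s) = \frac{F(s)}{J s^{2} + c s + K}. $$
It is this algebraic relation between the passive coordinate and the forcing from the actuated joints that makes an analytical, open-loop treatment of the passive dynamics possible.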
In mathematics, it simply is not true that 'you can't prove a negative'. Many revolutionary impossibility theorems reveal profound properties of logic, computation, fairness and the universe, and form the mathematical background of new technologies and Nobel prizes. But to fully appreciate these theorems and their impact on mathematics and beyond, you must understand their proofs. This book is the first to present these proofs for a broad, lay audience. It fully develops the simplest rigorous proofs found in the literature, reworked to contain less jargon and notation, and more background, intuition, examples, explanations, and exercises. Amazingly, all of the proofs in this book involve only arithmetic and basic logic – and are elementary, starting only from first principles and definitions. Very little background knowledge is required, and no specialized mathematical training – all you need is the discipline to follow logical arguments and a pen in your hand.
Introducing Stone–Priestley duality theory and its applications to logic and theoretical computer science, this book equips graduate students and researchers with the theoretical background necessary for reading and understanding current research in the area. After giving a thorough introduction to the algebraic, topological, logical, and categorical aspects of the theory, the book covers two advanced applications in computer science, namely in domain theory and automata theory. These topics are at the forefront of active research seeking to unify semantic methods with more algorithmic topics in finite model theory. Frequent exercises punctuate the text, with hints and references provided.
For graphs $G$ and $H$, the Ramsey number $r(G,H)$ is the smallest positive integer $N$ such that any red/blue edge colouring of the complete graph $K_N$ contains either a red $G$ or a blue $H$. A book $B_n$ is a graph consisting of $n$ triangles all sharing a common edge.
Recently, Conlon, Fox, and Wigderson conjectured that for any $0\lt \alpha \lt 1$, the random lower bound $r(B_{\lceil \alpha n\rceil },B_n)\ge (\sqrt{\alpha }+1)^2n+o(n)$ is not tight. In other words, there exists some constant $\beta \gt (\sqrt{\alpha }+1)^2$ such that $r(B_{\lceil \alpha n\rceil },B_n)\ge \beta n$ for all sufficiently large $n$. This conjecture holds for every $\alpha \lt 1/6$ by a result of Nikiforov and Rousseau from 2005, which says that in this range $r(B_{\lceil \alpha n\rceil },B_n)=2n+3$ for all sufficiently large $n$.
We disprove the conjecture of Conlon, Fox, and Wigderson. Indeed, we show that the random lower bound is asymptotically tight for every $1/4\leq \alpha \leq 1$. Moreover, we show that for any $1/6\leq \alpha \le 1/4$ and large $n$, $r(B_{\lceil \alpha n\rceil }, B_n)\le \left (\frac 32+3\alpha \right ) n+o(n)$, where the inequality is asymptotically tight when $\alpha =1/6$ or $1/4$. We also give a lower bound on $r(B_{\lceil \alpha n\rceil }, B_n)$ for $1/6\le \alpha \lt \frac{52-16\sqrt{3}}{121}\approx 0.2007$, showing that the random lower bound is not tight, i.e., the conjecture of Conlon, Fox, and Wigderson holds in this interval.
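The stated endpoints can be checked directly from the formulas above: at $\alpha = 1/4$ the upper bound coincides with the random lower bound, while at $\alpha = 1/6$ it matches the Nikiforov–Rousseau value $2n+3$ asymptotically, with the random lower bound falling strictly below it:
$$
\Big(\sqrt{\tfrac14}+1\Big)^{2}=\tfrac94=\tfrac32+3\cdot\tfrac14,
\qquad
\tfrac32+3\cdot\tfrac16=2,
\qquad
\Big(\sqrt{\tfrac16}+1\Big)^{2}=\tfrac76+\sqrt{\tfrac23}\approx 1.98\lt 2.
$$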
A system experiences random shocks over time, with two critical levels $d_1$ and $d_2$, where $d_{1} \lt d_{2}$. A run of $k$ consecutive shocks with magnitudes between $d_1$ and $d_2$ partially damages the system, causing it to transition to a lower, partially working state. Shocks with magnitudes above $d_2$ have a catastrophic effect, resulting in complete failure. This framework gives rise to a multi-state system with an indeterminate number of states. When the time between successive shocks follows a phase-type distribution, a detailed analysis of the system’s dynamic reliability properties is carried out, covering the lifetime of the system, the time it spends in perfect functioning, and the total time it spends in partially working states.
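A small Monte Carlo sketch of this shock mechanism may help fix ideas; all numerical choices below (the thresholds, $k$, the Erlang inter-arrival times as a simple phase-type example, the lognormal shock magnitudes, and the reset rule for harmless shocks) are illustrative assumptions rather than the paper's model settings.

# Monte Carlo sketch of the run-based shock model described above; every
# distribution and parameter here is an illustrative assumption.
import random

K, D1, D2 = 3, 1.0, 2.5   # k consecutive moderate shocks cause partial damage

def run_system(rng):
    t, consecutive, level = 0.0, 0, 0   # level counts partial-damage transitions
    while True:
        # Erlang(2) inter-arrival time: a simple example of a phase-type distribution.
        t += rng.expovariate(1.0) + rng.expovariate(1.0)
        magnitude = rng.lognormvariate(0.0, 0.6)
        if magnitude >= D2:              # catastrophic shock: complete failure
            return t, level
        if D1 <= magnitude:              # moderate shock between d1 and d2
            consecutive += 1
            if consecutive == K:         # k-th consecutive moderate shock
                level += 1               # drop to the next partially working state
                consecutive = 0
        else:                            # harmless shock resets the consecutive count
            consecutive = 0

rng = random.Random(42)
runs = [run_system(rng) for _ in range(10_000)]
mean_lifetime = sum(t for t, _ in runs) / len(runs)
mean_degradations = sum(lvl for _, lvl in runs) / len(runs)
print(f"mean lifetime ~ {mean_lifetime:.2f}, mean partial-damage transitions ~ {mean_degradations:.2f}")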
Nowadays, public policymakers have opportunities to make data-driven, evidence-based decisions by analyzing the very large volumes of policy-related data that are generated through different channels (e.g., e-services, mobile apps, social media). Machine learning (ML) and artificial intelligence (AI) technologies ease and automate the analysis of large policy-related datasets, which helps policymakers realize a shift toward data-driven decisions. Nevertheless, the deployment and use of AI tools for public policy development is also associated with significant technical, political, and operational challenges. For instance, AI-based policy development solutions must be transparent and explainable to policymakers, while at the same time adhering to the mandates of emerging regulations such as the AI Act of the European Union. This paper introduces some of the main technical, operational, and regulatory compliance challenges of AI-based policymaking. Accordingly, it introduces technological solutions for overcoming them, including: (i) a reference architecture for AI-based policy development, (ii) a virtualized cloud-based tool for the specification and implementation of ML-based data-driven policies, (iii) an ML framework that enables the development of transparent and explainable ML models for policymaking, and (iv) a set of guidelines for using the introduced technical solutions to achieve regulatory compliance. The paper concludes by illustrating the validation and use of the introduced solutions in real-life public policymaking cases for various local governments.
This chapter gives a quick tour of classic material in univariate analytic combinatorics, including rational and meromorphic generating functions, Darboux’s method, the transfer theorems of singularity analysis, and saddle point methods for essential singularities.
This appendix contains a compressed version of standard graduate topics in topology such as chain complexes, homology, cohomology, relative homology, and excision.
This chapter develops methods to compute asymptotics of multivariate Fourier–Laplace integrals in order to derive general saddle point approximations for use in later chapters. Our approach uses contour deformation, differing from common treatments that rely on integration by parts: this requires analyticity rather than just smoothness but is better suited to integration over complex manifolds.
This chapter gives a high-level overview of analytic combinatorics in several variables. Stratified Morse theory reduces the derivation of coefficient asymptotics for a multivariate generating function to the study of asymptotic expansions of local integrals near certain critical points on the generating function’s singular set. Determining exactly which critical points contribute to asymptotic behavior is a key step in the analysis. The asymptotic behavior of each local integral depends on the local geometry of the singular variety, with three special cases treated in later chapters.
This first chapter motivates our detailed study of the behavior of multivariate sequences, and overviews the techniques we derive using the Cauchy Integral Formula, residues, topological arguments, and asymptotic approximations. Basic asymptotic notation and concepts are introduced, including the background necessary to discuss multivariate expansions.