The betweenness centrality of a graph vertex measures how often this vertex is visited on shortest paths between other vertices of the graph. In the analysis of many real-world graphs or networks, the betweenness centrality of a vertex is used as an indicator for its relative importance in the network. In particular, it is among the most popular tools in social network analysis. In recent years, a growing number of real-world networks have been modeled as temporal graphs instead of conventional (static) graphs. A temporal graph has a fixed set of vertices and a finite, discrete set of time steps, and every edge may be present only at some of these time steps. While shortest paths are straightforward to define in static graphs, temporal paths can be considered “optimal” with respect to many different criteria, including length, arrival time, and overall travel time (shortest, foremost, and fastest paths). This leads to different concepts of temporal betweenness centrality, posing new challenges on the algorithmic side. We provide a systematic study of temporal betweenness variants based on various concepts of optimal temporal paths.
Computing the betweenness centrality for vertices in a graph is closely related to counting the number of optimal paths between vertex pairs. While the number of shortest paths in a static graph can easily be computed in polynomial time, we show that counting foremost and fastest paths is computationally intractable (#P-hard); hence, the computation of the corresponding temporal betweenness values is intractable as well. For shortest paths and two selected special cases of foremost paths, we devise polynomial-time algorithms for temporal betweenness computation. Moreover, we explore the distinction between strict (strictly ascending time labels) and non-strict (non-descending time labels) temporal paths. In our experiments with established real-world temporal networks, we demonstrate the practical effectiveness of our algorithms, compare the various betweenness concepts, and derive recommendations on their practical use.
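As a concrete illustration of the counting step, the following minimal sketch counts, for a single source, the strict temporal paths that use a minimum number of edges; this per-pair count is the basic ingredient of betweenness computation. It assumes a hypothetical representation of the temporal graph as directed time-stamped edges (u, v, t) and is a generic state-graph construction, not the paper's algorithm.

```python
from collections import defaultdict, deque

def count_shortest_strict_paths(edges, source):
    """For every reachable vertex v, count the strict temporal paths from
    `source` to v that use a minimum number of edges. Works on states
    (vertex, arrival time): time labels strictly increase along a path,
    so the state graph is a DAG and BFS with Brandes-style path counting
    applies."""
    out = defaultdict(list)              # out[u] -> list of (t, v)
    for u, v, t in edges:
        out[u].append((t, v))

    start = (source, float("-inf"))      # empty path: any edge time is usable
    dist, sigma = {start: 0}, {start: 1}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        u, a = state
        for t, v in out[u]:
            if t <= a:                   # strict paths: labels must increase
                continue
            nxt = (v, t)
            if nxt not in dist:          # first visit fixes the BFS level
                dist[nxt] = dist[state] + 1
                sigma[nxt] = 0
                queue.append(nxt)
            if dist[nxt] == dist[state] + 1:
                sigma[nxt] += sigma[state]

    # Aggregate per vertex: sum path counts over states at the minimal level.
    best, counts = {}, {}
    for (v, _), d in dist.items():
        if v not in best or d < best[v]:
            best[v] = d
    for (v, t), d in dist.items():
        if d == best[v]:
            counts[v] = counts.get(v, 0) + sigma[(v, t)]
    return best, counts
```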
The continuum has been one of the most controversial topics in mathematics since the time of the Greeks. Some mathematicians, such as Euclid and Cantor, held the position that a line is composed of points, while others, like Aristotle, Weyl, and Brouwer, argued that a line is not composed of points but is rather a matrix allowing the continued insertion of points. In spite of this disagreement on the structure of the continuum, they did distinguish the temporal line from the spatial line. In this paper, we argue that there is indeed a difference between the intuition of the spatial continuum and the intuition of the temporal continuum. The primary aspect of the temporal continuum, in contrast with the spatial continuum, is the notion of orientation.
The continuum has usually been modeled mathematically by Cauchy sequences and Dedekind cuts. While in the first model each point can be approximated by rational numbers, in the second one that is not possible constructively. We argue that points on the temporal continuum cannot be approximated by rationals, as a temporal point is a flow that sinks to the past. In our model, the continuum is a collection of constructive Dedekind cuts, and we define two topologies for the temporal continuum: the oriented topology and the ordinary topology. We prove that every total function from the oriented topological space to the ordinary one is continuous.
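For reference, one standard constructive formulation of a Dedekind cut is sketched below; the paper's constructive cuts and their oriented variant may differ in detail. The last axiom, locatedness, is precisely what yields arbitrarily close rational approximations, the property the authors argue fails for temporal points.

```latex
% A cut is a pair (L,U) of sets of rationals satisfying, for all
% rationals q, r:
\begin{align*}
&\exists q\,(q \in L), \quad \exists r\,(r \in U)   &&\text{(inhabited)}\\
&q \in L \iff \exists r \in L\,(q < r)              &&\text{($L$ downward closed, upward open)}\\
&r \in U \iff \exists q \in U\,(r < q)              &&\text{($U$ upward closed, downward open)}\\
&\lnot(q \in L \land q \in U)                       &&\text{(disjoint)}\\
&q < r \implies q \in L \lor r \in U                &&\text{(located)}
\end{align*}
```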
Data-driven learning (DDL) form-focused tasks are a relatively new concept. These tasks involve using concordance lines to teach language in a way that integrates discovery learning, authentic language use, consciousness-raising, and the communicative use of language. Given their novelty, few studies have examined how they affect learners’ engagement. This study therefore investigated whether DDL form-focused tasks influence English as a foreign language (EFL) learners’ task engagement. A total of 114 Iranian EFL learners were randomly divided between comparison and intervention groups in a study that utilized an experimental (comparison group, pretest, and post-test) design within a sequential explanatory mixed-methods design. The comparison group completed 10 non-DDL form-focused tasks, whereas the intervention group completed 10 DDL form-focused tasks. The results of t-tests and repeated-measures ANOVA indicated that incorporating DDL form-focused tasks into English classes enhanced EFL learners’ task engagement in the short run. However, the impact of DDL form-focused tasks on EFL learners’ task engagement was not durable. Moreover, analysis of semi-structured interview data suggested that using DDL-enhanced tasks with a form-focused approach increases EFL learners’ task engagement by triggering their curiosity, improving their autonomy, enhancing their concentration and interest, and facilitating their discovery learning. The present study lends more credence to the application of such tasks. The paper ends with implications for English language teaching and materials development.
The protection number of a vertex $v$ in a tree is the length of the shortest path from $v$ to any leaf contained in the maximal subtree where $v$ is the root. In this paper, we determine the distribution of the maximum protection number of a vertex in simply generated trees, thereby refining a recent result of Devroye, Goh, and Zhao. Two different cases can be observed: if the given family of trees allows vertices of outdegree $1$, then the maximum protection number is on average logarithmic in the tree size, with a discrete double-exponential limiting distribution. If no such vertices are allowed, the maximum protection number is doubly logarithmic in the tree size and concentrated on at most two values. These results are obtained by studying the singular behaviour of the generating functions of trees with bounded protection number. While a general distributional result by Prodinger and Wagner can be used in the first case, we prove a variant of that result in the second case.
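To fix the definition, here is a minimal sketch computing protection numbers (the distance from a vertex to the nearest leaf of its own subtree) for a rooted tree given as a hypothetical child-list mapping; the maximum protection number studied in the paper is then the maximum of these values.

```python
def protection_numbers(children):
    """Protection number of each vertex: 0 for a leaf, otherwise one more
    than the smallest protection number among the children."""
    memo = {}
    def p(v):
        if v not in memo:
            kids = children.get(v, [])
            memo[v] = 0 if not kids else 1 + min(p(c) for c in kids)
        return memo[v]
    for v in children:
        p(v)
    return memo

# Example: the path 0 - 1 - 2 rooted at 0 has protection numbers 2, 1, 0,
# so its maximum protection number is 2.
print(protection_numbers({0: [1], 1: [2], 2: []}))
```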
Dynamic simulations of cable-driven parallel robots (CDPRs) with cable models closer to reality can predict the motions of moving platforms more accurately than those based on idealisations. Hence, the present work proposes an efficient and modular computational framework for this purpose. The primary focus is on the developments required in the context of CDPRs actuated by moving the exit points of cables while the lengths are held constant. Subsequently, the framework is extended to those cases where simultaneous changes in the lengths of cables are employed. Also, the effects due to the inertia, stiffness, and damping properties of the cables undergoing 3D motions are included in their dynamic models. The efficient recursive forward dynamics algorithms from prior works are utilised to minimise the computational effort. Finally, the efficacy of the proposed framework and the need for such an inclusive dynamic model are illustrated by applying it to different application scenarios using the spatial $4$-$4$ CDPR as an example.
The Internet of Things (IoT) and wearable computing are crucial elements of modern information systems and applications in which advanced features for user interactivity and monitoring are required. However, in the field of pervasive gaming, IoT has seen limited real-world application. In this work, we present a prototype of a wearable platform for pervasive games that combines IoT with wearable computing to enable the real-time monitoring of physical activity. The main objective of the solution is to promote the use of gamification techniques to enhance the physical activity of users through challenges and quests, creating a symbolic link between the virtual gameplay and the real-world environment without requiring a smartphone. With sensors and wearable devices integrated by design, the platform can monitor users’ physical activity in real time during the game. The system performance results highlight the efficiency and attractiveness of the wearable platform for gamifying physical activity.
The integration of camera and LiDAR technologies has the potential to significantly enhance construction robots’ perception capabilities by providing complementary construction information. Structured light cameras (SLCs) are a desirable alternative as they provide comprehensive information on construction defects. However, fusing these two types of information depends largely on the sensors’ relative positions, which can only be established through extrinsic calibration. This paper introduces a novel calibration algorithm for SLCs and repetitive-scanning LiDARs based on a customized board, designed to facilitate the automation of construction robots. The calibration board is equipped with four symmetrically distributed hemispheres, whose centers are obtained by fitting spheres to the measurements and applying the board’s geometric constraints. Subsequently, the spherical centers serve as reference features to estimate the relative transformation between the sensors. These distinctive features enable our proposed method to require only a single calibration board pose and minimal human intervention. We conducted both simulation and real-world experiments to assess the performance of our algorithm, and the results demonstrate that our method exhibits enhanced accuracy and robustness.
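Once the four sphere centers are matched across the two frames, the extrinsic estimate reduces to a rigid alignment of corresponding 3D points. The sketch below shows the standard Kabsch/SVD solution to that final step under the assumption of known correspondences; sphere fitting and feature extraction, the paper's main contributions, are omitted.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q_i ~ R @ P_i + t, via the
    standard Kabsch/SVD alignment. P, Q: (N, 3) arrays of matched points,
    here the four hemisphere centers expressed in the LiDAR and camera
    frames. A generic building block, not the paper's full pipeline."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # guard against reflections
    t = cQ - R @ cP
    return R, t
```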
In this paper, the notion of a locally algebraic intersection structure is introduced for algebraic L-domains. Essentially, every locally algebraic intersection structure is a family of sets which forms an algebraic L-domain ordered by inclusion. It is shown that every algebraic L-domain is order-isomorphic to some locally algebraic intersection structure. This result extends the classic Stone representation theorem for Boolean algebras to the case of algebraic L-domains. In addition, many well-known representations of algebraic L-domains, such as logical algebras, information systems, closure spaces, and formal concept analysis, can be analyzed in the framework of locally algebraic intersection structures. This establishes a set-theoretic uniformity across the different representations of algebraic L-domains.
Compressible anisothermal flows, which are commonly found in industrial settings such as combustion chambers and heat exchangers, are characterized by significant variations in density, viscosity, and heat conductivity with temperature. These variations lead to a strong interaction between the temperature and velocity fields that impacts the near-wall profiles of both quantities. Wall-modeled large-eddy simulations (LES) rely on a wall model to provide boundary conditions, for example the wall shear stress and the heat flux, that accurately represent this interaction despite the use of coarse cells near the wall, thereby achieving a good balance between computational cost and accuracy. In this article, the use of graph neural networks for wall modeling in LES is assessed for compressible anisothermal flow. Graph neural networks are machine learning models that can learn from data and operate directly on complex unstructured meshes. Previous work has shown the effectiveness of graph neural network wall modeling for isothermal incompressible flows. This article develops the graph neural network architecture and training needed to extend their applicability to compressible anisothermal flows. The model is trained and tested a priori using a database of both incompressible isothermal and compressible anisothermal flows. The model is then tested a posteriori for the wall-modeled LES of a channel flow and a turbine blade, neither of which was seen during training.
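As background, the sketch below shows one generic message-passing layer of the kind such models stack, written with PyTorch; it illustrates how a GNN operates on an unstructured mesh graph and is not the authors' architecture. Node features might hold near-wall flow quantities (velocity, temperature, wall distance), with the network regressing the wall fluxes.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of edge-based message passing on a mesh graph.
    `edge_index` is a (2, E) tensor of sender/receiver node indices."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, x, edge_index):
        src, dst = edge_index                            # senders, receivers
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))  # per-edge messages
        agg = torch.zeros_like(x).index_add_(0, dst, m)    # sum messages per node
        return self.upd(torch.cat([x, agg], dim=-1))       # update node features
```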
This paper examines the potential role of network analysis in understanding the powerful elites that pose a significant threat to peace and state-building within post-conflict contexts. This paper makes a threefold contribution. First, it identifies a gap in the scholarship surrounding international interventions, shedding light on shortcomings in their design and implementation strategies, and elucidating the influence these elites wield in the political and economic realms. Next, it delineates the essentials of the network analysis approach, addressing the information and data requirements and the limitations inherent in its application in conflict environments. Finally, the paper provides valuable insights gleaned from the international operation in Guatemala known as the International Commission against Impunity in Guatemala, which specifically targeted illicit networks. The argument asserts that network analysis functions as a dual-purpose tool: a descriptive instrument to reveal, identify, and address the root causes of conflict, and a predictive tool to enhance peace agreement implementation and improve decision-making. Simultaneously, it underscores the challenges of data analysis and of translating network interventions into tangible, long-lasting real-life results.
We introduce a novel preferential attachment model using the draw variables of a modified Pólya urn with an expanding number of colors, notably capable of modeling influential opinions (in terms of vertices of high degree) as the graph evolves. As in the Barabási-Albert model, the generated graph grows by one vertex at each time instance; in contrast, however, each vertex of the graph is uniquely characterized by a color, which is represented by a ball color in the Pólya urn. More specifically, at each time step we draw a ball from the urn and return it to the urn along with a number of reinforcing balls of the same color; we also add another ball of a new color to the urn. We then construct an edge between the new vertex (corresponding to the new color) and the existing vertex whose color ball is drawn. Using color-coded vertices in conjunction with the time-varying reinforcing parameter allows vertices added (born) later in the process to potentially attain a high degree, in a way that is not captured by the Barabási-Albert model. We study the degree count of the vertices by analyzing the draw vectors of the underlying stochastic process. In particular, we establish the probability distribution of the random variable counting the number of draws of a given color, which determines the degree of the vertex corresponding to that color in the graph. We further provide simulation results comparing our model with the Barabási-Albert network.
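A minimal simulation of our reading of this process is sketched below; the `reinforcement` schedule and the single seed color are illustrative assumptions, not the paper's exact parameterization.

```python
import random

def polya_urn_graph(n, reinforcement):
    """Grow a color-coded preferential-attachment graph on n vertices.
    At step k, draw a color proportional to its ball count, add
    reinforcement(k) balls of that color plus one ball of a new color,
    and connect the new vertex to the drawn color's vertex."""
    balls = [1]                 # balls[c] = ball count of color c; vertex 0 seeds the urn
    edges = []
    for k in range(1, n):
        drawn = random.choices(range(len(balls)), weights=balls)[0]
        balls[drawn] += reinforcement(k)   # reinforce the drawn color
        balls.append(1)                    # new color = new vertex k
        edges.append((k, drawn))
    return edges

# Example: constant reinforcement; vertex degrees follow the draw counts.
print(polya_urn_graph(6, reinforcement=lambda k: 2))
```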
Design education prepares novice designers to solve complex and challenging problems requiring diverse skill sets and an interdisciplinary approach. Hackathons, for example, offer a hands-on, collaborative learning approach in a limited time frame to gain practical experience and develop problem-solving skills quickly. They enable collaboration, prototyping, and testing among interdisciplinary teams. Typically, hackathons focus strongly on the solution, assuming that this will support learning. However, building the best product and achieving a strong learning effect may not be related. This paper presents the results of an empirical study that examines the relationship between product quality, learning effect, and effort spent in an academic 2-week hackathon. In this course, thirty teams identified user problems and developed hardware and mechatronic products. The study collected the following data: (1) effort spent during the hackathon, through task tracking; (2) learning effect, through self-assessment by the participants; and (3) product quality after the hackathon, as judged by an external jury. The study found that team effort spent has a statistically significant but moderate correlation with product quality. The correlation between product quality and learning effect is statistically insignificant, suggesting that in this setting there is no relevant association.
We give algorithms for approximating the partition function of the ferromagnetic $q$-color Potts model on graphs of maximum degree $d$. Our primary contribution is a fully polynomial-time approximation scheme for $d$-regular graphs satisfying an expansion condition at low temperatures (that is, temperatures bounded away from the order-disorder threshold). The expansion condition is much weaker than in previous works; for example, the expansion exhibited by the hypercube suffices. The main improvements come from a significantly sharper analysis of standard polymer models; we use extremal graph theory and applications of Karger’s algorithm to count cuts, tools that may be of independent interest. It is #BIS-hard to approximate the partition function at low temperatures on bounded-degree graphs, so our algorithm can be seen as evidence that hard instances of #BIS are rare. We also obtain efficient algorithms in the Gibbs uniqueness region for bounded-degree graphs. While our high-temperature proof follows more standard polymer-model analysis, our result holds in the largest known range of parameters $d$ and $q$.
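For background, the contraction step of Karger's algorithm that underlies such cut-counting arguments can be sketched as follows; this is the textbook algorithm, not the paper's refined application of it.

```python
import random

def karger_cut_size(n, edges):
    """One run of Karger's random contraction on vertices 0..n-1:
    contract uniformly random edges until two supervertices remain; the
    surviving edges form a cut. Each minimum cut survives a run with
    probability >= 2/(n(n-1)), so repeated runs find, and with more work
    enumerate, all small cuts."""
    parent = list(range(n))
    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    components = n
    pool = edges[:]
    random.shuffle(pool)              # random order = random edge choices
    for u, v in pool:
        if components == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip self-loops of the multigraph
            parent[ru] = rv
            components -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# Example: a 4-cycle, whose contraction cuts always have size 2.
print(karger_cut_size(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```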
We study a fundamental efficiency benefit afforded by delimited control, showing that for certain higher-order functions, a language with advanced control features offers an asymptotic improvement in runtime over a language without them. Specifically, we consider the generic count problem in the context of a pure functional base language ${\lambda_{\textrm{b}}}$ and an extension ${\lambda_{\textrm{h}}}$ with general effect handlers. We prove that ${\lambda_{\textrm{h}}}$ admits an asymptotically more efficient implementation of generic count than any implementation in ${\lambda_{\textrm{b}}}$. We also show that this gap remains even when ${\lambda_{\textrm{b}}}$ is extended to a language ${{{{{{\lambda_{\textrm{a}}}}}}}}$ with affine effect handlers, which is strong enough to encode exceptions, local state, coroutines and single-shot continuations. This locates the efficiency difference in the gap between ‘single-shot’ and ‘multi-shot’ versions of delimited control.
To our knowledge, these results are the first of their kind for control operators.
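To make the generic count problem concrete, the sketch below contrasts a base-language style count with a replay-based emulation of the handler style. Python lacks multi-shot continuations, so the emulation replays the (assumed deterministic) predicate from scratch at each fork, loosely analogous to the single-shot encodings available without general handlers; genuine multi-shot handlers resume the captured continuation twice without replaying, which is exactly the efficiency gap at issue.

```python
from itertools import product

def count_naive(pred, n):
    """Base-language style generic count: apply the black-box predicate to
    each of the 2^n boolean points; work shared across points with a
    common query prefix is recomputed every time."""
    total = 0
    for bits in product([False, True], repeat=n):
        total += 1 if pred(lambda i: bits[i]) else 0
    return total

def count_replay(pred, n):
    """Handler-flavoured count, emulated by exception-and-replay: a query
    with no recorded answer aborts the run, and both answers are explored
    by restarting the predicate from scratch."""
    class Unanswered(Exception):
        pass

    def run(answers):
        def point(i):
            if i in answers:
                return answers[i]
            raise Unanswered(i)
        try:
            result = pred(point)              # assumes a deterministic predicate
            free = n - len(answers)           # indices never queried on this path
            return (2 ** free) if result else 0
        except Unanswered as e:
            i = e.args[0]
            return run({**answers, i: False}) + run({**answers, i: True})
    return run({})

# Example: odd-parity predicate on n = 3 boolean points.
pred = lambda point: (point(0) + point(1) + point(2)) % 2 == 1
assert count_naive(pred, 3) == count_replay(pred, 3) == 4
```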
In situations ranging from border control to policing and welfare, governments are using automated facial recognition technology (FRT) to collect taxes, prevent crime, police cities, and control immigration. FRT involves the processing of a person’s facial image, usually for identification, categorisation, or counting. This ambitious handbook brings together a diverse group of legal, computer, communications, and social and political science scholars to shed light on how FRT has been developed, used by public authorities, and regulated in different jurisdictions across five continents. Informed by their experiences working on FRT across the globe, chapter authors analyse the increasing deployment of FRT in public and private life. The collection argues for the passage of new laws, rules, frameworks, and approaches to prevent harms of FRT in the modern state and advances the debate on scrutiny of power and accountability of public authorities which use FRT. This book is also available as Open Access on Cambridge Core.
Ellen Balka, Simon Fraser University, British Columbia; Ina Wagner, Universität Siegen, Germany; Anne Weibert, Universität Siegen, Germany; Volker Wulf, Universität Siegen, Germany
This chapter revisits the ethical-political perspective on technology design. Feminist/intersectional approaches to the design of IT artifacts build on practices developed in participatory design and action research, enriching them with norm-critical, norm-creative, and social justice-oriented perspectives. Practice-based design adds experiences with designing flexible, malleable systems that are open to end-user development, offering technological tools for designing systems that are open to other ways of thinking and doing (work). Decolonizing approaches contribute to doing justice to parts of the world that have experienced oppression and marginalization, in which people’s needs were discarded and their knowledge disrespected. Among the specific challenges of a feminist/intersectional approach to design are the need to make invisible aspects of work visible; to recognize women’s skills without falling into the trap of gender stereotyping; to engage in improving working conditions; to defend care against a managerial logic; to attend to the many overlooked and undervalued aspects of work in design; and to care for research subjects and create safe spaces.
This chapter analyses the legal framework for the use of facial recognition technology (FRT) in the public sector in Germany, with a particular emphasis on the pertinent German data protection and police laws. Under German law, a legal basis is required for such real-world applications of FRT. The chapter discusses whether the pertinent laws provide this legal basis and what limits they impose.
This chapter introduces the reader to the history and development of facial recognition technology (FRT) from the perspective of science and technology studies. Beginning with the traditionally accepted origins of FRT in 1964–1965, developed by Woody Bledsoe, Charles Bisson, and Helen Wolf Chan in the United States, Simon Taylor discusses how FRT builds on earlier applications in mug shot profiling, imaging, biometrics, and statistical categorisation. Grounded in the history of science and technology, the chapter demonstrates how critical aspects of FRT infrastructure are aided by scientific and cultural innovations from different times and locations: mug shots in nineteenth-century France; mathematical analysis of caste in nineteenth-century British India; innovations by Chinese closed-circuit television companies and computer vision start-ups conducting bio-security experiments on farm animals. This helps to understand FRT development beyond the United States-centred narrative. The aim is to deconstruct the historical, mathematical, and digital materials that act as ‘back-stage elements’ of FRT, which are not so easily located in its infrastructure yet continue to shape uses today. Taylor’s analysis lays a foundation for the kinds of frameworks, developed in the following chapters, that can better help regulate and govern FRT as a means of power over populations.