Vast amounts of data on industrial changes are captured and stored, yet the present challenge is to retrieve and use these data systematically and purposefully. This paper presents an industrial case study in which complex product development processes are modeled using the design structure matrix (DSM) to analyze sequences of engineering change requests. Engineering change requests are documents used to initiate a change process to enhance a product. Given the number of changes made across different projects, engineers want to analyze these change processes to identify patterns and propose best practices. Previous work has not specifically explored modeling engineering change requests in a DSM to analyze sequences holistically. This case study analyzes engineering change request sequences from four recent industrial product development projects and compares patterns among them. Ultimately, this research can help identify and guide process improvement work within projects.
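To make the modeling step concrete, here is a minimal sketch (in Python, with invented ECR categories; not the tooling used in the study) of how an ordered log of engineering change requests can be folded into a DSM-like matrix whose entries count observed follow-on relations:

```python
# A minimal sketch (not the authors' tooling): build a DSM-like
# transition matrix from an ordered log of engineering change request
# (ECR) categories. Category names and the sequence are invented.
ecr_sequence = ["styling", "electrical", "software", "electrical",
                "mechanical", "software", "software"]  # hypothetical ECR log

categories = sorted(set(ecr_sequence))
index = {c: i for i, c in enumerate(categories)}

# dsm[i][j] counts how often a category-j request was followed by a
# category-i request, one common way to read "j feeds i" in a DSM.
dsm = [[0] * len(categories) for _ in categories]
for prev, nxt in zip(ecr_sequence, ecr_sequence[1:]):
    dsm[index[nxt]][index[prev]] += 1

for cat, row in zip(categories, dsm):
    print(f"{cat:12s}", row)
```

Comparing such matrices across projects is one simple way to surface recurring change-propagation patterns.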
The original ideas on design abduction, inspired by treatments in the philosophy of science, had a narrow conception of how novelty emerges in design when viewed in terms of logic. The authors have previously presented a re-proposed notion of abduction in design that takes the differences between science and design into account. In this article, the invention of the airplane by the Wright brothers is analyzed as a retrospective case study. Key parts of the re-proposed notion of design abduction are demonstrated, and two new types of design abduction are identified, namely strategic abduction and dynamic abduction. Perhaps even more importantly, a new hypothesis on the cognitive basis of design abduction is reached. While the importance of model-based abduction (and reasoning) is confirmed, the case also pinpoints the central role of verbalization and discussion in supporting design reasoning in general and abduction in particular. All in all, it seems that an improved understanding of design abduction and its cognitive basis would be instrumental in promoting more effective and efficient designing.
Getting numbers is easy; getting numbers you can trust is hard. This practical guide by experimentation leaders at Google, LinkedIn, and Microsoft will teach you how to accelerate innovation using trustworthy online controlled experiments, or A/B tests. Based on practical experiences at companies that each run more than 20,000 controlled experiments a year, the authors share examples, pitfalls, and advice for students and industry professionals getting started with experiments, plus deeper dives into advanced topics for practitioners who want to improve the way they make data-driven decisions. Learn how to:
- Use the scientific method to evaluate hypotheses using controlled experiments
- Define key metrics and, ideally, an Overall Evaluation Criterion
- Test for trustworthiness of the results and alert experimenters to violated assumptions
- Build a scalable platform that lowers the marginal cost of experiments close to zero
- Avoid pitfalls like carryover effects and Twyman's law
- Understand how statistical issues play out in practice
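As a concrete illustration of the first bullet above, the sketch below runs a two-proportion z-test on made-up conversion counts; this is one standard way to evaluate an A/B hypothesis, not code from the book:

```python
# A minimal sketch of the hypothesis test behind a conversion-rate
# A/B experiment: a two-sided, two-proportion z-test. Counts invented.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z(conv_a=1000, n_a=50_000, conv_b=1100, n_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```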
In this paper, the authors summarize the main dynamic balancing methods for robotic mechanisms. The majority of dynamic balancing methods are covered, although other methods may exist that are not included here. Each balancing method is reviewed and discussed, and the advantages and disadvantages of the methods are presented and compared. The goal of this paper is to provide an overview of recent research on balancing. The authors hope that this study can serve as an informative reference for future research on the dynamic balancing of robotic mechanisms.
In this paper, we study goodness-of-fit tests for the inverse Rayleigh distribution based on progressively Type-II censored samples. We develop two test statistics via entropy, propose one new non-entropy test statistic via a pivotal method, and study the properties of these statistics. Critical values are obtained by simulation. We then conduct a power analysis of the test statistics against various alternatives under different censoring schemes and conclude that the proposed tests perform well, especially against non-monotone hazard alternatives. Finally, a real data set is analyzed.
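The Monte Carlo recipe for critical values can be sketched as follows. This simplified Python example uses complete (uncensored) samples and a Vasicek-type entropy estimator, so it only gestures at the paper's progressively Type-II censored setting; the parametrization F(x) = exp(-(σ/x)²) for the inverse Rayleigh is assumed:

```python
# A minimal sketch: Monte Carlo critical values for an entropy-based
# statistic under the inverse Rayleigh null. Complete samples only;
# the paper's censored statistics are more involved.
import numpy as np

rng = np.random.default_rng(0)

def inverse_rayleigh_sample(n, sigma=1.0):
    u = rng.uniform(size=n)
    return sigma / np.sqrt(-np.log(u))   # inversion of F(x) = exp(-(sigma/x)**2)

def vasicek_entropy(x, m):
    # Vasicek's spacing-based entropy estimator with boundary clipping.
    x = np.sort(x)
    n = len(x)
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2 * m)))

n, m, reps = 30, 5, 10_000
stats = [vasicek_entropy(inverse_rayleigh_sample(n), m) for _ in range(reps)]
print("5% critical value (lower tail):", np.quantile(stats, 0.05))
```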
In the context of cross-disciplinary and cross-company cooperation, several challenges in developing manufacturing systems are revealed through industrial use cases. To tackle these challenges, two propositions are pursued in parallel. First, coupling technical models that represent different content areas facilitates the detection of boundary-crossing consequences, using either a posteriori or a priori connections. Second, these coupled technical models must be enriched with team and organizational models as interventions focusing on the collaboration between individuals and teams within broader organizational conditions. Accordingly, a combined interdisciplinary approach is proposed. The feasibility and benefits of the approach are demonstrated with an industrial use case, which shows that inconsistencies among teams can be identified by coupling engineering models and that an integrated organizational model can free the modelling process from communication barriers.
This paper sheds light on the new possibilities for user research activities facilitated by access to makerspaces. We present four case studies of user research conducted in two university-based makerspaces as examples of makerspace-driven user research. Further, by comparing the cases to three classical user research activities, namely observation, prototyping, and user journey mapping, we highlight the main aspects of this new context of user research. We find that access to makerspaces enables user researchers to build low-fidelity yet highly functional prototypes for exploring users’ preferences and motivations in controlled and repeatable ways. These prototypes fall into the category of experience prototypes, but they offer greater functionality than the prototypes previously used in this field, so a user researcher can explore a topic more systematically and in a more hypothesis-driven manner. In summary, this study encourages stakeholders in the early stages of product development to consider a makerspace as a resource for user-related requirement elicitation rather than only for specific product iterations.
The rise of social media empowers people to interact and communicate with anyone, anywhere in the world. The possibility of being anonymous avoids censorship and enables freedom of expression. Nevertheless, this anonymity might lead to cybersecurity issues, such as opinion spam, sexual harassment, incitement to hatred, or even terrorism propaganda. In such cases, there is a need to know more about the anonymous users, which could be useful in several domains beyond security and forensics, such as marketing. In this paper, we focus on a fine-grained analysis of language varieties while also considering the authors’ demographics. We present a Low-Dimensionality Statistical Embedding method to represent text documents and compare its performance with that of the best-performing teams in the Author Profiling task at PAN 2017, obtaining an average accuracy of 92.08% versus 91.84% for the best-performing team. We also analyse the relationship between language variety identification and the authors’ gender. Furthermore, we applied our proposed method to a more fine-grained annotated corpus of Arabic varieties covering 22 Arab countries and obtained an overall accuracy of 88.89%. Finally, we investigated the effect of the authors’ age and gender on the identification of the different Arabic varieties, as well as the effect of the corpus size on the performance of our method.
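To give a flavour of the representation, here is a heavily simplified sketch in the spirit of a low-dimensionality statistical embedding (toy data; the paper's exact feature construction differs): terms receive class-conditional weights from training counts, and a document is embedded as summary statistics of its terms' weights per class:

```python
# A simplified sketch in the spirit of a low-dimensionality statistical
# embedding (not the paper's exact method). Corpus and varieties invented.
from collections import Counter
import statistics

train = {  # hypothetical toy corpus: variety -> documents
    "variety_A": ["color humor neighbor", "humor color"],
    "variety_B": ["colour humour neighbour", "humour colour"],
}

# term weight per variety: relative frequency of the term in that variety
freqs = {v: Counter(" ".join(docs).split()) for v, docs in train.items()}
totals = {v: sum(c.values()) for v, c in freqs.items()}

def embed(doc):
    features = []
    for v in train:
        w = [freqs[v][t] / totals[v] for t in doc.split()]
        features += [statistics.mean(w), statistics.pstdev(w)]  # 2 dims/class
    return features

print(embed("colour humor"))   # a low-dimensional document representation
```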
This paper proposes a robust text classification and correspondence analysis approach to the identification of similar languages. In particular, we propose to use readily available information on clause and word length distributions to model similar languages. The modeling and classification are based on the hypothesis that languages are self-adaptive complex systems and hence can be classified by dynamic features describing the system, especially distributional relations among the constituents of the system. For similar languages, whose grammatical differences are often subtle, classification based on dynamic system features should be more effective. To test this hypothesis, we considered both regional and genre varieties of Mandarin Chinese for classification. The data are extracted from two comparable balanced corpora to minimize possible confounding factors: the Sinica Corpus from Taiwan and the Lancaster Corpus of Mandarin Chinese from Mainland China, with the two genres of reportage and review. Our text classification and correspondence analysis results show that the linguistically felicitous two-level constituency model, combining power functions between words and clauses, effectively classifies the two varieties of Chinese for both genres. In addition, we found that genre does have a compounding effect on the classification of regional varieties. In particular, reportage in the two varieties is more reliably classified than review, corroborating the complex-system view of language variation: language variation and change typically do not take place evenly across the complete language system. This further supports our hypothesis that dynamic complex-system features, such as the power functions captured by the Menzerath–Altmann law, provide effective models for the classification of similar languages.
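As an illustration of the power-function modeling, the sketch below fits y = a·xᵇ between clause length and mean word length by log-log regression; the data points are invented, and the full Menzerath–Altmann law includes an additional exponential factor:

```python
# A minimal sketch: fit a power function y = a * x**b between construct
# sizes (clause length in words vs. mean word length), in the spirit of
# the Menzerath-Altmann law. Data points are invented for illustration.
import numpy as np

clause_len = np.array([2, 4, 6, 8, 10, 12])                # words per clause
mean_word_len = np.array([2.9, 2.5, 2.3, 2.2, 2.1, 2.0])   # chars per word

# linear regression in log-log space: log y = log a + b * log x
b, log_a = np.polyfit(np.log(clause_len), np.log(mean_word_len), 1)
print(f"a = {np.exp(log_a):.3f}, b = {b:.3f}")  # b < 0: longer clauses
                                                # tend to have shorter words
```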
Biodiversity research studies the variability and diversity of organisms, including variability within and between species, with a particular focus on the functional diversity of traits and their relationship to the environment. Managing biodiversity data means dealing with its heterogeneous nature using semantics and tailored ontologies. These ontologies are themselves differently conceived, and combining them in semantically enabled applications requires an effective alignment between their concepts. This paper describes the matching of biodiversity- and ecology-related ontologies and illustrates the diverse challenges this kind of ontology introduces for ontology matching in general. Real use cases requiring pairwise alignments between environment and trait ontologies are introduced. We describe our experience creating a new track at the Ontology Alignment Evaluation Initiative designed for this specific domain and report on the results obtained by state-of-the-art participating systems. The biodiversity and ecology use case turns out to be a strong one for ontology matching, introducing interesting new challenges. Even though most of the matching systems perform relatively well on the proposed matching tasks, there is still room for improvement. We highlight possible directions in that regard and elaborate on our plan to develop the track further.
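For readers new to the task, the toy sketch below shows the simplest possible matcher, aligning two invented label lists by string similarity; actual OAEI systems combine far richer lexical, structural and semantic evidence:

```python
# A toy sketch of the simplest kind of ontology matcher: align classes
# from two small ontologies by normalised label similarity. Labels are
# invented; real matchers use much richer evidence than string overlap.
from difflib import SequenceMatcher

env_ontology = ["water body", "soil", "forest habitat"]
trait_ontology = ["forested habitat", "soil type", "leaf area"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

threshold = 0.7
for e in env_ontology:
    for t in trait_ontology:
        s = similarity(e, t)
        if s >= threshold:
            print(f"{e!r} = {t!r}  (score {s:.2f})")
```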
Numerical computation is traditionally performed using floating-point arithmetic and truncated forms of infinite series, a methodology that allows for efficient computation at the cost of some accuracy. For most applications these errors are entirely acceptable and the numerical results are considered trustworthy, but for some operations we may want guarantees that the numerical results are correct, or explicit bounds on the errors. To obtain rigorous calculations, floating-point arithmetic is usually replaced by interval arithmetic, and truncation errors are explicitly contained in the result. We may then ask which mathematical operations can be implemented so that the exact result can be approximated to arbitrary known accuracy by a numerical algorithm. This is the subject of computable analysis, which forms a theoretical underpinning of rigorous numerical computation. The aim of this article is to provide a straightforward introduction to this subject that is powerful enough to answer questions arising in dynamical systems theory.
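A minimal sketch of the interval idea, assuming one-ulp outward rounding via math.nextafter suffices for these operations (a real library would also handle division, transcendental functions, and special cases):

```python
# A minimal sketch of interval arithmetic with outward rounding: every
# operation returns an interval guaranteed to contain the exact result,
# the basic building block of rigorous numerics.
from math import nextafter, inf

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # round the lower bound down and the upper bound up by one ulp
        return Interval(nextafter(self.lo + other.lo, -inf),
                        nextafter(self.hi + other.hi, inf))

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(nextafter(min(products), -inf),
                        nextafter(max(products), inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# enclose the real number 0.1, which is not exactly representable
x = Interval(nextafter(0.1, -inf), nextafter(0.1, inf))
print(x + x, x * x)   # guaranteed enclosures of 0.2 and 0.01
```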
For an integer q ⩾ 2, a graph G is called q-Ramsey for a graph H if every q-colouring of the edges of G contains a monochromatic copy of H. If G is q-Ramsey for H yet no proper subgraph of G has this property, then G is called q-Ramsey-minimal for H. Generalizing a statement by Burr, Nešetřil and Rödl from 1977, we prove that, for q ⩾ 3, if G is a graph that is not q-Ramsey for some graph H, then G is contained as an induced subgraph in an infinite number of q-Ramsey-minimal graphs for H, as long as H is 3-connected or isomorphic to the triangle. For such H, we obtain the following consequences.
For 2 ⩽ r < q, every r-Ramsey-minimal graph for H is contained as an induced subgraph in an infinite number of q-Ramsey-minimal graphs for H.
For every q ⩾ 3, there are q-Ramsey-minimal graphs for H of arbitrarily large maximum degree, genus and chromatic number.
The collection $\{\mathcal M_q(H) \colon H \text{ is 3-connected or } K_3\}$ forms an antichain with respect to the subset relation, where $\mathcal M_q(H)$ denotes the set of all graphs that are q-Ramsey-minimal for H.
We also address the question of which pairs of graphs satisfy $\mathcal M_q(H_1)=\mathcal M_q(H_2)$, in which case H1 and H2 are called q-equivalent. We show that two graphs H1 and H2 are q-equivalent for even q if they are 2-equivalent, and that in general q-equivalence for some q ⩾ 3 does not necessarily imply 2-equivalence. Finally, we indicate that for connected graphs this implication may hold: results by Nešetřil and Rödl and by Fox, Grinshpun, Liebenau, Person and Szabó imply that the complete graph is not 2-equivalent to any other connected graph. We prove that this is the case for an arbitrary number of colours.
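To ground the opening definitions, here is a small worked instance built from standard Ramsey facts (not results of this paper):

```latex
% Worked instance of the q-Ramsey definition (standard facts only).
Since $R(3,3)=6$, every $2$-colouring of the edges of $K_6$ contains a
monochromatic $K_3$, so $K_6$ is $2$-Ramsey for $K_3$. By contrast, $K_5$
is not $2$-Ramsey for $K_3$: colouring the edges of a $5$-cycle red and
the edges of the complementary $5$-cycle blue yields a $2$-colouring of
$K_5$ in which neither colour class contains a triangle.
```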
Who is more likely to be isolated from society in terms of political beliefs? To answer this question, we measure whether individuals’ beliefs are “out of sync” (the extent to which their views differ from those of their contemporaries) and examine how the level of synchronization is associated with the size of important-matter and political-matter discussion networks. The results show that people with weaker belief synchronization tend to have smaller important-matter discussion networks. However, additional analyses of political-matter discussion networks show that weaker belief synchronization is associated with smaller networks only among those without a high school diploma, and it even provides some advantage in maintaining larger networks for the college-educated. Overall, the results imply that political beliefs that are “out of sync” correspond to the individual being “out of society,” although the forms that being “out of society” takes differ across educational groups.
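One simple way to operationalize such a measure, sketched here with invented survey data (the paper's actual construction may differ), is the distance between an individual's responses and the contemporaries' averages:

```python
# A minimal sketch of one way to quantify being "out of sync": mean
# absolute deviation of a respondent's belief items from the sample
# averages. Items, scales, and values are invented for illustration.
import numpy as np

beliefs = np.array([               # rows: respondents, cols: belief items
    [1, 5, 2, 4],
    [2, 4, 2, 5],
    [5, 1, 5, 1],                  # an atypical respondent
])

means = beliefs.mean(axis=0)
out_of_sync = np.abs(beliefs - means).mean(axis=1)
print(out_of_sync)                 # larger value = weaker synchronization
```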
This paper deals with stochastic comparisons of the largest order statistics arising from two sets of independent and heterogeneous gamma samples. It is shown that the weak supermajorization order between the vectors of scale parameters, together with the weak submajorization order between the vectors of shape parameters, implies the reversed hazard rate ordering between the corresponding maximum order statistics. We also establish sufficient conditions for the usual stochastic ordering in terms of the p-larger order between the vectors of scale parameters and the weak submajorization order between the vectors of shape parameters. Numerical examples and applications in auction theory and reliability engineering illustrate these results.
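The orderings can be explored numerically. The sketch below simulates maxima of two heterogeneous gamma samples with invented parameter vectors and compares empirical survival probabilities; it illustrates the kind of comparison the theorems formalize rather than verifying their exact hypotheses:

```python
# A numerical illustration (not a proof): empirical survival
# probabilities of maxima from two heterogeneous gamma samples.
# Parameter vectors are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
reps = 200_000

def max_sample(shapes, scales):
    draws = rng.gamma(shape=shapes, scale=scales, size=(reps, len(shapes)))
    return draws.max(axis=1)

m1 = max_sample(shapes=[1.0, 3.0], scales=[1.0, 2.0])   # more heterogeneous
m2 = max_sample(shapes=[2.0, 2.0], scales=[1.5, 1.5])   # more homogeneous

t = 4.0
print(np.mean(m1 > t), np.mean(m2 > t))  # empirical P(max > t) for each set
```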
Face-to-face interactions in social groups are a central aspect of human social lives. Although the composition of such groups has received ample attention in various fields—e.g., sociology, social psychology, management, and educational science—their micro-level dynamics are rarely analyzed empirically. In this article, we present a new statistical network model (DyNAM-i) that can represent the dynamics of conversation groups and interpersonal interaction in different social contexts. Taking an actor-oriented perspective, this model can be applied to test how individuals’ interaction patterns differ and how they choose and change their interaction groups. It moves beyond dyadic interaction mechanisms and translates central social network mechanisms—such as homophily, transitivity, and popularity—to the context of interactions in group settings. The utility and practical applicability of the new model are illustrated in two social network studies that investigate face-to-face interactions in a small party and an office setting.
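As a toy illustration of the actor-oriented idea (not the authors' estimator), the sketch below computes multinomial-logit probabilities for an individual joining one of several existing groups, driven by a single invented homophily effect:

```python
# A toy sketch of one actor-oriented modelling idea behind DyNAM-i
# (not the authors' estimator): an individual chooses a group to join
# with multinomial-logit probabilities driven by a homophily statistic.
# Attributes, groups, and the effect size are all invented.
import numpy as np

attr = {"ego": 1.0, "a": 1.0, "b": 0.0, "c": 1.0, "d": 0.0}
groups = [["a", "b"], ["c"], ["d"]]          # current interaction groups
beta_homophily = 1.5                         # hypothetical effect size

def homophily(ego, group):
    # share of group members sharing ego's attribute value
    return np.mean([attr[m] == attr[ego] for m in group])

utilities = np.array([beta_homophily * homophily("ego", g) for g in groups])
probs = np.exp(utilities) / np.exp(utilities).sum()
print(dict(zip(["{a,b}", "{c}", "{d}"], probs.round(3))))
```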
Technical writing in professional environments, such as user manual authoring, requires the use of uniform language. Nonuniform language refers to sentences in a technical document that are intended to have the same meaning within a similar context but use different words or writing styles. Addressing this nonuniformity problem involves two tasks. The first, which we call nonuniform language detection (NLD), is detecting such sentences. We propose an NLD method that utilizes similarity algorithms at the lexical, syntactic, semantic, and pragmatic levels; the features extracted at these levels are integrated using a machine learning classifier. The second task, which we call nonuniform language correction (NLC), is deciding which of the detected sentences is more appropriate for a given context. To address this problem, we propose an NLC method that combines contraction removal, near-synonym choice, and text readability comparison. We tested our methods on smartphone user manuals, comparing them against state-of-the-art paraphrase detection methods (for NLD) and against expert annotators (for both NLD and NLC). The experiments demonstrate that the proposed methods achieve performance that matches that of expert annotators.
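A minimal sketch of the NLD step, using two stand-in similarity features and a logistic-regression classifier on invented sentence pairs (the paper's feature set spans four linguistic levels):

```python
# A minimal sketch of the detection step: turn a sentence pair into
# similarity features and train a classifier on labelled pairs. The
# features and training pairs are simplified stand-ins for the paper's
# lexical/syntactic/semantic/pragmatic feature set.
from sklearn.linear_model import LogisticRegression

def features(s1, s2):
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    jaccard = len(t1 & t2) / len(t1 | t2)            # lexical overlap
    len_ratio = min(len(t1), len(t2)) / max(len(t1), len(t2))
    return [jaccard, len_ratio]

pairs = [  # (sentence 1, sentence 2, same intended meaning?)
    ("Press the power button", "Push the power key", 1),
    ("Press the power button", "Charge the battery fully", 0),
    ("Tap Settings to open it", "Tap the Settings icon to open it", 1),
    ("Tap Settings to open it", "Restart the device now", 0),
]

X = [features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = LogisticRegression().fit(X, y)
print(clf.predict([features("Press power button", "Push power key")]))
```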
The problem of traffic congestion incurs numerous social and economic repercussions and has thus become a central issue in every major city in the world. In this work, we look at the transportation domain from a multiagent system perspective, where every driver can be seen as an autonomous decision-making agent. We explore how learning approaches can help achieve an efficient outcome even when agents interact in a competitive environment for sharing common resources. To this end, we consider the route choice problem, where self-interested drivers must independently learn which routes minimise their expected travel costs. Such selfish behaviour results in the so-called user equilibrium, which is inefficient from the system’s perspective. In order to mitigate the impact of selfishness, we present Toll-based Q-learning (TQ-learning, for short). TQ-learning employs the idea of marginal-cost tolling (MCT), where each driver is charged according to the cost it imposes on others. The use of MCT leads agents to behave in a socially desirable way such that the system optimum is attainable. In contrast to previous works, our tolling scheme is distributed (each agent can compute its own toll), is charged a posteriori (at the end of each trip), and is fairer (agents pay exactly their marginal costs). Additionally, we provide a general formulation of the toll values for univariate, homogeneous polynomial cost functions. We present a theoretical analysis of TQ-learning, proving that it converges to a system-efficient equilibrium (i.e., an equilibrium aligned with the system optimum) in the limit. Furthermore, we perform an extensive empirical evaluation on realistic road networks to support our theoretical findings, showing that TQ-learning indeed converges to the optimum, which translates into an average reduction of congestion levels of 9.1%.
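A compact sketch of the core loop, simplified from the description above (invented cost parameters; stateless Q-learning over two routes): each agent learns from its experienced travel cost plus an a-posteriori marginal-cost toll, which for c(f) = a·f^k equals f·c′(f) = k·c(f):

```python
# A minimal sketch of the TQ-learning idea (simplified, not the paper's
# full algorithm): epsilon-greedy Q-learning over two routes, with an
# a-posteriori marginal-cost toll. For the polynomial cost c(f) = a*f**k
# the toll is f * c'(f) = k * c(f), the externality a driver imposes.
import random

N, EPISODES, ALPHA, EPS = 100, 3000, 0.1, 0.05
routes = [{"a": 0.02, "k": 1}, {"a": 0.01, "k": 1}]   # hypothetical costs
Q = [[0.0, 0.0] for _ in range(N)]

for _ in range(EPISODES):
    choices = [random.randrange(2) if random.random() < EPS
               else min((0, 1), key=lambda r: Q[i][r]) for i in range(N)]
    flows = [choices.count(r) for r in range(2)]
    for i, r in enumerate(choices):
        cost = routes[r]["a"] * flows[r] ** routes[r]["k"]
        toll = routes[r]["k"] * cost                   # charged a posteriori
        Q[i][r] += ALPHA * (cost + toll - Q[i][r])     # stateless Q update

print("final-episode flows:", flows)   # should settle near the tolled split
```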
Acquired MDM2 inhibitor resistance is commonly caused by loss-of-function TP53 mutations. In addition to the selection of TP53-mutant cells by MDM2 inhibitors, MDM2 inhibitor-induced DNA damage may promote the formation of TP53 mutations. Here, we cultivated 12 sublines of the intrinsically MDM2 inhibitor-resistant TP53 wild-type acute myeloid leukaemia cell line PL21 for 52 passages in the presence of ineffective concentrations of the MDM2 inhibitor nutlin-3 but did not observe loss-of-function TP53 mutations. This suggests that MDM2 inhibitors select TP53-mutant cells after mutations have occurred, but do not directly promote TP53 mutations. Unexpectedly, many sublines displayed increased sensitivity to the anti-cancer drugs cytarabine, doxorubicin, or gemcitabine. Consequently, therapies can affect the outcome of next-line treatments, even in the absence of a therapy response. This finding is conceptually novel. A better understanding of such processes will inform the design of improved therapy protocols in the future.
Scedosporium and Lomentospora species are ubiquitous saprophytic filamentous fungi that have emerged as human pathogens with an impressive multidrug-resistance profile. The ability to form biofilms on several biotic and abiotic surfaces is one of the characteristics that contribute to their resistance against almost all currently available antifungals. Herein, we demonstrate that Scedosporium apiospermum, Scedosporium minutisporum, Scedosporium aurantiacum and Lomentospora prolificans formed biofilm in similar amounts when conidial cells were incubated on a polystyrene substrate containing Sabouraud medium, whether or not it was supplemented with different concentrations (2%, 5% and 10%) of glucose, fructose, sucrose or lactose. Likewise, glucose supplementation of culture media primarily composed of amino acids (SCFM, synthetic cystic fibrosis medium) or salts (YNB, yeast nitrogen base) did not modulate biofilm formation by Scedosporium/Lomentospora species. Collectively, the present data reinforce the ability of these opportunistic fungi to colonize and build biofilm structures under different environmental conditions.
The glial fibrillary acidic protein (GFAP) is an intermediate filament protein widely used to identify and label astroglial cells, a very abundant and relevant glial cell type in the central nervous system. A major hurdle in studying its behavior and function is that GFAP does not tolerate the addition of protein tags to its termini well. Here, we tagged human GFAP (hGFAP) with an enhanced green fluorescent protein (EGFP) for the first time, and replaced a previously reported EGFP tag on mouse GFAP (mGFAP) with a more versatile HaloTag. Both versions of tagged GFAP incorporated into the normal GFAP filamentous network in glioma cells, and Alexander disease-related mutations or pharmacological disruption of microtubules and actin filaments interfered with GFAP dynamics. These new tools could open fruitful new avenues for the study of GFAP oligomerization, aggregation and dynamics in living cells.