Computational modeling of the human sequential design process and successful prediction of future design decisions are fundamental to design knowledge extraction, transfer, and the development of artificial design agents. However, designer-related attributes (static data) are often difficult to obtain in design practice, and research that combines static data with dynamic data (design action sequences) in engineering design remains underexplored. This paper presents an approach that combines both static and dynamic data for human design decision prediction, using two different methods. The first method directly combines the sequential design actions with the static data in a recurrent neural network (RNN) model, while the second method integrates a feed-forward neural network that handles the static data separately, in parallel with the RNN. This study contributes to the field in three ways: (a) we developed a method that uses designers’ cluster information as a surrogate static feature, combined with the design action sequence, to tackle the challenge of obtaining designer-related attributes; (b) we devised a method that integrates the function–behavior–structure design process model with one-hot vectorization in the RNN to transform design action data into design process stages from which insights into design thinking can be drawn; (c) to the best of our knowledge, this is the first time two methods of combining static and dynamic data in an RNN have been compared, which provides new knowledge about the utility of different combination methods in studying sequential design decisions. The approach is demonstrated in two case studies on solar energy system design. The results indicate that, with appropriate kernel models, the RNN with both static and dynamic data outperforms traditional models that rely only on design action sequences, thereby better supporting design research in which static features, such as human characteristics, often play an important role.
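To make the two combination methods concrete, here is a minimal sketch in PyTorch. The GRU cells, layer sizes, and all names are illustrative assumptions, not the paper's architecture; the point is only where the static features enter relative to the recurrent kernel.

```python
import torch
import torch.nn as nn

class EarlyFusionRNN(nn.Module):
    """Method 1 (sketch): tile the static features onto every timestep."""
    def __init__(self, n_actions, n_static, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_actions + n_static, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)      # predict the next design action

    def forward(self, actions, static):                # actions: (B, T, A), static: (B, S)
        static_seq = static.unsqueeze(1).expand(-1, actions.size(1), -1)
        out, _ = self.rnn(torch.cat([actions, static_seq], dim=-1))
        return self.head(out[:, -1])                   # logits for the next action

class ParallelFusionRNN(nn.Module):
    """Method 2 (sketch): a feed-forward branch for static data, in parallel with the RNN."""
    def __init__(self, n_actions, n_static, hidden=64, static_hidden=16):
        super().__init__()
        self.rnn = nn.GRU(n_actions, hidden, batch_first=True)
        self.static_net = nn.Sequential(nn.Linear(n_static, static_hidden), nn.ReLU())
        self.head = nn.Linear(hidden + static_hidden, n_actions)

    def forward(self, actions, static):
        out, _ = self.rnn(actions)
        fused = torch.cat([out[:, -1], self.static_net(static)], dim=-1)
        return self.head(fused)
```

In the first sketch the static vector enters the RNN at every timestep; in the second it bypasses the RNN entirely and is fused only at the prediction layer.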
This paper presents a framework for studying design thinking. Three paradigmatic approaches are described to measure design cognitive processes: design cognition, design physiology and design neurocognition. Specific tools and methods serve each paradigmatic approach. Design cognition is explored through protocol analysis, black-box experiments, surveys and interviews. Design physiology is measured with eye tracking, electrodermal activity, heart rate and emotion tracking. Design neurocognition is measured using electroencephalography, functional near-infrared spectroscopy and functional magnetic resonance imaging. Illustrative examples are presented to describe the types of results each method provides about the characteristics of design thinking, such as design patterns, design reasoning, design creativity, design collaboration, the co-evolution of the problem–solution space, or design analysis and evaluation. The triangulation of results from the three paradigmatic approaches to studying design thinking provides a synergistic foundation for the understanding of design cognitive processes. Results from such studies generate a source of feedback to designers, design educators and researchers in design science. New models, new tools and new research questions emerge from the proposed integrated approach and lay down future challenges in studying design thinking.
Empathic design highlights the relevance of understanding users and their circumstances in order to obtain good design outcomes. However, theory-based quantitative methods, which can be used to test user understanding, are hard to find in the design science literature. Here, we introduce a validated method used in social psychological research – the empathic accuracy method – into design to explore how well two designers perform in a design task and whether the designers’ empathic accuracy performance and the physiological synchrony between the two designers and a group of users can predict the designers’ success in two design tasks. The designers could correctly identify approximately 50% of the users’ reported mental content. We did not find a significant correlation between the designers’ empathic accuracy and (1) their performance in the design tasks or (2) their physiological synchrony with the users. Nevertheless, the empathic accuracy method is promising in its attempts to quantify the effect of empathy in design.
Is knowledge definable as justified true belief (“JTB”)? We argue that one can legitimately answer positively or negatively, depending on whether or not one’s true belief is justified by what we call adequate reasons. To facilitate our argument we introduce a simple propositional logic of reason-based belief, and give an axiomatic characterization of the notion of adequacy for reasons. We show that this logic is sufficiently flexible to accommodate various useful features, including quantification over reasons. We use our framework to contrast two notions of JTB: one internalist, the other externalist. We argue that Gettier cases essentially challenge the internalist notion but not the externalist one. Our approach commits us to a form of infallibilism about knowledge, but it also leaves us with a puzzle, namely whether knowledge involves the possession of only adequate reasons, or leaves room for some inadequate reasons. We favor the latter position, which reflects a milder and more realistic version of infallibilism.
We show that the replacement rule of the sequent calculi $\mathbf{G3[mic]}^{=}$ in [8] can be replaced by the simpler rule in which one of the principal formulae is not repeated in the premiss.
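For orientation, one plausible rendering in the style of G3 systems with equality (the exact formulation in [8] may differ, and which principal formula loses its repetition is our reading): the standard replacement rule repeats both principal formulae, $a = b$ and $P(a)$, in the premiss, while the simplified rule drops the repetition of $P(a)$.

```latex
% Standard replacement rule (Negri & von Plato-style; illustrative):
% both principal formulae recur in the premiss.
\[
\frac{a = b,\; P(a),\; P(b),\; \Gamma \Rightarrow \Delta}
     {a = b,\; P(a),\; \Gamma \Rightarrow \Delta}\;\text{Repl}
\qquad
% Simplified rule: P(a) is not repeated in the premiss.
\frac{a = b,\; P(b),\; \Gamma \Rightarrow \Delta}
     {a = b,\; P(a),\; \Gamma \Rightarrow \Delta}\;\text{Repl}'
\]
```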
Neural networks applied to machine translation need a finite vocabulary to express textual information as a sequence of discrete tokens. The currently dominant subword vocabularies exploit statistically discovered common parts of words to achieve the flexibility of character-based vocabularies without delegating the whole learning of word formation to the neural network. However, they trade this for the inability to apply word-level token associations, which limits their use in semantically rich areas, prevents some transfer learning approaches (e.g., cross-lingual pretrained embeddings), and reduces their interpretability. In this work, we propose new hybrid, linguistically grounded vocabulary definition strategies that keep both the advantages of subword vocabularies and the word-level associations, enabling neural networks to profit from the derived benefits. We test the proposed approaches on both morphologically rich and morphologically poor languages, showing that, for the former, translation quality on out-of-domain texts improves with respect to a strong subword baseline.
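As a toy illustration of the hybrid idea (not the paper's linguistically grounded strategies, which the abstract only sketches), a vocabulary can keep frequent words whole, preserving word-level associations, and fall back to subword pieces elsewhere. The frequency threshold, the WordPiece-style '##' continuation marker, and the greedy longest-match segmentation are all assumptions.

```python
from collections import Counter

def build_word_vocab(corpus, word_min_freq=100):
    # Keep as word-level tokens only the sufficiently frequent words.
    freq = Counter(w for line in corpus for w in line.split())
    return {w for w, c in freq.items() if c >= word_min_freq}

def segment(word, word_vocab, subword_vocab):
    """Keep frequent words whole; otherwise greedy longest-match subwords."""
    if word in word_vocab:
        return [word]
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):              # longest match first
            piece = word[i:j] if i == 0 else '##' + word[i:j]
            if piece in subword_vocab:
                pieces.append(piece); i = j; break
        else:
            pieces.append('<unk>'); i += 1             # no piece matched
    return pieces

subwords = {'trans', '##lat', '##ion', '##s'}
print(segment('translations', {'the', 'of'}, subwords))
# -> ['trans', '##lat', '##ion', '##s']
```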
The quality of a dataset used for evaluating data linking methods, techniques, and tools depends on the availability of a set of mappings, called a reference alignment, that is known to be correct. In particular, it is crucial that mappings effectively represent relations between pairs of entities that are indeed similar because they denote the same object. Since the reliability of mappings is decisive for a fair evaluation of automatic linking methods and tools, we call this property of mappings mapping fairness. In this article, we propose a crowd-based approach, called Crowd Quality (CQ), for assessing the quality of data linking datasets by measuring the fairness of the mappings in the reference alignment. Moreover, we present a real experiment in which we evaluate two state-of-the-art data linking tools before and after the refinement of the reference alignment based on the CQ approach, in order to demonstrate the benefits deriving from the crowd assessment of mapping fairness.
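A minimal sketch of how such a crowd assessment could work (the aggregation rule, threshold, and example mappings are assumptions, not the authors' CQ measures): each mapping's fairness is estimated as the fraction of crowd workers who judge that the two entities denote the same object, and poorly supported mappings are dropped from the reference alignment.

```python
def fairness_scores(votes):
    """votes: {(entity1, entity2): [True/False judgments from workers]}"""
    return {pair: sum(js) / len(js) for pair, js in votes.items()}

def refine_alignment(votes, threshold=0.7):
    # Keep only mappings whose crowd-estimated fairness clears the threshold.
    return {pair for pair, s in fairness_scores(votes).items() if s >= threshold}

votes = {('dbpedia:Rome', 'geo:Roma'): [True, True, True, False],
         ('dbpedia:Paris', 'geo:Paris_TX'): [False, False, True]}
print(refine_alignment(votes))   # keeps only the well-supported mapping
```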
Quantum set theory (QST) and topos quantum theory (TQT) are two long running projects in the mathematical foundations of quantum mechanics (QM) that share a great deal of conceptual and technical affinity. Most pertinently, both approaches attempt to resolve some of the conceptual difficulties surrounding QM by reformulating parts of the theory inside of nonclassical mathematical universes, albeit with very different internal logics. We call such mathematical universes, together with those mathematical and logical structures within them that are pertinent to the physical interpretation, ‘Q-worlds’. Here, we provide a unifying framework that allows us to (i) better understand the relationship between different Q-worlds, and (ii) define a general method for transferring concepts and results between TQT and QST, thereby significantly increasing the expressive power of both approaches. Along the way, we develop a novel connection to paraconsistent logic and introduce a new class of structures that have significant implications for recent work on paraconsistent set theory.
We show that Dependent Choice is a sufficient choice principle for developing the basic theory of proper forcing, and for deriving generic absoluteness for the Chang model in the presence of large cardinals, even with respect to $\mathsf{DC}$-preserving symmetric submodels of forcing extensions. Hence, $\mathsf{ZF}+\mathsf{DC}$ not only provides the right framework for developing classical analysis, but is also the right base theory over which to safeguard truth in analysis from the independence phenomenon in the presence of large cardinals. We also investigate some basic consequences of the Proper Forcing Axiom in $\mathsf{ZF}$, and formulate a natural question about the generic absoluteness of the Proper Forcing Axiom in $\mathsf{ZF}+\mathsf{DC}$ and $\mathsf{ZFC}$. Our results confirm $\mathsf{ZF}+\mathsf{DC}$ as a natural foundation for a significant portion of “classical mathematics” and provide support to the idea of this theory being also a natural foundation for a large part of set theory.
Since 2013, federal research-funding agencies have been required to develop and implement broad data sharing policies. Yet agencies today continue to grapple with the mechanisms necessary to enable the sharing of a wide range of data types, from genomic and other -omics data to clinical and pharmacological data to survey and qualitative data. In 2016, the National Cancer Institute (NCI) launched the ambitious $1.8 billion Cancer Moonshot Program, which included a new Public Access and Data Sharing (PADS) Policy applicable to funding applications submitted on or after October 1, 2017. The PADS Policy encourages the immediate public release of published research results and data and requires all Cancer Moonshot grant applicants to submit a PADS plan describing how they will meet these goals. We reviewed the PADS plans submitted with approximately half of all funded Cancer Moonshot grant applications in fiscal year 2018, and found that a majority did not address one or more elements required by the PADS Policy. Many such plans made no reference to the PADS Policy at all, and several referenced obsolete or outdated National Institutes of Health (NIH) policies instead. We believe that these omissions arose from a combination of insufficient education and outreach by NCI concerning its PADS Policy, both to potential grant applicants and among NCI’s program staff and external grant reviewers. We recommend that other research funding agencies heed these findings as they develop and roll out new data sharing policies.
A set of graphs is called cospectral if their adjacency matrices have the same characteristic polynomial. In this paper we introduce a simple method for constructing infinite families of cospectral regular graphs. The construction is valid for special cases of a property introduced by Schwenk. For the case of cubic (3-regular) graphs, computational results are given which show that the construction generates a large proportion of the cubic graphs that are cospectral with another cubic graph.
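The defining property is easy to check numerically. Below is a small illustrative sketch (not the paper's construction) that compares characteristic polynomials via floating-point eigenvalues, using the classic smallest cospectral pair: the star K1,4 and the disjoint union of C4 with an isolated vertex.

```python
import numpy as np

def char_poly(adj):
    # Characteristic polynomial coefficients of the adjacency matrix,
    # computed from its eigenvalues (floating point, hence allclose below).
    return np.poly(np.asarray(adj, dtype=float))

def cospectral(adj1, adj2):
    return np.allclose(char_poly(adj1), char_poly(adj2))

star = np.zeros((5, 5))                  # K_{1,4}: vertex 0 joined to 1..4
star[0, 1:] = star[1:, 0] = 1
c4k1 = np.zeros((5, 5))                  # C_4 on vertices 0..3, vertex 4 isolated
for i in range(4):
    c4k1[i, (i + 1) % 4] = c4k1[(i + 1) % 4, i] = 1

print(cospectral(star, c4k1))            # True: both spectra are {2, 0, 0, 0, -2}
```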
Given graphs H1, H2, a graph G is (H1, H2)-Ramsey if, for every colouring of the edges of G with red and blue, there is a red copy of H1 or a blue copy of H2. In this paper we investigate Ramsey questions in the setting of randomly perturbed graphs. This is a random graph model introduced by Bohman, Frieze and Martin [8] in which one starts with a dense graph and then adds a given number of random edges to it. The study of Ramsey properties of randomly perturbed graphs was initiated by Krivelevich, Sudakov and Tetali [30] in 2006; they determined how many random edges must be added to a dense graph to ensure the resulting graph is with high probability (K3, Kt)-Ramsey (for t ≥ 3). They also raised the question of generalizing this result to pairs of graphs other than (K3, Kt). We make significant progress on this question, giving a precise solution in the case when H1 = Ks and H2 = Kt where s, t ≥ 5. Although we again show that one requires polynomially fewer edges than in the purely random graph, our result shows that the problem in this case is quite different to the (K3, Kt)-Ramsey question. Moreover, we give bounds for the corresponding (K4, Kt)-Ramsey question; together with a construction of Powierski [37] this resolves the (K4, K4)-Ramsey problem.
We also give a precise solution to the analogous question in the case when both H1 = Cs and H2 = Ct are cycles. Additionally we consider the corresponding multicolour problem. Our final result gives another generalization of the Krivelevich, Sudakov and Tetali [30] result. Specifically, we determine how many random edges must be added to a dense graph to ensure the resulting graph is with high probability (Cs, Kt)-Ramsey (for odd s ≥ 5 and t ≥ 4).
To prove our results we employ a mixture of approaches: the container method, the regularity method, and dependent random choice. We also apply robust extensions of recent asymmetric random Ramsey results.
A diregular bipartite tournament is a balanced complete bipartite graph whose edges are oriented so that every vertex has the same in- and out-degree. In 1981 Jackson showed that a diregular bipartite tournament contains a Hamilton cycle, and conjectured that in fact its edge set can be partitioned into Hamilton cycles. We prove an approximate version of this conjecture: for every ε > 0 there exists n0 such that every diregular bipartite tournament on 2n ≥ n0 vertices contains a collection of (1/2–ε)n cycles of length at least (2–ε)n. Increasing the degree by a small proportion allows us to prove the existence of many Hamilton cycles: for every c > 1/2 and ε > 0 there exists n0 such that every cn-regular bipartite digraph on 2n ≥ n0 vertices contains (1−ε)cn edge-disjoint Hamilton cycles.
We prove Bogolyubov–Ruzsa-type results for finite subsets of groups with small tripling, $|A^3| \le O(|A|)$, or small alternation, $|AA^{-1}A| \le O(|A|)$. As applications, we obtain a qualitative analogue of Bogolyubov’s lemma for dense sets in arbitrary finite groups, as well as a quantitative arithmetic regularity lemma for sets of bounded VC-dimension in finite groups of bounded exponent. The latter result generalizes the abelian case, due to Alon, Fox and Zhao, and gives a quantitative version of previous work of the author, Pillay and Terry.
To acquire a broad view in an unknown environment, we proposed a control strategy, based on the Bézier curve, for a snake robot raising its head. We then developed an improved discretization method to accommodate backbone curves with more complex shapes. Furthermore, to determine when the improved discretization method should be used, we introduced, for the first time, the energy of a framed space curve to estimate the shape complexity of the backbone curve. Finally, based on degree elevation of the Bézier curve, we proposed an obstacle-avoidance strategy for the head-raising motion and validated it through simulation.
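For readers unfamiliar with the two Bézier operations involved, here is a minimal sketch (the function names and the planar example are assumptions, not the paper's formulation): a de Casteljau evaluation of a Bézier backbone curve, and degree elevation, which adds a control point without changing the curve's shape.

```python
import numpy as np

def bezier(control_pts, t):
    """de Casteljau evaluation of a Bezier curve at parameter t in [0, 1]."""
    pts = np.asarray(control_pts, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

def elevate_degree(control_pts):
    """Degree elevation: n+1 -> n+2 control points, identical curve."""
    P = np.asarray(control_pts, dtype=float)
    n = len(P) - 1
    Q = [P[0]]
    for i in range(1, n + 1):
        Q.append(i / (n + 1) * P[i - 1] + (1 - i / (n + 1)) * P[i])
    Q.append(P[-1])
    return np.array(Q)

pts = [(0, 0), (1, 2), (3, 2), (4, 0)]           # a cubic backbone in the plane
assert np.allclose(bezier(pts, 0.3), bezier(elevate_degree(pts), 0.3))
```

The extra control point gained by degree elevation is what provides the additional freedom exploited for obstacle avoidance.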
Automatic detection of negated content is often a prerequisite for information extraction systems in various domains; it is especially important in the biomedical domain, where negation plays a prominent role. This work makes two main contributions. First, we work with languages that have been poorly addressed up to now: Brazilian Portuguese and French. We developed new corpora for these two languages, manually annotated with negation cues and their scopes. Second, we propose automatic methods, based on supervised machine learning, for detecting negation cues and their scopes. The methods prove robust across both languages (Brazilian Portuguese and French) and across domains (general and biomedical language). The approach is also validated on English data from the state of the art: it yields very good results and outperforms existing approaches. In addition, the application is accessible and usable online. We expect that these contributions (new annotated corpora, an application accessible online, and cross-domain robustness) will improve the reproducibility of the results and the robustness of NLP applications.
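A minimal sketch of the supervised setup for cue detection (the BIO-style labels, feature template, and classifier are assumptions; the actual systems rely on richer features and full scope resolution):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    # A deliberately small feature template: the token and its neighbours.
    return {'word': tokens[i].lower(),
            'prev': tokens[i - 1].lower() if i > 0 else '<s>',
            'next': tokens[i + 1].lower() if i + 1 < len(tokens) else '</s>'}

# One toy French sentence with its negation cues marked ("ne ... pas").
train = [(['le', 'patient', 'ne', 'presente', 'pas', 'de', 'fievre'],
          ['O', 'O', 'B-CUE', 'O', 'B-CUE', 'O', 'O'])]
X = [token_features(toks, i) for toks, _ in train for i in range(len(toks))]
y = [tag for _, tags in train for tag in tags]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([token_features(['ne', 'pas'], 0)]))  # classify 'ne' in context
```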
Monotonic surfaces spanning finite regions of ℤd arise in many contexts, including DNA-based self-assembly, card-shuffling and lozenge tilings. One method that has been used to uniformly generate these surfaces is a Markov chain that iteratively adds or removes a single cube below the surface during a step. We consider a biased version of the chain, where we are more likely to add a cube than to remove it, thereby favouring surfaces that are ‘higher’ or have more cubes below them. We prove that the chain is rapidly mixing for any uniform bias in ℤ2 and for bias λ > d in ℤd when d > 2. In ℤ2 we match the optimal mixing time achieved by Benjamini, Berger, Hoffman and Mossel in the context of biased card shuffling [2], but using much simpler arguments. The proofs use a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile. We show that the chain continues to be rapidly mixing if the biases are close to uniform, but that the chain can converge exponentially slowly in the general setting.
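A toy sketch of the biased chain in ℤ2 (the column-heights encoding of a monotonic surface and the parameter names are assumptions): a surface in an a × b box is stored as non-increasing column heights; each step picks a column and, with probability λ/(1+λ), tries to add a cube, otherwise tries to remove one, rejecting moves that would break monotonicity.

```python
import random

def step(heights, box_height, lam):
    i = random.randrange(len(heights))
    if random.random() < lam / (1 + lam):          # biased towards adding a cube
        can_add = heights[i] < box_height and (i == 0 or heights[i] < heights[i - 1])
        if can_add:
            heights[i] += 1
    else:                                          # otherwise try to remove one
        can_remove = heights[i] > 0 and (i == len(heights) - 1
                                         or heights[i] > heights[i + 1])
        if can_remove:
            heights[i] -= 1

heights = [0] * 10                                 # empty surface in a 10 x 10 box
for _ in range(100_000):
    step(heights, 10, lam=2.0)                     # bias lambda = 2 favours adding
print(heights)                                     # drifts towards the full surface
```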
Motor adaptation is a process by which the brain gradually reduces error induced by a predictable change in the environment, e.g., pointing while wearing prism glasses. It is thought to occur via largely implicit processes, though explicit strategies are also thought to contribute. Research suggests a role of the cerebellum in the implicit aspects of motor adaptation. Using non-invasive brain stimulation, we sought to investigate the involvement of the cerebellum in implicit motor adaptation in healthy participants. Inhibition of the cerebellum was attained through repetitive transcranial magnetic stimulation (rTMS), after which participants performed a visuomotor-rotation task while using an explicit strategy. Adaptation and aftereffects in the TMS group showed no difference in behaviour compared with a sham stimulation group; therefore, this study did not provide any further evidence for a specific role of the cerebellum in implicit motor adaptation. However, our behavioural findings replicate those of the seminal study by Mazzoni and Krakauer (2006).
The chase procedure for existential rules is an indispensable tool for several database applications, where its termination guarantees the decidability of the associated reasoning tasks. Most previous studies have focused on the skolem chase variant and its termination analysis. The restricted chase variant is known to be a more powerful tool for termination analysis when the database is given, but all-instance termination presents a challenge, since the critical-database technique and similar approaches do not work. In this paper, we develop a novel technique to characterize the activeness of all possible cycles of a certain length for the restricted chase, which leads to the formulation of a framework of parameterized classes of the finite restricted chase, called $k$-$\mathsf{safe}(\Phi)$ rule sets. This approach applies to any class of the finite skolem chase identified with an acyclicity condition. More generally, we show that the approach can be applied to the hierarchy of bounded rule sets previously defined only for the skolem chase. Experiments on a collection of ontologies from the web show the applicability of the proposed methods to real-world ontologies.
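To make the skolem/restricted contrast concrete, here is a toy restricted-chase engine (an illustrative sketch under simplifying assumptions; it is not the paper's $k$-$\mathsf{safe}(\Phi)$ machinery, and the rule set and representation are ours). On the rule set below, the skolem chase runs forever, inventing ever-deeper skolem terms, while the restricted chase stops because each trigger's head is already satisfied.

```python
from itertools import count

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def homs(atoms, instance, binding=None):
    """Enumerate homomorphisms of the conjunction `atoms` into `instance`."""
    binding = binding or {}
    if not atoms:
        yield dict(binding)
        return
    (pred, *args), rest = atoms[0], atoms[1:]
    for fact in instance:
        if fact[0] != pred or len(fact) != len(atoms[0]):
            continue
        b, ok = dict(binding), True
        for a, v in zip(args, fact[1:]):
            if is_var(a):
                if a in b and b[a] != v:
                    ok = False; break
                b[a] = v
            elif a != v:
                ok = False; break
        if ok:
            yield from homs(rest, instance, b)

def restricted_chase(instance, rules, max_rounds=100):
    instance, fresh = set(instance), count()
    for _ in range(max_rounds):
        fired = False
        for body, head in rules:
            for h in list(homs(body, instance)):
                bound_head = [tuple([p] + [h.get(a, a) for a in args])
                              for (p, *args) in head]
                # Restricted chase: skip the trigger if the (partially bound)
                # head already maps into the current instance.
                if any(True for _ in homs(bound_head, instance)):
                    continue
                # Otherwise fire: invent fresh labelled nulls for existentials.
                nulls = {a: f'_n{next(fresh)}' for (p, *args) in bound_head
                         for a in args if is_var(a)}
                instance |= {tuple([p] + [nulls.get(a, a) for a in args])
                             for (p, *args) in bound_head}
                fired = True
        if not fired:
            return instance        # chase terminated
    return None                    # gave up: possibly non-terminating

# e(x,y) -> ∃z e(y,z) alone is non-terminating under the skolem chase,
# but adding symmetry e(x,y) -> e(y,x) lets the restricted chase close off:
rules = [([('e', '?x', '?y')], [('e', '?y', '?x')]),
         ([('e', '?x', '?y')], [('e', '?y', '?z')])]
print(restricted_chase({('e', 'a', 'b')}, rules))   # {e(a,b), e(b,a)}
```

As in the general theory, the order in which triggers fire can affect whether, and how quickly, the restricted chase terminates; the engine above simply processes rules in list order.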