In many cases, the use of damping technologies is the only option to reduce undesired vibrations. Despite the various damping techniques available on the market, designing a precise damping behaviour still requires extensive experimental testing and engineering experience. This is also the case for particle damping. However, for lightweight structures, technologies such as particle damping provide an opportunity to improve the structural dynamic behaviour without a large mass gain. To address this conflict, a hybrid numerical and experimental design approach is presented based on frequency-based substructuring (FBS). With this technique, experimental data can be used for design optimization, and detailed modelling of the nonlinear particle damping system can be avoided. Moreover, based on the FBS, an approach to optimize damping and weight is proposed. All results are compared to experiments, and a subsequent discussion shows that the predictions for particle damping with FBS are accurate for defined operating points from which realistic designs can be derived. Generally, it is shown that methodical design approaches may strongly improve not only product development processes but also structural mechanical design.
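As general background for readers unfamiliar with FBS (taken from the standard substructuring literature, not from this abstract), the Lagrange-multiplier form of frequency-based substructuring expresses the coupled admittance in terms of the block-diagonal uncoupled admittance matrix $Y(\omega)$ and a signed Boolean interface-compatibility matrix $B$:

```latex
% LM-FBS coupling relation (standard form from the FBS literature):
% Y is the block-diagonal admittance of the uncoupled substructures,
% B enforces displacement compatibility at the interface DOFs.
Y^{\mathrm{coupled}}(\omega)
  = Y(\omega)
  - Y(\omega)\, B^{\mathsf{T}}
    \left( B\, Y(\omega)\, B^{\mathsf{T}} \right)^{-1}
    B\, Y(\omega)
```

Because the coupling acts directly on measured admittances, experimentally obtained frequency response functions of the particle damper can enter $Y(\omega)$ without an explicit model of the nonlinear damper.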
Set-based design (SBD), sometimes referred to as set-based concurrent engineering (SBCE), has emerged as an important component of lean product development (LPD) with all researchers describing it as a core enabler of LPD. Research has explored the principles underlying LPD and SBCE, but methodologies for the practical implementation need to be better understood. A review of SBD is performed in this article in order to discover and analyse the key aspects to consider when developing a model and methodology to transition to SBCE. The publications are classified according to a new framework, which allows us to map the topology of the relevant SBD literature from two perspectives: the research paradigms and the coverage of the generic creative design process (Formulation–Synthesis–Analysis–Evaluation–Documentation–Reformulation). It is found that SBD has a relatively low theoretical development, but there is a steady increase in the diversity of contributions. The literature abounds with methods, guidelines and tools to implement SBCE, but they rarely rely on a model that is in the continuum of a design process model, product model or knowledge-based model with the aim of federating the three Ps (People–Product–Process) towards SBCE and LPD in traditional industrial contexts.
BPIFA2 (PSP, SPLUNC2, C20orf70) is a major salivary protein of uncertain physiological function. BPIFA2 is downregulated in salivary glands of spontaneously hypertensive rats, pointing to a role in blood pressure regulation. This study used a novel Bpifa2 knockout mouse model to test the role of BPIFA2 in sodium preference and blood pressure. Blood pressure did not differ between wild-type male and female mice but was significantly lower in male knockout mice compared to male wild-type mice. In contrast, blood pressure was increased in female knockout mice compared to female wild-type mice. Female wild-type mice showed a significant preference for 0.9% saline compared to male mice. This difference was reduced in the knockout mice. BPIFA2 is an LPS-binding protein but it remains to be determined if the reported effects are mediated by the LPS-binding activity of BPIFA2.
We develop a model of strategic network formation of collaborations to analyze the consequences of an understudied but consequential form of heterogeneity: differences between actors in the form of their production functions. We also address how this interacts with resource heterogeneity, as a way to measure the impact actors have as potential partners on a collaborative project. Some actors (e.g., start-up firms) may exhibit increasing returns to their investment into collaboration projects, while others (e.g., established firms) may face decreasing returns. Our model provides insights into how actor heterogeneity can help explain commonly observed collaboration patterns. We show that if there is a direct relation between increasing returns and resources, start-ups exclude mature firms and networks become segregated by types of production function, portraying dominant group architectures. On the other hand, if there is an inverse relation between increasing returns and resources, networks portray core-periphery architectures, where the mature firms form a core and low-resource start-ups link to them.
The wearable product market is growing rapidly and is full of products with similar functions and features. Engaging users at an emotional level may be the key to differentiating a product and encouraging long-term use. While researchers have proposed various design approaches to realize design qualities for wearable devices, emotional needs are often extracted by analysis-heavy methods and disconnected in the design process. To bridge this gap, we developed a new approach that uses a two-axis interactive collage tool for users to compare and evaluate wearable products with targeted emotion-related descriptive words. This approach enabled designers to explore how users perceive products and identify types of emotions that were associated with users’ preferences for and perception of the product’s form and visible characteristics. The example study demonstrated this approach by exploring the relationships between product characteristics and design goals, such as user comfort, user delight, and perceived product usefulness. The results showed that products that resemble clothing were perceived as more delightful and comfortable. The approach can be further used to explore other design concepts or goals.
Differential categories axiomatize the basics of differentiation and provide categorical models of differential linear logic. A differential category is said to have antiderivatives if a certain canonical natural transformation, which all differential categories have, is a natural isomorphism. Differential categories with antiderivatives come equipped with a canonical integration operator such that generalizations of the Fundamental Theorems of Calculus hold. In this paper, we show that Blute, Ehrhard, and Tasson's differential category of convenient vector spaces has antiderivatives. To help prove this result, we show that a differential linear category – which is a differential category with a monoidal coalgebra modality – has antiderivatives if and only if one can integrate over the monoidal unit in such a way that the Fundamental Theorems of Calculus hold. We also show that generalizations of the relational model (which are biproduct completions of complete semirings) are also differential linear categories with antiderivatives.
We introduce the notion of implicative algebra, a simple algebraic structure intended to factorize the model-theoretic constructions underlying forcing and realizability (both in intuitionistic and classical logic). The salient feature of this structure is that its elements can be seen both as truth values and as (generalized) realizers, thus blurring the frontier between proofs and types. We show that each implicative algebra induces a (Set-based) tripos, using a construction that is reminiscent of the construction of a realizability tripos from a partial combinatory algebra. Relating this construction with the corresponding constructions in forcing and realizability, we conclude that the class of implicative triposes encompasses all forcing triposes (both intuitionistic and classical), all classical realizability triposes (in the sense of Krivine), and all intuitionistic realizability triposes built from partial combinatory algebras.
This paper proposes a novel laser beam tracking mechanism for a mobile target robot used in shooting ranges. Compared with traditional tracking mechanisms and modules, the proposed laser beam tracking mechanism is more flexible and lower in cost. The mechanical design and working principle of the tracking module are illustrated, and the complete control system of the mobile target robot is introduced in detail. The tracking control includes two main steps: localizing the mobile target robot with regard to the position of the laser beam, and tracking the laser beam with a linear quadratic regulator (LQR). First, the state function of the control system is built for this tracking system; second, the control law is deduced from the discretized state function; lastly, the stability of the control method is proved by Lyapunov theory. The experimental results demonstrate that the Hue, Saturation, Value (HSV) feature-extraction method is robust and suitable for localization in the laser beam tracking control. It is verified through experiments that the LQR method outperforms conventional Proportional Derivative control in terms of convergence time, lateral error control, and distance error control.
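The abstract does not give the robot's plant model, so the discrete LQR step can only be illustrated on a hypothetical system; the sketch below uses an assumed discretized double-integrator lateral-error model (all matrices and weights are placeholders, not the paper's actual plant) and solves the discrete Riccati equation by fixed-point iteration.

```python
import numpy as np

# Hypothetical discretized lateral-error model (illustrative, not the paper's
# actual plant): state x = [lateral error, lateral velocity], control u.
dt = 0.05
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])

Q = np.diag([10.0, 1.0])   # penalize lateral error most heavily
R = np.array([[0.1]])      # control-effort penalty

def dlqr(A, B, Q, R, iters=500):
    """Solve the discrete-time algebraic Riccati equation by value iteration
    and return the LQR gain K, with control law u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

K, P = dlqr(A, B, Q, R)

# Closed-loop simulation from an initial lateral offset of 1.
x = np.array([[1.0], [0.0]])
for _ in range(200):
    u = -K @ x               # LQR feedback
    x = A @ x + B @ u
print(float(np.linalg.norm(x)) < 1e-2)  # → True: the lateral error has decayed
```

The same gain computation applies to whatever discretized state function the tracking system actually uses; only `A`, `B`, `Q`, and `R` change.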
Computational modeling of the human sequential design process and successful prediction of future design decisions are fundamental to design knowledge extraction, transfer, and the development of artificial design agents. However, it is often difficult to obtain designer-related attributes (static data) in design practices, and the research based on combining static and dynamic data (design action sequences) in engineering design is still underexplored. This paper presents an approach that combines both static and dynamic data for human design decision prediction using two different methods. The first method directly combines the sequential design actions with static data in a recurrent neural network (RNN) model, while the second method integrates a feed-forward neural network that handles static data separately, yet in parallel with RNN. This study contributes to the field from three aspects: (a) we developed a method of utilizing designers’ cluster information as a surrogate static feature to combine with a design action sequence in order to tackle the challenge of obtaining designer-related attributes; (b) we devised a method that integrates the function–behavior–structure design process model with the one-hot vectorization in RNN to transform design action data to design process stages where the insights into design thinking can be drawn; (c) to the best of our knowledge, it is the first time that two methods of combining static and dynamic data in RNN are compared, which provides new knowledge about the utility of different combination methods in studying sequential design decisions. The approach is demonstrated in two case studies on solar energy system design. The results indicate that with appropriate kernel models, the RNN with both static and dynamic data outperforms traditional models that only rely on design action sequences, thereby better supporting design research where static features, such as human characteristics, often play an important role.
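The two combination strategies described above can be sketched with a minimal NumPy RNN; the dimensions, random weights, and data below are illustrative placeholders, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d_act, d_static, d_h = 12, 8, 3, 16   # seq. length, action dim, static dim, hidden dim
actions = rng.normal(size=(T, d_act))    # stand-in for one-hot design action vectors
static = rng.normal(size=(d_static,))    # stand-in for designer cluster features

def rnn_step(h, x, Wx, Wh, b):
    """One step of a vanilla (Elman) RNN."""
    return np.tanh(Wx @ x + Wh @ h + b)

# Method 1: append the static vector to every timestep input of the RNN.
Wx1 = rng.normal(size=(d_h, d_act + d_static)) * 0.1
Wh1 = rng.normal(size=(d_h, d_h)) * 0.1
h = np.zeros(d_h)
for t in range(T):
    h = rnn_step(h, np.concatenate([actions[t], static]), Wx1, Wh1, np.zeros(d_h))
method1_features = h                                    # shape (d_h,)

# Method 2: a separate feed-forward branch handles the static data, in
# parallel with the RNN; the two representations are merged afterwards.
Wx2 = rng.normal(size=(d_h, d_act)) * 0.1
Wh2 = rng.normal(size=(d_h, d_h)) * 0.1
Wf = rng.normal(size=(d_h, d_static)) * 0.1             # feed-forward branch
h = np.zeros(d_h)
for t in range(T):
    h = rnn_step(h, actions[t], Wx2, Wh2, np.zeros(d_h))
method2_features = np.concatenate([h, np.tanh(Wf @ static)])  # shape (2 * d_h,)

print(method1_features.shape, method2_features.shape)   # → (16,) (32,)
```

In either method, the resulting feature vector would feed a softmax layer predicting the next design action; the sketch only shows how the static data enters the network.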
This paper presents a framework for studying design thinking. Three paradigmatic approaches are described to measure design cognitive processes: design cognition, design physiology and design neurocognition. Specific tools and methods serve each paradigmatic approach. Design cognition is explored through protocol analysis, black-box experiments, surveys and interviews. Design physiology is measured with eye tracking, electrodermal activity, heart rate and emotion tracking. Design neurocognition is measured using electroencephalography, functional near infrared spectroscopy and functional magnetic resonance imaging. Illustrative examples are presented to describe the types of results each method provides about the characteristics of design thinking, such as design patterns, design reasoning, design creativity, design collaboration, the co-evolution of the problem solution space, or design analysis and evaluation. The triangulation of results from the three paradigmatic approaches to studying design thinking provides a synergistic foundation for the understanding of design cognitive processes. Results from such studies generate a source of feedback to designers, design educators and researchers in design science. New models, new tools and new research questions emerge from the integrated approach proposed and lay down future challenges in studying design thinking.
Empathic design highlights the relevance of understanding users and their circumstances in order to obtain good design outcomes. However, theory-based quantitative methods, which can be used to test user understanding, are hard to find in the design science literature. Here, we introduce a validated method used in social psychological research – the empathic accuracy method – into design to explore how well two designers perform in a design task and whether the designers’ empathic accuracy performance and the physiological synchrony between the two designers and a group of users can predict the designers’ success in two design tasks. The designers could correctly identify approximately 50% of the users’ reported mental content. We did not find a significant correlation between the designers’ empathic accuracy and their (1) performance in design tasks and (2) physiological synchrony with users. Nevertheless, the empathic accuracy method is promising in its attempts to quantify the effect of empathy in design.
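The two quantities the study correlates can be sketched as follows; the matching rule, data, and signals are hypothetical simplifications, not the study's actual coding scheme or physiological measures.

```python
import numpy as np

def empathic_accuracy(inferred, reported):
    """Fraction of the user's reported mental-content items that the designer
    inferred correctly (a simplified stand-in for the coded EA measure)."""
    matches = sum(i == r for i, r in zip(inferred, reported))
    return matches / len(reported)

def physiological_synchrony(signal_a, signal_b):
    """Pearson correlation between two physiological time series."""
    return float(np.corrcoef(signal_a, signal_b)[0, 1])

# Hypothetical example data.
reported = ["frustrated", "curious", "confused", "pleased"]
inferred = ["frustrated", "bored", "confused", "pleased"]
print(empathic_accuracy(inferred, reported))  # → 0.75

rng = np.random.default_rng(1)
sig_a = np.sin(np.linspace(0, 6, 200))        # e.g. a skin-conductance trace
sig_b = sig_a + 0.1 * rng.normal(size=200)    # a closely synchronized partner
print(physiological_synchrony(sig_a, sig_b) > 0.9)  # → True
```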
Is knowledge definable as justified true belief (“JTB”)? We argue that one can legitimately answer positively or negatively, depending on whether or not one’s true belief is justified by what we call adequate reasons. To facilitate our argument we introduce a simple propositional logic of reason-based belief, and give an axiomatic characterization of the notion of adequacy for reasons. We show that this logic is sufficiently flexible to accommodate various useful features, including quantification over reasons. We use our framework to contrast two notions of JTB: one internalist, the other externalist. We argue that Gettier cases essentially challenge the internalist notion but not the externalist one. Our approach commits us to a form of infallibilism about knowledge, but it also leaves us with a puzzle, namely whether knowledge involves the possession of only adequate reasons, or leaves room for some inadequate reasons. We favor the latter position, which reflects a milder and more realistic version of infallibilism.
We show that the replacement rule of the sequent calculi ${\bf G3[mic]}^= $ in [8] can be replaced by the simpler rule in which one of the principal formulae is not repeated in the premiss.
Neural networks applied to machine translation need a finite vocabulary to express textual information as a sequence of discrete tokens. The currently dominant subword vocabularies exploit statistically discovered common parts of words to achieve the flexibility of character-based vocabularies without delegating the whole learning of word formation to the neural network. However, they trade this for the inability to apply word-level token associations, which limits their use in semantically rich areas, prevents some transfer learning approaches (e.g. cross-lingual pretrained embeddings), and reduces their interpretability. In this work, we propose new hybrid linguistically grounded vocabulary definition strategies that keep both the advantages of subword vocabularies and the word-level associations, enabling neural networks to profit from the derived benefits. We test the proposed approaches in both morphologically rich and morphologically poor languages, showing that, for the former, the quality of out-of-domain translation is improved with respect to a strong subword baseline.
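The core idea of a hybrid vocabulary can be illustrated with a toy tokenizer: frequent words keep word-level tokens (preserving word-level associations), while everything else falls back to subword pieces. The frequency threshold and the character-level fallback below are assumptions for illustration, not the paper's linguistically grounded strategies.

```python
from collections import Counter

def build_word_vocab(corpus, word_budget=3):
    """Keep the most frequent words as whole, word-level tokens."""
    counts = Counter(w for sent in corpus for w in sent.split())
    return {w for w, _ in counts.most_common(word_budget)}

def tokenize(sentence, word_vocab):
    """Hybrid tokenization: word-level token if in the vocabulary,
    otherwise a character-level subword fallback (BPE-style '##' marks
    word-internal continuation pieces)."""
    tokens = []
    for w in sentence.split():
        if w in word_vocab:
            tokens.append(w)
        else:
            tokens.extend([w[0]] + ["##" + c for c in w[1:]])
    return tokens

corpus = ["the cat sat", "the dog sat", "the cat ran"]
vocab = build_word_vocab(corpus)   # {'the', 'cat', 'sat'}
print(tokenize("the cat jumped", vocab))
# → ['the', 'cat', 'j', '##u', '##m', '##p', '##e', '##d']
```

Word-level tokens such as `the` and `cat` can be mapped directly onto cross-lingual pretrained word embeddings, while the rare word `jumped` is still representable through its fallback pieces.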
The quality of a dataset used for evaluating data linking methods, techniques, and tools depends on the availability of a set of mappings, called a reference alignment, that is known to be correct. In particular, it is crucial that mappings effectively represent relations between pairs of entities that are indeed similar because they denote the same object. Since the reliability of mappings is decisive for a fair evaluation of automatic linking methods and tools, we call this property of mappings mapping fairness. In this article, we propose a crowd-based approach, called Crowd Quality (CQ), for assessing the quality of data linking datasets by measuring the fairness of the mappings in the reference alignment. Moreover, we present a real experiment, in which we evaluate two state-of-the-art data linking tools before and after the refinement of the reference alignment based on the CQ approach, in order to present the benefits deriving from the crowd assessment of mapping fairness.
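A minimal sketch of crowd-based mapping assessment in this spirit: each crowd worker votes on whether a mapped pair denotes the same object, and a mapping stays in the refined reference alignment when agreement meets a threshold. The threshold, vote format, and entity identifiers are illustrative assumptions, not the paper's actual CQ metric.

```python
def assess_mappings(crowd_votes, threshold=0.7):
    """Keep a mapping in the refined alignment when the fraction of workers
    who judged the pair as denoting the same object meets the threshold."""
    refined = {}
    for mapping, votes in crowd_votes.items():
        agreement = sum(votes) / len(votes)
        refined[mapping] = agreement >= threshold
    return refined

# Hypothetical crowd judgments (1 = same object, 0 = different objects).
votes = {
    ("dbpedia:Rome", "geonames:3169070"): [1, 1, 1, 1, 0],   # 0.8 agreement
    ("dbpedia:Paris", "geonames:2988507"): [1, 0, 0, 1, 0],  # 0.4 agreement
}
result = assess_mappings(votes)
print(result)  # the first mapping is kept, the second is flagged for removal
```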
Quantum set theory (QST) and topos quantum theory (TQT) are two long running projects in the mathematical foundations of quantum mechanics (QM) that share a great deal of conceptual and technical affinity. Most pertinently, both approaches attempt to resolve some of the conceptual difficulties surrounding QM by reformulating parts of the theory inside of nonclassical mathematical universes, albeit with very different internal logics. We call such mathematical universes, together with those mathematical and logical structures within them that are pertinent to the physical interpretation, ‘Q-worlds’. Here, we provide a unifying framework that allows us to (i) better understand the relationship between different Q-worlds, and (ii) define a general method for transferring concepts and results between TQT and QST, thereby significantly increasing the expressive power of both approaches. Along the way, we develop a novel connection to paraconsistent logic and introduce a new class of structures that have significant implications for recent work on paraconsistent set theory.
We show that Dependent Choice is a sufficient choice principle for developing the basic theory of proper forcing, and for deriving generic absoluteness for the Chang model in the presence of large cardinals, even with respect to $\mathsf{DC}$-preserving symmetric submodels of forcing extensions. Hence, $\mathsf{ZF}+\mathsf{DC}$ not only provides the right framework for developing classical analysis, but is also the right base theory over which to safeguard truth in analysis from the independence phenomenon in the presence of large cardinals. We also investigate some basic consequences of the Proper Forcing Axiom in $\mathsf{ZF}$, and formulate a natural question about the generic absoluteness of the Proper Forcing Axiom in $\mathsf{ZF}+\mathsf{DC}$ and $\mathsf{ZFC}$. Our results confirm $\mathsf{ZF}+\mathsf{DC}$ as a natural foundation for a significant portion of “classical mathematics” and provide support to the idea of this theory being also a natural foundation for a large part of set theory.