A multistudy approach is presented that allows design thinking for complex systems to be studied by triangulating causal findings from controlled lab experiments with coded data from more complex products. A case study illustration of this approach is provided. During the conceptual design of engineering systems, designers face many cognitive challenges, including design fixation, errors in their mental models, and the sunk cost effect. These challenges need to be mitigated for the generation of effective ideas. Understanding their effects in a realistic, complex engineering system is especially difficult because many factors influence the results, and studying the design of such systems in a controlled environment is extremely challenging because of their scale and complexity and the time needed to design them. Considering these challenges, a mixed-method approach is presented for studying the design thinking behind complex engineering systems. This approach includes a controlled experiment with a simple system and a qualitative cognitive-artifacts study on more complex engineering systems, followed by triangulation of the results. The triangulated results provide more generalizable information about complex system design thinking. This method combines the advantages of quantitative and qualitative study methods, making them more powerful for studying complex engineering systems. The proposed method is further demonstrated with an illustrative study on the cognitive effects of physical models during the design of engineering systems.
This paper presents a robust reliability analysis method for systems of multimodular redundant (MMR) controllers using the method of partitioning and parallel processing of a Markov chain (PPMC). A Markov chain is formulated to represent the N distinct states of the MMR controllers. Such a Markov chain has N² directed edges, and each edge corresponds to a transition probability between a pair of start and end states. Because N can easily become very large, the system reliability analysis may require substantial computational resources, such as CPU time and memory. With the PPMC, the Markov chain's transition probability matrix can be partitioned and reordered such that the system reliability can be evaluated using only the diagonal submatrices of the transition probability matrix. In addition, the calculations on the submatrices are independent of each other and can therefore be conducted in parallel to ensure efficiency. The simulation results show that, compared with the sequential method applied to an intact Markov chain, the proposed PPMC improves performance and yields acceptable accuracy for the reliability analysis of large-scale systems of MMR controllers.
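The flavor of the PPMC idea can be sketched as follows. This is a minimal illustration only: the partition of the state space, the block-retention reliability measure, and all numbers are hypothetical placeholders, not the authors' actual algorithm.

```python
import numpy as np
from multiprocessing import Pool


def block_reliability(block, steps=100):
    """Probability mass retained inside one diagonal block after `steps`
    transitions, starting from a uniform distribution over its states."""
    p = np.full(block.shape[0], 1.0 / block.shape[0])
    for _ in range(steps):
        p = p @ block          # mass leaving the block is treated as lost
    return p.sum()


def ppmc_style_reliability(P, groups, steps=100):
    """Illustrative PPMC-style evaluation (hypothetical): partition the
    transition matrix P by state `groups` and evaluate the diagonal
    submatrices independently, in parallel."""
    blocks = [P[np.ix_(g, g)] for g in groups]      # diagonal submatrices
    with Pool() as pool:
        return pool.starmap(block_reliability, [(b, steps) for b in blocks])


if __name__ == "__main__":
    # Toy 4-state chain partitioned into two groups of operational states.
    P = np.array([[0.90, 0.05, 0.03, 0.02],
                  [0.04, 0.92, 0.02, 0.02],
                  [0.01, 0.01, 0.95, 0.03],
                  [0.02, 0.02, 0.06, 0.90]])
    print(ppmc_style_reliability(P, groups=[[0, 1], [2, 3]], steps=50))
```

Because each block is processed by an independent worker, the per-block cost scales with the block size rather than with N, which is the source of the speed-up claimed for large chains.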
As a benefit of the modularization of complex systems, original equipment manufacturers (OEMs) can choose suppliers in a less constrained way when faced with new or evolving requirements. However, new suppliers usually add uncertainty to system development. Because suppliers are tightly integrated into the design process in modular design and therefore greatly influence the outcome of the OEM's products, the uncertainty and requirements satisfaction associated with the suppliers and their modules should be controlled starting from potential supplier identification. In addition, to better satisfy new requirements, potential supplier identification should be combined with architecture generation to enable new technology integration. In this paper, we propose the Architecture & Supplier Identification Tool, which generates all possible architectures and corresponding suppliers based on new requirements through matrix mapping and propagation. Using the Architecture & Supplier Identification Tool, the overall uncertainty and requirements satisfaction of the generated architectures can be estimated and controlled. The proposed method aims to provide decision support for the early design of complex systems, thereby helping OEMs form an integrated view of suppliers and system architectures in terms of requirements satisfaction and overall uncertainty.
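One way to picture the matrix mapping and propagation step is shown below. The requirement-to-module and module-to-supplier matrices and their entries are hypothetical; this is not the tool's actual data model, only a sketch of how mappings can be chained.

```python
import numpy as np

# Hypothetical binary mapping matrices (1 = "can satisfy / can provide").
req_to_module = np.array([[1, 0, 1],      # requirement 0 -> modules 0 and 2
                          [0, 1, 1]])     # requirement 1 -> modules 1 and 2
module_to_supplier = np.array([[1, 1, 0],   # module 0 -> suppliers A, B
                               [0, 1, 0],   # module 1 -> supplier B
                               [0, 0, 1]])  # module 2 -> supplier C

# Propagation: which suppliers can contribute to which requirements.
req_to_supplier = (req_to_module @ module_to_supplier) > 0
print(req_to_supplier.astype(int))
```

Enumerating the nonzero combinations of such chained mappings yields candidate architecture–supplier pairings, to which uncertainty and requirements-satisfaction estimates can then be attached.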
Adaptability can have many different definitions: reliability, robustness, survivability, and changeability (adaptability to requirements change). In this research, we focus entirely on the last type. We discuss two alternative approaches to requirements-change adaptability. The first is a valuation approach based on the utility and cost of design changes made in response to modified requirements. The valuation approach is theoretically sound because it is grounded in utility and decision theory, but it may be difficult to use in the real world. The second approach is based on examining product architecture characteristics that facilitate change, including modularity, hierarchy, interfaces, performance sensitivity, and design margins. This approach is heuristic in nature but more practical to use; if calibrated, it could serve as a surrogate for real adaptability. These measures were incorporated in a software tool for exploring alternative configurations of fractionated space satellite systems.
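A minimal sketch of how such architecture-based measures might be rolled up into a single heuristic score is given below. The metric names, normalizations, and equal weights are hypothetical placeholders; as the abstract notes, any such surrogate would need calibration before use.

```python
# Hypothetical per-architecture metrics, each normalized to [0, 1]
# (higher = more change-friendly).
metrics = {
    "modularity": 0.8,
    "hierarchy": 0.6,
    "interface_simplicity": 0.7,
    "performance_insensitivity": 0.5,
    "design_margin": 0.4,
}
weights = {name: 0.2 for name in metrics}  # equal weights as a placeholder

adaptability_score = sum(weights[k] * metrics[k] for k in metrics)
print(f"heuristic adaptability score: {adaptability_score:.2f}")
```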
We consider a two-dimensional reflecting random walk on the non-negative integer quadrant. It is assumed that this reflecting random walk has skip-free transitions. We are concerned with its time-reversed process assuming that the stationary distribution exists. In general, the time-reversed process may not be a reflecting random walk. In this paper, we derive necessary and sufficient conditions for the time-reversed process also to be a reflecting random walk. These conditions are different from but closely related to the product form of the stationary distribution.
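For reference, the standard construction of the time-reversed chain (a textbook fact rather than a result of this paper): if the stationary walk has transition probabilities p(x, y) and stationary distribution π, the reversed process has transition probabilities

```latex
\tilde{p}(x, y) = \frac{\pi(y)\, p(y, x)}{\pi(x)}, \qquad x, y \in \mathbb{Z}_{+}^{2}.
```

The question addressed in the paper is when this reversed kernel is again that of a skip-free reflecting random walk.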
Autonomous docking is a focus of research in the field of self-assembly robots. Navigation is a significant stage in the process of autonomous docking between two robotic modules; it determines the efficiency of docking and even the success or failure of the docking task. In most cases, it is too difficult to satisfy both the linear and angular displacement constraints simultaneously in a single dynamic numerical computation process. In the present paper, the navigation process is divided into two stages: first the angular displacement constraint is satisfied, and then the linear displacement condition is fulfilled. In this way, the constraints are loosened and the difficulty of the numerical computation is effectively reduced. This two-stage docking navigation model is the main contribution of the present work. By taking the non-holonomic nature of the navigation behavior into consideration, both kinematic and dynamic analyses are performed, and the voltages of the DC motors required by the two-stage docking navigation are obtained. Finally, docking navigation experiments are completed on a self-assembly modular robot named Sambot. It is verified that the present two-stage strategy is effective in controlling the docking navigation process.
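The two-stage idea can be illustrated with a purely kinematic sketch for a single module. The gains, tolerances, and update loop below are hypothetical and far simpler than the paper's dynamic analysis and motor-voltage computation; they only show "angular constraint first, then linear".

```python
import math


def two_stage_navigation(x, y, theta, x_t, y_t,
                         k_ang=1.5, k_lin=0.8, dt=0.05,
                         tol_ang=1e-3, tol_lin=1e-2):
    """Stage 1: rotate in place until the heading error toward the target
    (the angular displacement constraint) is small.
    Stage 2: drive forward to close the remaining linear displacement."""
    def heading_error():
        bearing = math.atan2(y_t - y, x_t - x)
        return math.atan2(math.sin(bearing - theta), math.cos(bearing - theta))

    trajectory = [(x, y, theta)]
    # Stage 1: satisfy the angular constraint first.
    while abs(heading_error()) > tol_ang:
        theta += k_ang * heading_error() * dt
        trajectory.append((x, y, theta))
    # Stage 2: satisfy the linear constraint along the aligned heading.
    while math.hypot(x_t - x, y_t - y) > tol_lin:
        v = k_lin * math.hypot(x_t - x, y_t - y)
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        trajectory.append((x, y, theta))
    return trajectory


path = two_stage_navigation(0.0, 0.0, math.pi / 2, 1.0, 0.5)
print(f"{len(path)} steps, final pose: {path[-1]}")
```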
We study mathematically a method for discovering which gene is related to a cell characteristic (“phenotype”) of interest. The method is based on RNA interference – a molecular process for gene deactivation – and on coupling the phenotype with cell fluorescence. A small number of candidate genes are thus isolated, and then tested individually. We model probabilistically this process, prove a limit theorem for its outcome, and derive operational guidelines for maximizing the probability of successful gene discovery.
We establish a relation between two uniform models of random k-graphs (for constant k ⩾ 3) on n labelled vertices: ℍ^(k)(n,m), the random k-graph with exactly m edges, and ℍ^(k)(n,d), the random d-regular k-graph. By extending the switching technique of McKay and Wormald to k-graphs, we show that, for some range of d = d(n) and a constant c > 0, if m ~ cnd, then one can couple ℍ^(k)(n,m) and ℍ^(k)(n,d) so that the latter contains the former with probability tending to one as n → ∞. In view of known results on the existence of a loose Hamilton cycle in ℍ^(k)(n,m), we conclude that ℍ^(k)(n,d) contains a loose Hamilton cycle when d ≫ log n (or just d ⩾ C log n, if k = 3) and d = o(n^{1/2}).
Changes in world-wide population trends have created new demands for technologies in areas such as care and rehabilitation. Recent developments in the field of robotics for neurorehabilitation have produced a range of evidence regarding the usefulness of these technologies as tools to augment traditional physiotherapy. Part of the appeal of these technologies is the possibility of placing a rehabilitative tool in one's home, offering more frequent and accessible therapy and empowering individuals to be in charge of their own therapy.
Objective: This manuscript introduces the Supervised Care and Rehabilitation Involving Personal Tele-robotics (SCRIPT) project. The main goal is to demonstrate the design and development steps involved in a complex intervention, while examining the feasibility of using an instrumented orthotic device for home-based rehabilitation after stroke.
Methods: The project uses a user-centred design methodology to develop a hand/wrist rehabilitation device for home-based therapy after stroke. The patient benefits from a dedicated user interface that allows them to receive feedback on their exercises and to communicate with the health-care professional. The health-care professional uses a dedicated interface to send and receive communications and to remotely manage the patient's exercise routine using the provided performance benchmarks. Patients were involved in a feasibility study (n = 23) and were instructed to use the device and its interactive games for 180 min per week, around 30 min per day, for a period of 6 weeks, with a 2-month follow-up. At the time of this study, only 12 of these patients had finished their 6-week trial plus the 2-month follow-up evaluation.
Results: With use feasibility as the objective, our results indicate that 2 patients dropped out due to technical difficulties or a lack of personal interest in continuing. Our frequency-of-use results indicate that, on average, patients used the SCRIPT1 device for around 14 min of self-administered therapy a day. The group average on the System Usability Scale was around 69%, supporting the system's usability.
Conclusions: Based on the preliminary results, it is evident that stroke patients were able to use the system in their homes. An average of 14 min of daily engagement, mediated via three interactive games, is promising given the chronic stage of stroke. During the 2nd year of the project, 6 additional games with more functionally relevant interaction have been designed to provide a more varied context for interacting with the system, in the hope of positively influencing exercise duration. System usability was tested and the results provide supporting evidence for this parameter. Additional improvements to the system are planned based on formative feedback gathered throughout the project and during the evaluations. These include a new orthosis that allows more active control of the amount of assistance and resistance provided, aiming for a more challenging interaction.
A study of arm muscles during independent operations, leading to a prosthetic design, was carried out. Feature extraction was performed on the recorded signal to investigate the relationship between voluntary muscular contraction and different arm motions, and a repeated factorial analysis of variance (ANOVA) was then applied to analyze the effectiveness of the signal. The electronic design consisted of analog and digital signal processing and a control circuit; the mechanical assembly consisted of a wrist, palm, and fingers to grip objects, together with a screw arrangement connected to a low-power DC motor and gear assembly to open or close the hand. The wrist is mechanically rotated to orient the hand in a direction suitable for picking up or holding an object. The entire setup is placed in a casing that gives the artificial hand and the connected arm a cosmetic appeal. The design criteria include electronic control, reliability, light weight, variable grip force, and ease of attachment for simple operations such as opening, grasping, and lifting objects of different weights, with a grip force only slightly greater than necessary, much like that of a natural hand.
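As an illustration of the kind of feature extraction and signal analysis described here: the window lengths, features, and data below are hypothetical, and a simple one-way ANOVA stands in for the repeated factorial ANOVA used in the study.

```python
import numpy as np
from scipy.stats import f_oneway


def emg_features(window):
    """Common time-domain EMG features for one analysis window."""
    mav = np.mean(np.abs(window))                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))           # root mean square
    zc = np.sum(np.diff(np.sign(window)) != 0)    # zero crossings
    return mav, rms, zc


# Hypothetical recordings: 10 windows per arm motion, 200 samples each.
rng = np.random.default_rng(0)
motions = {name: rng.normal(0, scale, size=(10, 200))
           for name, scale in [("open", 0.5), ("grasp", 1.0), ("lift", 1.5)]}

# One-way ANOVA on the MAV feature across the three motions.
mav_groups = [[emg_features(w)[0] for w in windows] for windows in motions.values()]
f_stat, p_value = f_oneway(*mav_groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```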
Patients with damage to the cerebellum make reaching movements that are uncoordinated or “ataxic.” One prevailing hypothesis is that the cerebellum functions as an internal model for planning movements, and that damage to the cerebellum results in movements that do not properly account for arm dynamics. An exoskeleton robot was used to record multi-joint reaching movements. Joint-torque trajectories were then calculated, and a gradient descent algorithm found optimal, patient-specific perturbations to the actual limb dynamics that were predicted to reduce directional reaching errors by an average of 41%. These results point to a promising form of robotic intervention and add support to the internal model hypothesis.
In 1906, the Warrens, a wealthy New York banking family, rented a summer house on Long Island. That summer, six people in the household came down with typhoid fever, a serious bacterial illness with a persistent and very high fever. In the era before antibiotics, typhoid was frequently deadly.
Although the Warrens all survived, the outbreak was troubling enough that a sanitary engineer named George Soper was hired to investigate. Soper examined the water supply, the plumbing, and other possible sources of contamination, but found nothing to explain the outbreak. Eventually, he investigated the family’s new cook, a woman named Mary Mallon. Soper went through her employment history, and found that there had been typhoid outbreaks in most of the places she had worked. Mary Mallon, who became known as “Typhoid Mary,” was the first documented example of an asymptomatic carrier of typhoid (Figure 1.1). She herself was not sick, but she was able to spread the disease to others. Once Mary was discovered to carry typhoid, she was quarantined in a hospital for most of the rest of her life.
In the four parts of this book we’ve introduced foundational concepts from computer science in the context of biology. We had a good time writing it, and hope you’ve enjoyed using it.
Over the course of the book you’ve learned some powerful and fundamental techniques that are used throughout computational biology. You’ve also learned a valuable general skill – how to design computational solutions and implement them in your own programs. Like other skills, computational problem-solving and programming benefit from practice. As you do more, you’ll get even better at it.
With that in mind, we hope that you come away from this book with the confidence to take on new problems. These might range from writing a short program to do some quick analysis, to interfacing with existing programs, to solving altogether new research problems. The key is to find ways to use these tools to further your own interests. In the process, perhaps you’ll join us in the excitement that arises when computational techniques are used to explore the many mysteries of life on Earth.
Our final task is to develop an algorithm to reconstruct phylogenetic relationships based on sequence data. In the final homework problem, the source sequences are mitochondrial DNA from a number of modern human individuals, as well as from several fossils, including a Neanderthal. Let us begin by saying something about these sequences and how they are used to create input for our algorithm.
The cells of eukaryotes, such as humans, contain two types of DNA. The largest type is the nuclear DNA which is found in sets of chromosomes that are inherited sexually, with one copy of each chromosome coming from either parent. A second type of DNA can be found in the mitochondria, organelles specializing in energy metabolism. Mitochondria contain their own circular DNA molecule. As it turns out, mitochondria are inherited maternally – individual humans get their mitochondria from their mother’s egg rather than their father’s sperm. Thus, mitochondrial DNA is passed along the maternal line only.
Mitochondrial DNA has frequently been used in studies of human evolution. One advantage is the fact that it’s inherited from a single parent, and thus is not subject to recombination. Another advantage is the fact that mutations arise comparatively quickly in mammalian mitochondrial DNA. If we are comparing closely related samples, such as human individuals, a higher rate of mutation is good because it produces more differences with which to distinguish the samples.
In the previous chapter we saw how to solve some important computational problems with the use-it-or-lose-it principle. This approach obtains the correct answer by effectively exploring every possible solution to a problem. Unfortunately, it turns out that this approach can get very slow as data sets get large. For example, on a typical personal computer, running the LCS function on two random strings, each of length 10, takes approximately one thousandth of a second. But on two strings of length 25 it takes a good part of an hour and on strings of length 100 (which is still very short by the standards of biologists working with real sequences) it would take, conservatively, well over a trillion years.
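To see why, recall the shape of the naive use-it-or-lose-it recursion. A bare-bones version is shown below; it is a minimal stand-in for the book's LCS function, not necessarily identical to it.

```python
def lcs_length(s1, s2):
    """Naive use-it-or-lose-it recursion for the longest common subsequence."""
    if not s1 or not s2:
        return 0
    if s1[0] == s2[0]:                      # "use it": the first characters match
        return 1 + lcs_length(s1[1:], s2[1:])
    # "lose it": drop the first character of one string or the other
    return max(lcs_length(s1[1:], s2), lcs_length(s1, s2[1:]))


# Every mismatch spawns two recursive calls, so the work roughly doubles
# with each extra character: fine at length 10, hopeless at length 100.
print(lcs_length("ACGTACGTAC", "TGCATGCATG"))
```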
We began this part of the book with the problem of determining homology between the mammalian X and the bird Z chromosomes. To solve this problem, we’ll need to do over 1000 comparisons between proteins that are each hundreds of amino acids long. That will (almost literally) take forever!
The different cell types in the human body look different and do very different things: Compare, for example, liver cells and brain cells. How do they manage to be so different given that they have the same DNA? The answer is that cells regulate the expression of their genes – that is, they control when and where their genes are used to make protein. As a result, different cell types make a different complement of proteins.
In fact, the expression of a gene can be regulated by other genes. Biologists represent this using a gene regulatory network, a diagram that shows how genes interact. Figure 13.1 shows an example of such a network for some genes in the bacterium Bacillus subtilis. In the diagram, each gene is represented by a circular node. To show that one gene regulates another, we draw an edge, that is, a line with an arrow. The arrow indicates that one gene (the one the arrow is drawn from) regulates the transcription of the second (the one the arrow is drawn to). The effect of this regulation might be either positive (upregulation) or negative (downregulation), but we won’t make a distinction between those two cases here.
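A network like the one in Figure 13.1 can be represented in a program as a dictionary mapping each gene to the list of genes it regulates. The gene names below are made up for illustration; they are not the Bacillus subtilis genes in the figure.

```python
# Each key is a gene; its value lists the genes it regulates
# (ignoring, as in the text, whether the effect is up- or downregulation).
network = {
    "geneA": ["geneB", "geneC"],
    "geneB": ["geneC"],
    "geneC": [],
}


def regulators_of(target, network):
    """Return the genes that draw an arrow to `target`."""
    return [gene for gene, targets in network.items() if target in targets]


print(regulators_of("geneC", network))   # ['geneA', 'geneB']
```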