Finite state machines are used for modeling the control and sequencing view of a system or object. Many systems, such as real-time systems, are highly state-dependent; that is, their actions depend not only on their inputs but also on what has previously happened in the system. Notations used to define finite state machines are the state transition diagram, statechart, and state transition table. For highly state-dependent systems, these notations provide considerable insight into, and help manage, the complexity of the system.
In the UML notation, a state transition diagram is referred to as a state machine diagram. The UML state machine diagram notation is based on Harel's statechart notation (Harel 1988; Harel and Politi 1998). In this book, the terms statechart and state machine diagram are used interchangeably. We refer to a traditional state transition diagram, which is not hierarchical, as a flat statechart and use the term hierarchical statechart to refer to the concept of hierarchical state decomposition. A brief overview of the statechart notation is given in Chapter 2 (Section 2.6).
This chapter first considers the characteristics of flat statecharts and then describes hierarchical statecharts. To show the benefits of hierarchical statecharts, it begins with the simplest form of flat statechart and gradually shows how this can be improved upon to achieve the full modeling power of hierarchical statecharts. Examples are given throughout the chapter from two case studies, the Automated Teller Machine and Microwave Oven finite state machines.
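As a concrete illustration of the flat form, the following minimal Python sketch encodes a state transition table for a much-simplified oven. The states and events are illustrative assumptions made for this overview, not the book's actual Microwave Oven case study model.

```python
# A minimal flat state machine: a transition table mapping
# (current state, event) -> next state, which is exactly the information
# a state transition diagram or state transition table conveys.
TRANSITIONS = {
    ("Door Shut", "Open Door"): "Door Open",
    ("Door Open", "Close Door"): "Door Shut",
    ("Door Shut", "Press Start"): "Cooking",
    ("Cooking", "Timer Expired"): "Door Shut",
    ("Cooking", "Open Door"): "Door Open",  # opening the door stops cooking
}

def run(events, state="Door Shut"):
    """Drive the machine through a sequence of events.

    The response to an event depends on the current state, not just the
    input: 'Open Door' has a different effect while cooking than while
    idle, which is what makes the system state-dependent.
    """
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore invalid events
    return state

if __name__ == "__main__":
    print(run(["Press Start", "Timer Expired"]))  # -> Door Shut
    print(run(["Press Start", "Open Door"]))      # -> Door Open
```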
A software life cycle is a phased approach to developing software, with specific deliverables and milestones within each phase. A software life cycle model is an abstraction of the software development process, which is convenient to use for planning purposes. This chapter takes a software life cycle perspective on software development. Different software life cycle models (also referred to as software process models), including the spiral model and the Unified Software Development Process, are briefly described and compared. The roles of design verification and validation and of software testing are discussed.
SOFTWARE LIFE CYCLE MODELS
The waterfall model was the earliest software life cycle model to be widely used. This section starts with an overview of the waterfall model. It then outlines alternative software life cycle models that have since been developed to overcome some of the limitations of the waterfall model. These are the throwaway prototyping life cycle model, the incremental development life cycle model (also referred to as evolutionary prototyping), the spiral model, and the Unified Software Development Process.
Waterfall Life Cycle Model
Since the 1960s, the cost of developing software has grown steadily, while the cost of developing and purchasing hardware has decreased rapidly. Software now typically accounts for some eighty percent of a total project's budget, whereas in the early days of software development, hardware was by far the largest project cost (Boehm 2006).
We show that a set A ⊂ {0, 1}^n with edge-boundary of size at most … can be made into a subcube by at most (2ε/log_2(1/ε))|A| additions and deletions, provided ε is less than an absolute constant.
We deduce that if A ⊂ {0, 1}^n has size 2^t for some t ∈ ℕ, and cannot be made into a subcube by fewer than δ|A| additions and deletions, then its edge-boundary has size at least …, provided δ is less than an absolute constant. This is sharp whenever δ = 1/2^j for some j ∈ {1, 2, …, t}.
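For context, the classical edge-isoperimetric framework that this stability result refines can be stated as follows. This is standard background, not the paper's new bound:

```latex
% Edge boundary of a set A in the hypercube graph Q_n (vertex set {0,1}^n,
% edges between strings differing in exactly one coordinate):
\[
  \partial_e A \;=\; \bigl\{\, \{x, y\} \in E(Q_n) \;:\; x \in A,\ y \notin A \,\bigr\}.
\]
% The classical edge-isoperimetric inequality for the cube:
\[
  |\partial_e A| \;\ge\; |A| \log_2\!\frac{2^n}{|A|},
\]
% with equality for subcubes: if |A| = 2^t, a t-dimensional subcube has edge
% boundary of size exactly (n - t) 2^t. The result above is a stability
% version: a set whose edge boundary is close to this minimum is close,
% in symmetric difference, to a subcube.
```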
The cover time of a graph is a celebrated example of a parameter that is easy to approximate using a randomized algorithm, but for which no constant-factor deterministic polynomial-time approximation is known. A breakthrough due to Kahn, Kim, Lovász and Vu [25] yielded a (log log n)^2 polynomial-time approximation. We refine the upper bound of [25], and show that the resulting bound is sharp and explicitly computable in random graphs. Cooper and Frieze showed that the cover time of the largest component of the Erdős–Rényi random graph G(n, c/n) in the supercritical regime with c > 1 fixed is asymptotic to ϕ(c) n log^2 n, where ϕ(c) → 1 as c ↓ 1. However, our new bound implies that the cover time for the critical Erdős–Rényi random graph G(n, 1/n) has order n, and shows how the cover time evolves from the critical window to the supercritical phase. Our general estimate also yields the order of the cover time for a variety of other concrete graphs, including critical percolation clusters on the Hamming hypercube {0, 1}^n, on high-girth expanders, and on tori ℤ_n^d for fixed large d. This approach also gives a simpler proof of a result of Aldous [2] that the cover time of a uniform labelled tree on k vertices is of order k^{3/2}. For the graphs we consider, our results show that the blanket time, introduced by Winkler and Zuckerman [45], is within a constant factor of the cover time. Finally, we prove that for any connected graph, adding an edge can increase the cover time by at most a factor of 4.
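For reference, the quantity being approximated can be written precisely as follows; this is the standard definition, added here for orientation:

```latex
% Cover time: the worst-case (over starting vertices) expected time for a
% simple random walk (X_t) on G to visit every vertex.
\[
  \tau_{\mathrm{cov}} \;=\; \min\bigl\{\, t \ge 0 \;:\; \{X_0, X_1, \dots, X_t\} = V(G) \,\bigr\},
  \qquad
  t_{\mathrm{cov}}(G) \;=\; \max_{v \in V(G)} \mathbb{E}_v\bigl[\tau_{\mathrm{cov}}\bigr].
\]
% The blanket time of Winkler and Zuckerman strengthens this: roughly, it is
% the first time at which every vertex has been visited at least a fixed
% fraction of its expected share t * pi(v) of the visits, where pi is the
% stationary distribution of the walk.
```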
Cooking for themselves is both important and difficult for elderly and disabled people in daily life. This paper presents a cooking robot for people who are confined to wheelchairs. The robot can automatically load ingredients, cook Chinese dishes, take the cooked food out, deliver dishes to the table, clean itself, collect used ingredient-box components, and so on. Its structure and interface are designed according to barrier-free design principles. Elderly and disabled people need only press one button in the friendly graphical user interface of a Personal Digital Assistant (PDA) to launch the cooking process, and several classic Chinese dishes are placed in front of them, one after another, within a few minutes. Experiments show that the robot can meet their special needs, and that the aid activities involved are easy and effective for elderly and disabled people.
In dialogue systems, it is important to label dialogue turns with dialogue-related meaning. Each turn is usually divided into segments, and these segments are labelled with dialogue acts (DAs). A DA is a representation of the functional role of a segment; each segment is labelled with one DA, representing its role in the ongoing discourse. The sequence of DAs for a dialogue turn is used by the dialogue manager to understand the turn. Probabilistic models that perform DA labelling can operate on segmented or unsegmented turns. The latter is the more realistic option for a practical dialogue system, but it yields poorer results. In that case, a hypothesis for the number of segments can be provided to improve the results. We propose several methods for estimating the probability of the number of segments based on the transcription of the turn, and the new labelling model incorporates this estimate. We tested this new approach on two different dialogue corpora: SwitchBoard and Dihana. The results show that this inclusion significantly improves labelling accuracy.
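A minimal sketch of this kind of combination follows, assuming user-supplied scoring functions: `seg_score` for a segment's best DA log-score and `num_segments_logprob` for the estimated log-probability of a segment count. Both names are hypothetical; this illustrates the idea of weighting segmentations by the estimated number of segments, not the paper's actual model.

```python
import itertools

def best_labelling(words, n_segments, seg_score):
    """Exhaustively split `words` into n_segments contiguous segments and
    return (best total log-score, best segmentation)."""
    best_score, best_segs = float("-inf"), None
    for cuts in itertools.combinations(range(1, len(words)), n_segments - 1):
        bounds = (0, *cuts, len(words))
        segs = [words[a:b] for a, b in zip(bounds, bounds[1:])]
        score = sum(seg_score(seg) for seg in segs)
        if score > best_score:
            best_score, best_segs = score, segs
    return best_score, best_segs

def label_turn(words, seg_score, num_segments_logprob):
    """Jointly pick the number of segments and the segmentation by adding
    the estimated log P(number of segments | turn) to the labelling score."""
    best_total, best_segs = float("-inf"), None
    for s in range(1, len(words) + 1):
        score, segs = best_labelling(words, s, seg_score)
        total = num_segments_logprob(s, words) + score
        if total > best_total:
            best_total, best_segs = total, segs
    return best_segs

if __name__ == "__main__":
    words = "yes but i am not sure".split()
    toy_seg_score = lambda seg: -0.1 * len(seg)   # stand-in DA log-score
    toy_nseg_lp = lambda s, w: -abs(s - 2)        # stand-in log P(s | turn)
    print(label_turn(words, toy_seg_score, toy_nseg_lp))
```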
We introduce a discrete random process which we call the passenger model, and show that it is connected to a certain random model of the assignment problem and in particular to the so-called Buck–Chan–Robbins urn process. We propose a conjecture on the distribution of the location of the minimum cost assignment in a cost matrix with zeros at specified positions and remaining entries of exponential distribution. The conjecture is consistent with earlier results on the participation probability of an individual matrix entry. We also use the passenger model to verify a conjecture by V. Dotsenko on the assignment problem.
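For orientation, the underlying optimisation problem is the classical random assignment problem, stated below; this is standard background, not the paper's new contribution:

```latex
% Random assignment problem: an n x n cost matrix (c_{ij}) with i.i.d.
% Exp(1) entries; minimise the total cost over perfect matchings,
% equivalently over permutations sigma of {1, ..., n}:
\[
  A_n \;=\; \min_{\sigma \in S_n} \sum_{i=1}^{n} c_{i\,\sigma(i)} .
\]
% A celebrated result of Aldous (2001) gives E[A_n] -> zeta(2) = pi^2/6.
% The conjecture above concerns where, at which matrix positions, the
% minimising assignment sits when some entries are forced to zero.
```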
In gait generation based on the parametric excitation principle, appropriate motion of the center of mass restores the kinetic energy lost at heel strike. This motion is realized by bending and stretching the swing leg, regardless of bending direction. In this paper, we first show that inverse bending restores more mechanical energy than forward bending, and we then propose a parametric-excitation-based inverse bending gait for a kneed biped robot, which improves the gait efficiency of parametric excitation walking.
An r-cut of the complete r-uniform hypergraph K_n^r is obtained by partitioning its vertex set into r parts and taking all edges that meet every part in exactly one vertex. In other words, it is the edge set of a spanning complete r-partite subhypergraph of K_n^r. An r-cut cover is a collection of r-cuts such that each edge of K_n^r is in at least one of the cuts. While in the graph case r = 2 any 2-cut cover on average covers each edge at least 2 − o(1) times, when r is odd we exhibit an r-cut cover in which each edge is covered exactly once. When r is even no such decomposition can exist, but we can bound the average number of times an edge is cut in an r-cut cover between … and …. The upper-bound construction can be reformulated in terms of a natural polyhedral problem or as a probability problem, and we solve the latter asymptotically.
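To give a feel for the quantities involved, here is a standard probabilistic calculation for random r-cuts; this is illustrative background and not necessarily the paper's construction or bound:

```latex
% One natural random construction: colour each of the n vertices
% independently and uniformly with one of r colours, and take the r-cut
% induced by the colour classes. A fixed edge of K_n^r needs its r vertices
% to land in r distinct parts, so
\[
  \Pr[\text{a fixed edge is covered}] \;=\; \frac{r!}{r^r}
  \;\approx\; \sqrt{2\pi r}\; e^{-r}
\]
% by Stirling's formula; hence on the order of e^r / \sqrt{2\pi r}
% independent random r-cuts are needed on average to cover a given edge.
```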
This paper advocates a new science of intelligence: one that is holistic, multi-disciplinary, oriented to crucial values such as health and well-being, and able to contribute to the solution of real-world problems. As a starting point, we study the interplay between two research disciplines that until now have hardly been related to each other: Ayurveda and multi-agent systems. We consider some possible results of this cross-fertilisation, such as the application of Ayurvedic knowledge to improve the skills of practical reasoning agents.
Designers who are experts in a given design domain are well known to be able to immediately focus on “good designs,” suggesting that they may have learned additional constraints while exploring the design space based on some functional aspects. These constraints, which are often implicit, result in a redefinition of the design space, and may be crucial for discovering chunks or interrelations among the design variables. Here we propose a machine-learning approach for discovering such constraints in supervised design tasks. We develop models for specifying design function in situations where the design has a given structure or embodiment, in terms of a set of performance metrics that evaluate a given design. The functionally feasible regions, which are those parts of the design space that demonstrate high levels of performance, can then be learned using any general-purpose function approximator (see the sketch below). We demonstrate this process using examples from the design of simple locking mechanisms and, as in human experience, show that the quality of the constraints learned improves with greater exposure to the design space. Next, we consider changing the embodiment and suggest that similar embodiments may have similar abstractions. To explore convergence, we also investigate the variability in time and error rates where the experiential patterns are significantly different. In the process, we also consider the situation where certain functionally feasible regions may encode lower-dimensional manifolds, and how this may relate to cognitive chunking.
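A minimal sketch of this idea, assuming a toy two-dimensional design space, a stand-in performance metric, and scikit-learn as the general-purpose function approximator; all three are illustrative assumptions, not the paper's actual setup:

```python
# Learn a "functionally feasible region" of a design space from evaluated
# samples, then use the learned model as an implicit constraint.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def performance(x):
    """Hypothetical performance metric on a 2-D design space: designs near
    (0.3, 0.7) perform well."""
    return np.exp(-((x[:, 0] - 0.3) ** 2 + (x[:, 1] - 0.7) ** 2) / 0.05)

# Sample candidate designs and label the high-performing ones as feasible.
X = rng.uniform(0.0, 1.0, size=(2000, 2))
y = performance(X) > 0.5          # the implicit constraint to be discovered

# Any general-purpose function approximator can represent the feasible
# region; a random forest is used here purely as an example.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The learned model now rejects most of the design space before any
# expensive evaluation, mimicking the expert's immediate focus.
candidates = rng.uniform(0.0, 1.0, size=(5, 2))
print(clf.predict(candidates))
```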
Data mining has become a well-established discipline within the domains of artificial intelligence (AI) and knowledge engineering (KE). It has its roots in machine learning and statistics, but encompasses other areas of computer science. It has received much interest over the last decade, as advances in computer hardware have provided the processing power to enable large-scale data mining. Unlike other innovations in AI and KE, data mining can be argued to be an application rather than a technology, and thus can be expected to remain topical for the foreseeable future. This paper presents a brief review of the history of data mining, up to the present day, and some insights into future directions.
How can we prepare engineering students to work collectively on innovative design issues, involving ill-defined, “wicked” problems? Recent works have emphasized the need for students to learn to combine divergent and convergent thinking in a collaborative, controlled manner. From this perspective, teaching must help them overcome four types of obstacles or “fixation effects” (FEs) that are found in the generation of alternatives, knowledge acquisition, collaborative creativity, and creativity processes. We begin by showing that teaching based on concept–knowledge (C-K) theory can help to manage FEs because it helps to clarify them and then to overcome them by providing means of action. We show that C-K theory can provide scaffolding to improve project-based learning (PBL), in what we call project-based critical learning (PBCL). PBCL helps students be critical and give due thought to the main issues in innovative design education: FEs. We illustrate the PBCL process with several cases and show precisely where the FEs appear and how students are able to overcome them. We conclude by discussing two main criteria of any teaching method, both of which are usually difficult to address in situations of innovative design teaching. First, can the method be evaluated? Second, is the chosen case “realistic” enough? We show that C-K-based PBCL can be rigorously evaluated by teachers, and we discuss the circumstances in which a C-K-based PBCL may or may not be realistic.
Research on the relation between Belief Revision and Argumentation has always been fruitful in both directions: some argumentation formalisms can be used to define belief change operators, and belief change techniques have also been used for modeling the dynamics of beliefs in argumentation formalisms. In this paper, we give a historical perspective on how belief revision has evolved in the last three decades, and how it has been combined with argumentation. First, we will recall the foundational works concerning the links between both areas. On the basis of such insights, we will present a conceptual view on this topic and some further developments. We offer a glimpse into the future of research in this area based on the understanding of argumentation and belief revision as complementary, mutually useful disciplines.
Starting from the pioneering work on Linda and Gamma, coordination models and languages have gone through an amazing evolution process over the years. From closed to open systems, from parallel computing to multi-agent systems and from database integration to knowledge-intensive environments, coordination abstractions and technologies have gained in relevance and power in those scenarios where complexity has become a key factor. In this paper, we outline and motivate 25 years of evolution of coordination models and languages, and discuss their potential perspectives in the future of artificial systems.
This paper presents a personal interpretation of the evolution of artificial intelligence (AI) systems over the last 25 years. This evolution is presented along five generations of AI systems, namely expert systems, joint cognitive systems, intelligent systems, intelligent assistant systems, and the coming generation of context-based intelligent assistant systems. Our testimony relies on a range of real-world applications in different domains, notably for the French national power company, the subway companies of Paris and Rio de Janeiro, medicine, a platform for e-maintenance, road safety, and open sources. Our main claim is that the next generation of AI systems (context-based intelligent assistant systems) requires a radically different treatment of context and of its relations with the users, the task at hand, the situation, and the environment in which the task is accomplished by the user; the observation of users through their behaviors rather than through a profile library; a robust conceptual framework for modeling and managing context; and a computational tool for representing, in a uniform way, pieces of knowledge, of reasoning, and of contexts.