In two-point boundary value problems the auxiliary conditions associated with the differential equation, called the boundary conditions, are specified at two different values of x. This seemingly small departure from initial value problems has a major repercussion—it makes boundary value problems considerably more difficult to solve. In an initial value problem we were able to start at the point where the initial values were given and march the solution forward as far as needed. This technique does not work for boundary value problems, because there are not enough starting conditions available at either end point to produce a unique solution.
One way to overcome the lack of starting conditions is to guess the missing values. The resulting solution is very unlikely to satisfy the boundary conditions at the other end, but by inspecting the discrepancy we can estimate what changes to make to the initial conditions before integrating again. This iterative procedure is known as the shooting method. The name derives from an analogy with target shooting: take a shot and observe where it hits the target, then correct the aim and shoot again.
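The idea above can be sketched in a few lines of code. The boundary value problem, the step count, and the starting guesses below are illustrative choices, not taken from the text: we solve y'' = −9y with y(0) = 0, y(1) = 1, integrating each trial with a Runge–Kutta marcher and correcting the guessed initial slope y'(0) by secant iteration on the miss distance at x = 1.

```python
import math

def rk4(f, y0, a, b, n):
    """Integrate the system y' = f(x, y) from a to b with n RK4 steps; return final y."""
    h = (b - a) / n
    x, y = a, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(x + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a1 + 2*a2 + 2*a3 + a4)
             for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def f(x, y):                       # y[0] = y, y[1] = y'; the ODE y'' = -9y
    return [y[1], -9.0*y[0]]

def residual(s):
    """Miss distance at x = 1 when we 'shoot' with guessed slope y'(0) = s."""
    return rk4(f, [0.0, s], 0.0, 1.0, 100)[0] - 1.0

# Secant iteration on the unknown initial slope (two starting guesses)
s0, s1 = 1.0, 2.0
for _ in range(20):
    r0, r1 = residual(s0), residual(s1)
    if abs(r1) < 1e-10:            # boundary condition at x = 1 satisfied
        break
    s0, s1 = s1, s1 - r1*(s1 - s0)/(r1 - r0)

# exact solution is y = sin(3x)/sin(3), so the true slope is y'(0) = 3/sin(3)
print(s1, 3.0/math.sin(3.0))
```

Because this particular ODE is linear, the miss distance is a linear function of the guessed slope and the secant correction lands on the answer in one update; for nonlinear problems the iteration simply continues until the discrepancy is small.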
Another means of solving two-point boundary value problems is the finite difference method, where the differential equations are approximated by finite differences at evenly spaced mesh points. As a consequence, a differential equation is transformed into a set of simultaneous algebraic equations.
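A minimal sketch of this approach, applied to the same kind of illustrative problem (y'' = −9y, y(0) = 0, y(1) = 1; the mesh size is an arbitrary choice): central differences turn the ODE into a tridiagonal linear system over the interior mesh points, which we solve with the Thomas algorithm.

```python
import math

n = 99                          # interior mesh points
h = 1.0 / (n + 1)               # mesh spacing; x_i = (i+1)*h
ya, yb = 0.0, 1.0               # boundary values y(0), y(1)

# Central differences give y[i-1] - (2 - 9h^2) y[i] + y[i+1] = 0 at each
# interior point, i.e. a tridiagonal system A y = d.
a = [1.0]*n                     # sub-diagonal
b = [-(2.0 - 9.0*h*h)]*n        # main diagonal
c = [1.0]*n                     # super-diagonal
d = [0.0]*n
d[0]  -= ya                     # fold the known boundary values
d[-1] -= yb                     # into the right-hand side

# Thomas algorithm: forward elimination ...
for i in range(1, n):
    m = a[i]/b[i-1]
    b[i] -= m*c[i-1]
    d[i] -= m*d[i-1]
# ... then back substitution
y = [0.0]*n
y[-1] = d[-1]/b[-1]
for i in range(n-2, -1, -1):
    y[i] = (d[i] - c[i]*y[i+1])/b[i]

# compare the midpoint x = 0.5 with the exact solution y = sin(3x)/sin(3)
print(y[49], math.sin(1.5)/math.sin(3.0))
```

The accuracy is second order in the mesh spacing h, so halving h roughly quarters the error; this is the usual trade-off of the finite difference method against the shooting method's higher-order integrators.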
Find x that minimizes F(x) subject to g(x) = 0, h(x) ≥ 0
Introduction
Optimization is the term often used for minimizing or maximizing a function. It is sufficient to consider the problem of minimization only; maximization of F(x) is achieved by simply minimizing -F(x). In engineering, optimization is closely related to design. The function F(x), called the merit function or objective function, is the quantity that we wish to keep as small as possible, such as cost or weight. The components of x, known as the design variables, are the quantities that we are free to adjust. Physical dimensions (lengths, areas, angles, etc.) are common examples of design variables.
Optimization is a large topic with many books dedicated to it. The best we can do in limited space is to introduce a few basic methods that are good enough for problems that are reasonably well behaved and don't involve too many design variables. By omitting the more sophisticated methods, we may actually not miss all that much. All optimization algorithms are unreliable to a degree—any one of them may work on one problem and fail on another. As a rule of thumb, by going up in sophistication we gain computational efficiency, but not necessarily reliability.
The algorithms for minimization are iterative procedures that require starting values of the design variables x. If F(x) has several local minima, the initial choice of x determines which of these will be computed.
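As a concrete sketch of such an iterative procedure, here is a golden-section search, one of the simplest bracketing methods for a function of one variable. The test functions and the bracketing interval are illustrative assumptions, not from the text; note how maximization is handled by minimizing the negated function, and how the starting bracket plays the role of the starting values discussed above.

```python
import math

def golden_min(F, a, b, tol=1e-8):
    """Minimize F on [a, b], assuming a single minimum inside the bracket."""
    r = (math.sqrt(5.0) - 1.0) / 2.0      # golden ratio constant, ~0.618
    x1 = b - r*(b - a)
    x2 = a + r*(b - a)
    f1, f2 = F(x1), F(x2)
    while b - a > tol:
        if f1 < f2:                        # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r*(b - a)
            f1 = F(x1)
        else:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r*(b - a)
            f2 = F(x2)
    return (a + b)/2.0

# minimization: F(x) = (x - 2)^2 + 1 has its minimum at x = 2
x_min = golden_min(lambda x: (x - 2.0)**2 + 1.0, 0.0, 5.0)

# maximization: maximize sin(x) on [0, pi] by minimizing -sin(x)
x_max = golden_min(lambda x: -math.sin(x), 0.0, math.pi)
print(x_min, x_max)
```

Each iteration shrinks the bracket by the same fixed factor, so the cost is predictable; the price is that, like the other methods in this chapter, it only finds the local minimum inside the starting bracket.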
This book is targeted primarily toward engineers and engineering students of advanced standing (sophomores, seniors and graduate students). Familiarity with a computer language is required; knowledge of basic engineering mechanics is useful, but not essential.
The text attempts to place emphasis on numerical methods, not programming. Most engineers are not programmers, but problem solvers. They want to know what methods can be applied to a given problem, what their strengths and pitfalls are, and how to implement them. Engineers are not expected to write computer code for basic tasks from scratch; they are more likely to utilize functions and subroutines that have already been written and tested. Thus programming by engineers is largely confined to assembling existing pieces of code into a coherent package that solves the problem at hand.
The “piece” of code is usually a function that implements a specific task. For the user the details of the code are unimportant. What matters is the interface (what goes in and what comes out) and an understanding of the method on which the algorithm is based. Since no numerical algorithm is infallible, the importance of understanding the underlying method cannot be overemphasized; it is, in fact, the rationale behind learning numerical methods.
This book attempts to conform to the views outlined above. Each numerical method is explained in detail and its shortcomings are pointed out.
Neuroevolution, or evolving neural networks with evolutionary algorithms such as genetic algorithms, is becoming one of the hottest areas in hybrid systems research. One of the areas now under investigation using neuroevolution is controller design. In this paper, we present two engineering controllers based on neuroevolution techniques. The first controller monitors temperature and humidity in an industrial setting and has linear behavior. The second controller schedules parts in queues in an industrial setting and has nonlinear behavior. The results obtained by the proposed neuroevolution-based controllers are compared with results obtained by traditional methods, such as neural networks trained with backpropagation and ordinary simulation of the controller. The results show that the neuroevolution approaches outperform the other methods.
In this paper we lay the foundations for exchanging, adapting, and interoperating engineering analysis models (EAMs). Our primary foundation is based upon the concept that engineering analysis models are knowledge-based abstractions of physical systems, and therefore knowledge sharing is the key to exchanging, adapting, and interoperating EAMs within or across organizations. To enable robust knowledge sharing, we propose a formal set of ontologies for classifying analysis modeling knowledge. To this end, the fundamental concepts that form the basis of all engineering analysis models are identified, described, and typed for implementation into a computational environment. This generic engineering analysis modeling ontology is extended to include distinct analysis subclasses. We discuss extension of the generic engineering analysis modeling class for two common analysis subclasses: continuum-based finite element models and lumped parameter or discrete analysis models. To illustrate how formal ontologies of engineering analysis modeling knowledge might facilitate knowledge exchange and improve reuse, adaptability, and interoperability of analysis models, we have developed a prototype engineering analysis modeling knowledge base, called ON-TEAM, based on our proposed ontologies. An industrial application is used to instantiate the ON-TEAM knowledge base and illustrate how such a system might improve the ability of organizations to efficiently exchange, adapt, and interoperate analysis models within a computer-based engineering environment. 
We have chosen Java as our implementation language for ON-TEAM so that we can fully exploit object-oriented technology, such as object inspection and the use of metaclasses and metaobjects, to operate on the knowledge base to perform a variety of tasks, such as knowledge inspection, editing, maintenance, model diagnosis, customized report generation of analysis models, model selection, automated customization of the knowledge interface based on the user expertise level, and interoperability assessment of distinct analysis models.
This study presents an application to optimize the use of an L-cut guillotine machine. The application has two distinct parts: first, a number of rectangular shapes are placed on as few metal sheets as possible using genetic algorithms; second, the sequence for cutting these pieces is generated. The guillotine's numeric control then uses this sequence to make the cuts.
Customers today can directly express their preferences among many options when ordering products. Mass customization manufacturing has thus emerged as a new trend, aiming to satisfy the needs of individual customers. Offering a wide product variety, however, often induces an exponential growth in the volume of information and redundancy in data storage. A technique for managing product configuration is therefore necessary: on the one hand, to provide customers with configured products faster and at lower prices; on the other hand, to translate customers' needs into the product information needed for tendering and manufacturing. This paper presents a decision-making scheme that first constructs a product family model (PFM), in which the relationships between products, modules, and components are defined. The PFM is then transformed into a product configuration network. Assuming that customers want a minimum-cost customized product, the configuration problem can be solved by finding the shortest path in the corresponding product configuration network. Genetic algorithms (GAs), mathematical programming, and tree-searching methods such as uniform-cost search and iterative deepening A* are applied to obtain solutions to this problem. An empirical case is studied in this work as an example. Computational results show that the solutions found by GAs retain 93.89% of the optimal quality on a complicated configuration problem, while GAs run at least 25 times faster than the other methods. This speed is very useful for a real-time system.
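The shortest-path formulation can be illustrated with a toy sketch. The graph below is invented for illustration (it is not the paper's case study or PFM): nodes stand for configuration decisions, edge weights for component costs, and the cheapest customized product corresponds to the shortest path from a start node to a completion node, found here with Dijkstra's algorithm.

```python
import heapq

# Hypothetical product configuration network: choose one A-module, then one
# B-module; weights are assumed component costs.
graph = {
    'start':    [('moduleA1', 30.0), ('moduleA2', 25.0)],
    'moduleA1': [('moduleB1', 40.0), ('moduleB2', 55.0)],
    'moduleA2': [('moduleB1', 50.0), ('moduleB2', 45.0)],
    'moduleB1': [('done', 10.0)],
    'moduleB2': [('done', 5.0)],
}

def cheapest_config(graph, source, target):
    """Dijkstra's algorithm; returns (total cost, node sequence)."""
    pq = [(0.0, source, [source])]        # priority queue keyed on path cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float('inf'), []

cost, path = cheapest_config(graph, 'start', 'done')
print(cost, path)                          # cheapest feasible configuration
```

Uniform-cost search, one of the tree-searching methods named in the abstract, is essentially this procedure applied without the `seen` set's graph-level deduplication; on larger networks with many option combinations, that is where the GA-based approximation trades a few percent of solution quality for speed.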
The present paper studies the process of information generation during design and focuses on the relationship between the importance of information and the effort required to generate it. Multiple associative relationships among design entities (handled as design descriptors) are used to represent the design knowledge. The characteristics of the dependent and the primary descriptors are examined and their distinct roles in the design process are discussed. Term definitions concerning information importance and design effort are also introduced. The descriptors are used to form a matrix. A number of operations on this matrix result in its transformation, with the final matrix reflecting the quantitative relationship between information importance and design effort. From the aforementioned matrix, a unique sorted list for the primary design descriptors is produced. Following this list during descriptor instantiation ensures the production of design information of maximum importance with the least effort in the early design stages. The design of a belt conveyor is used as a basis for a better understanding of the theoretical analysis and for a demonstration of the use of the suggested descriptor list.
The present paper deals with two graph parameters related to cover graphs and acyclic orientations of graphs.
The parameter $c(G)$ of a graph $G$, introduced by B. Bollobás, G. Brightwell and J. Nešetřil [Order 3, 245–255], is defined as the minimum number of edges one needs to delete from $G$ in order to obtain a cover graph. Extending their results, we prove that, for $\delta >0$, $(1-\delta) \frac{1}{l} \frac{n^2p}{2} \leq c({\mathcal G}_{n,p}) \leq (1+\delta) \frac{1}{l} \frac{n^2p}{2}$ asymptotically almost surely as long as $C n^{-1 + \frac{1}{l}} \leq p(n) \leq c n^{-1 + \frac{1}{ l-1} }$ for some positive constants $c$ and $C$. Here, as usual, ${\mathcal G}_{n,p}$ is the random graph.
Given an acyclic orientation of a graph $G$, an arc is called dependent if its reversal creates an oriented cycle. Let $d_{\min}(G)$ be the minimum number of dependent arcs in any acyclic orientation of $G$. We determine the supremum, denoted by $r_{\chi,g}$, of $d_{\min}(G)/e(G)$ in the class of graphs $G$ with chromatic number $\chi$ and girth $g$. Namely, we show that $r_{\chi,g} = \binom{\chi-g+2}{2} \big/ \binom{\chi}{2}$. This extends results of D. C. Fisher, K. Fraughnaugh, L. Langley and D. B. West [J. Combin. Theory Ser. B 71, 73–78].
Haskell is a functional programming language whose evaluation is lazy by default. However, Haskell also provides pattern matching facilities which add a modicum of eagerness to its otherwise lazy default evaluation. This mixed or “non-strict” semantics can be quite difficult to reason about. This paper introduces a programming logic, P-logic, which neatly formalizes the mixed evaluation in Haskell pattern-matching as a logic, thereby simplifying the task of specifying and verifying Haskell programs. In P-logic, aspects of demand are reflected within both the predicate language and its model theory, allowing for expressive and comprehensible program verification.
A Fano configuration is the hypergraph with 7 vertices and 7 triples defined by the points and lines of the finite projective plane of order 2. Proving a conjecture of T. Sós, the largest triple system on $n$ vertices containing no Fano configuration is determined (for $n> n_1$). It is 2-chromatic with $\binom{n}{3}-\binom{\lfloor n/2 \rfloor}{3} -\binom{\lceil n/2 \rceil}{3}$ triples. This is one of the very few nontrivial exact results for hypergraph extremal problems.