Another important feature of the C language is that a program may include instructions for the compiler. Though these instructions are not directly part of the C language, they expand the scope of the programming environment. They are handled by the preprocessor. In this chapter, we will discuss the preprocessor and its directives.
A unique feature of the C language is the preprocessor. The C preprocessor, as its name implies, runs before the compilation of the C source program: it is a program that processes the source code before the compiler translates it into object code. During this processing, the preprocessor makes modifications to the source code based on instructions embedded in it. These instructions are known as preprocessor commands or directives. Each preprocessor directive begins with a # symbol and does not require a semicolon (;) at the end. The preprocessor is used to make the source code more readable, easier to modify, more portable and more efficient.
Some common Preprocessor directives are:
• #define directive – To define a symbolic constant or macro.
• #undef directive – To undefine a symbolic constant or macro.
• #include directive – To include another file in a source program.
• Conditional Compilation directive – To compile parts of source code conditionally.
We have already discussed the #define and #include directives in brief. Here they are discussed in detail along with the other directives.
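A minimal sketch (not taken from the book's examples) showing these directives working together; the macro names PI, SQUARE and DEBUG are purely illustrative:

#include <stdio.h>               /* file inclusion */

#define PI 3.14159               /* symbolic constant (illustrative name) */
#define SQUARE(x) ((x) * (x))    /* macro with an argument */

#ifndef DEBUG                    /* conditional compilation */
#define DEBUG 0
#endif

int main(void)
{
    printf("Area = %f\n", PI * SQUARE(2.0));
#if DEBUG
    printf("Debug build\n");
#endif
#undef PI                        /* PI is undefined from this point on */
    return 0;
}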
We consider connectivity properties and asymptotic slopes for certain random directed graphs on ℤ2 in which the set of points $\mathcal{C}_o$ that the origin connects to is always infinite. We obtain conditions under which the complement of $\mathcal{C}_o$ has no infinite connected component. Applying these results to one of the most interesting such models leads to an improved lower bound for the critical occupation probability for oriented site percolation on the triangular lattice in two dimensions.
Operators are analogous to the verbs of the English language: an operator tells what operation has to be performed on its operand(s). C has a rich set of operators, 45 different ones in all. But it has no operator to calculate an exponent, so we cannot compute aᵇ with operators alone; for this, C provides a mathematical function named pow( ). In this chapter, we will discuss most of the operators and their use. An expression is a combination of variables, constants and operators that produces a new value. Expressions are the backbone of C statements, which are analogous to the sentences of the English language. In this chapter, we will discuss these building blocks.
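As a small illustrative sketch, aᵇ can be computed with pow() from <math.h> (on some systems the math library must be linked with -lm):

#include <stdio.h>
#include <math.h>    /* declares pow() */

int main(void)
{
    double a = 2.0, b = 10.0;
    /* C has no exponentiation operator, so pow() is used instead */
    printf("%.0f to the power %.0f is %.0f\n", a, b, pow(a, b));
    return 0;
}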
OPERATORS IN C
Operators are symbols that help the programmer instruct the computer to perform certain arithmetic, relational or logical operations. Operators are used in a C language program to operate on data and variables. One of the distinguishing features of the C language is its rich set of operators. The operators of C can be categorized as:
1. Arithmetic operators
2. Assignment operators
3. Increment and decrement operators
4. Relational operators
5. Logical operators
6. Conditional operators
7. Bitwise operators
8. Special operators
In addition to this classification, operators can also be classified as unary, binary and ternary according to the number of operands associated with a particular operator.
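A brief illustrative sketch, showing one operator of each arity:

#include <stdio.h>

int main(void)
{
    int a = 5, b = 3;
    int neg = -a;               /* unary: one operand */
    int sum = a + b;            /* binary: two operands */
    int max = (a > b) ? a : b;  /* ternary: three operands */
    printf("neg=%d sum=%d max=%d\n", neg, sum, max);
    return 0;
}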
Arithmetic Operators
All the basic arithmetic operations used in mathematics are available in C. These operators carry more or less the same meaning in other languages as well. The C language supports both unary and binary arithmetic operators.
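For instance, a minimal sketch of the binary arithmetic operators on integers (note that / truncates and % gives the remainder):

#include <stdio.h>

int main(void)
{
    int a = 7, b = 2;
    printf("a + b = %d\n", a + b);   /* addition: 9 */
    printf("a - b = %d\n", a - b);   /* subtraction: 5 */
    printf("a * b = %d\n", a * b);   /* multiplication: 14 */
    printf("a / b = %d\n", a / b);   /* integer division: 3 */
    printf("a %% b = %d\n", a % b);  /* remainder: 1 */
    return 0;
}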
In the last two chapters, we have learnt to develop programs that use simple sequential statements or conditional statements. In the first case, each statement in the program is executed in a fixed order; in the second case, depending on some condition, a particular set of statements is executed or skipped, or another set of statements is executed instead. But very often we have to execute the same series of actions repeatedly. The mechanism that helps us perform an action over and over is the ‘loop’ or ‘iteration’.
Loops are of two types: entry controlled loops and exit controlled loops. When the condition is checked before the execution of the body of the loop, it is called an entry controlled loop or pre-test iteration. It is described by the following block diagram.
As the block diagram describes, the condition is checked first. If the condition is true, control enters the loop and the body of the loop executes. Control then loops back, and the condition is tested again. This process continues until the condition becomes false; at that point control terminates the loop and executes the statement that follows the loop.
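A minimal sketch of an entry controlled loop using while (the loop bounds are illustrative):

#include <stdio.h>

int main(void)
{
    int i = 1;
    while (i <= 5) {       /* condition tested before each pass */
        printf("%d ", i);
        i++;
    }
    printf("\n");          /* prints: 1 2 3 4 5 */
    return 0;
}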
But when the condition is checked after the execution of the body of the loop, it is called an exit controlled loop or post-test iteration. Here, the body of the loop executes first; then the condition is checked to determine whether the loop will execute again. If the condition is true, control loops back. When the condition becomes false, control terminates the loop and executes the statement that follows the loop. It is described by the following block diagram.
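A matching sketch of an exit controlled loop using do-while; because the condition is tested after the body, the body always executes at least once:

#include <stdio.h>

int main(void)
{
    int i = 1;
    do {                        /* body runs before any test */
        printf("%d ", i);
        i++;
    } while (i <= 5);           /* condition tested after each pass */
    printf("\n");
    return 0;
}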
We know that to store a set of elements we need to declare an array. But when we handle a large set of elements, arrays have some disadvantages. First of all, the allocation of an array is static: we must specify the size of the array before compilation, and this size cannot be changed throughout the program. The program therefore works only for a fixed number of elements, which is rarely the case in real life. Suppose we want to write a program that reads each student's name, roll number and marks in various subjects from the user or from a file, and prepares a result sheet in descending order of total marks. If we write this program using an array, it will work only for a fixed number of students; but the number of students generally varies between classes or sections. A single program is then not sufficient, and we would need a separate program for each class or section, even though the program logic is the same in every case and only the array size differs. One workaround is to declare an array that is sufficiently large, but this wastes memory; and if more elements must be stored than the declared size allows, we again run short of memory. Requirements may also change over time, forcing old programs to be modified. Moreover, it is not always possible to predict the size of the array at the very beginning of the program. The solution to these problems is dynamic allocation.
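A minimal sketch of dynamic allocation with malloc() from <stdlib.h>; the variable names are illustrative:

#include <stdio.h>
#include <stdlib.h>   /* malloc(), free() */

int main(void)
{
    int n;
    printf("How many students? ");
    if (scanf("%d", &n) != 1 || n <= 0)
        return 1;

    /* allocate exactly n integers at run time */
    int *marks = malloc(n * sizeof *marks);
    if (marks == NULL)
        return 1;     /* allocation failed */

    for (int i = 0; i < n; i++)
        marks[i] = 0; /* use the array as usual */

    free(marks);      /* release the memory when done */
    return 0;
}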
The basic principle behind defining a function is to divide a large program into smaller modules, so that a complicated task can be broken into a simpler, more manageable form that can be handled easily. Each of these smaller modules accomplishes a definite task.
A function is a self-contained block of statements that performs some specific job. In the C language, we achieve modular programming using functions. Functions in C are categorized into two groups: library functions and user defined functions. Library functions are declared in various header files according to their category. We have already discussed some library functions, and others will be discussed later. Though C supports a large number of library functions, they are not always sufficient; according to our requirements, we need to define our own functions, called user defined functions. In this chapter we will discuss how a user defined function is defined, how it is called, and so on.
Basically, any C program is a collection of one or more functions. If it contains only one function, that function is main. In a C program, main is a special function: every C program starts its execution from main, and the other functions are called from it. Every C program must have a main function, and more than one main function is not allowed in a single program. However, a C program may contain more than one user defined function, and there is no restriction on their number.
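As an illustrative sketch, a program with main and one user defined function (maximum is a name chosen here, not a library function):

#include <stdio.h>

/* user defined function: returns the larger of two integers */
int maximum(int x, int y)
{
    return (x > y) ? x : y;
}

int main(void)        /* execution starts here */
{
    printf("max(3, 8) = %d\n", maximum(3, 8));
    return 0;
}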
In the last chapter, we discussed single and multi-dimensional arrays. We learnt how to define arrays of different sizes and dimensions, how to initialize them, how to operate on arrays of different dimensions, and so on. With this knowledge, we are ready to handle strings, which are simply a special kind of array. String handling basically consists of:
• Input and output of strings from/to the keyboard/monitor or files.
• Copying and comparing strings with library functions.
• Manipulating the case of characters in a string with library functions.
• Writing equivalent functions of the string manipulating library functions.
We will discuss all the important string manipulating library functions in this chapter; a complete list of these functions, along with explanations, is given in the Appendix. We will learn to write equivalents of the string manipulating library functions in the chapter on functions.
STRING
A string is a sequence of characters terminated with a null character (‘\0’). It is usually stored as a one-dimensional character array. A set of characters arranged in any sequence and enclosed within double quotation marks is known as a string constant. Strings, i.e., character arrays, are used to manipulate text such as words and sentences. Just as a group of integers can be stored in an integer array, a group of characters can be stored in a character array.
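A minimal sketch of a string stored as a character array, copied and measured with the library functions strcpy() and strlen():

#include <stdio.h>
#include <string.h>   /* strcpy(), strlen() */

int main(void)
{
    char greeting[] = "Hello";  /* stored as {'H','e','l','l','o','\0'} */
    char copy[6];               /* room for five characters plus '\0' */

    strcpy(copy, greeting);
    printf("%s has %zu characters\n", copy, strlen(copy));
    return 0;
}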
Collaboration networks provide a method for examining the highly heterogeneous structure of collaborative communities. However, we still have limited theoretical understanding of how individual heterogeneity relates to network heterogeneity. The model presented here provides a framework linking an individual's skill set to her position in the collaboration network, and the distribution of skills in the population to the structure of the collaboration network as a whole. This model suggests that there is a non-trivial relationship between skills and network position: individuals with a useful combination of skills will have a disproportionate number of links in the network. Indeed, in some cases, an individual's degree is non-monotonic in the number of skills she has—an individual with very few skills may outperform an individual with many. Special cases of the model suggest that the degree distribution of the network will be skewed, even when the distribution of skills is uniform in the population. The degree distribution becomes more skewed as problems become more difficult, leading to a community dominated by a few high-degree superstars. This has striking implications for labor market outcomes in industries where production is largely the result of collaborative effort.
Distributed information transfer is of paramount importance to the effectiveness of dynamic collective behaviors, especially when a swarm is confronted with complex environmental circumstances. Recently, the signaling network of interaction underlying such effective information transfers has been revealed in the particular case of bird flocks governed by a topological interaction. Such biological systems are known to be evolutionary optimized, but are also constrained by the very nature of the signaling mechanisms—owing to intrinsic limitations in sensory modalities—enabling communication among individuals. Here, we propose that artificial swarm design can be tackled from the angle of signaling network design. To this aim, we use different network models to investigate the impact of some network structural properties on the effectiveness of a specific emergent swarming behavior, namely global consensus. Two new network models are introduced, which together with the well-known Watts–Strogatz model form the basis for an analysis of the relationship between clustering, shortest path and speed to consensus. A network-theoretic approach combined with spectral graph theory tools are used to propose some signaling network design principles. Eventually, one key design principle—a concomitant reduction in clustering and connecting path—is successfully tested on simulations of swarms of self-propelled particles.
The second Dagstuhl seminar on coalgebraic logics took place from October 7–12, 2012, in the Leibniz Forschungszentrum Schloss Dagstuhl, following a successful earlier one in December 2009. From the 44 researchers who attended and the 30 talks presented, this collection highlights some of the progress that has been made in the field. We are grateful to Giuseppe Longo for his interest in a special issue of Mathematical Structures in Computer Science.
The study of inertial force effects at high speeds in flexible parallel manipulators, which generate undesired deviations, is a challenging task due to the coupled and complicated equations of motion. A dynamic model of the Revolute Prismatic Revolute (RPR) planar manipulators (specifically 3-RPR, 2-RPR and 1-RPR) with flexible intermediate links is developed based on the assumed mode method. The flexible intermediate links are modeled as Euler-Bernoulli beams with fixed-free boundary conditions. Using Lagrange multipliers, a generalized set of differential algebraic equations (DAEs) of motion is developed. In the simulations, the rigid body motion of the end-effector is constrained by moving constraint equations, while vibrations of the flexible intermediate links cause deviations from the desired trajectory. From this analysis, the dynamic performance of the manipulators when tracking a desired trajectory is evaluated. A comparison of the results indicates that in some cases, adding an extra RPR chain to the n-RPR planar manipulators with flexible intermediate links reduces the stiffness and accuracy due to the inertial forces of the flexible links, which is the opposite of what would be expected. The study provides insights into the design, control and suitable selection of flexible manipulators.
In this paper, we propose an enhanced Monte Carlo localization (EMCL) algorithm for mobile robots, which deals with the premature convergence problem in global localization as well as the estimation error existing in pose tracking. By incorporating a mechanism for preventing premature convergence (MPPC), which uses a “reference relative vector” to modify the weight of each sample, exploration of a highly symmetrical environment can be improved. As a consequence, the proposed method has the ability to converge particles toward the global optimum, resulting in successful global localization. Furthermore, by applying the unscented Kalman Filter (UKF) to the prediction state and the previous state of particles in Monte Carlo Localization (MCL), an EMCL can be established for pose tracking, where the prediction state is modified by the Kalman gain derived from the modified prior error covariance. Hence, a better approximation that reduces the discrepancy between the state of the robot and the estimation can be obtained. Simulations and practical experiments confirmed that the proposed approach can improve the localization performance in both global localization and pose tracking.
The paper defines and studies the genus of finite state deterministic automata (FSA) and regular languages. Indeed, an FSA can be seen as a graph, for which the notion of genus arises. At the same time, an FSA has a semantics via its underlying language. It is then natural to make a connection between languages and the notion of genus. After we introduce and justify the notion of the genus of a regular language, the following questions are addressed. First, depending on the size of the alphabet, we provide upper and lower bounds on the genus of regular languages: we show that, under a relatively generic condition on the alphabet and the geometry of the automata, the genus grows at least linearly in the size of the automata. Second, we show that the topological cost of the powerset determinization procedure is exponential. Third, we prove that the notion of minimization is orthogonal to the notion of genus. Fourth, we build regular languages of arbitrarily large genus: the notion of genus defines a proper hierarchy of regular languages.
We focus on Erol Gelenbe's scientific and technical contributions to probability models in the computer and information sciences, but limit our survey to the last fifteen years. We start with a brief overview of his work as a single author, as well as his work in collaboration with over 200 co-authors. We discuss some of his recent and innovative work regarding a new probability model that represents Intermittent Energy Sources for Computing and Communications, introducing Energy Packet Networks, which are a probabilistic representation of the flow, storage and consumption of electrical energy at the microscopic level (in electronic chips) and at the macroscopic level (e.g. in buildings or data centers), and of its routing and dynamic usage by consuming units (such as computer elements, chips or machines). We next discuss his work on designing computer and communication systems that use energy parsimoniously in order to achieve a satisfactory level of quality of service (QoS). Trade-offs between system QoS and energy consumption are also considered. Then we turn to Prof. Gelenbe's pioneering work on Autonomic Communications and the design and implementation of CPN, the Cognitive Packet Network, and we also briefly review his spiking random neural network that was used in CPN. This is followed by a brief review of work he has conducted since 1999 on human evacuation from dangerous or catastrophic environments, and the design of technology-driven Emergency Management Systems. His research since the late 2000s on Gene Regulatory Networks is then covered, together with its application to detecting possible disease from microarray data. Finally, we briefly discuss some novel analytical models that he developed in this period, with publications appearing in journals of physics and applied mathematics.