An important computer science problem is parsing a string according to a given context-free grammar. A context-free grammar is a means of describing which strings of characters are contained within a particular language. It consists of a set of rules and a start nonterminal symbol. Each rule specifies one way of replacing a nonterminal symbol in the current string with a string of terminal and nonterminal symbols. When the resulting string consists only of terminal symbols, we stop. We say that any such resulting string has been generated by the grammar.
Context-free grammars are used to understand both the syntax and the semantics of many very useful languages, such as mathematical expressions, Java, and English. The syntax of a language indicates which strings of tokens are valid sentences in that language. The semantics of a language involves the meaning associated with strings. In order for a compiler or natural-language recognizer to determine what a string means, it must parse the string. This involves deriving the string from the grammar and, in doing so, determining which parts of the string are noun phrases, verb phrases, expressions, and terms.
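To make the generation process concrete, here is a minimal sketch in Python. The grammar below (S → aSb | ab, which generates the strings aⁿbⁿ) is an illustrative example, not one from the text; the function enumerates every terminal string derivable within a bounded number of rule applications.

```python
# A tiny context-free grammar, written as a dict mapping a nonterminal to its
# alternative right-hand sides. Uppercase symbols are nonterminals, lowercase
# symbols are terminals. (Illustrative example grammar: S -> aSb | ab.)
GRAMMAR = {"S": [["a", "S", "b"], ["a", "b"]]}
START = "S"

def generate(symbols, depth):
    """Return the set of terminal strings derivable from `symbols`
    using at most `depth` rule applications in total."""
    # Find the leftmost nonterminal, if any.
    for i, sym in enumerate(symbols):
        if sym in GRAMMAR:
            if depth == 0:
                return set()          # out of budget: cannot finish deriving
            results = set()
            for rhs in GRAMMAR[sym]:  # try each rule for this nonterminal
                expanded = symbols[:i] + rhs + symbols[i + 1:]
                results |= generate(expanded, depth - 1)
            return results
    return {"".join(symbols)}         # all terminals: a generated string

print(sorted(generate([START], 3)))   # -> ['aaabbb', 'aabb', 'ab']
```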
Some context-free grammars have a property called look ahead one. Strings from such grammars can be parsed in linear time by what I consider to be one of the most amazing and magical recursive algorithms. This algorithm is presented in this chapter. It demonstrates very clearly the importance of working within the friends level of abstraction instead of tracing out the stack frames: Carefully write the specifications for each program, believe by magic that the programs work, write the programs calling themselves as if they already work, and make sure that as you recurse, the instance being input gets smaller.
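The flavor of that recursive algorithm can be sketched as a look-ahead-one recursive-descent parser for simple arithmetic expressions. The grammar layering (exp, term, factor) is the standard one for expressions; the code below is an illustrative sketch, not the book's own program. Each procedure trusts, "by magic," that the procedures it calls work on their smaller instances, and each token is examined once, giving linear time.

```python
# A hedged sketch of a look-ahead-one recursive-descent parser/evaluator for
# arithmetic expressions with integers, '+', '*', and parentheses.
#
#   exp    -> term   { '+' term   }
#   term   -> factor { '*' factor }
#   factor -> INT | '(' exp ')'
import re

def tokenize(s):
    return re.findall(r"\d+|[()+*]", s)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):                      # the single token of look-ahead
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, tok):
        assert self.peek() == tok, f"expected {tok!r}, got {self.peek()!r}"
        self.pos += 1

    def exp(self):                       # exp -> term { '+' term }
        value = self.term()
        while self.peek() == "+":
            self.eat("+")
            value += self.term()
        return value

    def term(self):                      # term -> factor { '*' factor }
        value = self.factor()
        while self.peek() == "*":
            self.eat("*")
            value *= self.factor()
        return value

    def factor(self):                    # factor -> INT | '(' exp ')'
        if self.peek() == "(":
            self.eat("(")
            value = self.exp()
            self.eat(")")
            return value
        tok = self.peek()
        assert tok is not None and tok.isdigit(), f"unexpected token {tok!r}"
        self.pos += 1
        return int(tok)

print(Parser(tokenize("2*(3+4)+5")).exp())   # -> 19
```

The single token of look-ahead (`peek`) is what lets each procedure decide which rule to apply without ever backtracking.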
Many important and practical problems can be expressed as optimization problems. Such problems involve finding the best of an exponentially large set of solutions. It can be like finding a needle in a haystack. The obvious algorithm, considering each of the solutions, takes too much time because there are so many solutions. Some of these problems can be solved in polynomial time using network flow, linear programming, greedy algorithms, or dynamic programming. When these do not apply, recursive backtracking can sometimes find an optimal solution for some instances in some practical applications. Approximately optimal solutions can sometimes be found more easily. Randomized algorithms, which flip coins, sometimes have better luck. However, for most optimization problems, the best known algorithms require 2^Θ(n) time on the worst-case input instances. The commonly held belief is that there are no polynomial-time algorithms for them (though we may be wrong). NP-completeness helps to justify this belief by showing that some of these problems are universally hard amongst this class of problems. I now formally define this class of problems.
Ingredients: An optimization problem is specified by defining instances, solutions, and costs.
Instances: The instances are the possible inputs to the problem.
Solutions for Instance: Each instance has an exponentially large set of solutions. A solution is valid if it meets a set of criteria determined by the instance at hand.
Measure of Success: Each solution has an easy-to-compute cost, value, or measure of success that is to be minimized or maximized.
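The three ingredients can be sketched on a small, hypothetical knapsack instance (this example is mine, not the text's): the instance is a list of (weight, value) items plus a capacity; a solution is any subset of the items; a solution is valid if it fits within the capacity; and the measure of success is the total value, to be maximized. The obvious algorithm considers each of the exponentially many solutions, which is exactly why it is too slow in general.

```python
# Brute force over the exponentially large solution set of a (hypothetical)
# 0/1 knapsack instance, illustrating instances, solutions, and costs.
from itertools import combinations

def brute_force_knapsack(items, capacity):
    """items: list of (weight, value); returns (best value, best subset)."""
    best_value, best_subset = 0, ()
    n = len(items)
    # The solution set is exponentially large: all 2^n subsets of items.
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(items[i][0] for i in subset)
            if weight > capacity:                      # validity criterion
                continue
            value = sum(items[i][1] for i in subset)   # measure of success
            if value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

items = [(2, 3), (3, 4), (4, 5), (5, 8)]   # (weight, value) pairs
print(brute_force_knapsack(items, 7))       # -> (11, (0, 3))
```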
For some computational problems, allowing the algorithm to flip coins (i.e., use a random number generator) makes for a simpler, faster, easier-to-analyze algorithm. The following are the three main reasons.
Hiding the Worst Cases from the Adversary: The running time of a randomized algorithm is analyzed in a different way than that of a deterministic algorithm. At times, this way is fairer and more in line with how the algorithm actually performs in practice. Suppose, for example, that a deterministic algorithm quickly gives the correct answer on most input instances, yet is very slow or gives the wrong answer on a few instances. Its running time and its correctness are generally measured to be those on these worst-case instances. A randomized algorithm might also sometimes be very slow or give the wrong answer. (See the discussion of quick sort, Section 9.1.) However, we accept this, as long as on every input instance, the probability of doing so (over the choice of random coins) is small.
Probabilistic Tools: The field of probabilistic analysis offers many useful techniques and lemmas that can make the analysis of the algorithm simple and elegant.
Solution Has a Random Structure: When the solution that we are attempting to construct has a random structure, a good way to construct it is to simply flip coins to decide how to build each part. Sometimes we are then able to prove that with high probability the solution obtained this way has better properties than any solution we know how to construct deterministically.
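The quick sort mentioned in the first point is the classic illustration of hiding the worst case from the adversary. A fixed pivot rule (say, always the first element) has some input ordering that forces Θ(n²) time, but choosing the pivot by coin flips gives expected O(n log n) time on every input, because no single input is bad for most coin-flip sequences. The sketch below is mine, not the book's code from Section 9.1.

```python
# Randomized quick sort: the pivot is chosen by a "coin flip", so no adversary
# can pick an input that is slow for most random choices. (Illustrative sketch
# using extra lists for clarity, rather than in-place partitioning.)
import random

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)            # the coin flip: a uniformly random pivot
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 3, 8, 1, 9, 2, 7]))   # -> [1, 2, 3, 5, 7, 8, 9]
```

Note that the output is the same no matter how the coins land; only the running time is random.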
Iterative Algorithms: Measures of Progress and Loop Invariants
Selection Sort: If the input for selection sort is presented as an array of values, then sorting can happen in place. The first k entries of the array store the sorted sublist, while the remaining entries store the set of values that are on the side. Finding the smallest value from A[k + 1] … A[n] simply involves scanning the list for it. Once it is found, moving it to the end of the sorted list involves only swapping it with the value at A[k + 1]. The fact that the value previously at A[k + 1] is moved to an arbitrary place in the right-hand side of the array is not a problem, because these values are considered to be an unsorted set anyway. The running time is computed as follows. We must select n times. Selecting from a sublist of size i takes Θ(i) time. Hence, the total time is Θ(n + (n−1) + … + 2 + 1) = Θ(n²) (see Chapter 26).
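The description above can be sketched directly in Python (0-indexed, whereas the text's A[k + 1] … A[n] is 1-indexed): after iteration k, the loop invariant is that A[0..k] holds the k + 1 smallest values in sorted order.

```python
# In-place selection sort as described above: A[0..k-1] is the sorted sublist,
# and each iteration swaps the smallest remaining value into position k.
def selection_sort(A):
    n = len(A)
    for k in range(n):
        # Scan the unsorted side A[k..n-1] for the index of its smallest value.
        smallest = k
        for i in range(k + 1, n):
            if A[i] < A[smallest]:
                smallest = i
        # Swap it to the end of the sorted sublist; the displaced value simply
        # rejoins the unsorted set, so its new position does not matter.
        A[k], A[smallest] = A[smallest], A[k]
    return A

print(selection_sort([29, 10, 14, 37, 13]))   # -> [10, 13, 14, 29, 37]
```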
Cyberspace constitutes a specific environment; investigations in this field are based either on original cyberspace-dependent methods and theories, or on universal theories and methods worked out in diverse areas of knowledge not necessarily closely connected with cyberspace. A psychological theoretical construct (with vast practical perspectives) introduced by Csikszentmihalyi (2000/1975), known as optimal, or flow, experience, alongside the methods of its measurement, basically refers to the universal, that is, nonspecific theoretical and methodological background. This traditional methodology was adapted and accepted within cyberspace; it represents a growing area of investigators' activity in the field.
Like many other investigations of human behavior in cyberspace, flow-related studies are of both practical and theoretical significance. The practical significance is associated with the challenges deriving from business: a large body of research is stimulated by business expectations of gaining advantages in the quality of offers presented to customers. The theoretical significance stems from a supposition that optimal experience is an important construct mediating human activity in cyberspace, and thus represents a special level of psychological mediation of mental processes. The mechanisms of multiple mediation and remediation of a previously mediated experience are known to affect human psychic development (Cole, 1996; Vygotsky, 1962).
In this chapter, major research directions are presented and discussed, referring to the optimal, or flow, experience studies conducted within cyberspace environments.
The psychology of cyberspace, or cyberpsychology, is a new field of study. Fewer than a handful of universities around the world offer a course in this emerging area, despite the unequivocal fact that many activities today take place online. In this novel social environment, new psychological circumstances project onto new rules governing human experiences, including physiological responses, behaviors, cognitive processes, and emotions. It seems, however, that psychology gradually is acknowledging and accepting this new field of study, as more behavioral scholars have begun to research the field, growing numbers of articles in the area appear in psychology journals, and an increasing number of books related to this domain are being published. This change reflects not only the growing number of professionals who find interest in researching the new field but also the growing number of people – students and laypeople alike – who search for credible and professional answers in this relatively unknown and uninvestigated area of human psychology.
I discovered this exciting direction in psychology mainly because of personal necessity. I was living in London, Ontario, Canada – affiliated with The University of Western Ontario and collaborating with my long-time friend and colleague William (Bill) Fisher, with whom I have thoroughly studied issues of sexuality on the Internet – when the revolutionary computer network, called the Internet, emerged (quite innovative in comparison to the relatively primitive Bitnet we used before).
Intergroup conflict is sadly part of our existence. Such conflicts exist around the globe originating through differences, for example, in beliefs, religion, race, and culture. The degree of conflict between rival groups varies from mild hostility to all-out war, leading to the loss of thousands of lives every year. The field of intergroup conflict has attracted the attention of many social psychologists who have attempted to understand the phenomenon and to provide solutions to end it.
These scholars concentrated their research on the structure of such conflicts, which they perceived as comprising three major aspects: cognitive, affective, and behavioral. The cognitive aspect is demonstrated by the stereotype held by one group toward the other; the affective aspect by the prejudice held regarding the other group; and the behavioral aspect by discrimination against this group.
The fundamental component found in intergroup conflict is the stereotype – the negative perception of the other group. Stereotypes may include negative perceptions of a variety of characteristics such as traits, physical characteristics, and expected behaviors. People generally believe that their group (the ingroup) is a heterogeneous group, whereas members of the other group (the outgroup) are all similar to one another. This perception, known as the homogeneity effect, is one of the bases for our tendency to stereotype the members of the outgroup and claim that they are all, for example, hostile, liars, and lazy (Linville, Fischer, & Salovey, 1989; Linville & Jones, 1980).
Science and the Internet: The Internet's most appealing, usable, and integrating component, the World Wide Web, came from science's laboratories. Fifteen years after the invention of the web, it has become such an integral part of the infrastructure of modern societies that young people cannot imagine a world without it. It is now easier to imagine a world without roads and cars than a world without the World Wide Web.
It is time to ask in what ways the Internet has had, and is having, an impact on science. How is what once came from the laboratory influencing that laboratory's structure and the researchers working in it? In particular, how is it influencing the way research is conducted? Tim Berners-Lee, who invented the World Wide Web at CERN in Geneva, wrote in 1998:
The dream behind the Web is of a common information space in which we communicate by sharing information. Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished. There was a second part of the dream, too, dependent on the Web being so generally used that it became a realistic mirror (or in fact the primary embodiment) of the ways in which we work and play and socialize. That was that once the state of our interactions was on line, we could then use computers to help us analyse it, make sense of what we are doing, where we individually fit in, and how we can better work together.