Support Vector Machines (SVMs) are among the most widely used classification and regression algorithms for data analysis, pattern recognition, and cognitive tasks. Yet the learning problems that SVMs can solve are limited in size because of high computational cost and excessive storage requirements. Many variations of the original SVM algorithm have been introduced that scale better to large problems. They change the SVM framework quite drastically, for example by applying optimization criteria other than the maximum margin or by introducing different error metrics for the cost function. Such algorithms may work for some applications, but they lack the robustness and universality that make SVMs so popular.
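For reference, the maximum-margin criterion mentioned above is, in its standard soft-margin form (generic notation, not tied to any particular implementation in this chapter),

    \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
    \quad \text{subject to} \quad y_i\,(w^{\top} x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ i = 1, \ldots, n.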
The approach taken here is to maintain the SVM algorithm in its original form and scale it to large problems through parallelization. Computer performance can no longer be improved at the pace of the last few decades by raising clock frequencies; today, significant accelerations are achieved mostly through parallel architectures, and multicore processors are commonplace. Mapping the SVM algorithm to multicore processors with shared-memory architectures is straightforward, yet this approach does not scale to a large number of processors. Here we investigate parallelization concepts that scale to hundreds and thousands of cores, where, for example, cache coherence can no longer be maintained.
A number of SVM implementations on clusters or graphics processing units (GPUs) have been proposed recently. A parallel optimization algorithm based on gradient projections has been demonstrated (see Zanghirati and Zanni, 2003; Zanni, Serafini, and Zanghirati, 2006) that uses a spectral gradient method for fast convergence while maintaining the Karush-Kuhn-Tucker (KKT) constraints.
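To make the gradient-projection idea concrete, the following is a minimal single-core sketch (our own illustration, not the parallel spectral-gradient code cited above; the function names and the fixed step size are assumptions, and a Barzilai-Borwein step would replace the constant learning rate in the spectral variant). It maximizes the standard SVM dual, W(alpha) = sum_i alpha_i - (1/2) alpha^T Q alpha with Q = (y y^T) * K, over the feasible set 0 <= alpha_i <= C, sum_i y_i alpha_i = 0:

    import numpy as np

    def project(v, y, C, tol=1e-10):
        """Project v onto {alpha : 0 <= alpha <= C, sum(y * alpha) = 0}.
        The projection has the form clip(v - lam * y, 0, C) for a scalar lam,
        found by bisection because sum(y * clip(v - lam * y, 0, C)) is
        non-increasing in lam.  Assumes both classes (+1 and -1) are present."""
        def h(lam):
            return np.sum(y * np.clip(v - lam * y, 0.0, C))
        lo, hi = -1e6, 1e6  # assumed bracket; widen if necessary
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if h(mid) > 0:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return np.clip(v - 0.5 * (lo + hi) * y, 0.0, C)

    def dual_projected_gradient(K, y, C=1.0, lr=1e-3, iters=500):
        """Projected gradient ascent on the SVM dual objective
        W(alpha) = sum(alpha) - 0.5 * alpha^T Q alpha, with Q = (y y^T) * K."""
        Q = np.outer(y, y) * K
        alpha = np.zeros(len(y))  # feasible starting point
        for _ in range(iters):
            grad = 1.0 - Q @ alpha
            alpha = project(alpha + lr * grad, y, C)
        return alpha

Practical large-scale solvers typically combine such projections with decomposition, updating only a small working set of dual variables per iteration, which is what makes the approach amenable to parallelization.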
By Evan Xiang, Nathan Liu, and Qiang Yang, Hong Kong University of Science and Technology
Machine learning and data-mining technologies have already achieved significant success in many knowledge engineering areas, including web search, computational advertising, and recommender systems. A major challenge in machine learning is the data sparsity problem. For example, in the domain of online recommender systems, we attempt to recommend information items (e.g., movies, TV, books, news, images, web pages) that are likely to be of interest to the user. However, the item space is usually very large, and the amount of available user preference data is small. When the user data are too sparse, it is difficult to obtain a reliable and useful model for recommendation. Whereas large online sites like Amazon and Google can easily access huge volumes of user data, the enormous number of smaller online business sites, which collectively constitute the long tail of the web, are much more likely to have very sparse user data and thus have difficulty generating accurate recommendations. One potential solution to the data sparsity problem is to transfer knowledge from other information sources (e.g., Mehta and Hofmann, 2007; Li, Yang, and Xue, 2009). Such techniques for knowledge transfer are called transfer learning (see, e.g., Pan and Yang, 2010). An additional issue is that, in reality, many small websites often attract similar users and/or provide similar items, if not identical ones, which implies that data about such users/items could potentially be distributed across different systems. For example, Delicious and Digg are both popular online social bookmarking tools.
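As a purely illustrative sketch of how overlapping users can be exploited (a generic collective matrix factorization toy, not the specific transfer learning method developed in this chapter; all function and variable names are our own), one can learn a single user-factor matrix jointly from a dense source rating matrix and a sparse target rating matrix that share the same user set:

    import numpy as np

    def shared_user_mf(R_src, R_tgt, k=8, lr=0.01, reg=0.1, epochs=50, seed=0):
        """Toy collective matrix factorization.  R_src and R_tgt are rating
        matrices with the same rows (users); unobserved ratings are 0.
        A single user-factor matrix U is learned jointly, with separate
        item-factor matrices per system, so the dense source data helps
        regularize the sparse target."""
        rng = np.random.default_rng(seed)
        n_users = R_src.shape[0]          # assumed equal to R_tgt.shape[0]
        U = 0.1 * rng.standard_normal((n_users, k))
        V_src = 0.1 * rng.standard_normal((R_src.shape[1], k))
        V_tgt = 0.1 * rng.standard_normal((R_tgt.shape[1], k))
        for _ in range(epochs):
            for R, V in ((R_src, V_src), (R_tgt, V_tgt)):
                users, items = np.nonzero(R)       # observed entries only
                for u, i in zip(users, items):
                    err = R[u, i] - U[u] @ V[i]
                    u_old = U[u].copy()
                    U[u] += lr * (err * V[i] - reg * U[u])
                    V[i] += lr * (err * u_old - reg * V[i])
        return U, V_src, V_tgt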
In this paper, we consider a class of scheduling problems that are among the fundamental optimization problems in operations research. More specifically, we deal with a particular version called job shop scheduling with unit length tasks. Using the results of Hromkovič, Mömke, Steinhöfel, and Widmayer presented in their work Job Shop Scheduling with Unit Length Tasks: Bounds and Algorithms, we analyze the problem setting for 2 jobs with an unequal number of tasks. We contribute a deterministic algorithm which achieves a vanishing delay in certain cases and a randomized algorithm with a competitive ratio tending to 1. Furthermore, we investigate the problem with 3 jobs and we construct a randomized online algorithm which also has a competitive ratio tending to 1.
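For context, the competitive ratio referred to here is the standard worst-case measure for online algorithms: an online algorithm ALG is c-competitive if, for every instance I,

    \mathrm{ALG}(I) \;\le\; c \cdot \mathrm{OPT}(I) + \beta

for some constant \beta (often \beta = 0), where OPT is the optimal offline schedule length. A competitive ratio tending to 1 therefore means the online schedule becomes asymptotically as good as the optimal offline schedule.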
Two souls, alas, do dwell within his breast; The one is ever parting from the other.
– Goethe Faust, Part I
Optimization is the sine qua non of decision theory. The list of contributors to the concept is a veritable who's who of mathematics that includes Fermat, Newton, Gauss, Euler, Lagrange, Fourier, Edgeworth, Pareto, von Neumann, Wiener, Dantzig, Kantorovich, Bellman, Kalman, Arrow, Nash, and others. Indeed, optimization has played a central role in virtually every decision-making procedure and enjoys uncontested mathematical respectability. As Euler noted, “Since the fabric of the world is the most perfect and was established by the wisest Creator, nothing happens in this world in which some reason of maximum or minimum would not come to light” (cited in Polya, 1954).
With the exception of work based on the results of Edgeworth, Pareto, von Neumann, Arrow, and Nash, however, optimization theory has focused on the behavior of a single decision maker. Indeed, the concept of optimization is an individual concept. In group scenarios, the issues become more complex: If a group wishes to optimize, it must act as if it were a single entity. As Arrow's (1951) impossibility theorem establishes, however, it is not generally possible to define a preference ordering for a group in terms of the preference orderings of its individual members. Consequently, the concept of optimization in group settings is often expressed through such concepts as equilibrium, nondominance, and social welfare, none of which enjoy the type of global superlativeness that the term optimization typically connotes.
The manner in which mathematical theories are applied does not depend on preconceived ideas: it is a purposeful technique depending on, and changing with, experience.
– William Feller An Introduction to Probability Theory and Its Applications (Wiley, 1950)
Given a group of stakeholders, a classical rational decision-making model comprises three distinct structural elements. First is the set of feasible actions (those actions that satisfy the logical, physical, and economic constraints associated with each stakeholder); second is the set of possible outcomes that can obtain as a result of all players taking action; and third is a preference ordering of the outcomes for each stakeholder. There is also a fourth component, namely, the concept of logic, or rationality, that governs decision making, but that component is not a part of the model structure. In this chapter, we focus exclusively on the structural components and defer consideration of rationality until Chapter 3.
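In generic notation (ours, not necessarily the book's), these three structural elements can be summarized as

    \mathcal{A} = A_1 \times \cdots \times A_n, \qquad o : \mathcal{A} \to O, \qquad \succeq_i \ \subseteq O \times O \quad (i = 1, \ldots, n),

where A_i is the feasible action set of stakeholder i, the map o assigns an outcome in O to each joint action, and \succeq_i is stakeholder i's preference ordering over the outcomes.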
As discussed in Chapter 1, under the classical game theory model, all players come to the moment of decision with all of their preference orderings completely and categorically defined. This model, however, does not permit group-level preferences to be defined, which significantly limits the use of classical game theory as a model of groups whose members possess sophisticated social relationships. This limitation is addressed by replacing categorical preference orderings with conditional preference orderings that explicitly account for the social influence that the players exert on each other.
Knowledge is widely taken to be a matter of pedigree. To qualify as knowledge, beliefs must be both true and justified. Sometimes justification is alleged to require tracing of the biological, psychological, or social causes of belief to legitimating sources. Another view denies that causal antecedents are crucial. Beliefs become knowledge only if they can be derived from impeccable first premises according to equally noble first principles. But whether pedigree is traced to origins or fundamental reasons, centuries of criticism suggest that our beliefs are born on the wrong side of the blanket. There are no immaculate preconceptions.
Where all origins are dark, preoccupation with pedigree is self-defeating. We ought to look forward rather than backward and avoid fixation on origins.
– Isaac Levi The Enterprise of Knowledge (MIT Press, 1980)
Game theory has a great pedigree. To many, it is viewed as settled knowledge. In particular, the basic assumptions of categorical preference orderings and individual rationality have remained intact since the inception of the theory. Given these assumptions, the bulk of attention has focused on defining various solution concepts that conform to the rationality assumptions, including minimax theory, notions of equilibrium, coalition formation, principles of uncertainty, and analysis of repeated games. A fairly recent focus of interest has been advanced by the field of behavioral economics, which attempts to imbue games with greater psychological realism (for example, by accounting for notions of fairness and reciprocity).
Hypothesen sind Netze, nur der wird fangen, der auswirft.
Theories are nets: only he who casts will catch.
– Novalis (Friedrich von Hardenberg) Dialogen und Monolog, 1798
John Dewey observed that “In scientific inquiry, the criterion of what is taken to be settled, or to be knowledge, is being so settled that it is available as a resource in further inquiry; not being settled in such a way as not to be subject to revision in further inquiry” [emphasis in original, Dewey, 1938, pp. 8–9]. Game theory has been successfully applied to the subject matter of general economic theory, particularly for competitive and market-driven scenarios where individual rationality dominates. It is firmly established; it is settled. In fact, it is so settled that it is available as a resource for further inquiry into the issue of multistakeholder decision making for scenarios where social relationships extend beyond self-interest.
The net cast by classical game theory, however, is designed to capture the essential characteristics of multistakeholder decision scenarios where all participants possess categorical (unconditional) preference orderings and are committed to achieving the best individual outcomes for themselves. This book revises game theory to cast a wider net designed to capture, in addition, decision-making scenarios where cooperation, compromise, negotiation, and altruism are significant issues, and where notions of concordant group behavior are important.
One cannot escape the feeling that these mathematical formulas have an independent existence and an intelligence of their own, that they are wiser than we are, wiser even than their discoverers, that we get more out of them than was originally put into them.
– Heinrich Hertz
The uses of probability theory
Whenever one talks of foundational assumptions, it is hard to escape addressing philosophical issues. One area of mathematics that has long captured the interest of philosophers is probability theory. As betrayed by its very name, probability theory is typically applied to the epistemological issue of quantifying uncertainty regarding phenomena for which precise knowledge is not available.
In this book, we appropriate the mathematical structure and syntax of probability theory for a praxeological application. By so doing, we move far afield from its traditional epistemological home. Keeping in mind that this interpretation is nontraditional and may be controversial, we take considerable pains to provide a principle-based justification for appropriating probability theory for a nonepistemological application. As Hamming aptly observed, “it is dangerous to apply any part of science without understanding what is behind the theory” (Hamming, 1991, p. viii). Applying probability theory is essentially an art form and must be used with judgment and skill. The main difference between traditional usage and our usage is that, whereas probability is traditionally painted on an epistemological canvas, we choose to paint on a praxeological canvas as well.
Probability theory was to native good sense what a telescope or spectacles were to the naked eye: a mechanical extension designed along the same basic principles as the original.
– Gerd Gigerenzer et al. The Empire of Chance (Cambridge University Press, 1989)
The question of how to make choices in the presence of uncertainty that arises because of the lack of complete information has captured the interest of decision theorists for centuries. The approach to dealing with such situations depends on how the decision maker characterizes the uncertainty. With a situation of complete ignorance, it is difficult to proceed. Consequently, much effort has been devoted to devising rules to govern the way inferences are made. As put by Gigerenzer et al. (1989), “Although these rules cannot eliminate the uncertainty intrinsic to the situation, they do eliminate the uncertainty of opinion about how to weigh what information one has, and therefore, about what course of action to pursue” (p. 286). The approach that has most widely influenced science and economics is to comply with the certainty-equivalence hypothesis: Given a decision problem whose outcome is uncertain, it is assumed that a payoff value exists such that the decision maker is indifferent between receiving that payoff for certain and the uncertain outcome. A decision maker who complies with the certainty-equivalence hypothesis will then act as if the certainty-equivalent payoff were deterministically defined.
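In standard expected-utility notation (not specific to this book), the hypothesis asserts that for an uncertain payoff X there exists a sure amount c_X, the certainty equivalent, satisfying

    u(c_X) = \mathbb{E}\,[\,u(X)\,],

so that the decision maker treats the gamble X and the certain payoff c_X as interchangeable when choosing a course of action.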
One can argue very persuasively that the weakest link in any chain of argument should not come at the beginning.
– Howard Raiffa Decision Analysis (Addison-Wesley, 1968)
Decision making is perhaps the most fundamental intellectual enterprise. Indeed, the word intelligent comes from the Latin roots inter (between) + legĕre (to choose). The study of motives and methods regarding how decisions might and should be made has long captured the interest of philosophers and social scientists and, more recently, of engineers and computer scientists. An important objective of such studies is to establish a framework within which to define rational behavior and solution concepts that result in appropriate choices. The development of formal theories of decision making, however, has proven to be a challenging and complex task. The reason is simple: Every nontrivial decision problem involves multiple stakeholders. A stakeholder is any entity that has an interest in the consequences of a decision, whether or not it has direct control over the decision. Unless the interests of all stakeholders coincide perfectly (a rare occurrence), conflicts will exist. The central challenge of any theory of decision making, therefore, is how to make choices in the presence of conflicting stakeholder interests.
The way a group of stakeholders deals with conflict is a function of its sociality: Conflict can result in either competition or cooperation.
In this chapter we present some application examples that are lengthier than those discussed in earlier chapters. Although these examples are heavily stylized, they illustrate essential features of decision making in various contexts. The first three examples, the Battle of the Sexes, Ultimatum, and Stag Hunt games, together with the Prisoner's Dilemma game discussed in earlier chapters, have received a great deal of attention as important examples of game theory, since they serve as models of many real-world situations. The fourth example, the Family Walk, is a social choice example that illustrates how conditional utilities can be used to define a social network in which, although every participant gets a vote, the votes are not delivered in a social vacuum. The fifth and final example is a multiagent system comprising three autonomous decision makers that must function cooperatively.
Battle of the Sexes
As discussed in Example 1.3, the Battle of the Sexes game involves a man and a woman who plan to meet in town for a social function. She (S) prefers to go to the ballet (B), whereas he (H) prefers the dog races (D). Each prefers to be with the other, however, regardless of where the social function takes place.
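A typical payoff assignment for this game (illustrative only; the exact numbers used in Example 1.3 may differ) lists S's payoff first and H's second:

                     H chooses B    H chooses D
    S chooses B        (2, 1)         (0, 0)
    S chooses D        (0, 0)         (1, 2)

Both coordination outcomes, (B, B) and (D, D), are preferred by each player to attending different events, which is what creates the tension between coordinating with the other and attending one's favorite event.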