Chapter 5 covers research on visual perception and the related psychological theories needed to fully understand the visualisation process. Cues and heuristics are discussed because they are quick, effortless ways for the brain to support decision-making: cues are stimuli in the environment that trigger a habitual thought, i.e., a heuristic. On average, cues and heuristics help shoppers reach sufficiently good decisions, although in many situations a little more effortful reflection would likely lead to even better ones. The chapter also examines how heuristics can mislead. For instance, if retailers reduce the number of stock-keeping units (SKUs), the remaining ones enter shoppers' awareness more easily because there is less clutter; shoppers then misinterpret this greater awareness as an increase in the number of SKUs. Furthermore, research shows that colour is the visual quality the brain accesses most easily, and that brightness contrast is the dimension of colour the brain uses most effortlessly. Finally, eye-tracking and the physics of the eye are discussed.
In received theories, (suboptimal) temptations arise first and, consequently, people set up rules or institutions to control them; hence, any deviation from institutions is suboptimal. However, these received theories face an anomaly, coined here the ‘Holiday License Paradox’: why would people who adopt optimal institutions turn around and designate ‘holidays’ (cheat days) that allow them to indulge in suboptimal consumption? To solve this paradox, this paper reverses the entry point: people first set up rules, and temptations are identifiable only with respect to those rules. This solution raises a new question: what is the origin of rules? People adopt rules to control ‘temerity’, i.e., overconfidence. This raises a further question: what is the origin of temerity? At first approximation, temerity is a default heuristic expressing the optimal response in life-and-death decisions, and as such it is rather efficient on average. However, temerity can become excessive, and, at second approximation, people adopt rules to control it. Once we regard rules or institutions as coming first, i.e., prior to temptations, the Holiday License Paradox becomes solvable.
This chapter introduces philosophies of science based on the notion that science functions within structures of theories, in which some theories are fundamental and protected from falsification. A short piece of fiction illustrates this notion. Building on this story, Kuhn’s paradigms are introduced, including the concepts of scientific revolutions, paradigm shifts and incommensurability between paradigms. Some problematic aspects of paradigms are discussed, such as the apparent lack of genuine scientific revolutions historically, and whether progress and preservation of knowledge in science are really possible given the incommensurability between paradigms that replace each other. It is acknowledged that paradigm thinking has had a strong and lasting influence both within science and in society, though not always in a way Kuhn would have recognised. Lakatos’ research programmes are introduced as a related but distinct approach, and the ‘new experimentalism’ is mentioned as a quite different way of dealing with theory-dependence and theory structures in science.
Theories of policy responsiveness assume that political decision-makers can rationally interpret information about voters’ likely reactions, but can we be sure of this? Political decision-makers face considerable time and information constraints, which are the optimal conditions for displaying decision-making biases, i.e., deviations from comprehensive rationality. Recent research has shown that, when evaluating policies, political decision-makers display biases related to heuristics: cognitive rules of thumb that facilitate judgments and decision-making. It is thus likely that they also rely on heuristics in other situations, such as when forming judgments of voters’ likely reactions. But what types of heuristics do political decision-makers use in such judgments, and do these heuristics contribute to misjudgements of voters’ reactions? Existing research does not answer these crucial questions. To address this lacuna, we first present illustrative evidence of how biases related to heuristics contributed to misjudgements of voters’ reactions in two policy decisions by UK governments. We then use this evidence to develop a research agenda that aims to further our understanding of when political decision-makers rely on heuristics and with what effects. Such an agenda will contribute to the literature on policy responsiveness.
Heuristics are quick, easy-to-use rules of thumb that people often use to make decisions. Research psychologists have identified heuristics in order to describe and explain behavior.
With the representativeness heuristic, people estimate the likelihood or probability of an event based on how similar it is to other known situations. When ignoring base rates, people disregard hard data and instead estimate probabilities from narratives, even when those narratives lack relevant details. With the availability heuristic, people estimate likelihood based on how quickly and easily something comes to mind. With anchoring and adjustment, people anchor on the first number indicated and adjust their estimate up or down from there. With satisficing, people look for an acceptable solution that meets minimum requirements but do not invest additional resources in seeking a perfect or optimal solution. Regression toward the mean explains why initial outliers move closer to the average upon repetition.
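The last point, regression toward the mean, lends itself to a quick numeric illustration. The following Python sketch (illustrative only, not from the source; all names are hypothetical) models performance as stable ability plus random noise and shows that the top scorers on a first test score closer to the average on a second.

```python
# A minimal sketch of regression toward the mean: performance = stable
# ability + random noise, so extreme first-round scores tend to move
# back toward the average on an independent second round.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(100, 10, n)          # stable component
test1 = ability + rng.normal(0, 10, n)    # first observation = ability + noise
test2 = ability + rng.normal(0, 10, n)    # independent second observation

top = test1 > np.percentile(test1, 95)    # initial outliers on test 1
print(f"top group, test 1 mean: {test1[top].mean():.1f}")
print(f"top group, test 2 mean: {test2[top].mean():.1f}")  # closer to 100
```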
Kant argues that sensible signs are necessary for thinking and considers only audible words adequate signs. Since the sense of hearing does not immediately lead to specific images, only audible words express the generality of conceptual representations; their role in thought is so constitutive that deafness from birth constitutes an impediment to thinking. Words play this role because they are arbitrary, associated signs that serve to memorize the logical essence of concepts and function as mere characterizations that ‘mean nothing’, unlike symbols, which provide images. Kant considers symbolic script a symptom of the lack of general concepts and banishes symbolic language from the core of his philosophy, which he requires to provide acroamatic proofs that grant nothing to images. Yet he not only recognizes the relevance of symbolic language in poetry and as a means of sensualizing abstract concepts, but also appreciates its importance when he develops an interest in a heuristic methodology not based only on chance or luck. In the preliminary stages of that methodology he recommends investigating metaphors, etymologies, and synonyms, and even rehabilitates topics as heuristic tools for obtaining insights that help formulate hypotheses to solve problems.
Traditional project management literature often portrays heuristics as flawed shortcuts that lead to errors, advocating rational, debiasing strategies to prevent cost overruns and benefit shortfalls. This is problematic, as heuristics can be effective. Building on Gigerenzer’s concept of fast-and-frugal heuristics, this study examines the use of such smart heuristics by senior managers in a large engineering consultancy firm during the early bid/no-bid decision-making phase of infrastructure projects. Employing a qualitative method from the naturalistic decision-making program, the research uncovers a decision strategy termed ‘thresholding’. This strategy distills extensive experience and the interpretation of ambiguous information into binary decisions, effectively de-selecting projects that could prove disastrous. The approach also gives credence to agency: it de-selects only likely disasters while keeping many alternatives in the portfolio to mature into potentially ‘good projects’. At the same time, it addresses Flyvbjerg’s call for scrutiny at the front end of projects to avoid catastrophic projects that start on the wrong premises. Our chapter adds to the debate on the Hiding Hand by being concerned not with the ‘hidden’ but with what can be known in the early, fuzzy front end of projects.
While individuals are expected to perceive identical quantities similarly, regardless of the units used (e.g., 1 ton or 1000 kg), several scholars suggest that consumers over-infer quantities when these are presented as bigger, phonetically longer numbers. In two experimental studies, we examine this numerosity bias in the context of household food waste. Unlike in previous studies, manipulating numerosity revealed no effect: perceptions of food waste volume and the likelihood of reducing it were influenced neither by the numeric value used (2500 g vs. 2.5 kg; Study 1) nor by the number of syllables (two kilos eight hundred seventy-five grams vs. three kilograms; Study 2).
Chapter 6 builds on students’ understanding of conditionals and loops from Chapter 5, demonstrating how they can be used to solve complex problems. Two key problem-solving approaches are applied: means-end analysis, in which a larger problem is deconstructed into smaller subproblems, and analogy, in which an approach from a previously solved problem is translated to solve a new one. Detailed examples illustrate the utility of each: students learn how to simulate the dice game of craps and how to tackle two long-standing computational problems, the Traveling Salesman problem and the Knapsack problem. This kind of practice is essential for students at this point in the textbook, as it trains the valuable skill of translating complex real-world problems into forms MATLAB can solve, and then using MATLAB to solve them.
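As a flavour of the craps simulation the chapter describes, here is a rough sketch in Python (the textbook itself uses MATLAB; the function names are our own): 7 or 11 wins on the come-out roll, 2, 3, or 12 loses, and any other total becomes the ‘point’ that must be re-rolled before a 7.

```python
# A minimal craps simulation: estimate the pass-line win probability
# by playing many independent games.
import random

def roll():
    return random.randint(1, 6) + random.randint(1, 6)

def play_craps():
    comeout = roll()
    if comeout in (7, 11):       # natural: immediate win
        return True
    if comeout in (2, 3, 12):    # craps: immediate loss
        return False
    point = comeout              # otherwise a point is established
    while True:
        r = roll()
        if r == point:           # point made before a 7: win
            return True
        if r == 7:               # seven-out: loss
            return False

wins = sum(play_craps() for _ in range(100_000))
print(f"estimated win probability: {wins / 100_000:.3f}")  # ~0.493
```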
Economic agents often have to make decisions in environments affected by regime switches, yet expectation formation has hardly been explored in this context. We report on a laboratory experiment in which participants judgmentally forecast three time series subject to regime switches. The participants forecast without context knowledge and without support from statistical software; their forecasts are based only on the previous realizations of the time series. Our interest is in explaining the average forecasts with a simple model, the bounds & likelihood heuristic. Previous studies have shown that this model explains average forecasting behavior very well for stable and stationary time series. We find that forecasts after a structural break are characterized by higher variance and lower accuracy over several periods. When this transition phase is taken into account in the model, the heuristic performs even slightly better than the Rational Expectations Hypothesis.
Overbidding in sealed-bid second-price auctions (SPAs) has been shown to be persistent and associated with cognitive ability. We study experimentally to what extent cross-game learning can reduce overbidding in SPAs, taking into account cognitive skills. Employing an order-balanced design, we use first-price auctions (FPAs) to expose participants to an auction format in which losses from high bids are more salient than in SPAs. Experience in FPAs causes substantial cross-game learning for cognitively less able participants but does not affect overbidding for the cognitively more able. Conversely, experiencing SPAs before bidding in an FPA does not substantially affect bidding behavior by the cognitively less able but, somewhat surprisingly, reduces bid shading by cognitively more able participants, resulting in lower profits in FPAs. Thus, ‘cross-game learning’ may rather be understood as ‘cross-game transfer’, as it has the potential to benefit bidders with lower cognitive ability whereas it has little or even adverse effects for higher-ability bidders.
People use fast and simple mental shortcuts for their predictions rather than making weighted assessments and rational decisions based on huge amounts of data. Heuristics are a cognitively cheap and efficient way to solve complex or novel problems because they allow large amounts of the information available in the environment to be ignored.
One of the most vexing problems in cluster analysis is the selection and/or weighting of variables in order to include those that truly define cluster structure, while eliminating those that might mask such structure. This paper presents a variable-selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. The heuristic was subjected to Monte Carlo testing across more than 2200 datasets with known cluster structure. The results indicate the heuristic is extremely effective at eliminating masking variables. A cluster analysis of real-world financial services data revealed that using the variable-selection heuristic prior to the K-means algorithm resulted in greater cluster stability.
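The published heuristic is more elaborate, but a simplified screen in its spirit can be sketched as follows (assumptions: scikit-learn’s KMeans and adjusted_rand_score; the agreement criterion and threshold are our own, not the paper’s): cluster on each variable alone and keep the variables whose single-variable partitions agree with the rest, since masking variables tend to produce partitions unrelated to the others.

```python
# A simplified variable screen: masking variables yield one-variable
# K-means partitions with near-zero adjusted Rand agreement with the
# partitions from the structure-defining variables.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def screen_variables(X, k, threshold=0.05):
    """Keep variables whose one-variable K-means partition has
    above-threshold mean adjusted Rand agreement with the others."""
    n_vars = X.shape[1]
    labels = [KMeans(n_clusters=k, n_init=10, random_state=0)
              .fit_predict(X[:, [j]]) for j in range(n_vars)]
    keep = []
    for j in range(n_vars):
        ari = [adjusted_rand_score(labels[j], labels[m])
               for m in range(n_vars) if m != j]
        if np.mean(ari) > threshold:
            keep.append(j)
    return keep

# Toy example: two clustered variables plus one pure-noise masking variable.
rng = np.random.default_rng(1)
half = rng.normal(0, 1, (100, 2))
X = np.vstack([half, half + 5])                 # two true clusters
X = np.hstack([X, rng.normal(0, 1, (200, 1))])  # masking variable
print(screen_variables(X, k=2))                 # expected: [0, 1]
```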
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviations from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical applications in the behavioral sciences. We compared the performances of nine prominent heuristic procedures for WCSS partitioning across 324 simulated data sets representative of a broad spectrum of test conditions. Performance comparisons focused both on percentage deviation from the “best-found” WCSS values and on recovery of true cluster structure. A real-coded genetic algorithm and a variable neighborhood search heuristic were the most effective methods; however, a straightforward two-stage heuristic algorithm, HK-means, also yielded exceptional performance. A follow-up experiment using 13 empirical data sets from the clustering literature generally supported the results of the experiment using simulated data. Our findings have important implications for behavioral science researchers, whose theoretical conclusions could be adversely affected by poor algorithmic performances.
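As a sketch of the two-stage idea behind HK-means (our reading, which may differ in detail from the published variant): a hierarchical method supplies initial centroids, which K-means then refines to reduce WCSS.

```python
# A minimal "hierarchical, then K-means" two-stage heuristic:
# Ward's method seeds the centroids, K-means refines them.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans

def hk_means(X, k):
    # Stage 1: Ward hierarchical clustering cut at k clusters.
    init_labels = fcluster(linkage(X, method="ward"), t=k, criterion="maxclust")
    centroids = np.array([X[init_labels == c].mean(axis=0)
                          for c in range(1, k + 1)])
    # Stage 2: K-means started from the Ward centroids.
    km = KMeans(n_clusters=k, init=centroids, n_init=1).fit(X)
    return km.labels_, km.inertia_   # inertia_ is the WCSS objective

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(i * 4, 1, (50, 3)) for i in range(3)])
labels, wcss = hk_means(X, k=3)
print(f"WCSS: {wcss:.1f}")
```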
To date, most methods for direct blockmodeling of social network data have focused on the optimization of a single objective function. However, there are a variety of social network applications where it is advantageous to consider two or more objectives simultaneously. These applications can broadly be placed into two categories: (1) simultaneous optimization of multiple criteria for fitting a blockmodel based on a single network matrix and (2) simultaneous optimization of multiple criteria for fitting a blockmodel based on two or more network matrices, where the matrices being fit can take the form of multiple indicators for an underlying relationship, or multiple matrices for a set of objects measured at two or more different points in time. A multiobjective tabu search procedure is proposed for estimating the set of Pareto efficient blockmodels. This procedure is used in three examples that demonstrate possible applications of the multiobjective blockmodeling paradigm.
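The core bookkeeping in any such multiobjective search is maintaining the set of non-dominated solutions. A minimal Python helper for minimization objectives (illustrative only, not the paper’s tabu search procedure):

```python
# Keep only Pareto-efficient solutions: those not dominated by another
# solution that is at least as good on every objective and strictly
# better on at least one (minimization).
def pareto_filter(solutions):
    """solutions: list of (candidate, objective_tuple);
    returns the non-dominated subset."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [(c, f) for c, f in solutions
            if not any(dominates(g, f) for _, g in solutions)]

candidates = [("A", (3, 5)), ("B", (4, 4)), ("C", (5, 5)), ("D", (2, 7))]
print([c for c, _ in pareto_filter(candidates)])  # ['A', 'B', 'D']
```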
Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where the goal is to identify partitions for the row and column objects such that the clusters of the row and column objects form blocks that are either complete (all 1s) or null (all 0s) to the greatest extent possible. Multiple restarts of an object relocation heuristic that seeks to minimize the number of inconsistencies (i.e., 1s in null blocks and 0s in complete blocks) with ideal block structure is the predominant approach for tackling this problem. As an alternative, we propose a fast and effective implementation of tabu search. Computational comparisons across a set of 48 large network matrices revealed that the new tabu-search heuristic always provided objective function values that were better than those of the relocation heuristic when the two methods were constrained to the same amount of computation time.
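The objective being minimized is easy to state in code. A minimal sketch (ours, not the paper’s implementation): given row and column partitions of a binary two-mode matrix, each block is typed as null or complete, whichever is cheaper, and the inconsistencies are the 1s in null blocks plus the 0s in complete blocks.

```python
# Count inconsistencies with an ideal complete/null block structure.
import numpy as np

def inconsistencies(A, row_labels, col_labels):
    total = 0
    for r in np.unique(row_labels):
        for c in np.unique(col_labels):
            block = A[np.ix_(row_labels == r, col_labels == c)]
            ones = block.sum()
            total += min(ones, block.size - ones)  # null vs. complete
    return total

A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])
print(inconsistencies(A, np.array([0, 0, 1]), np.array([0, 0, 1, 1])))  # 0
```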
Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally optimal solutions for matrices up to size 30×30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally optimal solutions, but computation time considerations generally limit their applicability to matrix sizes no greater than 35×35. Accordingly, a variety of heuristic methods have been proposed for larger matrices, including iterative quadratic assignment, tabu search, simulated annealing, and variable neighborhood search. Although these heuristics can produce exceptional results, they are prone to converging to local optima from which the permutation is difficult to dislodge via traditional neighborhood moves (e.g., pairwise interchanges, object-block relocations, object-block reversals). We show that a heuristic implementation of dynamic programming yields an efficient procedure for escaping local optima. Specifically, we propose applying dynamic programming to reasonably sized subsequences of consecutive objects in the locally optimal permutation identified by simulated annealing, to further improve the value of the objective function. Experimental results are provided for three classic matrix permutation problems in the combinatorial data analysis literature: (a) maximizing a dominance index for an asymmetric proximity matrix; (b) least-squares unidimensional scaling of a symmetric dissimilarity matrix; and (c) approximating an anti-Robinson structure for a symmetric dissimilarity matrix.
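A drastically simplified stand-in for the escape step, shown here for the dominance-index problem (the paper re-optimizes subsequences with dynamic programming, which is far more efficient than the exhaustive window re-permutation sketched below):

```python
# After a local search stalls, exhaustively re-permute short windows of
# consecutive objects. Objective: the dominance index, i.e., the sum of
# above-diagonal entries of the reordered matrix (maximization).
import itertools
import numpy as np

def dominance(A, perm):
    B = A[np.ix_(perm, perm)]
    return B[np.triu_indices_from(B, k=1)].sum()

def improve_windows(A, perm, width=5):
    perm = list(perm)
    best = dominance(A, perm)
    for start in range(len(perm) - width + 1):
        window = perm[start:start + width]
        for cand in itertools.permutations(window):
            trial = perm[:start] + list(cand) + perm[start + width:]
            val = dominance(A, trial)
            if val > best:
                perm, best = trial, val
    return perm, best

rng = np.random.default_rng(3)
A = rng.random((10, 10))
print(improve_windows(A, list(range(10)))[1])
```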
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of “exemplars” as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed a new simulated annealing heuristic for the p-median problem and completed a thorough investigation of its computational performance. The salient findings from our experiments are that our new method substantially outperforms a previous implementation of simulated annealing and is competitive with the most effective metaheuristics for the p-median problem.
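For readers unfamiliar with the approach, a bare-bones simulated annealing loop for the p-median problem might look as follows (an illustration of the general technique, not the paper’s tuned implementation; the cooling schedule and parameters are arbitrary):

```python
# Simulated annealing for p-median: the move swaps one current center
# for a non-center; worse solutions are accepted with the usual
# Metropolis probability exp(-delta / temperature).
import math
import random
import numpy as np

def pmedian_cost(D, centers):
    return D[:, centers].min(axis=1).sum()  # each object to nearest center

def anneal(D, p, temp=1.0, cooling=0.995, iters=5000, seed=0):
    rng = random.Random(seed)
    n = D.shape[0]
    centers = rng.sample(range(n), p)
    cost = pmedian_cost(D, centers)
    best, best_cost = centers[:], cost
    for _ in range(iters):
        out = rng.randrange(p)
        new = rng.choice([j for j in range(n) if j not in centers])
        cand = centers[:out] + [new] + centers[out + 1:]
        delta = pmedian_cost(D, cand) - cost
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            centers, cost = cand, cost + delta
            if cost < best_cost:
                best, best_cost = centers[:], cost
        temp *= cooling
    return best, best_cost

rng = np.random.default_rng(4)
X = rng.random((60, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # Euclidean distances
print(anneal(D, p=4)[1])
```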
Although the K-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The p-median model is an especially well-studied clustering problem that requires the selection of p objects to serve as cluster centers. The objective is to choose the cluster centers such that the sum of the Euclidean distances (or some other dissimilarity measure) of objects assigned to each center is minimized. Using 12 data sets from the literature, we demonstrate that a three-stage procedure consisting of a greedy heuristic, Lagrangian relaxation, and a branch-and-bound algorithm can produce globally optimal solutions for p-median problems of nontrivial size (several hundred objects, five or more variables, and up to 10 clusters). We also report the results of an application of the p-median model to an empirical data set from the telecommunications industry.
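Only the first of the three stages is simple enough to sketch here (a hypothetical ‘greedy add’ variant of the greedy heuristic; the Lagrangian relaxation and branch-and-bound stages that certify global optimality are beyond a short example): grow the center set one object at a time, always picking the object that most reduces the summed distances.

```python
# Greedy "add" construction for the p-median problem.
import numpy as np

def greedy_pmedian(D, p):
    n = D.shape[0]
    centers = []
    for _ in range(p):
        best_j, best_cost = None, np.inf
        for j in range(n):
            if j in centers:
                continue
            # Cost if j were added: sum of distances to nearest center.
            cost = D[:, centers + [j]].min(axis=1).sum()
            if cost < best_cost:
                best_j, best_cost = j, cost
        centers.append(best_j)
    return centers, best_cost

rng = np.random.default_rng(5)
X = rng.random((80, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
print(greedy_pmedian(D, p=5))
```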
The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal or ordinal attributes. In such instances, the CPP incorporates edge costs obtained from an aggregation of binary equivalence relations among the attributes. We review existing theory and methods for the CPP and propose two versions of a new neighborhood search algorithm for efficient solution. The first version (NS-R) uses a relocation algorithm in the search for improved solutions, whereas the second (NS-TS) uses an embedded tabu search routine. The new algorithms are compared to simulated annealing (SA) and tabu search (TS) algorithms from the CPP literature. Although the heuristics yielded comparable results for some test problems, the neighborhood search algorithms generally yielded the best performances for large and difficult instances of the CPP.
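A single-vertex relocation pass, the simpler of the building blocks the NS variants extend, can be sketched as follows (illustrative only, not the chapter’s NS-R or NS-TS code; negative edge costs reward placing two vertices in the same clique):

```python
# Relocation heuristic for the CPP: repeatedly move a vertex to the
# cluster (or a new singleton) that most reduces the summed
# within-cluster edge costs, until no improving move remains.
import numpy as np

def cpp_cost(C, labels):
    total = 0.0
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i] == labels[j]:
                total += C[i, j]
    return total

def relocation_heuristic(C, labels):
    labels = list(labels)
    improved = True
    while improved:
        improved = False
        for v in range(len(labels)):
            current = cpp_cost(C, labels)
            for target in set(labels) | {max(labels) + 1}:  # incl. new cluster
                old = labels[v]
                labels[v] = target
                if cpp_cost(C, labels) < current:
                    current = cpp_cost(C, labels)
                    improved = True
                else:
                    labels[v] = old   # revert non-improving move
    return labels, cpp_cost(C, labels)

rng = np.random.default_rng(6)
C = rng.uniform(-1, 1, (12, 12))
C = (C + C.T) / 2   # symmetric edge costs
print(relocation_heuristic(C, list(range(12)))[1])
```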