
Multi-attribute utility models as cognitive search engines

Published online by Cambridge University Press: 01 January 2023

Pantelis P. Analytis*
Affiliation:
Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
Amit Kothiyal
Affiliation:
Max Planck Institute for Human Development
Konstantinos Katsikopoulos
Affiliation:
Max Planck Institute for Human Development

Abstract

In optimal stopping problems, decision makers are assumed to search randomly to learn the utility of alternatives; in contrast, in one-shot multi-attribute utility optimization, decision makers are assumed to have perfect knowledge of utilities. We point out that these two contexts represent the boundaries of a continuum, of which the middle remains uncharted: How should people search intelligently when they possess imperfect information about the alternatives? We assume that decision makers first estimate the utility of each available alternative and then search the alternatives in order of their estimated utility until expected benefits are outweighed by search costs. We considered three well-known models for estimating utility: (i) a linear multi-attribute model, (ii) equal weighting of attributes, and (iii) a single-attribute heuristic. We used 12 real-world decision problems, ranging from consumer choice to industrial experimentation, to measure the performance of the three models. The full model (i) performed best on average but its simplifications (ii and iii) also had regions of superior performance. We explain the results by analyzing the impact of the models’ utility order and estimation error.
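The three estimation models named in the abstract can be sketched in code. The following is a minimal illustration under our own assumptions (hypothetical function names, a training set used to fit the MLU weights and to standardize attributes); the paper does not publish an implementation, so this is only one plausible reading of the models.

```python
import numpy as np

def estimate(model, X, u_train, X_train):
    """Estimate utilities for the alternatives in X (rows = alternatives,
    columns = attributes) under one of three models:
      "MLU": multi-attribute linear utility, fit by least squares
      "EW":  equal weighting of standardized attributes
      "SA":  the single attribute most correlated with utility in training
    """
    if model == "MLU":
        # Regress utility on all attributes (with an intercept).
        A = np.column_stack([np.ones(len(X_train)), X_train])
        w, *_ = np.linalg.lstsq(A, u_train, rcond=None)
        return np.column_stack([np.ones(len(X)), X]) @ w
    if model == "EW":
        # Standardize attributes on the training set, then average them.
        Z = (X - X_train.mean(axis=0)) / X_train.std(axis=0)
        return Z.mean(axis=1)
    if model == "SA":
        # Pick the attribute with the strongest absolute Pearson correlation.
        corrs = [abs(np.corrcoef(X_train[:, j], u_train)[0, 1])
                 for j in range(X_train.shape[1])]
        return X[:, int(np.argmax(corrs))]
    raise ValueError(model)

def search_order(estimates):
    """Indices of alternatives from highest to lowest estimated utility."""
    return list(np.argsort(-np.asarray(estimates)))
```

The decision maker then samples alternatives in the order returned by `search_order` until the expected benefit of another sample no longer exceeds the search cost.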

Information

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2014] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figure 1: An example of the decision-making process. There are three alternatives available in the market, represented by the points K, L, and M on the plot. The average rating a_i (x axis) maps to the subjective utility (y axis) of the alternatives for the decision maker, but with some error. As represented by the straight line, the decision maker believes that the expected utility of the alternatives can be estimated by u_i = 0.3 a_i, where a_i is the average rating. As represented by the bell-shaped curves, the decision maker believes that the estimation error ε_i of her model is i.i.d. Gaussian with mean µ = 0 and standard deviation σ = 0.5. The inset table shows the expected utility (EU) and realized utility (RU) of the alternatives. The decision maker will first sample the alternative with the highest attribute value and then, at each step, decide whether to sample the next alternative. For c = 0.05, for example, the decision maker will sample the alternatives K and L and choose the alternative L.
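One standard way to formalize the stopping decision in this example is through the expected improvement of the next alternative under the Gaussian error model: sample it only if E[max(U − best, 0)] exceeds the cost c, where U is normally distributed around the estimate with standard deviation σ. The sketch below is our own illustration (hypothetical names; the paper's exact rule may differ), using the caption's parameters σ = 0.5 and c = 0.05 for the usage values.

```python
import math

def expected_gain(mu_next, best_so_far, sigma):
    """E[max(U - best_so_far, 0)] for U ~ N(mu_next, sigma^2):
    (mu - b) * Phi(d) + sigma * phi(d), with d = (mu - b) / sigma."""
    d = (mu_next - best_so_far) / sigma
    pdf = math.exp(-0.5 * d * d) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(d / math.sqrt(2)))
    return (mu_next - best_so_far) * cdf + sigma * pdf

def directed_search(expected, realized, sigma, c):
    """Sample alternatives in order of expected utility; stop once the
    expected gain of the next sample falls below the search cost c.
    Returns the utility of the best alternative sampled."""
    order = sorted(range(len(expected)), key=lambda i: -expected[i])
    best = None
    for step, i in enumerate(order):
        if step > 0 and expected_gain(expected[i], best, sigma) < c:
            break
        best = realized[i] if best is None else max(best, realized[i])
    return best
```

For instance, with estimates (0.9, 0.6, 0.3), realized utilities (0.5, 0.8, 0.2), σ = 0.5, and c = 0.05, the search samples the first two alternatives, stops before the third, and returns 0.8 — mirroring the K-then-L pattern described in the caption.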


Table 1: The environment names refer to the variable that was assumed to be the utility in that environment. Asterisks indicate that the source data set was composed of several alternatives that were available in the market at the point of data collection; all the remaining data sets represent the results of controlled experiments. No. A refers to the number of alternatives and No. a to the number of attributes. The correlation columns report the strongest Pearson correlation between the utility variable and the attributes, the mean Pearson correlation between the utility and the attributes, and the mean intercorrelation between the attributes. CPU stands for central processing unit and f. lamp for fluorescent lamp.


Table 2: The average performance of the models across the 12 environments in 8 cost conditions. Multi-attribute linear utility (MLU) performed best, but as the cost of search decreased, the performance differences with equal-weighted linear utility (EW) and single-attribute utility (SA) attenuated.


Figure 2: The average utility achieved by the subjective utility models as a function of search cost. Overall, multi-attribute linear utility (MLU) performed best, but equal-weighted linear utility (EW) and single-attribute utility (SA) also had regions of superior performance. The differences between the models get smaller as the cost of search decreases. CPU stands for central processing unit and f. lamp for fluorescent lamp.


Figure 3: The average utility of the best explored alternative after a search of length k. In environments with a high R2, such as central processing unit (CPU) efficiency and octane quality, the best alternatives are located early in the search and the differences between these strategies and random search are the largest. In most cases there is a close correspondence between model performance in the full task and model performance in mere search; contrast with Figure 2. MLU stands for multi-attribute linear utility, EW for equal-weighted linear utility, and SA for single-attribute utility.


Figure 4: On the upper left side we present the average standard deviation of the error component of each of the models in that environment. The length of search is moderated by the best alternative discovered so far and by the estimated deviation of the models' error component. On average, multi-attribute linear utility (MLU) searches less than the equal-weighted linear utility (EW) and single-attribute utility (SA) models, but there is some variability across environments.


Table 3: The average length of search (number of alternatives sampled k) across environments for the three models for different costs of search. The equal-weighted linear utility (EW) and single-attribute utility (SA) models search more alternatives, on average, than multi-attribute linear utility (MLU). The differences in the length of search are more pronounced for low costs.


Table 4: Accuracy of the three models on binary choices. Fifty percent of the dataset was used as a training set, and all the possible pairs of the test set were used to evaluate the models. Multi-attribute linear utility (MLU) performed best, on average, followed by single-attribute utility (SA) and equal-weighted linear utility (EW). MLU performed best in seven environments, EW in four, and SA in one.

Supplementary material: File
Analytis et al. supplementary material (97.9 KB)