
Time pressure reduces financial bubbles: evidence from a forecasting experiment

Published online by Cambridge University Press:  27 March 2026

Mikhail Anufriev
Affiliation:
Department of Economics, UTS Business School, University of Technology Sydney, Sydney, NSW, Australia Department of Finance, VŠB - Technical University of Ostrava, Ostrava, Czechia
Frieder Neunhoeffer
Affiliation:
Universidade de Lisboa, ISEG Lisbon School of Economics and Management, ISEG Research, Lisboa, Portugal
Jan Tuinstra*
Affiliation:
University of Amsterdam, Amsterdam School of Economics, Amsterdam, The Netherlands
Corresponding author: Jan Tuinstra; Email: j.tuinstra@uva.nl

Abstract

We investigate whether time pressure exacerbates or mitigates bubbles in laboratory experiments. We find that under high time pressure price volatility is lower and market prices are closer to their fundamental value. This is due to participants using simpler adaptive forecasting strategies, instead of the self-reinforcing extrapolative expectations that they use under low time pressure, and which are conducive to the emergence of bubbles. In addition, by substantially increasing the number of decision periods in our experiment, we find that in the long run prices tend to converge to their fundamental value, even in the absence of time pressure.

Information

Type
Original Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of the Economic Science Association.

1. Introduction

Expectations play a crucial role in economics due to their profound influence on decision-making. Individuals base their choices today—such as consuming or producing goods and services, investing in financial markets, or purchasing real estate—on expectations about the future. These choices shape economic outcomes which, in turn, influence the expectations of decision-makers. Understanding this feedback loop between expectations and market outcomes is essential for evaluating market functioning and predicting the effects of economic policies. A common simplification assumes that expectations are formed rationally, consistent with the underlying economic model. However, this assumption has been challenged both by experimental evidence (Duffy, 2016; Palan, 2013) and survey data (Case & Shiller, 2003; Coibion et al., 2018). Recent theoretical and experimental work shows that extrapolative expectations can temporarily become self-fulfilling, particularly in demand-driven asset markets, leading to persistent price deviations from fundamental values (Anufriev & Hommes, 2012; Barberis et al., 2018; Fuster et al., 2010; Hommes et al., 2005).

This paper advances the understanding of forecasting in self-referential financial markets by addressing two key observations. First, the existing literature on expectation formation has largely overlooked the role of decision-making time. In financial markets, the ability to respond swiftly to profit opportunities is critical for traders’ success. This is evident in the fast-paced nature of trading, from the rapid reactions of open outcry exchanges to market news within seconds (Busse & Green, 2002) to the rise of high-frequency trading algorithms optimized for ultra-short-term transactions. As a result, traders often operate under significant time pressure, which influences their behavior and shapes market dynamics.

Second, experimental research on forecasting typically examines behavior in stationary market environments over a relatively small number of decision periods—usually 50 or fewer. However, evidence from other domains (e.g., the Cournot game in Friedman et al., 2015) suggests that this level of repetition may not fully capture the underlying learning processes of participants. Thus, it is essential to investigate whether the observed patterns in expectation formation persist over an extended number of decision periods or are merely transitory.

We address these gaps by investigating the effects of (i) extended running time (i.e., more decision periods) and (ii) time pressure (i.e., reduced time per decision) on asset price dynamics in a Learning-to-Forecast (LtF) experiment. LtF experiments have become essential for studying financial markets (Hommes, 2011). These experiments isolate participants’ forecasting abilities, with computerized agents executing trades based on their price predictions. Market clearance is handled by the experimental software, allowing researchers to analyze the self-referential nature of markets and the reciprocal influence between forecasts and outcomes.

A consistent finding in LtF asset pricing experiments is the endogenous emergence of bubbles and crashes. Building on the influential study by Hommes et al. (2005), our experiment extends the number of decision periods in a stationary environment to approximately 150 and systematically varies the time available for decision-making both between and within subjects. Our main findings are as follows.

First, we find that time pressure has a mitigating effect on the occurrence of bubbles and crashes, particularly during the first $50$ periods. To understand the underlying mechanism, we analyze participants’ forecasting rules. Under low time pressure—consistent with previous LtF experiments—participants tend to rely on trend-extrapolating rules, expecting price increases to continue. These expectations drive higher demand from computerized traders, raising market-clearing prices. As expectations of higher prices are validated, this strategy is reinforced, resulting in large bubbles. In contrast, high time pressure prompts participants to adopt simpler prediction rules based only on the last observed price, which inhibits bubble formation.

Second, regarding extended running time, our data show that in markets with sufficient decision time (i.e., low time pressure), the incidence and size of bubbles and crashes gradually decrease. This finding suggests that the patterns observed in earlier 50-period LtF experiments are transitory. The mechanism is explained through the analysis of forecasting rules, as participants eventually learn to adopt strategies that lead to better performance. However, convergence remains fragile, and bubbles occasionally re-emerge—regardless of the level of time pressure—even after many periods.

We now place these findings within the context of the existing literature.

1.1. Literature review

By studying the expectation channels driving price bubbles and crashes, we contribute to the financial literature documenting repeated boom-and-bust cycles in real markets (e.g., Phillips et al., 2011; Phillips et al., 2015 for stock markets, Cheah & Fry, 2015; Corbet et al., 2018 for cryptocurrency markets, and Case & Shiller, 2003; Case et al., 2012 for housing markets). These dynamics have been replicated in laboratory experiments with paid participants under various conditions (see Hommes, 2011 and Palan, 2013 for reviews). However, in such experiments, participants’ decisions are typically made without time pressure, and the limited number of decision periods makes it difficult to assess whether experimental bubbles are persistent or merely transient phenomena that disappear over time.

Laboratory experiments on the effects of time pressure on decision-making have a long tradition in psychology but are relatively new in economics (see Spiliopoulos & Ortmann, 2018 for a review). The study by Moritz et al. (2014) examines an individual decision-making experiment in which participants repeatedly predict the next realization of an exogenous time series. The authors find that forecasting performance declines under time pressure. Similarly, Kocher and Sutter (2006) provide evidence from an experimental beauty-contest game, showing that time pressure can impair decision-making quality by delaying convergence to the Nash equilibrium. Ferri et al. (2021) examine time pressure in the classic Smith et al. (1988) asset market experiment, finding that it increases price volatility and leads to positive deviations from fundamental values, especially early in the experiment.

In contrast to these studies, we find that time pressure enhances forecasting performance. We attribute this result to the way forecasting strategies emerge and evolve within expectation feedback systems, a feature not addressed in prior studies. Under higher time pressure, participants tend to adopt simpler forecasting strategies, such as relying on the last observed price instead of extrapolating price trends. These simplified strategies stabilize market dynamics, improving price predictability.

Our explanation aligns with adaptive decision-making (ADM) theory (Payne et al., 1993). For example, Payne et al. (1988) and Rieskamp and Hoffrage (2008) show that in individual decision-making tasks, participants adapt to high time pressure by employing less complex decision rules. Building on this, Spiliopoulos et al. (2018) demonstrate that similar adaptations occur in strategic interaction problems. In an environment with dynamic feedback, we find that less complex forecasting rules reduce market volatility, creating a stabilizing effect. This ADM interpretation is further supported by within-subject variations in our study, where we identify systematic differences in the rules participants use under different conditions. Furthermore, Gigerenzer and Goldstein (1996) and Goldstein and Gigerenzer (2009) show that simple decision or prediction rules can outperform more complex ones, a pattern also evident in our experiment.

By running an LtF asset pricing experiment, we contribute to the body of research that consistently demonstrates the endogenous emergence of bubbles and crashes, driven by strong coordination among participants on a common trend-extrapolating forecasting strategy (e.g., Hommes et al., 2008). This pattern has been observed under various conditions, including the presence of stabilizing ‘fundamentalist’ robot traders (Hommes et al., 2005), in large markets with up to 100 participants (Hommes et al., 2021), with varying market fundamentals (Alfarano et al., 2024), when the forecasting is framed in terms of returns instead of prices (Hanaki et al., 2023), and among experienced participants (Kopányi-Peuker & Weber, 2021).

Our study is the first LtF experiment to extend to approximately 150 decision periods, significantly exceeding the duration of previous studies. Earlier LtF experiments typically span a maximum of 50 consecutive periods, except for those divided into separate blocks. For instance, Bao et al. (2012) conducted 65 periods incorporating structural changes in the asset’s fundamental value at periods 21 and 44. Similarly, Kopányi-Peuker and Weber (2021) examined the effect of experience through three consecutive repetitions lasting 28, 32, and 26 periods, respectively. Notably, in both studies, bubbles persisted even in the later stages of the experiments.

Asset trading experiments, inspired by Smith et al. (1988), typically span around 15 periods, during which bubbles and crashes are well-documented. These dynamics often persist even in longer experiments. For instance, Lahav (2011) and Hoshihata et al. (2017) conducted 200-period trading experiments and observed the formation of multiple bubbles throughout the session. Similarly, Smith et al. (2014) explored trading over 50 periods and found that most markets exhibited a single bubble and subsequent crash pattern. Finally, Kopányi-Peuker and Weber (2024) compare treatments with 15 and 30 periods, as well as treatments with an indefinite horizon (but with expected duration close to these numbers), and find very similar dynamics across all treatments.

Our low time pressure condition replicates previous LtF studies, and we find that bubbles tend to disappear over time in our longer experiment. The number of decision periods significantly affects outcomes in other decision-making environments as well. For example, Berninghaus and Ehrhart (1998) find that increasing repetitions in a minimal effort game helps participants coordinate on the Pareto-dominant equilibrium. Similarly, Duffy and Hopkins (2005) observe that in a 100-period market entry experiment, participants eventually coordinate on an asymmetric pure strategy Nash equilibrium, though convergence requires nearly all 100 periods. Friedman et al. (2015) study Cournot games over 1200 periods, finding that quantities align with the competitive outcome during the first 50 periods but eventually fall below the Cournot-Nash equilibrium and, in some cases, approach collusive levels. Finally, Bartling et al. (2023) challenge earlier findings on markets and moral values by Falk and Szech (2013), showing that these effects do not hold under repetitions. Similarly, our study cautions that LtF experiments may report only transitory outcomes, even when conducted for as many as 50 periods.

The remainder of this paper is organized as follows. Section 2 details the experimental design and outlines the tested hypotheses. Section 3 presents the results, analyzed at both aggregate and individual levels. Section 4 offers concluding remarks. The underlying asset pricing model and additional statistics are provided in Appendices A and B. Additional data and supplementary information, including experimental instructions, demographic details, and participant response analyses, are available in the Supplementary Materials online.

2. Experimental design

The experiment, programmed in oTree (Chen et al., 2016), was conducted in the CREED laboratory of the University of Amsterdam (UvA). A total of 186 participants attended 12 sessions across three treatments, with no individual participating in more than one session. The sample was fairly gender-balanced (54% female), with an average age of 22. A majority (65%) of participants were students from the Economics and Business faculty at UvA (see Online Appendix D for more details). Each session lasted about two hours, with average earnings (including a show-up fee) of € 27.64, ranging from € 13 to € 40.

The experimental setup builds on the classical asset-pricing model with heterogeneous expectations (see Brock & Hommes, 1998 and Campbell et al., 1997) and is similar to the LtF experiments in Hommes et al. (2005, 2008). We follow their design to focus solely on the effects of time pressure and long-run dynamics. The financial market consists of a risk-free asset and a risky asset. Participants act as “advisors to pension funds” and do not trade directly; instead, they provide price forecasts, based on which their funds determine the demand for the risky asset. The price of the risky asset is determined by a computer algorithm that incorporates forecasts from the human participants and robot traders. This price is then reported back to the participants, who provide new forecasts for the next period. The process repeats over multiple periods. Participants are paid based on the accuracy of their forecasts.

Before presenting the treatments in Section 2.3 and the hypotheses in Section 2.4, we describe the price-generating mechanism and experimental procedures in the next two sections. Complete experimental instructions are provided in Online Appendix C.

2.1. The price-generating mechanism

As previously mentioned, there are two assets in the market. The risk-free asset pays a fixed interest rate $r > 0$ per period. The infinitely lived risky asset pays a dividend $y_t$ in period $t$. The dividend is IID with an expected value $\bar{y}$. The price of the risky asset, $p_{t}$, is endogenously determined by the asset’s aggregate demand and supply. All traders are assumed to be myopic mean-variance maximizers with full information about the dividend process. The model can be solved for the market-clearing price $p_t$, see Appendix A. There we show that traders’ demand for the risky asset is an increasing linear function of the expected return $p_{i,t+1}^{e}+\bar{y}-(1+r)p_t$, where $p_{i,t+1}^{e}$ represents trader $i$’s forecast for the price in period $t+1$, made at the beginning of period $t$, prior to the realization of the market-clearing price $p_t$. In the experiment, participants’ task is to provide these price forecasts to their trader (pension fund).

In each experimental market, there are six large pension funds, each advised by a participant, and a fraction $n_t\in[0,1)$ of robot traders. These robots generate forecasts representing the fundamental price of the risky asset, defined as the discounted expected value of all future dividends, and are therefore similar to “value investors” in financial markets. For our IID dividend process, the fundamental price is

(1)\begin{equation} p^{f}=\frac{\bar{y}}{r}\,. \end{equation}

The market-clearing price in period $t$, depending on market composition, is

(2)\begin{equation} p_t = \frac{1}{1+r} \Big( (1-n_t) \bar{p}_{t+1}^{e} + n_t p^f + \bar{y} \Big)\,, \end{equation}

where $\bar{p}_{t+1}^{e}$ is the average expected price for period $t+1$ among human participants who submitted forecasts during period $t$. Substituting Eq. (1) yields

(3)\begin{equation} p_t-p^f = \frac{1-n_t}{1+r} \left( \bar{p}_{t+1}^{e}-p^f \right)\,. \end{equation}

Hence, the price deviation from the fundamental value in period $t$ depends positively on the expected deviation in period $t+1$. This creates positive expectations feedback between predicted and realized prices: if traders expect the price to rise, they buy the asset to benefit from anticipated capital gains, driving up the market-clearing price. As the right-hand side of Eq. (3) decreases with $n_t$, robots, acting as fundamentalists, mitigate this feedback. Following Hommes et al. (2005), we assume these fundamentalists become more active as the price deviates further from its fundamental value, setting

(4)\begin{equation} n_t = 1-\exp \left( -\frac{1}{20} \left| \frac{p_{t-1}-p^f}{p^f} \right| \right) \,. \end{equation}

As a result, the robots’ influence grows with mispricing, acting as an endogenous mechanism to prevent bubbles from growing indefinitely. Specification (4) ensures that the mispricing must be substantial for the robots to become relevant. Even when the price is twice the fundamental value, the robots’ weight remains below 5%, which is less than one-third of the weight of each human participant’s forecast.
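To make the feedback mechanism concrete, Eqs. (1), (2), and (4) can be combined in a few lines of Python. The sketch below uses the phase-1 parameters reported in Section 2.3 ($r=0.05$, $p^{f}=126.4$, with $\bar{y}$ implied by Eq. (1)); it is an illustrative reconstruction, not the experimental software.

```python
import math

# Phase-1 parameters (Section 2.3); Eq. (1) then pins down the mean dividend.
R = 0.05             # risk-free interest rate r
P_F = 126.4          # fundamental price p^f
Y_BAR = R * P_F      # mean dividend, from p^f = y_bar / r, Eq. (1)

def robot_share(p_prev):
    """Fraction n_t of fundamentalist robot traders, Eq. (4)."""
    return 1.0 - math.exp(-abs(p_prev - P_F) / (20.0 * P_F))

def market_price(avg_forecast, n_t):
    """Market-clearing price p_t, Eq. (2)."""
    return ((1.0 - n_t) * avg_forecast + n_t * P_F + Y_BAR) / (1.0 + R)

# If all participants forecast the fundamental, the realized price equals
# the fundamental, for any robot share (Eq. (3) with zero deviation).
assert abs(market_price(P_F, 0.5) - P_F) < 1e-9

# Positive expectations feedback: an average forecast 10 above p^f raises
# the realized price above p^f by 10 * (1 - n_t) / (1 + r), as in Eq. (3).
assert abs((market_price(P_F + 10.0, 0.0) - P_F) - 10.0 / (1.0 + R)) < 1e-9

# Even at twice the fundamental value, the robots' weight stays below 5%.
assert robot_share(2.0 * P_F) < 0.05
```

The fixed-point check illustrates why $p^{f}$ is self-confirming, while the damping factor $(1-n_t)/(1+r) < 1$ shows how the robot traders weaken, but do not eliminate, the positive feedback from forecasts to prices.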

2.2. Experimental procedure and software

Upon arriving at the lab, participants receive both paper and on-screen instructions detailing the market environment, task, experiment structure, and payoff determination (see Online Appendix C). Following standard LtF experimental protocols (Hommes, 2011), participants are not explicitly provided with equations (2) and (4). Instead, they are informed that several funds allocate their wealth between a risky and a risk-free asset with the price of the risky asset determined by the equilibrium between the aggregate demand of all funds and a fixed supply. Participants act as financial advisors tasked with forecasting the future price of the risky asset as accurately as possible. They are also informed that higher price forecasts increase their fund’s investment in the risky asset, while other funds in the market are either advised by fellow participants or follow a fixed investment strategy.

The instructions explain that the experiment consists of two phases of price forecasting, followed by a brief questionnaire. In each phase, participants predict prices for a large number of consecutive periods, ranging from 120 to 180. This range is specified to discourage strategic behavior that might occur toward the end of each phase. Participants are informed that they have limited time to make each forecast and that there may be ‘waiting’ time between decisions. They are also told that waiting time, decision time, and market parameters remain constant within each phase but may vary between the two phases.

Earnings are based on participants’ forecasting accuracy and are determined as follows. At the end of the experiment, the computer program randomly selects 10 periods from each phase for payment. The total points accumulated during these 20 periods determine each participant’s payoff. For a selected period $t$, the points earned by participant $i$, $e_{i,t}$, depend on their absolute prediction error, according to

(5)\begin{equation} e_{i,t}=\frac{200}{1+ \left| p_{t}-p_{i,t}^{e} \right |}\,, \end{equation}

where $p_{i,t}^e$ is the participant’s forecast for the realized price $p_{t}$. If no forecast is made in period $t$, $e_{i,t}=0$. The hyperbolic scoring rule (5) strongly incentivizes accuracy by penalizing even small errors. A graphical representation of this scoring function, included in the instructions, can be found in Online Appendix C.

This payoff structure ensures participants have a strong incentive to be accurate in each period, regardless of their performance in other periods. Total points earned are converted at a rate of one euro per 100 points at the end of the experiment. Participants can earn between 0 and 2 euros per period, with a maximum of € $40$ for the entire experiment. In addition, each participant receives a € $10$ show-up fee.
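As a quick check of the incentive structure, the hyperbolic scoring rule in Eq. (5) can be sketched as follows (an illustration of the rule, not the experimental code):

```python
def points(price, forecast):
    """Points e_{i,t} earned in one period under Eq. (5)."""
    return 200.0 / (1.0 + abs(price - forecast))

# A perfect forecast earns the maximum of 200 points (2 euros at the
# conversion rate of 100 points per euro); an absolute error of 1 already
# halves the period payoff, and an error of 9 leaves only 20 points.
assert points(60.0, 60.0) == 200.0
assert points(60.0, 61.0) == 100.0
assert points(60.0, 51.0) == 20.0
```

The steep decline near zero error is what makes the rule strongly incentive-compatible for accuracy: most of the payoff is lost within the first few units of forecast error.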

After reading the instructions and completing a short comprehension test, participants engage in a practice round to familiarize themselves with the software. Figure 1 shows an example of the interface. Participants can submit forecasts by either typing a number into the submission box at the top of the screen or clicking directly on the main graph. When using the graph, the selected value appears both as the forecast in the graph and in the submission box. Participants may revise their forecasts as often as they wish within the decision time. Once the decision time expires, the forecast in the submission box is automatically submitted, except in treatment $\mathbf{HLS}$ (see Section 2.3), where participants must finalize their forecast by pressing the blue ‘Submit’ button next to the submission box or the ‘Enter’ key on the keyboard.

Figure 1. An example of the computer screen. The graph shows past forecasts (blue) and past prices (red). The table provides the same information, and also displays the “Potential earnings”, i.e., the points awarded for each period if that period is selected for payment, as computed using Eq. (5). A forecast can be entered either by typing a number into the box at the top center of the screen or by clicking on the graph in the lower part. This example is from treatment $\mathbf{HLS}$, where participants must either press the ‘Enter’ key on the keyboard or click the blue ‘Submit’ button at the top of the screen to submit their forecast

Our sessions consist of 6, 12, or 18 participants. Once all participants have completed the practice round, the computer randomly forms groups of six, and the first phase begins. The starting screen announces the interest rate $r$, the mean dividend $\bar{y}$, and the waiting and decision times. This information remains visible at the top of the screen throughout the phase, along with a countdown timer. The realized price for period $t=1$ is determined by participants’ forecasts for period $t=2$. Thus, participants begin by submitting forecasts for periods 1 and 2. No robots are active in period 1. To limit initial forecast dispersion, the instructions state that the first two prices in the first phase are “likely to lie between 0 and 200,” and the first two prices in the second phase are “likely to lie between 0 and 100.” These ranges include the fundamental price. Subsequently, in each period $t > 1$, participants submit a price forecast for period $t+1$.

The computer interface includes a graph displaying the participant’s forecasts (blue) and realized prices (red) for the most recent $20$ periods (Figure 1). A smaller graph in the upper-left corner shows the entire time series from the start of the phase, with both graphs automatically adjusting their vertical axes to accommodate higher prices or forecasts. A table on the right-hand side shows the participant’s forecasts, realized prices, and the number of points per period, computed using Eq. (5). These points are labeled “Potential Earnings” because they are only earned if the corresponding period is selected for final payment.

At the end of the phase, participants are notified of their transition to a new market. They are randomly re-matched into groups of six, and the second phase begins with new market parameters, including new waiting and decision times.

After finishing the two phases, participants complete the standard three-question Cognitive Reflection Test (Frederick, 2005). They then fill out a questionnaire on demographic information (e.g., gender, age, study program), their previous experience with lab experiments, and the forecasting strategies they used in each of the phases of the experiment (see Online Appendix D). Finally, the computer informs each participant which 20 rounds were chosen for payment and displays their final payoff. All payments are made privately.

2.3. Treatments

We solicit forecasts from participants under two distinct conditions: the low time pressure (LTP) condition and the high time pressure (HTP) condition. These conditions differ in the amount of time participants are given to submit their forecasts.

Each decision period consists of two stages: a waiting time and a decision time. During the waiting time, participants can view the computer interface, which includes a graph and table displaying past prices and forecasts. They can also navigate the screen with their mouse and select potential forecasts. However, the submission box does not appear until the waiting time ends and the decision time begins, at which point it displays the last value clicked and allows participants to submit their forecasts.

In the LTP condition, the waiting time is set to 10 seconds, followed by a decision time of 15 seconds, totaling 25 seconds per period. This duration aligns with the average time taken for forecasts in previous LtF experiments. In contrast, the HTP condition features no waiting time and a decision time of only $6$ seconds, placing participants under substantial time pressure, as confirmed by responses to the post-experiment questionnaire.

Each participant experienced both conditions, implemented in two separate phases. However, this design was primarily intended to ensure comparable session lengths and participant payoffs, as encouraged by the CREED-lab guidelines, rather than to facilitate within-subject comparisons. We do not primarily focus on within-subject comparisons due to the potential for strong order effects, which are beyond our control. Individual experiences in LtF experiments heavily depend on group composition, which, as demonstrated by Hennequin (2018), can significantly influence subsequent behavior. Consequently, we anticipated that the second-phase data would be of lower quality, as heterogeneous prior experiences would likely have a pronounced impact. Additionally, variations in the show-up rates made it impossible to guarantee comparable rematching for sessions conducted on different days.

For these reasons, our design incorporates between-subjects treatments. The treatments differ in the order in which participants experience the two conditions. To isolate the time pressure effect as the sole source of any between-treatment differences, we used identical market parameters across treatments in the first phase. To mitigate order effects in the second phase, we modified the parameters to generate a substantially different fundamental price ( $p^{f}=126.4$ in the first phase and $p^{f}=71.2$ in the second phase), which remained constant across treatments. To mitigate potential end-of-phase effects caused by round numbers of periods or reusing the same number of periods, we predetermined the total number of periods for each condition: 146 for the LTP condition and 159 for the HTP condition. However, participants were only informed about the range of possible periods. The selected number of periods was also guided by the need to ensure that sessions could be completed within approximately two hours.

We refer to the treatment that begins with the LTP condition as treatment $\mathbf{LH}$ and to the treatment that begins with the HTP condition as treatment $\mathbf{HL}$. Table 1 summarizes the treatments, specifying their parameters, the number of markets, and their notation. The letter indicates the corresponding time pressure condition. Markets in the second phase are denoted with a superscript 2. While the number of markets for each treatment is the same in the first and second phases, their composition differs due to rematching. We conducted 12 markets in each phase of treatment $\mathbf{LH}$ and 10 markets in each phase of treatment $\mathbf{HL}$.

Table 1. Overview of the treatments. The last two columns display the experimental market notations and the parameter values for each of the experiment’s two phases. In both phases, the interest rate is $r\!=\!0.05$. Each market consists of six human participants

During the initial sessions of our experiment, we observed a series of downward spikes in individual forecasts under the HTP condition (e.g., panels 4, 8, and 10 in Figures F4 and F5, Online Appendix F). These anomalies appear to stem from participants being unable to complete their forecasts in the submission box within the allotted time. Such outliers, absent under the LTP condition, may have lasting effects on price dynamics. To rule out these outliers as the primary driver of differences between time pressure conditions, we introduced an additional treatment, $\mathbf{HLS}$, as a robustness check. This treatment was identical to $\mathbf{HL}$ in all respects, except that participants were required to explicitly confirm their forecasts by either pressing the “Enter” key or clicking the “Submit” button added next to the submission box. We conducted nine markets in each phase of treatment $\mathbf{HLS}$.

Finally, it is important to note that in all treatments, the average forecast used in Eq. (2) was calculated based only on submitted forecasts. Consequently, for some periods the average forecast could be based on fewer than six forecasts if not all participants managed to submit their forecasts for those periods within the time limit.
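As a minimal illustration of this averaging rule (the surrounding price equation, Eq. (2), is not reproduced here), missing submissions can be handled as follows; the function name and the `None` encoding of a non-submission are our own:

```python
def average_forecast(forecasts):
    """Average over the submitted forecasts only, as entered into Eq. (2).

    `forecasts` contains one entry per participant in the market;
    `None` marks a forecast not submitted within the time limit.
    """
    submitted = [f for f in forecasts if f is not None]
    if not submitted:
        raise ValueError("no forecasts submitted this period")
    return sum(submitted) / len(submitted)
```

For example, with six participants of whom two miss the deadline, `average_forecast([120.0, 130.0, None, 125.0, None, 127.0])` averages only the four submitted values.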

2.4. Hypotheses

The first research question examines whether the emergence of large bubbles and crashes in asset prices, frequently observed in LtF experiments, is a transient phenomenon. That is, will market prices eventually stabilize and converge to their fundamental value? Existing LtF experiments provide limited evidence on this issue, as they typically span only about 50 periods, leaving it unclear whether price volatility decreases or increases over time.

It is plausible to hypothesize that, given sufficient opportunities to learn within the stationary environment of the experiment, participants may improve their predictions, ultimately driving market prices toward their fundamental value. Moreover, participants’ earnings, which depend on absolute forecast errors, tend to be substantially lower in volatile markets than in those where prices closely track the fundamental value. This creates a strong incentive for participants to adapt their prediction strategies when such strategies contribute to pronounced volatility.

This leads to the following hypothesis:

Hypothesis 1. Asset price volatility and mispricing decrease in the long run.

The second research question focuses on the effect of time pressure on price volatility and mispricing. The direction of this effect is not evident a priori. On the one hand, time pressure has been shown to reduce decision-making quality (Kocher and Sutter, Reference Kocher and Sutter2006) and increase market volatility in experimental asset markets (Ferri et al., Reference Ferri, Ploner and Rizzolli2021). Under increased time pressure, participants have less opportunity for thorough deliberation, which may lead to forecasting errors and, consequently, heightened price volatility and mispricing.

This consideration leads to the following hypothesis:

Hypothesis 2. Increased time pressure increases price volatility and mispricing.

On the other hand, bubbles in earlier LtF experiments appear to be driven by participants coordinating on a trend-extrapolating prediction strategy, where forecasts depend on the last two observed prices. When all participants in a market extrapolate trends from past prices, the positive feedback inherent in the underlying price-generating mechanism amplifies the trend, giving rise to large bubbles in asset prices. Under increased time pressure, however, coordination on a common prediction strategy may become more difficult, potentially inhibiting the emergence of large bubbles and crashes. For example, Anufriev et al. (Reference Anufriev, Chernulich and Tuinstra2022) found that increasing task complexity—such as by extending the forecasting horizon in LtF experiments—hinders the coordination of expectations and results in greater price stability. Thus, rather than increasing volatility, higher time pressure might actually promote more stable price dynamics.

This consideration leads to an alternative version of the second hypothesis:

Hypothesis 3. Increased time pressure reduces price volatility and mispricing.

We will now discuss the experimental results in view of Hypotheses 1, 2, and 3.

3. Experimental results

In this section, we present the experimental data and use it to test the hypotheses formulated in Section 2.4. The experimental design enables us to examine the effect of time pressure—comparing the low time pressure (LTP) condition with the high time pressure (HTP) condition—through both between-subject and within-subject analyses. For the between-subject analysis, we compare treatment $\mathbf{LH}$ with treatments $\mathbf{HL}$ and $\mathbf{HLS}$, phase by phase. For the within-subject analysis, we compare the first and second phases within each treatment. However, as discussed in Section 2.2, the second-phase data are likely influenced by participants’ first-phase experiences. Consequently, we prioritize the first-phase data and between-subject comparisons, using second-phase data cautiously.

In Section 3.1, we analyze the effects of the number of decision periods and time pressure on market prices, focusing on the between-subject variation in the first-phase data. Robustness checks, using the experimental data from both phases, are presented in Section 3.2. In Section 3.3, we investigate the mechanisms leading to our results by analyzing “market expectations”, defined as the average price forecast across all participants within the same market. Finally, in Section 3.4, we classify participants’ forecasting behavior according to several heuristics and examine their evolution both between and within treatments.

Since the number of periods under the HTP condition exceeds that under the LTP condition, our analysis focuses on the common periods 1–145. To compare short-run and long-run dynamics, we analyze prices and forecasts during periods 11–50 and 106–145. The selection of periods 11–50 follows the convention for the analysis of earlier LtF experiments to exclude the first ten (of the typically 50) periods to account for participants’ initial learning. To ensure an equal number of periods for analysis, we include the last 40 common periods, corresponding to periods 106–145.

3.1. Market prices

In this section, we analyze the market prices using the first-phase data. The top panel of Figure 2 corresponds to the LTP condition and shows the prices (thin gray lines) for each $\mathbf{L}$ market (i.e., from the first phase of treatment $\mathbf{LH}$). The remaining two panels correspond to the HTP condition, illustrating the prices for $\mathbf{H}$ and $\mathbf{HS}$ markets (i.e., from the first phase of treatments $\mathbf{HL}$ and $\mathbf{HLS}$). In each panel, the black line represents the median price across all markets in each period.Footnote 10 To facilitate comparisons, the vertical axis in all panels ranges from 0 to 500, which is approximately four times the fundamental price. Note that in some periods, particularly in treatment $\mathbf{LH}$, market prices exceed 500.Footnote 11

Figure 2. Median prices (thick black line) and prices in individual markets (gray lines) during the first phase of the three experimental treatments. The fundamental price, $p^f=126.4$, is indicated by the dashed horizontal line

We can make several preliminary observations by comparing the median prices in Figure 2, further supported by the prices in each market. First, median prices under the LTP condition show markedly higher overvaluation than those under the HTP condition. In contrast, there appears to be no clear difference between the median dynamics of $\mathbf{H}$ and $\mathbf{HS}$ markets, both conducted under the HTP condition. This suggests that differences between time pressure conditions cannot be attributed to participants submitting incomplete predictions in treatment $\mathbf{HL}$.

Second, price volatility is notably high under the LTP condition, with prices oscillating wildly in all markets (see also footnote 11). Conversely, price volatility under the HTP condition is considerably lower. Most markets in the lower two panels exhibit relatively minor oscillations and tend to converge quickly to the fundamental value (e.g., markets $\mathbf{H}$3, $\mathbf{H}$4, $\mathbf{HS}$1, $\mathbf{HS}$3 and $\mathbf{HS}$4).

Third, under the LTP condition, oscillations begin almost immediately, while under the HTP condition, they seem to be more prevalent in the second half of the experiment, following an initial phase of relatively minor volatility (e.g., markets $\mathbf{H}$5, $\mathbf{H}$8, $\mathbf{HS}$8 and $\mathbf{HS}$9). Moreover, fluctuations under the LTP condition seem to diminish over time, though this tendency is fragile; in some markets, oscillations re-emerge after prices seem to have converged (e.g., markets $\mathbf{L}$1, $\mathbf{L}$2, $\mathbf{L}$8 and $\mathbf{L}$9). By contrast, under the HTP condition, there is no evident structural decline in price volatility over time.

To corroborate these observations, we consider two quantitative measures: the interquartile range (IQR) to evaluate price volatility and the median of the relative absolute deviations (RAD) from the fundamental value to assess mispricing.Footnote 12 Figure 3 shows the IQR (left panel, logarithmic scale) and the median RAD (right panel) for all 31 experimental markets of the first phase. Each panel is divided into three sections corresponding to the $\mathbf{L}$, $\mathbf{H}$, and $\mathbf{HS}$ markets. Within each section, the statistics for each market are computed over three different time periods: 11-50 (blue dots on the left), 1-145 (black dots in the middle), and 106-145 (red dots on the right), with disks indicating the median value. Numerical values for both measures for all markets are provided in Table B1 in Appendix B.

Figure 3. Measures of price volatility (IQR, left panel, logarithmic scale) and mispricing (Median of RAD, right panel) by treatment for each market of the first phase, computed over three different time periods: $11$ $50$ (blue dots), $1$ $145$ (black dots), and $106$ $145$ (red dots). The disks show the median over the markets
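The two measures can be computed per market and time window along these lines (a sketch; we take the per-period RAD to be $|p_t - p^f|/p^f$, which is an assumption about the exact definition given in footnote 12):

```python
import numpy as np

P_F = 126.4  # fundamental price in the first phase

def iqr(prices):
    """Interquartile range of the price series: the volatility measure."""
    q1, q3 = np.percentile(prices, [25, 75])
    return q3 - q1

def median_rad(prices, p_f=P_F):
    """Median relative absolute deviation from the fundamental value:
    the mispricing measure."""
    prices = np.asarray(prices, dtype=float)
    return float(np.median(np.abs(prices - p_f) / p_f))
```

Applying both functions to the price observations of an early and a late window of a market yields the pairs of statistics plotted in Figure 3.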

Figure 3 illustrates that, under the LTP condition, both measures tend to decrease over time. Notably, as shown in Table B1, the IQR decreases in all $\mathbf{L}$ markets except one ( $\mathbf{L}$12) and the median RAD decreases in all $\mathbf{L}$ markets except two ( $\mathbf{L}$8 and $\mathbf{L}$12).Footnote 13

This tendency is statistically confirmed by a Wilcoxon signed-rank test; see Table 2 for the $p$-values of tests presented in this section. Comparing $\mathbf{L}$ markets between periods 11-50 and 106-145, the $p$-values for the two-sided tests are 0.0010 for the IQR and 0.0210 for the median RAD. These results indicate that, under the LTP condition, both price volatility and mispricing are significantly lower in later periods compared to initial periods, at the 1% and 5% significance levels, respectively. In contrast, in markets conducted under the HTP condition ( $\mathbf{H}$ or $\mathbf{HS}$), neither price volatility nor mispricing shows a statistically significant decrease between periods 11-50 and 106-145.
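This style of test is straightforward to reproduce with SciPy; the per-market IQR values below are hypothetical placeholders (the actual numbers are reported in Table B1):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-market IQRs for the twelve L markets (placeholders;
# the actual values are in Table B1), one entry per market per window.
iqr_11_50   = np.array([80.5, 95.2, 60.1, 110.3, 72.8, 88.0,
                        101.6, 55.4, 90.9, 77.3, 65.0, 48.2])
iqr_106_145 = np.array([20.1, 30.5, 15.8, 40.2, 25.7, 33.1,
                        28.9, 60.0, 22.4, 18.6, 27.3, 52.0])

# Two-sided paired test: did volatility change between the two windows?
stat, p = wilcoxon(iqr_11_50, iqr_106_145, alternative="two-sided")
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")
```

The matched-pairs structure (the same market observed in both windows) is what makes the signed-rank test appropriate here, as opposed to the unpaired Mann-Whitney-Wilcoxon test used for the between-condition comparisons below.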

Table 2. $p$-values of the corresponding tests (see the last column) for various comparisons on the first-phase data

Note: The first two columns specify the markets and time periods for the data. Asterisks *, ** and *** indicate $p$-values that are below $10\%$, $5\%$ and $1\%$, respectively.

Consequently, we reject Hypothesis 1 for the first-phase data from treatments $\mathbf{HL}$ and $\mathbf{HLS}$, but not for treatment $\mathbf{LH}$.

Result 1

Price volatility and mispricing decrease in the long run under the LTP condition but not under the HTP condition.

Recall that in earlier LtF experiments, prices do not converge to their fundamental value within the standard duration of 50 periods. This aligns with the dynamics observed in our $\mathbf{L}$ markets: until period 50, all twelve markets exhibit large price fluctuations. However, as Result 1 indicates, price fluctuations tend to decrease after these initial 50 periods. Long-run dynamics differ significantly from those at the outset, though boom-and-bust cycles can occasionally re-emerge. We conclude that while bubbles and crashes initially seem persistent, they are ultimately transient in this stationary environment and diminish over time.

Turning to the comparison of time pressure conditions, we find striking differences in price patterns during the first 50 periods. In particular, the HTP condition (covering $\mathbf{H}$ and $\mathbf{HS}$ markets) exhibits smaller price fluctuations than the LTP condition and earlier LtF experiments. For example, among the 19 HTP markets, only two ( $\mathbf{H}$6 and $\mathbf{HS}$5) show larger fluctuations (measured by the IQR) in periods 11–50 than the most stable LTP market ( $\mathbf{L}$8); see Table B1. Moreover, when price fluctuations arise under the HTP condition, they are often triggered by extreme predictions from individual participants.Footnote 14 In contrast, such outliers are rare during the first 50 periods of the LTP condition, indicating that price fluctuations there cannot be attributed solely to idiosyncratic behavior by individual participants.

Hence, increased time pressure seems to inhibit price volatility and the emergence of bubbles and crashes during the initial 50 periods. This is confirmed by comparing the IQR and the median RAD for periods 11-50 using a two-sided Mann-Whitney-Wilcoxon (MWW) test. Differences between the $\mathbf{L}$ and $\mathbf{H}$ markets, and between the $\mathbf{L}$ and $\mathbf{HS}$ markets, are statistically significant at the 1% level (see Table 2).

Figures 3 and B1 to B3 suggest that the effect of time pressure on price behavior diminishes toward the end of the phase. For instance, although the median RAD is on average still higher under the LTP condition than under the HTP condition, the mean IQR for periods 106-145 is lower under the LTP condition, with six of the seven markets exhibiting the highest IQR belonging to the HTP condition (Table B1). While the differences in the IQR between the LTP and HTP conditions are not statistically significant for periods 106-145, the differences in the median RAD are significant at the 10% level.

This leads to our second result, confirming Hypothesis 3.

Result 2

Price volatility and mispricing are lower under HTP compared to LTP for periods 11-50. For periods 106-145, there is no statistically significant difference in price volatility.

Finally, test results reported in the lower part of Table 2 indicate that, regardless of the periods examined, there is no significant difference in either IQR or median RAD between the first-phase markets conducted under the two HTP treatments ( $\mathbf{HL}$ and $\mathbf{HLS}$). Thus, incomplete predictions have a minimal impact on price dynamics.

3.2. Robustness of results 1 and 2

In this section, we first assess the robustness of Results 1 and 2 by using the entire dataset, including second-phase data. We subsequently discuss whether non-responses by participants in the HTP condition could have influenced our findings.

3.2.1. Tests using pooled data

Table 3 presents the $p$-values for tests applied to the entire experimental dataset. Specifically, we pool all 31 markets under the LTP condition (comprising 12 $\mathbf{L}$ markets from the first phase, and 10 $\mathbf{L}^2$ and 9 $\mathbf{LS}^2$ markets from the second phase) and all 31 markets under the HTP condition (comprising 10 $\mathbf{H}$ and 9 $\mathbf{HS}$ markets from the first phase, and 12 $\mathbf{H}^2$ markets from the second phase).

Table 3. $p$-values of the corresponding tests (see the last column) for comparisons based on all data

Note: The first two columns specify the markets and time periods for the data. Asterisks *, ** and *** indicate $p$-values that are below $10\%$, $5\%$ and $1\%$, respectively.

The upper part of the table supports Result 1: the differences in both volatility (IQR) and mispricing (median RAD) across time are highly statistically significant for the LTP condition, reflecting the long-run stabilization observed in these markets. For HTP markets, as in the first-phase data, there is no significant difference in the IQR, though the median RAD now differs at the 5% significance level.Footnote 15

The lower part of Table 3 supports Result 2: both volatility and mispricing differ significantly across time pressure conditions for periods 11-50 at the $1\%$ level. Such strong significant differences are not observed for the ending interval or for all periods.

3.2.2. Effect of non-submissions

In all three treatments, participants occasionally fail to submit forecasts, resulting in prices being based on fewer than six human forecasts. Could these “non-submissions” explain the observed results?

Figure 4 illustrates the time evolution of the fraction of non-submitted forecasts during both phases, calculated across all markets within each treatment and smoothed using a 5-period moving average. Thin solid lines represent non-submissions in the $\mathbf{LH}$ and $\mathbf{HL}$ treatments, and thick solid lines represent those in the $\mathbf{HLS}$ treatment. Blue lines correspond to the HTP condition, while red lines represent these fractions under the LTP condition.

Figure 4. The fraction of participants who did not submit a forecast in a given time period, shown as a 5-period moving average across all markets over two phases. Red lines represent the LTP condition, and blue lines represent the HTP condition. The phase change, occurring after period 146 in the $\mathbf{LH}$ treatment and after period 159 in the $\mathbf{HL}$ and $\mathbf{HLS}$ treatments, is indicated by the vertical dashed lines
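The smoothing in Figure 4 can be reproduced along these lines (a sketch; whether the moving average is trailing or centered is our assumption):

```python
def nonsubmission_fraction(submitted_counts, n_participants=6):
    """Per-period fraction of participants who did not submit a forecast."""
    return [(n_participants - s) / n_participants for s in submitted_counts]

def moving_average(xs, window=5):
    """Trailing moving average; early periods use the shorter available history."""
    out = []
    for t in range(len(xs)):
        chunk = xs[max(0, t - window + 1): t + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

For instance, `moving_average(nonsubmission_fraction(counts))` turns a series of per-period submission counts into the smoothed fractions plotted in the figure.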

Overall, the fraction of non-submissions is higher in HTP markets, likely due to elevated time pressure, as reflected by between-phase jumps.Footnote 16 Non-submissions may affect price dynamics through a selection effect if participants with more non-submissions differ in forecasting accuracy from others (see Kocher et al., Reference Kocher, Schindler, Trautmann and Xu2019 for a discussion of selection effects induced by time pressure in decision-making).

To rule out this possibility, we conduct two tests. First, despite the higher number of non-submissions in $\mathbf{HS}$ markets, there are no significant differences in IQR or median RAD compared to $\mathbf{H}$ markets (see Table 2). Second, we assess correlations across all HTP markets between the number of non-submissions in periods 1–10 and the IQR and median RAD in periods 11–50. None of these correlations is statistically significant.Footnote 17 These findings indicate that the impact of higher time pressure on price dynamics cannot be attributed to differences in non-submission rates.

Figure 4 also suggests that, despite the many decision periods, fatigue was not a substantial factor in our experiment. If fatigue had been significant and the incentives insufficient to counteract it, we would expect non-submissions to increase over time, particularly toward the end of the experiment. However, this pattern is not observed.

3.3. Market expectations

In this section, we provide structural behavioral explanations for Results 1 and 2. Bubbles and crashes in LtF experiments are often linked to participants’ tendency to coordinate on trend-extrapolating forecasting heuristics (Anufriev & Hommes, Reference Anufriev and Hommes2012; Hommes et al., Reference Hommes, Sonnemans, Tuinstra and van de Velden2005). We considered the possibility that the greater stability under HTP might be explained by a failure to coordinate expectations, but found no evidence to support this hypothesis (see Appendix B). An alternative explanation is that the long-run stability under LTP and the lower incidence of bubbles under HTP result from the evolution of forecasting heuristics and the way this evolution is shaped by time pressure.

We analyze market expectations, $\bar{p}_{t+1}^e$, defined as the average forecast submitted by participants in a given market and period. Forecasting heuristics, modeled as:

(6)\begin{equation} \bar{p}_{t+1}^e = a + b_1 p_{t-1} + b_2 p_{t-2} + \nu_t = a + \theta_1 p_{t-1} + \theta_2 (p_{t-1}-p_{t-2}) + \nu_t\,, \end{equation}

where $a$, $b_1$, $b_2$ are fixed coefficients, $\nu_t$ is an error term, and $\theta_1 = b_1 + b_2$, $\theta_2 = - b_2$, have been shown to describe expectations in previous LtF experiments well (Hommes et al., Reference Hommes, Sonnemans, Tuinstra and van de Velden2005). Rule (6) extrapolates price trends when $b_2 \lt 0$, or equivalently, when $\theta_2 \gt 0$.
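For intuition, the rule in Eq. (6) can be estimated by OLS per market and window. The sketch below assumes that `pe[t]` stores the market's average forecast of `p[t]` (made when prices up to `p[t-2]` were known), which fixes the index alignment; this convention is ours:

```python
import numpy as np

def estimate_rule(p, pe):
    """OLS estimate of Eq. (6): pe_{t+1} = a + b1*p_{t-1} + b2*p_{t-2}.

    p[t] is the realized price in period t; pe[t] is the average forecast
    of p[t], made knowing prices up to p[t-2]. Hence we regress pe[t]
    on p[t-2] and p[t-3].
    """
    p, pe = np.asarray(p, float), np.asarray(pe, float)
    y = pe[3:]
    X = np.column_stack([np.ones(len(y)), p[1:-2], p[:-3]])
    (a, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
    # Also report the reparameterization theta1 = b1 + b2, theta2 = -b2.
    return a, b1, b2, b1 + b2, -b2
```

A positive estimate of $\theta_2 = -b_2$ then indicates trend extrapolation, as discussed above.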

For each of the 62 experimental markets, we estimate the forecasting heuristic in Eq. (6) separately for periods 11–50 and 106–145. Table 4 reports the average and median estimated coefficients for each treatment and phase.Footnote 18

Table 4. The average and median (calculated over all markets within a treatment and phase) of the estimated coefficients for the prediction rule in Eq. (6)

Table 4 indicates that forecasting rules differ structurally between the beginning and end of the phase, as well as between the low and high time pressure conditions. In $\mathbf{L}$ markets, the median estimate of $b_2$ for periods 11-50 is $-0.95$, reflecting a strong tendency to extrapolate trends: if the price rises by one unit between the two most recent periods, participants’ average forecast increases by nearly one additional unit. This trend extrapolation weakens toward the end of the phase, with the median $b_2$ estimate increasing to $-0.63$ for periods 106-145. By contrast, there is virtually no trend extrapolation in the HTP markets ( $\mathbf{H}$ and $\mathbf{HS}$) during the initial periods of the first phase, with a median $b_2$ estimate of $-0.05$ for periods 11-50. The same tendencies are observed in the second phase, though trend extrapolation is initially stronger in $\mathbf{H}^2$ markets.

To visualize these tendencies, Figure 5 displays scatter plots of the estimated coefficient pairs $(b_1, b_2)$ from Eq. (6) for the 31 first-phase markets for periods 11-50 (left) and 106-145 (right). The triangle and parabola divide the $(b_1,b_2)$ space into regions with qualitatively different market dynamics generated by the prediction rule in Eq. (6).Footnote 19 Prices converge if the pair $(b_1,b_2)$ lies inside the triangle and diverge if it lies outside. Dynamics are monotone if $(b_1,b_2)$ lies above the parabola and are oscillatory if it lies below. The closer $b_2$ is to the lower horizontal edge of the triangle, the more persistent the oscillations.

Figure 5. Scatter plots of estimated $(b_1,b_2)$ coefficients from market expectations, Eq. (6) for first-phase markets during periods 11–50 (left panel) and 106–145 (right panel). Prices converge for points inside the triangle. They oscillate below the parabola
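The regions in Figure 5 follow from standard second-order linear dynamics. Assuming, for illustration, that prices evolve approximately as $p_t = c + b_1 p_{t-1} + b_2 p_{t-2}$ (i.e., neglecting the mild $1/(1+r)$ dampening of the market feedback), a coefficient pair can be classified as follows:

```python
def classify_dynamics(b1, b2):
    """Qualitative dynamics of p_t = c + b1*p_{t-1} + b2*p_{t-2}.

    The roots of lambda^2 - b1*lambda - b2 = 0 lie inside the unit circle
    (convergence) exactly when (b1, b2) is inside the triangle
    b1 + b2 < 1, b2 - b1 < 1, b2 > -1; they are complex (oscillations)
    below the parabola b1^2 + 4*b2 = 0. For complex roots the modulus is
    sqrt(-b2), so oscillations are more persistent the closer b2 is to -1.
    """
    converges = (b1 + b2 < 1) and (b2 - b1 < 1) and (b2 > -1)
    oscillates = b1 ** 2 + 4 * b2 < 0
    return ("converging" if converges else "diverging",
            "oscillatory" if oscillates else "monotone")
```

A pair such as $(1.8, -0.95)$, with $b_2$ near the median first-window $\mathbf{L}$-market estimate (the $b_1$ value is illustrative), lands in the converging-but-oscillatory region close to the lower edge of the triangle, consistent with the persistent oscillations in Figure 2.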

The estimates for $\mathbf{L}$, $\mathbf{H}$, and $\mathbf{HS}$ markets are represented by dots, filled squares, and non-filled squares, respectively. For periods 11–50 (left), most coefficient pairs for $\mathbf{L}$ markets are located in the lower right corner of the triangle, often near the boundary. This pattern indicates persistent oscillations. In contrast, the estimated coefficients for HTP markets cluster above the parabola, often with an estimated value of $b_2$ close to 0, suggesting that prices converge monotonically and rapidly.

When comparing this with periods 106–145 (right), we observe how forecasting rules evolve, revealing distinct patterns between conditions. In LTP markets, heuristics shift away from the corner inside the triangle, in a stabilizing direction. In contrast, in most HTP markets, estimated market expectations drift in the opposite direction, toward the lower right corner of the triangle, though a few exceptions remain where the estimated value of $b_2$ equals zero. Thus, while trend extrapolation is more pronounced under LTP compared to HTP during the initial periods, this pattern has largely disappeared by the phase’s end.

This discussion is summarized as

Result 3

Trend-extrapolating heuristics are frequently used under low time pressure and are less common under high time pressure, particularly during periods 11–50.

Three additional observations supporting this result can be inferred from Table B2 (Appendix B). First, in all 12 $\mathbf{L}$ markets, the estimated coefficient $b_2$ for periods 11-50 is significantly different from zero, and all these coefficients are negative. Second, in 11 of the 12 $\mathbf{L}$ markets, the estimate of $b_2$ is lower (in absolute value) for periods 106-145 than for periods 11-50. Third, among the 19 HTP markets of the first phase, the estimate of $b_2$ is significantly different from zero in only seven markets for periods 11-50, increasing to 12 markets for periods 106-145. In fact, in several HTP markets in the first phase ( $\mathbf{H}$2, $\mathbf{HS}$3, and $\mathbf{HS}$7), market expectations are close to naïve expectations (i.e., $\bar{p}_{t+1}^e = p_{t-1}$), for periods 11-50.

3.4. Individual expectations and adaptation

In Section 3.3, we analyzed the evolution of market expectations. In this section, we investigate individual forecasting heuristics. For each participant, and for periods 11-50 and 106-145 in both the first and second phases, we estimate the model

(7)\begin{equation} p^{e}_{i,t+1} = \alpha + \beta_1 p_{t-1} + \beta_2 p_{t-2} + \beta_3 p^e_{i,t}+ \epsilon_t\,, \end{equation}

where $p^e_{i,t}$ is participant $i$’s prediction for price $p_t$, $\alpha$, $\beta_1$, $\beta_2$, and $\beta_3$ are fixed coefficients, and $\epsilon_t$ is an error term. Eq. (7) nests several commonly used expectations models, allowing us to classify participants’ forecasting behavior.Footnote 20
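The nesting can be illustrated with a simple decision rule over the estimated coefficients. The tolerance-based thresholds below are hypothetical stand-ins (the actual classification procedure is described in footnote 20; the intercept $\alpha$ is ignored for simplicity):

```python
def classify_forecaster(beta1, beta2, beta3, tol=0.05):
    """Rough classification of a participant from the coefficients of Eq. (7).

    The labels follow the paper; the tolerance-based decision rule here is
    a hypothetical stand-in for the actual procedure (footnote 20).
    """
    if beta2 < -tol:
        return "trend-extrapolative"  # forecasts chase the recent price trend
    if beta3 > tol and abs(beta1 + beta3 - 1) < tol and abs(beta2) < tol:
        return "adaptive"             # weighted avg of last price and own forecast
    if abs(beta1 - 1) < tol and abs(beta2) < tol and abs(beta3) < tol:
        return "naive"                # repeat the last observed price
    return "unclassified"
```

For example, adaptive expectations with weight $w$ correspond to $\beta_1 = w$, $\beta_3 = 1 - w$, and $\beta_2 = 0$, so `classify_forecaster(0.3, 0.0, 0.7)` returns `"adaptive"`.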

The results of this classification are presented in Table 5. They confirm the tendencies described in Result 3. Under the LTP condition, 92% of participants are classified as trend-extrapolative at the beginning of the experiment. However, this fraction decreases to 57% by the end of the first phase, accompanied by an increase in the fraction of (stabilizing) adaptive participants. In contrast, under the HTP condition, a substantial fraction of adaptive forecasters is observed both early on and later in the first phase.

Table 5. Classification of participants based on their forecasting behavior. Heuristic (7) is estimated for periods 11-50 and 106-145. The table reports the fractions of participant types within each market

Because our design includes within-subject variation in time pressure, we apply the classification to verify whether the forecasting behavior of subjects changed substantially between the two phases. Table 6 presents the ‘transition matrix’, pooled across all treatments, which shows how participants transitioned between trend-extrapolating, adaptive, and unclassified behavior (rows = LTP, columns = HTP), or retained the same behavior, based on periods 11–50 in each phase. Note that some participants experienced the LTP condition before the HTP condition, and for other participants it was the other way around—all of these participants are pooled in the entries in Table 6. Table F6 in Online Appendix F provides transition matrices for each of the three treatments separately.

Table 6. Participants’ transition matrix based on the classification of individual forecasting heuristics, derived from Eq. (7), estimated for periods 11–50 in each experimental phase. The data are pooled across all treatments, with LTP behavior in rows and HTP behavior in columns
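Given per-participant labels under each condition, a pooled transition matrix of this kind can be tabulated directly (a sketch; the label strings are our own):

```python
from collections import Counter

CATEGORIES = ("trend-extrapolative", "adaptive", "unclassified")

def transition_matrix(ltp_labels, htp_labels, categories=CATEGORIES):
    """Cross-tabulate each participant's classification under LTP (rows)
    against HTP (columns), pooling over treatments as in Table 6."""
    counts = Counter(zip(ltp_labels, htp_labels))
    return [[counts[(row, col)] for col in categories] for row in categories]
```

Each participant contributes one (LTP label, HTP label) pair, regardless of which condition they experienced first.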

The data show that, out of 155 participants who extrapolate trends under LTP conditions, half (77) also do so under HTP, while the other half use other behaviors, including 49 (32%) adopting adaptive behavior. In contrast, of the 85 participants using trend-extrapolation in HTP, a large majority (91%) also does so in LTP. To visually illustrate the within-subject findings, Figure F8 in Online Appendix F presents a scatter plot comparing the $-\beta_2$ estimates (the trend-extrapolation coefficient) for each participant under LTP and HTP. This figure highlights a tendency for lower extrapolation under HTP. This supports Result 3 at the individual level.

A natural interpretation of our findings regarding changes in forecasting behavior under different time pressure conditions is that participants in the LTP condition, having more time to form predictions, tend to adopt more sophisticated approaches. For example, they analyze the time series, detect patterns, and extrapolate trends when making forecasts. In contrast, under the HTP condition, predictions rely more heavily on the most recent information—the last available price—due to time constraints. Indeed, this interpretation is supported by a textual analysis of participants’ responses to the post-experimental questionnaire, where they describe their forecasting strategies under each time pressure condition (see Online Appendix E). For example, a word frequency analysis of the responses reveals that terms like ‘trend’ were more frequently mentioned under the LTP condition. An illustrative response is: “I had enough time to look back and make forecasting decisions based on past trends.” Conversely, terms like ‘time’ were more frequently mentioned under the HTP condition. Thus, a typical response is: “I had no time to think, so I almost always filled in the previous price.” Footnote 21

Differences in individual forecasting behavior translate into differences in price dynamics. Under the LTP condition, trend-extrapolating heuristics are at least directionally confirmed due to positive feedback in the underlying price dynamics, Eq. (2), leading to significant fluctuations in market-clearing prices. High price volatility results in forecast errors for participants, which consequently reduces their earnings. This may prompt participants to adapt their prediction rules and avoid strong extrapolation, eventually leading to more stable prices. However, we observe that such adaptation requires a substantial number of periods.

The situation is different under the HTP condition. Decision time constraints immediately encourage reliance on the most recently observed price. When many participants in a market adopt this behavior, it leads to a relatively stable time series of market-clearing prices. Over time, as participants gain experience with the decision environment, they may adapt to the time pressure and learn to identify and extrapolate price trends. However, given the fragility of the system—specifically, the feedback coefficient being close to 1—instability may arise. Still, it might be easier for participants in the HTP condition to revert to stable rules as they observe a decline in their forecasting errors.

4. Conclusion

In this paper, we investigate how time pressure and an increased number of decision periods affect market dynamics and price volatility in a Learning-to-Forecast experiment. Consistent with prior literature, participants tend to adopt trend-extrapolating rules when time constraints are relaxed. This behavior, reinforced by the market’s positive feedback structure, amplifies price swings and generates bubbles and crashes. However, these are transitory: over time, participants reduce strong extrapolation, causing prices to converge toward the fundamental value—albeit slowly and somewhat precariously. Under high time pressure, by contrast, participants adopt simpler heuristics—such as naïve expectations anchored to the most recently observed price—which stabilize the market more quickly and reduce volatility. The mechanisms behind these aggregate outcomes align with the adaptive decision-making framework (Payne et al., Reference Payne, Bettman and Johnson1993): participants adjust their behavioral heuristics over time, with time pressure encouraging reliance on information-sparse strategies. In our context, these simpler rules often yield better outcomes by tempering the market feedback loop, consistent with the “fast and frugal” heuristics literature (Gigerenzer & Goldstein, Reference Gigerenzer and Goldstein1996; Goldstein & Gigerenzer, Reference Goldstein and Gigerenzer2009).

We conclude the paper with several directions for future research prompted by our findings. Our results suggest that, although they tend to have a stabilizing effect, neither time for learning nor time pressure alone necessarily guarantees market stability. The market is inherently prone to bubbles and crashes, even after long periods of apparent calm. Anufriev et al. (Reference Anufriev, Hommes and Makarewicz2019) show that even artificial agents in similar settings experience abrupt volatility after long stability phases. We observe similar dynamics in our experiment.

Further, price instability may arise from non-stationary environments. For instance, Xiong and Yu (2011) attribute persistent bubbles in the Chinese warrants market to continual entry of inexperienced investors. Laboratory studies by Deck et al. (2014) and Kirchler et al. (2015) confirm that new traders and added liquidity increase bubble persistence. This raises questions about whether late-entry participants prolong bubbles in LtF experiments, especially under low time pressure. Similarly, large structural shocks can destabilize the market. Bao et al. (2012) show that sudden changes in fundamentals delay reconvergence; our results suggest high time pressure might accelerate recovery in such cases.

The structure of the self-referential system also matters. If the feedback coefficient in the underlying price dynamics exceeds unity, naïve expectations may be insufficient to prevent instability, requiring more sophisticated strategies, such as adaptive expectations. Whether participants can discover and adopt such heuristics under different time constraints remains an open question for future research.

Finally, our study offers new insight into the stability of expectation-driven behavior in financial markets, complementing the literature on bubbles due to strategic trading initiated by Smith et al. (1988). By isolating forecasting from trading, we highlight how time pressure can improve outcomes via expectations. The similarity of outcomes between previous LtF experiments and experimental studies that allow for trading (Kopányi-Peuker & Weber, 2021), as well as with learning-to-optimize experiments (Bao et al., 2017), suggests that time pressure may similarly affect trading behavior—an avenue for future investigation.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/eec.2025.10037.

Acknowledgements

For many useful comments and feedback we thank Aleksandr Alekseev, Simone Alfarano, John Duffy, Myrna Hennequin, Cars Hommes, Johan de Jong, Anita Kopányi-Peuker, Michael McBride, Andreas Ortmann, Valentyn Panchenko, Rupert Sausgruber, Stefan Trautmann, Matthias Weber, as well as three anonymous referees and the Editor (Ragan Petrie). Further, we thank participants at the 2019 ESA meeting, Vancouver, the 2019 Asian Experimental Finance Meeting, Singapore, the 2021 Experimental Finance Online Conference, the 2022 WEHIA workshop, Catania, the 4th Behavioral Macroeconomics Workshop, Bamberg, the 2023 CEF conference, Nice, the 2023 CREST/CEFM Workshop on Experimental Economics, Paris, and seminars at the University of Amsterdam and the University of California Irvine.

Funding statement

The authors acknowledge support from the Australian Research Council’s Discovery Project funding scheme (project DP200101438), the ORA project “BEAM” (NWO 464-15-143), the European Union Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 721846, “Expectations and Social Influence Dynamics in Economics” (ExSIDE), the Czech Science Foundation (GACR) under project 22-28882S, and VSB-TU Ostrava under the SGS project SP2026/003.

Open access funding provided by University of Amsterdam.

Competing interests

Not applicable.

Data availability statement

The replication material for the study is available at https://doi.org/10.3886/E238461V1.

Ethical standards

The experiment conducted in this paper has been approved by the Ethics Committee Economics and Business (EBEC) of the University of Amsterdam (EC 20190408050407).

Consent to participate

When registering for the experiment, subjects gave their consent to participate.

Consent for publication

Publication of the manuscript has been approved by all co-authors.


Appendix A. Asset-pricing model

Our experiment is built around the standard asset pricing model of a long-lived risky asset. Following Brock & Hommes (1998), we make several simplifying assumptions to focus exclusively on the impact of price expectations.

Consider a market with a large number of investors and two assets: a risk-free bond and a risky equity. Time is discrete and indexed by $t$. Let $r$ denote the return on the risk-free bond, and let $y_{t}$ and $p_t$ be the dividend and price per share of the risky asset, respectively. Let $W_{i,t}$ denote the wealth of investor $i$, and $z_{i,t}$ represent the investor’s holdings of the risky asset purchased at time $t$. The evolution of investor $i$’s wealth is given by

(A.1)\begin{equation} W_{i,t+1} = W_{i,t} (1+r) +\left(p_{t+1}+y_{t+1}-(1+r) p_{t}\right) z_{i,t}\,, \end{equation}

where the term in parentheses on the right-hand side is the excess return of the risky asset. Investors are mean-variance maximizers, solving the optimization problem

(A.2)\begin{equation} \max \left\{ \operatorname{E}_{i,t} [W_{i,t+1}] - \frac{a_i}{2} \operatorname{V}_{i,t}[W_{i,t+1}] \right\}\,, \end{equation}

where $a_i$ is the risk aversion of investor $i$, and $\operatorname{E}_{i,t}[\,\cdot\,]$ and $\operatorname{V}_{i,t}[\,\cdot\,]$ denote the investor’s beliefs about the expected wealth and variance of wealth, respectively. All investors share the same risk aversion, $a_i \equiv a$, and the same belief about the variance of the price $p_{t+1}$, denoted as $\sigma^2$. Solving problem (A.2) subject to (A.1) yields the investor’s demand for the risky asset

(A.3)\begin{equation} z_{i,t}=\frac{\operatorname{E}_{i,t}\left[p_{t+1}+y_{t+1}-(1+r)p_t\right] }{a\sigma^2}\,. \end{equation}

The price of the risky asset in each period is determined by equilibrium between aggregate demand and the exogenous supply, which is set to zero. The temporary market equilibrium condition is $\sum_i z_{i,t} = 0$. Solving this condition for $p_t$ yields the price:

\begin{equation*} p_t=\frac{1}{1+r} \sum\nolimits_i \big( \operatorname{E}_{i,t}[p_{t+1}]+ \operatorname{E}_{i,t}[y_{t+1}]\big)=\frac{1}{1+r} \left( \sum\nolimits_i \operatorname{E}_{i,t}[p_{t+1}] + \bar{y} \right)\,, \end{equation*}

where the last equality assumes, as in the experiment, that dividends are IID with mean $\bar{y}$.

Equation (2), which generates prices in the experiment, coincides with the last equation. Specifically, in the experiment, a fraction $n_t\in[0,1)$ of traders are robotic investors who expect the price to be at the fundamental value $p^f=\bar{y}/r$, as defined by (1). That is, $\operatorname{E}_{i,t}[p_{t+1}] = p^f$ for these investors. The remaining weight, $1-n_t$, corresponds to human participants advising large pension funds. The average of their forecasts (with equal weights) is denoted as $\bar{p}_{t+1}^e$ in Eq. (2).
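To make the price-generating mechanism concrete, the last equation can be sketched in a few lines of code. This is an illustrative sketch only: the values of $r$ and $\bar{y}$ below are assumptions, chosen merely so that $p^f=\bar{y}/r$ matches the fundamental price of 126.4 used in the experiment, and the function is ours rather than part of the replication code.

```python
# Illustrative sketch of the temporary-equilibrium pricing equation:
#   p_t = ( n_t * p_f + (1 - n_t) * avg_forecast + y_bar ) / (1 + r)
# Parameter values are assumed (chosen so that p_f = y_bar / r = 126.4).

R = 0.05          # risk-free rate (assumed)
Y_BAR = 6.32      # mean dividend (assumed)
P_F = Y_BAR / R   # fundamental price, 126.4

def market_price(forecasts, n_t=0.0):
    """Market-clearing price given participants' forecasts of p_{t+1}.

    A fraction n_t of the weight goes to robot traders who forecast the
    fundamental value p_f; the remaining weight 1 - n_t goes to the
    equal-weighted average of the human forecasts.
    """
    avg = sum(forecasts) / len(forecasts)
    expected = n_t * P_F + (1.0 - n_t) * avg
    return (expected + Y_BAR) / (1.0 + R)

# If all six participants forecast the fundamental value, the market
# clears at the fundamental value:
print(round(market_price([P_F] * 6), 2))   # prints 126.4
```

The positive feedback is visible in the sketch: forecasts above the fundamental value push the realized price above it, while a larger robot weight `n_t` pulls the price back toward $p^f$.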

Appendix B. Data and additional data analysis

Market prices and their descriptive statistics. Figures B1 to B3 show the market prices (blue lines) for all $\boldsymbol{L}$, $\boldsymbol{H}$, and $\boldsymbol{HS}$ markets, respectively, representing the first-phase data across all three treatments. The fundamental value $p^{f}=126.4$, constant across all markets, is indicated by the black dashed line.

Figure B1. Prices in the twelve $\boldsymbol{L}$ markets (blue thick lines). The black dashed horizontal line represents the fundamental price, $p^f=126.4$

Figure B2. Prices in the ten $\boldsymbol{H}$ markets (blue thick lines). The black dashed horizontal line represents the fundamental price, $p^f=126.4$

Figure B3. Prices in the nine $\boldsymbol{HS}$ markets (blue thick lines). The black dashed horizontal line represents the fundamental price, $p^f=126.4$

Table B1 presents the descriptive statistics of the experimental data for each market from the first phase, along with the average and median values by treatment.

Table B1. Interquartile range and median relative absolute deviation (RAD) from the fundamental value for each market in the first phase of the experiment and selected time periods. Averages and medians are computed across all markets within the same treatment

Market expectations. Table B2 reports the estimated coefficients of the AR(2) model in Eq. (6) for market expectations in the first phase of the experiment. The model is estimated separately for each market over periods 11-50 and 105-145. The table shows the average and median of the estimates by treatment. The LB and H columns provide $p$-values for the Ljung–Box Q test for zero autocorrelation and the Engle test for residual heteroscedasticity.

Table B2. Market expectations in Eq. (6) for the first phase of the experiment. For parameter estimates, $^*$ denotes significance at the $10\%$ level, $^{**}$ at the $5\%$ level, and $^{***}$ at the $1\%$ level. For the Ljung–Box and Engle specification tests ($p$-values are shown in the LB and H columns), bold font indicates rejection of the null hypothesis of residual structure (autocorrelations or heteroscedasticity) at the 5% level
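The Ljung–Box Q statistic behind the LB column can be computed directly from the estimated residuals. The following pure-Python sketch is our own illustration of the statistic (the chi-squared $p$-value reported in the table is omitted here):

```python
def ljung_box_q(residuals, max_lag):
    """Ljung-Box Q statistic for zero autocorrelation up to max_lag.

    Q = n (n + 2) * sum_{k=1}^{m} rho_k^2 / (n - k),
    asymptotically chi-squared with m degrees of freedom under the null
    of no residual autocorrelation.
    """
    n = len(residuals)
    mean = sum(residuals) / n
    centered = [x - mean for x in residuals]
    denom = sum(x * x for x in centered)
    q = 0.0
    for k in range(1, max_lag + 1):
        # lag-k sample autocorrelation of the residuals
        rho_k = sum(centered[t] * centered[t - k] for t in range(k, n)) / denom
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q
```

A strongly alternating residual series, for example, has a lag-1 autocorrelation near $-1$ and thus yields a large Q, leading to rejection of the null.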

Coordination. To assess the effect of time pressure on coordination, we calculate the standard deviation of individual forecasts for each period in each market. Figure B4 plots the median of this measure of mis-coordination across all markets within each treatment for the first 20 periods. Lower values indicate higher coordination.

Figure B4. Time evolution of the measure of mis-coordination of participants in the three treatments during the first 20 periods of the experiment. The measure is the median (over markets) of the standard deviation of individual forecasts

The high standard deviation in $\boldsymbol{H}$ markets is consistent with the possibility of incomplete forecasts in treatment $\boldsymbol{HL}$. However, no significant difference in coordination levels is observed between $\boldsymbol{L}$ and $\boldsymbol{HS}$ markets during the first 10 periods, as confirmed by a two-sided Mann-Whitney-Wilcoxon test (see footnote 22). Furthermore, after 10 periods, coordination decreases under the LTP condition but remains stable under the HTP condition. These trends are inconsistent with an explanation of Result 2 based on a failure of coordination under the HTP condition.
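The mis-coordination measure used here can be sketched as follows; the forecast data in the example are hypothetical, not taken from the experiment.

```python
import statistics

def miscoordination(forecasts_by_market):
    """Per-period mis-coordination: the median, across markets, of the
    standard deviation of the individual forecasts within each market.
    Lower values indicate higher coordination."""
    sds = [statistics.stdev(forecasts) for forecasts in forecasts_by_market]
    return statistics.median(sds)

# Hypothetical single period: three markets with six forecasts each.
period = [
    [120, 125, 130, 118, 127, 122],   # reasonably coordinated market
    [100, 180, 140, 90, 210, 130],    # poorly coordinated market
    [126, 126, 127, 125, 126, 126],   # almost perfect coordination
]
print(round(miscoordination(period), 2))   # prints 4.5
```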

Footnotes

1 Our experiment is conducted in a financial market setting, but the paper’s central themes—time pressure and long-term dynamics—have broader implications beyond this context. For instance, LtF experiments have been widely applied in macroeconomic research, beginning with the pioneering work of Marimon et al. (1993) and Marimon and Sunder (1993, 1994) and continuing through recent studies such as Hommes et al. (2019), Assenza et al. (2021), Mauersberger (2021), Mokhtarzadeh and Petersen (2021), Kryvtsov and Petersen (2021), Petersen and Rholes (2022), Evans et al. (2022), Kostyshyna et al. (2022), and Salle (2023). For an in-depth review, see Hommes (2021).

2 The individual differences in strategies across time pressure conditions observed in our study can also be interpreted through the lens of dual-process theory (e.g., Stanovich & West, 2000; Kahneman, 2003). This theory distinguishes between a fast, intuitive process (System 1) and a slower, deliberate process (System 2). Prior studies suggest that System 1 dominates under higher time pressure (Moritz et al., 2014; Ferri et al., 2021), consistent with our finding that participants use simpler forecasting rules in high time pressure conditions.

3 Kopányi-Peuker and Weber (2021) show that bubbles and crashes also occur with experienced participants when they actively trade assets, as in the experiment of Smith et al. (1988). The similarity in market outcomes between LtF experiments and Learning-to-Optimize experiments, where participants trade rather than only forecast, is further supported by the findings of Bao et al. (2017) and Arifovic et al. (2019). Moreover, Carlé et al. (2019) and Füllbrunn et al. (2024) find that forecasting behavior in experimental asset markets aligns closely with participants’ trading behavior.

4 In trading experiments, the number of periods influences participants’ strategic decisions by shaping their investment horizons. This contrasts with LtF experiments, where the forecast horizon is fixed by the pricing equation and is independent of the number of decision periods. Our study adopts a two-period-ahead setup, requiring participants to forecast prices at $t+1$ before the price at $t$ is known. Anufriev et al. (2022) and Evans et al. (2022) systematically compare forecasting horizons in LtF experiments, but use no more than 60 decision periods. Trading experiments by Hirota and Sunder (2007) and Razen et al. (2017) explore the impact of investment horizons on strategic behavior, but do not extend the number of decision periods beyond 15.

5 Some LtF experiments (Hommes et al., 2008; Hommes et al., 2021) do not include robot traders, so the price follows Eq. (2) with $n_t \equiv 0$. In these studies, price bubbles often form early and grow rapidly until they reach an artificial upper bound, typically set at 1,000. In contrast, our experiment sets a much higher upper bound at 10,000 (see footnote 6). However, due to the presence of stabilizing robots, the highest observed price was 2,504. By incorporating robots, we thus avoid the use of artificial caps on bubbles and prevent episodes of severe instability that could demotivate subjects through large forecasting errors and low payoffs. The average weight of the robots in our experiment, across all periods and all markets, is about 2%, and in only 0.4% of the 9,393 periods does $n_{t}$ exceed 25%.

6 Forecasts can be in the interval $(0, 10{,}000]$ to two decimal places. Following previous studies, the upper limit is not mentioned in the instructions, but participants are notified if their forecast exceeds it. Of the 186 participants, only five made one or more forecasts within 5% of this bound.

7 The vertical axis of each graph dynamically adjusts to provide additional space above the displayed time series. If a participant clicks in this space, above the current range, the graph expands further. Multiple clicks in the upper area allow participants to quickly extend the range as needed, ensuring no limitations for mouse-generated forecasts.

8 The LTP condition serves as a benchmark for comparison with earlier LtF experiments, which typically spanned about 50 periods with soft time limits of 60 seconds. Our study imposes a hard time limit to accommodate more decision periods. Based on average decision times in previous studies, we set the limit at $25$ seconds. However, strict limits may lead to rushed decisions, potentially diverging from the standard LtF approach. Moritz et al. (2014) showed that a waiting period helps mitigate such “under-thinking” and rushed decisions. For this reason, we included a waiting period in the LTP condition.

9 Participants’ entries may have been cut off when the decision time elapsed after entering only the first digit, even if they intended to input two or three digits. This interpretation is supported by comments from the post-experiment questionnaire.

10 See Appendix B for the price dynamics in each market. Figs. B1 to B3 show all $\mathbf{L}$, $\mathbf{H}$, and $\mathbf{HS}$ markets, respectively. Some of these markets are referenced below to illustrate our results. (Online Appendix G shows the price dynamics in each second-phase market.)

11 The market price rises above 500 in nine out of the twelve $\mathbf{L}$ markets, occurring in a total of 199 periods (approximately 11% of all periods). By contrast, prices never exceed 500 in any $\mathbf{H}$ market and do so in only five periods (0.35% of all periods) across all $\mathbf{HS}$ markets.

12 The IQR measures the length of the interval containing the middle half of the (ordered) market prices. The RAD from the fundamental value is defined as $\left| p_t-p^f \right| /p^f$, introduced as a measure for mispricing by Stöckl et al. (2010). While they use the mean RAD, we focus on the median RAD for its robustness to outliers. We also considered alternative measures, such as the standard deviation of prices (for volatility) and the number of periods where the price is within 5% of the fundamental value $p^f$ (for mispricing), but these measures yielded similar results.
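As an illustration, the two statistics in this footnote can be computed as follows (the price series is hypothetical; only the fundamental value 126.4 is taken from the experiment):

```python
import statistics

P_F = 126.4  # fundamental value in the experiment

def iqr(prices):
    """Interquartile range: length of the interval containing the middle
    half of the ordered market prices."""
    q = statistics.quantiles(prices, n=4)  # default 'exclusive' method
    return q[2] - q[0]

def median_rad(prices):
    """Median relative absolute deviation from the fundamental value."""
    return statistics.median(abs(p - P_F) / P_F for p in prices)

# Hypothetical price path hovering around the fundamental value.
prices = [110, 120, 125, 126, 127, 130, 140, 150]
print(round(iqr(prices), 2), round(median_rad(prices), 4))
```

Note that the numeric value of the IQR depends on the quantile convention; `statistics.quantiles` uses the exclusive method by default, so other software may report slightly different quartiles for the same data.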

13 The increase of volatility in market $\mathbf{L}$12 is attributed to one participant submitting extreme predictions (1 or 10000) in 30 of the 34 periods following period 54. In market $\mathbf{L}$8, the price converges to approximately 240 after period 60 and remains in this range for about 60 periods. Consequently, price volatility is low, but mispricing remains high.

14 See Figures F6 and F7 in Online Appendix F. For example, in markets $\mathbf{HS}$2, $\mathbf{HS}$5, and $\mathbf{HS}$8 prices remain fairly stable until a sudden spike is caused by one participant predicting a price about 10 times higher than the last observed market price.

15 The market price data from the second phase, along with their IQR and median RAD values, are provided in Online Appendix G. Table G1 shows that the IQR decreases in 13 out of 19 markets and the median RAD decreases in 14 out of 19 markets for the LTP markets. Both measures also decline for most HTP markets, but only the difference in the median RAD is statistically significant for those markets. Participants’ experiences with the LTP condition in the first phase may have contributed to the decrease in price volatility and mispricing in the later periods of the HTP markets, although these experiences were different for the different participants in the same second-phase market.

16 Differences between $\mathbf{H}$ and $\mathbf{HS}$ are driven by explicit submission requirements in $\mathbf{HLS}$. However, under the LTP condition, there are no structural differences between $\mathbf{L}^2$ and $\mathbf{LS}^2$ markets. This suggests that the decision time parameters were calibrated effectively and that time pressure was not binding under the LTP condition.

17 The correlation between non-submissions and IQR is $-0.2677$ for $\mathbf{H}$ ($p=0.4546$) and $0.0655$ for $\mathbf{HS}$ ($p=0.8670$). For median RAD, the correlation is $0.0158$ for $\mathbf{H}$ ($p=0.9655$) and $0.5059$ for $\mathbf{HS}$ ($p=0.1647$). Non-submissions in the first 10 periods range from 4 to 13 (out of 60 forecasts) per market in $\mathbf{H}$ and from 1 to 19 in $\mathbf{HS}$.

18 Detailed estimates and their statistical significance for each market are provided in Table B2 (Appendix B) for the first-phase and in Table G2 (Online Appendix G) for the second-phase data.

19 Eqs. (2) and (4) are complemented by the prediction rule in Eq. (6) with $\nu_t=0$ and $a = p^f (1 - b_1 -b_2)$. This restriction ensures that the rule is “consistent,” i.e., it predicts $p^f$ in the fundamental steady state. The edges of the triangle are defined by $b_2-b_1=1+r$ (left edge), $b_1+b_2=1+r$ (right edge), and $b_2=-1-r$ (bottom edge). The parabola is given by $b_2=-b_1^2/(4(1+r))$. See Proposition 2, Appendix B in Anufriev and Hommes (2012) for a formal derivation.

20 Specifically, behavior is classified as trend-extrapolative according to Eq. (7) if $\beta_2$ is negative and significantly different from zero, and as adaptive if $\beta_2$ is not significantly different from zero, while at least one of $\beta_1$ and $\beta_3$ is significantly positive and neither is significantly negative. In all other cases, we refer to behavior as unclassified. Within the adaptive class, we further distinguish between AR(1) (when $\beta_1=1$ and $\beta_3$ is not significantly different from zero), stubborn (when $\beta_3=1$ and $\beta_1$ is not significantly different from zero), and other adaptive (all other cases) behaviors. We apply a significance level of 5% in the tests. See Online Appendix F for all estimated individual heuristics.
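The classification procedure in this footnote can be expressed as a simple decision rule. The sketch below is a minimal rendering of those criteria; the input format (point estimates plus 5%-level significance flags) is assumed for illustration and is not the estimation output of the actual analysis.

```python
def classify(beta, sig_diff_zero, sig_diff_one):
    """Classify an estimated individual forecasting rule (footnote 20).

    beta: dict of point estimates for 'b1', 'b2', 'b3'.
    sig_diff_zero / sig_diff_one: dicts of booleans, True if an estimate
    is significantly different from 0 (resp. 1) at the 5% level.
    """
    # Trend-extrapolative: beta_2 negative and significantly below zero.
    if beta['b2'] < 0 and sig_diff_zero['b2']:
        return 'trend-extrapolative'
    pos = {k: sig_diff_zero[k] and beta[k] > 0 for k in ('b1', 'b3')}
    neg = {k: sig_diff_zero[k] and beta[k] < 0 for k in ('b1', 'b3')}
    # Adaptive: beta_2 insignificant, at least one of beta_1, beta_3
    # significantly positive, and neither significantly negative.
    if not sig_diff_zero['b2'] and (pos['b1'] or pos['b3']) \
            and not (neg['b1'] or neg['b3']):
        if not sig_diff_one['b1'] and not sig_diff_zero['b3']:
            return 'adaptive: AR(1)'
        if not sig_diff_one['b3'] and not sig_diff_zero['b1']:
            return 'adaptive: stubborn'
        return 'adaptive: other'
    return 'unclassified'
```

For example, an estimate close to naive expectations (beta_1 near 1, beta_2 and beta_3 insignificant) falls into the AR(1) subclass.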

21 As part of the post-experimental questionnaire we administered the standard three-question CRT test (Frederick, 2005) to all participants (see Online Appendix D). A higher CRT score only significantly improves forecasting accuracy in the first phase of treatment $\mathbf{HLS}$ (see Table H1 and Table H2 in Online Appendix H). There is therefore only limited evidence for dual-process theory as an explanation of our data.

22 The hypothesis that the standard deviations of forecasts in $\boldsymbol{L}$ markets are not lower than those in $\boldsymbol{HS}$ markets cannot be rejected at the 5% level for any period between 1 and 20, except for period 2. This hypothesis also holds when data are aggregated over multiple periods, e.g., 1–10. Note that forecasts for the first two periods are made without prior price information, while variability in subsequent periods may reflect price dynamics.

References

Alfarano, S., Camacho-Cuena, E., Colasante, A., & Ruiz-Buforn, A. (2024). The effect of time-varying fundamentals in learning-to-forecast experiments. Journal of Economic Interaction and Coordination, 19(4), 619–647.
Anufriev, M., Chernulich, A., & Tuinstra, J. (2022). Asset price volatility and investment horizons: An experimental investigation. Journal of Economic Behavior & Organization, 193, 19–48.
Anufriev, M., & Hommes, C. (2012). Evolutionary selection of individual expectations and aggregate outcomes in asset pricing experiments. American Economic Journal: Microeconomics, 4(4), 35–64.
Anufriev, M., Hommes, C., & Makarewicz, T. (2019). Simple forecasting heuristics that make us smart: Evidence from different market experiments. Journal of the European Economic Association, 17(5), 1538–1584.
Arifovic, J., Hommes, C., & Salle, I. (2019). Learning to believe in simple equilibria in a complex OLG economy - evidence from the lab. Journal of Economic Theory, 183, 106182.
Assenza, T., Heemeijer, P., Hommes, C. H., & Massaro, D. (2021). Managing self-organization of expectations through monetary policy: A macro experiment. Journal of Monetary Economics, 117, 170–186.
Bao, T., Hommes, C., & Makarewicz, T. (2017). Bubble formation and (in)efficient markets in learning-to-forecast and optimise experiments. Economic Journal, 127(605), F581–F609.
Bao, T., Hommes, C., Sonnemans, J., & Tuinstra, J. (2012). Individual expectations, limited rationality and aggregate outcomes. Journal of Economic Dynamics and Control, 36(8), 1101–1120.
Barberis, N., Greenwood, R., Jin, L., & Shleifer, A. (2018). Extrapolation and bubbles. Journal of Financial Economics, 129(2), 203–227.
Bartling, B., Fehr, E., & Özdemir, Y. (2023). Does market interaction erode moral values? Review of Economics and Statistics, 105(1), 226–235.
Berninghaus, S. K., & Ehrhart, K.-M. (1998). Time horizon and equilibrium selection in tacit coordination games: Experimental results. Journal of Economic Behavior & Organization, 37(2), 231–248.
Brock, W., & Hommes, C. (1998). Heterogeneous beliefs and routes to chaos in a simple asset pricing model. Journal of Economic Dynamics & Control, 22(8–9), 1235–1274.
Busse, J. A., & Green, T. C. (2002). Market efficiency in real time. Journal of Financial Economics, 65(3), 415–437.
Campbell, J. Y., Lo, A. W., & MacKinlay, A. C. (1997). The econometrics of financial markets. Princeton University Press.
Carlé, T. A., Lahav, Y., Neugebauer, T., & Noussair, C. N. (2019). Heterogeneity of beliefs and trade in experimental asset markets. Journal of Financial and Quantitative Analysis, 54(1), 215–245.
Case, K. E., & Shiller, R. J. (2003). Is there a bubble in the housing market? Brookings Papers on Economic Activity, 2003(2), 299–362.
Case, K. E., Shiller, R. J., & Thompson, A. (2012). What have they been thinking? Home buyer behavior in hot and cold markets. Technical report, National Bureau of Economic Research.
Cheah, E.-T., & Fry, J. (2015). Speculative bubbles in bitcoin markets? An empirical investigation into the fundamental value of bitcoin. Economics Letters, 130, 32–36.
Chen, D. L., Schonger, M., & Wickens, C. (2016). oTree – An open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance, 9, 88–97.
Coibion, O., Gorodnichenko, Y., & Kamdar, R. (2018). The formation of expectations, inflation, and the Phillips curve. Journal of Economic Literature, 56(4), 1447–1491.
Corbet, S., Lucey, B., & Yarovaya, L. (2018). Datestamping the bitcoin and ethereum bubbles. Finance Research Letters, 26, 81–88.
Deck, C., Porter, D., & Smith, V. (2014). Double bubbles in assets markets with multiple generations. Journal of Behavioral Finance, 15(2), 79–88.
Duffy, J. (2016). Macroeconomics: A survey of laboratory research. In Kagel, J. H., & Roth, A. E. (Eds.), Handbook of experimental economics (Vol. 2, pp. 1–90). Princeton: Princeton University Press.
Duffy, J., & Hopkins, E. (2005). Learning, information, and sorting in market entry games: Theory and evidence. Games and Economic Behavior, 51(1), 31–62.
Evans, G. W., Hommes, C., McGough, B., & Salle, I. (2022). Are long-horizon expectations (de-)stabilizing? Theory and experiments. Journal of Monetary Economics, 132, 44–63.
Falk, A., & Szech, N. (2013). Morals and markets. Science, 340(6133), 707–711.
Ferri, G., Ploner, M., & Rizzolli, M. (2021). Trading fast and slow: The role of deliberation in experimental financial markets. Journal of Behavioral and Experimental Finance, 32, 100593.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.
Friedman, D., Huck, S., Oprea, R., & Weidenholzer, S. (2015). From imitation to collusion: Long-run learning in a low-information environment. Journal of Economic Theory, 155, 185–205.
Füllbrunn, S., Huber, C., Eckel, C., & Weitzel, U. (2024). Heterogeneity of beliefs and trading behavior: A reexamination. Journal of Financial and Quantitative Analysis, 59(3), 1337–1361.
Fuster, A., Laibson, D., & Mendel, B. (2010). Natural expectations and macroeconomic fluctuations. Journal of Economic Perspectives, 24(4), 67–84.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650.
Goldstein, D. G., & Gigerenzer, G. (2009). Fast and frugal forecasting. International Journal of Forecasting, 25(4), 760–772.
Hanaki, N., Hommes, C., Kopányi, D., Kopányi-Peuker, A., & Tuinstra, J. (2023). Forecasting returns instead of prices exacerbates financial bubbles. Experimental Economics, 26(5), 1185–1213.
Hennequin, M. (2018). Experiences and expectations in asset markets: An experimental study. Working paper, University of Amsterdam.
Hirota, S., & Sunder, S. (2007). Price bubbles sans dividend anchors: Evidence from laboratory stock markets. Journal of Economic Dynamics and Control, 31(6), 1875–1909.
Hommes, C. (2011). The heterogeneous expectations hypothesis: Some evidence from the lab. Journal of Economic Dynamics and Control, 35(1), 1–24.
Hommes, C. (2021). Behavioral and experimental macroeconomics and policy analysis: A complex systems approach. Journal of Economic Literature, 59(1), 149–219.
Hommes, C., Kopányi-Peuker, A., & Sonnemans, J. (2021). Bubbles, crashes and information contagion in large-group asset market experiments. Experimental Economics, 67(3), 120.
Hommes, C., Massaro, D., & Weber, M. (2019). Monetary policy under behavioral expectations: Theory and experiment. European Economic Review, 118, 193–212.
Hommes, C., Sonnemans, J., Tuinstra, J., & van de Velden, H. (2005). Coordination of expectations in asset pricing experiments. Review of Financial Studies, 18(3), 955–980.
Hommes, C., Sonnemans, J., Tuinstra, J., & Van de Velden, H. (2008). Expectations and bubbles in asset pricing experiments. Journal of Economic Behavior & Organization, 67(1), 116–133.
Hoshihata, T., Ishikawa, R., Hanaki, N., & Akiyama, E. (2017). Flat bubbles in long-horizon experiments: Results from two market conditions. Technical report, GREDEG Working Papers Series 2017-32.
Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, 93(5), 1449–1475.
Kirchler, M., Bonn, C., Huber, J., & Razen, M. (2015). The “inflow-effect”—trader inflow and price efficiency. European Economic Review, 77, 1–19.
Kocher, M. G., Schindler, D., Trautmann, S. T., & Xu, Y. (2019). Risk, time pressure, and selection effects. Experimental Economics, 22(1), 216–246.
Kocher, M. G., & Sutter, M. (2006). Time is money – Time pressure, incentives, and the quality of decision-making. Journal of Economic Behavior & Organization, 61(3), 375–392.
Kopányi-Peuker, A., & Weber, M. (2021). Experience does not eliminate bubbles: Experimental evidence. Review of Financial Studies, 34(9), 4450–4485.
Kopányi-Peuker, A., & Weber, M. (2024). The role of the end time in experimental asset markets. Journal of Corporate Finance, 88, 102647.
Kostyshyna, O., Petersen, L., & Yang, J. (2022). A horse race of monetary policy regimes: An experimental investigation. Technical report, National Bureau of Economic Research.
Kryvtsov, O., & Petersen, L. (2021). Central bank communication that works: Lessons from lab experiments. Journal of Monetary Economics, 117, 760–780.
Lahav, Y. (2011). Price patterns in experimental asset markets with long horizon. Journal of Behavioral Finance, 12(1), 20–28.
Marimon, R., Spear, S. E., & Sunder, S. (1993). Expectationally driven market volatility: An experimental study. Journal of Economic Theory, 61(1), 74–103.
Marimon, R., & Sunder, S. (1993). Indeterminacy of equilibria in a hyperinflationary world: Experimental evidence. Econometrica, 61(5), 1073–1107.
Marimon, R., & Sunder, S. (1994). Expectations and learning under alternative monetary regimes: An experimental approach. Economic Theory, 4(1), 131–162.
Mauersberger, F. (2021). Monetary policy rules in a non-rational world: A macroeconomic experiment. Journal of Economic Theory, 197, 105203.
Mokhtarzadeh, F., & Petersen, L. (2021). Coordinating expectations through central bank projections. Experimental Economics, 24(3), 883–918.
Moritz, B., Siemsen, E., & Kremer, M. (2014). Judgmental forecasting: Cognitive reflection and decision speed. Production and Operations Management, 23(7), 1146–1160.
Palan, S. (2013). A review of bubbles and crashes in experimental asset markets. Journal of Economic Surveys, 27(3), 570–588.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(3), 534.Google Scholar
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge University Press.10.1017/CBO9781139173933CrossRefGoogle Scholar
Petersen, L., & Rholes, R. (2022). Macroeconomic expectations, central bank communication, and background uncertainty: A covid-19 laboratory experiment. Journal of Economic Dynamics and Control, 143, 104460.10.1016/j.jedc.2022.104460CrossRefGoogle ScholarPubMed
Phillips, P. C., Shi, S., & Yu, J. (2015). Testing for multiple bubbles: Historical episodes of exuberance and collapse in the S&P 500. International Economic Review, 56(4), 10431078.10.1111/iere.12132CrossRefGoogle Scholar
Phillips, P. C., Wu, Y., & Yu, J. (2011). Explosive behavior in the 1990s Nasdaq: When did exuberance escalate asset values?. International Economic Review, 52(1), 201226.10.1111/j.1468-2354.2010.00625.xCrossRefGoogle Scholar
Razen, M., Huber, J., & Kirchler, M. (2017). Cash inflow and trading horizon in asset markets. European Economic Review, 92, 359384.10.1016/j.euroecorev.2016.11.010CrossRefGoogle Scholar
Rieskamp, J., & Hoffrage, U. (2008). Inferences under time pressure: How opportunity costs affect strategy selection. Acta Psychologica, 127(2), 258276.10.1016/j.actpsy.2007.05.004CrossRefGoogle ScholarPubMed
Salle, I.L.. (2023). What to target? Insights from a lab experiment. Journal of Economic Behavior & Organization, 212, 514533.10.1016/j.jebo.2023.05.031CrossRefGoogle Scholar
Smith, A., Lohrenz, T., King, J., Montague, P. R., & Camerer, C. F. (2014). Irrational exuberance and neural crash warning signals during endogenous experimental market bubbles. Proceedings of the National Academy of Sciences, 111(29), 1050310508.10.1073/pnas.1318416111CrossRefGoogle ScholarPubMed
Smith, V. L., Suchanek, G. L., & Williams, A. W. (1988). Bubbles, crashes, and endogenous expectations in experimental spot asset markets. Econometrica, 56(5), 11191151.10.2307/1911361CrossRefGoogle Scholar
Spiliopoulos, L., & Ortmann, A. (2018). The BCD of response time analysis in experimental economics. Experimental Economics, 21(2), 383433.10.1007/s10683-017-9528-1CrossRefGoogle ScholarPubMed
Spiliopoulos, L., Ortmann, A., & Zhang, L. (2018). Complexity, attention, and choice in games under time constraints: A process analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1609.Google ScholarPubMed
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate?. Behavioral and Brain Sciences, 23(5), 645–665.10.1017/S0140525X00003435CrossRefGoogle ScholarPubMed
Stöckl, T., Huber, J., & Kirchler, M. (2010). Bubble measures in experimental asset markets. Experimental Economics, 13(3), 284298.10.1007/s10683-010-9241-9CrossRefGoogle Scholar
Xiong, W., & Yu, J. (2011). The Chinese warrants bubble. American Economic Review, 101(6), 2723–53.10.1257/aer.101.6.2723CrossRefGoogle Scholar
Figure 1. An example of the computer screen. The graph shows past forecasts (blue) and past prices (red). The table provides the same information, and also displays the “Potential earnings”, i.e., the points awarded for each period if that period is selected for payment, as computed using Eq. (5). A forecast can be entered either by typing a number into the box at the top center of the screen or by clicking on the graph in the lower part. This example is from treatment $\mathbf{HLS}$, where participants must either press the ‘Enter’ key on the keyboard or click the blue ‘Submit’ button at the top of the screen to submit their forecast

Table 1. Overview of the treatments. The last two columns display the experimental market notations and the parameter values for each of the experiment’s two phases. In both phases, the interest rate is $r\!=\!0.05$. Each market consists of six human participants
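In learning-to-forecast asset markets of this design, the fundamental price is typically the mean dividend discounted at the interest rate, $p^f = \bar{y}/r$. As a sanity check on the parameters reported in the captions, the mean dividend below is my own inference from $p^f = 126.4$ and $r = 0.05$; it is not stated in this section.

```python
# Hedged sketch: fundamental price as the discounted mean dividend.
# y_bar = 6.32 is inferred from the reported p_f = 126.4 and r = 0.05,
# not a value stated in this section.
r = 0.05              # interest rate (Table 1)
y_bar = 6.32          # assumed mean dividend
p_f = y_bar / r       # fundamental price
print(round(p_f, 10)) # 126.4
```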

Figure 2. Median prices (thick black line) and prices in individual markets (gray lines) during the first phase of the three experimental treatments. The fundamental price, $p^f=126.4$, is indicated by the dashed horizontal line

Figure 3. Measures of price volatility (IQR, left panel, logarithmic scale) and mispricing (median of RAD, right panel) by treatment for each market of the first phase, computed over three different time periods: $11$–$50$ (blue dots), $1$–$145$ (black dots), and $106$–$145$ (red dots). The disks show the median over the markets
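Both measures in Figure 3 are straightforward to compute from a market's price series. A minimal sketch, assuming RAD follows the relative absolute deviation of Stöckl et al. (2010) with the constant fundamental $p^f = 126.4$ (the function names are mine):

```python
import numpy as np

def rad(prices, p_f=126.4):
    """Relative absolute deviation (Stöckl et al., 2010): mean absolute
    distance of prices from the fundamental, scaled by the fundamental."""
    prices = np.asarray(prices, dtype=float)
    return float(np.mean(np.abs(prices - p_f)) / p_f)

def iqr(prices):
    """Interquartile range of prices, the volatility measure in Figure 3."""
    q25, q75 = np.percentile(prices, [25, 75])
    return float(q75 - q25)

# A market stuck at the fundamental shows zero mispricing and zero volatility.
print(rad([126.4] * 40), iqr([126.4] * 40))  # 0.0 0.0
```

Applied per market over the three windows in the caption (periods 11–50, 1–145, 106–145), this reproduces the kind of per-market dots the figure plots.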

Figure 4

Table 2. $p$-values of the corresponding tests (see the last column) for various comparisons on the first-phase data

Table 3. $p$-values of the corresponding tests (see the last column) for comparisons based on all data

Figure 4. The fraction of participants who did not submit a forecast in a given time period, shown as a 5-period moving average across all markets over two phases. Red lines represent the LTP condition, and blue lines represent the HTP condition. The phase change, occurring after period 146 in the $\mathbf{LH}$ treatment and after period 159 in the $\mathbf{HL}$ and $\mathbf{HLS}$ treatments, is indicated by the vertical dashed lines
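The smoothing described in the Figure 4 caption is a plain 5-period moving average of the per-period non-submission fraction. A minimal sketch of such a smoother (the helper name and the trailing-window alignment are my assumptions; the caption does not state the window alignment):

```python
import numpy as np

def moving_average(series, window=5):
    """Rolling mean over a fixed window, as used to smooth the
    non-submission fractions in Figure 4 (alignment assumed)."""
    x = np.asarray(series, dtype=float)
    return np.convolve(x, np.ones(window) / window, mode="valid")

# One missed submission among six periods spreads over the windows it enters.
print(moving_average([0, 0, 1, 0, 0, 0], window=5))  # [0.2 0.2]
```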

Table 4. The average and median (calculated over all markets within a treatment and phase) of the estimated coefficients for the prediction rule in Eq. (6)

Figure 5. Scatter plots of estimated $(b_1,b_2)$ coefficients from market expectations, Eq. (6), for first-phase markets during periods 11–50 (left panel) and 106–145 (right panel). Prices converge for $(b_1,b_2)$ combinations inside the triangle and oscillate for combinations below the parabola
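The triangle and parabola in Figure 5 appear to be the standard stability and complex-root boundaries for a second-order linear recursion in $(b_1, b_2)$, i.e. dynamics of the form $p_{t+1} = a + b_1 p_t + b_2 p_{t-1}$. A hedged sketch of that classification (the exact boundaries the authors plot may include model-specific terms from Eq. (6)):

```python
def classify(b1, b2):
    """Classify (b1, b2) for the recursion p_{t+1} = a + b1*p_t + b2*p_{t-1}.
    Both roots of z**2 - b1*z - b2 = 0 lie inside the unit circle (the
    stability triangle) iff b2 < 1 - b1, b2 < 1 + b1, and b2 > -1.
    Below the parabola b2 = -b1**2 / 4 the roots are complex, so the
    dynamics oscillate. Assumed to match the regions drawn in Figure 5."""
    stable = (b2 < 1 - b1) and (b2 < 1 + b1) and (b2 > -1)
    oscillatory = b1 ** 2 + 4 * b2 < 0
    return stable, oscillatory

print(classify(0.5, 0.2))   # (True, False): monotone convergence
print(classify(1.8, -0.9))  # (True, True): damped oscillations
```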

Table 5. Classification of participants based on their forecasting behavior. Heuristic (7) is estimated for periods 11–50 and 106–145. The table reports the fractions of participant types within each market

Table 6. Participants’ transition matrix based on the classification of individual forecasting heuristics, derived from Eq. (7), estimated for periods 11–50 in each experimental phase. The data are pooled across all treatments, with LTP behavior in rows and HTP behavior in columns

Figure B1. Prices in the twelve $\boldsymbol{L}$ markets (blue thick lines). The black dashed horizontal line represents the fundamental price, $p^f=126.4$

Figure B2. Prices in the ten $\boldsymbol{H}$ markets (blue thick lines). The black dashed horizontal line represents the fundamental price, $p^f=126.4$

Figure B3. Prices in the nine $\boldsymbol{HS}$ markets (blue thick lines). The black dashed horizontal line represents the fundamental price, $p^f=126.4$

Table B1. Interquartile range and median relative absolute deviation (RAD) from the fundamental value for each market in the first phase of the experiment and selected time periods. Averages and medians are computed across all markets within the same treatment

Table B2. Market expectations in Eq. (6) for the first phase of the experiment. For parameter estimates, $^*$ denotes significance at the $10\%$ level, $^{**}$ at the $5\%$ level, and $^{***}$ at the $1\%$ level. For the Ljung-Box and Engle specification tests ($p$-values are shown in the LB and H columns), bold font indicates rejection at the 5% level of the null hypothesis of no residual structure (no autocorrelation and no heteroscedasticity, respectively)

Figure B4. Time evolution of the measure of mis-coordination of participants in the three treatments during the first 20 periods of the experiment. The measure is the median (over markets) of the standard deviation of individual forecasts

Supplementary material

Anufriev et al. supplementary material (File, 1.4 MB)