
APPROXIMATING THE VALUE FUNCTION FOR OPTIMAL EXPERIMENTATION

Published online by Cambridge University Press:  14 November 2018

Hans M. Amman* (University of Amsterdam)
David A. Kendrick (University of Texas)
Marco P. Tucci (University of Siena)

*Address correspondence to: Hans M. Amman, Faculty of Economics and Business, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands; e-mail: amman@uva.nl, hans.amman@gmail.com. Mobile: +31651532162.

Abstract

In the economics literature, there are two dominant approaches to solving models with optimal experimentation (also called active learning). The first approach is based on the value function; the second is an approximation method. In principle, the value function approach is the preferred method. However, it suffers from the curse of dimensionality and is applicable only to small problems with a limited number of policy variables. The approximation method accommodates a computationally larger class of models, but may produce results that deviate from the optimal solution. Our simulations indicate that when the effects of learning are limited, the differences may be small. However, when there is sufficient scope for learning, the value function solution appears more aggressive in its use of the policy variable.
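The curse of dimensionality referred to above can be made concrete with a minimal sketch: a tensor-product grid over the state space grows exponentially in the number of state dimensions, which is what confines tabular value-function (dynamic-programming) methods to small problems. The code below is an illustrative toy, not the algorithm studied in the article; the function names, grid sizes, and discount factor are all hypothetical.

```python
import numpy as np

def grid_size(points_per_dim, n_dims):
    """Number of nodes in a tensor-product discretization of the state space.

    Grows as points_per_dim ** n_dims, which is the source of the
    curse of dimensionality for tabular value-function methods.
    """
    return points_per_dim ** n_dims

def value_iteration(rewards, transition, beta=0.95, tol=1e-8):
    """Tabular value iteration on a discretized, deterministic problem.

    rewards[s, a]    : immediate reward of action a in state s
    transition[s, a] : index of the next state after action a in state s
    Returns the converged value array and the greedy policy.
    """
    n_states, n_actions = rewards.shape
    V = np.zeros(n_states)
    while True:
        # Bellman update: action values, then the envelope over actions
        Q = rewards + beta * V[transition]        # shape (n_states, n_actions)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# With 10 grid points per dimension, one state variable needs 10 nodes,
# but six state variables already need 1,000,000 nodes:
# grid_size(10, 1) -> 10, grid_size(10, 6) -> 1000000
```

Approximation methods sidestep this blow-up by replacing the tabulated value function with a parametric (e.g. quadratic) approximation, at the cost of possible deviations from the optimal solution, as the abstract notes.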

Information

Type: Articles
Copyright: © Cambridge University Press 2018
