4 - Evaluation Methods
Summary
After developing statistical methods for recommender systems, it is important to evaluate their performance in different application settings. Broadly speaking, there are two kinds of evaluation, depending on whether a recommendation algorithm, or more precisely, the model used in the algorithm, has been deployed to serve users:
1. Predeployment offline evaluation: A new model must show strong signs of performance improvement over existing baselines before being deployed to serve real users. To ascertain the potential of a new model before testing it on real user visits, we compute various performance measures on retrospective (historical) data. We refer to this as offline evaluation. To perform such offline evaluation, we need to log data that record past user-item interactions in the system. Model comparison is performed by computing various offline metrics based on such data (a small sketch of this kind of computation appears after this list).
2. Postdeployment online evaluation: Once the model performance looks promising based on offline metrics, we test it on a small fraction of real user visits. We refer to this as online evaluation. To perform online evaluation, it is typical to run randomized experiments online. A randomized experiment, also referred to as an A/B test or a bucket test in web applications, compares a new method to an existing baseline. It is conducted by assigning two random user or visit populations to the treatment bucket and the control bucket, respectively. The treatment bucket is typically smaller than the control because it serves users according to the new recommendation model that is being tested, whereas the control bucket serves users using the status quo. After running such a bucket test for a certain time period, we gauge model performance by comparing metrics computed from the data collected in the corresponding buckets.
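To make the offline computation in item 1 concrete, the following is a minimal sketch, not the book's method: it generates synthetic interaction logs, splits them chronologically into training and test sets, and scores a global-mean baseline with one common offline metric, out-of-sample root mean squared error (RMSE). All data, the split fraction, and the baseline are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged user-item interactions: (user, item, rating, timestamp).
n = 10_000
log = {
    "user":   rng.integers(0, 500, n),
    "item":   rng.integers(0, 200, n),
    "rating": rng.integers(1, 6, n).astype(float),
    "time":   rng.random(n),
}

# Chronological split: train on the earlier 80% of interactions and test on
# the rest, so the evaluation is out-of-sample with respect to time.
order = np.argsort(log["time"])
cut = int(0.8 * n)
train_idx, test_idx = order[:cut], order[cut:]

# A trivial baseline model: predict every rating with the training-set mean.
global_mean = log["rating"][train_idx].mean()
pred = np.full(test_idx.size, global_mean)

# Out-of-sample RMSE on the held-out (later) interactions.
rmse = np.sqrt(np.mean((log["rating"][test_idx] - pred) ** 2))
print(f"out-of-sample RMSE of global-mean baseline: {rmse:.3f}")
```

In the terms of this chapter, a candidate model would replace the global-mean baseline and be compared against it on the same held-out data.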
In this chapter, we describe several ways to measure the performance of recommendation models and discuss their strengths and weaknesses. We start in Section 4.1 with traditional offline evaluation metrics that measure out-of-sample predictive accuracy on retrospective ratings data. Our use of the term rating is generic and refers to both explicit ratings like star ratings on movies and implicit ratings (also called responses) like clicks on recommended items (we use ratings and responses interchangeably). In Section 4.2, we discuss online evaluation methods, describing both performance metrics and how to properly set up online bucket tests.
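As an illustration of the online side, here is a hedged sketch of a bucket test: users are deterministically hashed into a small treatment bucket or the control, and the observed click-through rates (CTRs) of the two buckets are compared with a standard two-proportion z-test. The user ids, the counts, and the 5 percent treatment fraction are hypothetical choices for illustration, not values from the text.

```python
import hashlib
import math

def bucket(user_id: str, treatment_fraction: float = 0.05) -> str:
    """Deterministically map a user to a bucket by hashing the user id,
    so the same user always receives the same experience."""
    u = int(hashlib.md5(user_id.encode()).hexdigest(), 16) / 2**128
    return "treatment" if u < treatment_fraction else "control"

def ctr_z_test(clicks_t, views_t, clicks_c, views_c):
    """Two-proportion z-test on the click-through rates of the two buckets."""
    p_t, p_c = clicks_t / views_t, clicks_c / views_c
    p = (clicks_t + clicks_c) / (views_t + views_c)            # pooled CTR
    se = math.sqrt(p * (1 - p) * (1 / views_t + 1 / views_c))  # pooled std. error
    return p_t, p_c, (p_t - p_c) / se

# Bucket assignment is sticky across visits:
print(bucket("user-12345"))  # same output every time for this id

# Hypothetical aggregated counts after running the test for some period:
p_t, p_c, z = ctr_z_test(clicks_t=1_300, views_t=50_000,
                         clicks_c=23_000, views_c=950_000)
print(f"treatment CTR={p_t:.4f}, control CTR={p_c:.4f}, z={z:.2f}")
# |z| > 1.96 suggests a CTR difference significant at the 5% level.
```

Hashing the user id (rather than randomizing each visit) is one common way to keep a user's experience consistent for the duration of the test; the choice of test statistic depends on the metric being compared.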