
Chapter 7: Supervised Machine Learning

pp. 267-340

Authors

University of British Columbia, Vancouver

Summary

Whoso neglects learning in his youth, loses the past and is dead for the future.

– Euripides (484 BC – 406 BC), Phrixus, Frag. 927

Learning is the ability of an agent to improve its behavior based on experience. This could mean any of the following:

  • The range of behaviors is expanded; the agent can do more.

  • The accuracy on tasks is improved; the agent can do things better.

  • The speed is improved; the agent can do things faster.

The ability to learn is essential to any intelligent agent. As Euripides pointed out, learning involves an agent remembering its past in a way that is useful for its future.

This chapter considers the problem of making a prediction as supervised learning: given a set of training examples made up of input–output pairs, predict the output of a new example where only the inputs are given. We explore four approaches to learning: choosing a single hypothesis that fits the training examples well, predicting directly from the training examples, selecting the subset of a hypothesis space consistent with the training examples, or (in Section 10.4 (page 512)) predicting based on the posterior probability distribution of hypotheses conditioned on the training examples.
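Two of the approaches above can be illustrated with a minimal sketch. The toy data and function names below are illustrative, not from the text: predicting directly from the training examples is shown by 1-nearest-neighbour prediction, and choosing a single hypothesis by a deliberately simple constant hypothesis (always predict the mean output).

```python
# Training examples: input-output pairs (x, y). Hypothetical toy data.
train = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]

def predict_nearest(x):
    """Predict directly from the training examples: return the output of
    the example whose input is closest to x (1-nearest-neighbour)."""
    return min(train, key=lambda ex: abs(ex[0] - x))[1]

def predict_mean(x):
    """A single, very simple hypothesis fit to the data: ignore x and
    always predict the mean of the training outputs."""
    return sum(y for _, y in train) / len(train)

print(predict_nearest(2.2))  # output of the closest example: 4.1
print(predict_mean(2.2))     # the constant hypothesis: 5.05
```

Richer hypothesis spaces (linear functions, decision trees, neural networks) replace `predict_mean` with a hypothesis chosen to fit the examples well, but the interface is the same: a function from inputs to a predicted output.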

Chapter 10 considers learning probabilistic models. Chapter 12 covers reinforcement learning. Section 15.2 (page 701) considers learning relational representations.

Learning Issues

The following components are part of any learning problem:

Task: The behavior or task that is being improved.

Data: The experiences that are used to improve performance in the task, usually in the form of a sequence of examples.

Measure of improvement: How the improvement is measured – for example, new skills that were not present initially, increasing accuracy in prediction, or improved speed.
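A minimal sketch of one common measure of improvement for a prediction task is classification accuracy on a set of examples. The predictor and data below are hypothetical, chosen only to make the measure concrete:

```python
def accuracy(predict, examples):
    """Fraction of examples whose output the predictor gets right."""
    correct = sum(1 for x, y in examples if predict(x) == y)
    return correct / len(examples)

# Hypothetical predictor: classify a number as "big" if it exceeds 5.
predict = lambda x: "big" if x > 5 else "small"

# Hypothetical examples; the last one is mislabeled by the predictor.
test_examples = [(3, "small"), (7, "big"), (6, "big"), (2, "big")]

print(accuracy(predict, test_examples))  # 3 of 4 correct -> 0.75
```

An agent whose accuracy on such examples rises as it sees more data is improving under this measure; other tasks call for other measures, such as squared error for numeric prediction or time taken for speed.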

Consider the agent internals of Figure 2.9 (page 66). The problem of learning is to take in prior knowledge and data (e.g., about the experiences of the agent) and to create an internal representation (the knowledge base) that is used by the agent as it acts.

Learning techniques face the following issues:

Task: Virtually any task for which an agent can get data or experiences can be learned.
