The chapter starts by describing the appearance and interpretation of a regression tree, followed by a more detailed explanation of the recursive partitioning algorithm used to construct tree models. We describe how the optimum tree complexity is chosen based on the results of a cross-validation procedure, and how a tree model can be simplified by pruning. The concepts of competing and surrogate predictors are also touched upon; both enable a more effective application of fitted tree models. The methods described in this chapter are accompanied by a carefully explained guide to the R code needed to apply them, in this case employing the rpart package.
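The workflow summarized above (fitting a regression tree, inspecting cross-validation results, and pruning to the chosen complexity) can be sketched with the rpart package as follows; this is a minimal illustration using the built-in `mtcars` data set rather than the chapter's own examples:

```r
library(rpart)

# Fit a regression tree predicting mpg from all other variables;
# method = "anova" requests a regression (rather than classification) tree
fit <- rpart(mpg ~ ., data = mtcars, method = "anova")

# The cross-validation results: the xerror column of the cp table
# guides the choice of tree complexity
printcp(fit)

# Prune back to the complexity parameter (cp) with the smallest
# cross-validated error
best_cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned <- prune(fit, cp = best_cp)
```

Because the cross-validation folds are chosen randomly, the selected `cp` (and hence the pruned tree) can vary between runs; setting a seed with `set.seed()` makes the result reproducible.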