Simple linear regression is one of the most frequently employed statistical models. Linear regression is used to describe the relationship between two numerical variables, but it also serves as a building block for more complex statistical methods, such as multivariate ordination. We start by comparing the concepts of regression and correlation before introducing the equation of simple linear regression. We also explain the decomposition of the observed values of the response variable into fitted values and regression residuals. This is followed by a discussion of the hypotheses that can be tested for a regression model, distinguishing the F-ratio-based test of the whole model from the t tests of individual regression coefficients. The calculation of confidence and prediction intervals allows us to enhance diagrams displaying the fitted model. A separate section is devoted to regression diagnostic graphs and their interpretation, as well as to the effects of log-transforming the variables to linearise their relationship. Additional specialised sections deal with regression through the origin and its possible dangers, regression using a predictor with random variation, and linear calibration. The methods described in this chapter are accompanied by a carefully explained guide to the R code needed for their use, including the effects and lmodel2 packages.
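The workflow summarised above can be sketched with base R alone; the data frame and variable names below are hypothetical, invented for illustration, and the chapter itself covers each step in detail:

```r
# Simulated data for a hypothetical example: y depends linearly on x
set.seed(1)
dat <- data.frame(x = 1:20)
dat$y <- 2 + 0.5 * dat$x + rnorm(20, sd = 1)

fit <- lm(y ~ x, data = dat)   # fit the simple linear regression y = b0 + b1*x

summary(fit)                   # t tests of b0 and b1, plus the overall F-ratio test
confint(fit)                   # confidence intervals for the regression coefficients

# Decomposition of observed values into fitted values and residuals:
decomp <- fitted(fit) + residuals(fit)   # reproduces the observed y values

# Prediction interval for the response at a new predictor value
predict(fit, newdata = data.frame(x = 10), interval = "prediction")

plot(fit)                      # standard regression diagnostic graphs
```

Calling `plot()` on an `lm` object produces the usual diagnostic panels (residuals vs fitted, Q-Q plot, scale-location, residuals vs leverage); model II regressions of the kind handled by the lmodel2 package require that package's own fitting function rather than `lm()`.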