Starting from this chapter, Part III introduces several commonly used algorithms in pattern recognition and machine learning. The support vector machine (SVM) starts from a simple and beautiful idea: the large margin. We first show that, in order to realize this idea, we may need to simplify our problem setup by assuming a linearly separable binary classification task. Then we visualize and calculate the margin to reach the SVM formulation, which is complex and difficult to optimize. We apply the simplification procedure again until the formulation becomes viable, briefly mention the primal-dual relationship, but do not go into the details of its optimization. We then show that the simplifying assumptions (linear, separable, and binary) can be relaxed so that the SVM can solve more difficult tasks, and the key tools for these relaxations, slack variables and kernel methods, are also useful in many other tasks.
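To make the large-margin idea concrete, the following sketch trains a linear SVM on a toy linearly separable binary problem by sub-gradient descent on the hinge-loss (soft-margin) objective. The data, the constant C, and the learning rate are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Toy linearly separable binary data (assumed for illustration);
# labels are +1 / -1 as in the standard SVM setup.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(20, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * 20 + [-1.0] * 20)

# Sub-gradient descent on the soft-margin objective
#   0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i (w . x_i + b))
w = np.zeros(2)
b = 0.0
C, lr = 1.0, 0.01  # assumed hyperparameters
for _ in range(2000):
    margins = y * (X @ w + b)
    active = margins < 1  # points on the wrong side of the margin
    grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
    grad_b = -C * y[active].sum()
    w -= lr * grad_w
    b -= lr * grad_b

# On separable data every training point should end up correctly classified.
preds = np.sign(X @ w + b)
print((preds == y).all())
```

On this well-separated toy data the learned hyperplane classifies all training points correctly; the regularization term 0.5 ||w||^2 is what pushes the solution toward the maximum-margin separator.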