Support vector machines (SVMs) are one of the world's most popular machine learning algorithms. SVMs can be used for either classification problems or regression problems, which makes them quite versatile.
In this tutorial, you will learn how to build your first support vector machines model in Python from scratch using the breast cancer data set included with scikit-learn.
Table of Contents
You can skip to a specific section of this Python machine learning tutorial using the table of contents below:
- The Python Libraries We Will Need In This Tutorial
- The Data Set We Will Use In This Tutorial
- Splitting the Data Set Into Training Data and Test Data
- Training The Support Vector Machines Model
- Making Predictions With Our Support Vector Machines Model
- Assessing the Performance of Our Support Vector Machines Model
- The Full Code For This Tutorial
- Final Thoughts
The Python Libraries We Will Need In This Tutorial
You will be using a number of open-source Python libraries in this tutorial, including NumPy, pandas, matplotlib, and seaborn. Here are the imports that you'll need to run before getting started:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
Next up, you'll import the data set we will be using throughout this tutorial.
The Data Set We Will Use In This Tutorial
This tutorial makes use of the breast cancer data set that comes included with scikit-learn. Accordingly, we will now import that data set into our Python script.
First, import the load_breast_cancer function from the datasets module of scikit-learn with this command:
from sklearn.datasets import load_breast_cancer
Next, you need to create an instance of the breast cancer data set. The following statement should do the trick:
cancer_data = load_breast_cancer()
This cancer_data variable includes more than just the breast cancer data set. As an example, we will see shortly that there is a useful description contained in this cancer_data data structure.
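If you'd like to see everything that cancer_data contains before moving on, you can list its keys. This is a quick optional check; load_breast_cancer returns a Bunch object, which behaves like a Python dictionary (the exact set of keys can vary slightly between scikit-learn versions):
print(cancer_data.keys())
#Prints something like dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', ...])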
Because of this, the last step we need to take when importing the data set is to store the data alone in its own DataFrame called raw_data. Here is the code to do this:
raw_data = pd.DataFrame(cancer_data['data'], columns = cancer_data['feature_names'])
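Before digging into the description, you can take a quick peek at the first few rows of the DataFrame. This is an optional sanity check, not a required step:
print(raw_data.head())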
Let's investigate what's actually contained in this data set.
Every data set included in scikit-learn comes with a description field that can help you understand what the data set is describing.
Let's print this description. The following statement should do the trick:
print(cancer_data['DESCR'])
This generates:
.. _breast_cancer_dataset:
Breast cancer wisconsin (diagnostic) dataset
--------------------------------------------
**Data Set Characteristics:**
:Number of Instances: 569
:Number of Attributes: 30 numeric, predictive attributes and the class
:Attribute Information:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)
The mean, standard error, and "worst" or largest (mean of the three
worst/largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 0 is Mean Radius, field
10 is Radius SE, field 20 is Worst Radius.
- class:
- WDBC-Malignant
- WDBC-Benign
:Summary Statistics:
===================================== ====== ======
Min Max
===================================== ====== ======
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
===================================== ====== ======
:Missing Attribute Values: None
:Class Distribution: 212 - Malignant, 357 - Benign
:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian
:Donor: Nick Street
:Date: November, 1995
This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
https://goo.gl/U2Uwz2
Features are computed from a digitized image of a fine needle
aspirate (FNA) of a breast mass. They describe
characteristics of the cell nuclei present in the image.
Separating plane described above was obtained using
Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
Construction Via Linear Programming." Proceedings of the 4th
Midwest Artificial Intelligence and Cognitive Science Society,
pp. 97-101, 1992], a classification method which uses linear
programming to construct a decision tree. Relevant features
were selected using an exhaustive search in the space of 1-4
features and 1-3 separating planes.
The actual linear program used to obtain the separating plane
in the 3-dimensional space is that described in:
[K. P. Bennett and O. L. Mangasarian: "Robust Linear
Programming Discrimination of Two Linearly Inseparable Sets",
Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
.. topic:: References
- W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction
for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on
Electronic Imaging: Science and Technology, volume 1905, pages 861-870,
San Jose, CA, 1993.
- O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and
prognosis via linear programming. Operations Research, 43(4), pages 570-577,
July-August 1995.
- W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques
to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994)
163-171.
The most important takeaways from this data set description are:
- There are 569 observations in the data set
- Each observation has 30 numeric attributes
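You can verify both of these takeaways directly from the DataFrame with a quick sanity check:
print(raw_data.shape)
#Prints (569, 30): 569 observations, each with 30 numeric attributes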
Now that we have an understanding of how our data set is structured, let's move on to splitting our data set into training data and test data.
Splitting the Data Set Into Training Data and Test Data
To split our data set into training data and test data, the first thing we need to do is specify our x and y variables. Our x variable will be the raw_data pandas DataFrame that we created earlier. Our y variable needs to be parsed from the original cancer_data object that we created earlier, where it is stored under the key target.
More specifically, here is the code to create our x and y variables:
x = raw_data
y = cancer_data['target']
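As an optional check, you can confirm the class distribution mentioned in the description (212 malignant, 357 benign). In this data set's encoding, 0 corresponds to malignant and 1 to benign:
print(np.bincount(y))
#Prints [212 357]: 212 malignant (class 0) and 357 benign (class 1) observations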
We will be using scikit-learn's train_test_split function combined with list unpacking to split our data set into training data and test data (just like we did with linear regression and logistic regression earlier in this course).
First, you'll need to import the function with the following statement:
from sklearn.model_selection import train_test_split
Now you can create training data and test data for both x and y with the following statement:
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x, y, test_size = 0.3)
This splits the data such that the test data is 30% of the original data set (as indicated by the parameter test_size = 0.3).
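Note that train_test_split shuffles the data randomly, so your exact split (and the performance numbers later in this tutorial) will vary from run to run. If you want reproducible results, you can pass the optional random_state parameter; the specific value below is arbitrary:
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x, y, test_size = 0.3, random_state = 42)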
Now that our data is split, let's move on to training our first support vector machines model.
Training The Support Vector Machines Model
Before you can train your first support vector machines model, you'll need to import the model class from scikit-learn. The SVC class lives within scikit-learn's svm module. Here is the statement to import it:
from sklearn.svm import SVC
Now let's create an instance of this class and assign it to the variable model:
model = SVC()
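By default, SVC uses a radial basis function (RBF) kernel with a regularization parameter C of 1.0. If you'd like to experiment, you can set these explicitly; the linear-kernel alternative below is just an example to try, not a tuned choice:
#Equivalent to the default model
model = SVC(kernel = 'rbf', C = 1.0)
#A linear-kernel alternative you could experiment with
#model = SVC(kernel = 'linear')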
We can now train the SVM model using the same method as with our k-nearest neighbors model and our random forests model earlier in this course: by invoking the fit method on it and passing in x_training_data and y_training_data.
Here's the code to do this:
model.fit(x_training_data, y_training_data)
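Before moving on, one caveat worth knowing: SVMs are sensitive to feature scale, and the breast cancer features span very different ranges (mean area runs into the thousands while mean smoothness stays below 1). This tutorial trains on the raw features, but as an optional refinement you could standardize them first. A minimal sketch using scikit-learn's StandardScaler and make_pipeline:
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
#Scale each feature to zero mean and unit variance before fitting the SVM
scaled_model = make_pipeline(StandardScaler(), SVC())
scaled_model.fit(x_training_data, y_training_data)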
Our model has now been trained. Let's move on to making predictions with the model in the next section of this tutorial.
Making Predictions With Our Support Vector Machines Model
Any machine learning model created using scikit-learn can be used to make predictions by simply invoking the predict method on it and passing in the array of values that you'd like to generate predictions from.
In this case, here is the Python statement that you would use to store predictions from x_test_data in a variable called predictions:
predictions = model.predict(x_test_data)
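If you'd like to inspect the output, predictions is a NumPy array containing one 0 or 1 label per test observation. A quick optional peek:
#Compare the first ten predictions against the true labels
print(predictions[:10])
print(y_test_data[:10])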
We'll assess the performance of our model next.
Assessing the Performance of Our Support Vector Machines Model
We'll use the same performance measurement techniques for our support vector machines model as we did with the other classification models we've built in this course: a classification_report and a confusion_matrix.
To start, let's import these functions from scikit-learn:
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
First let's generate our classification_report:
print(classification_report(y_test_data, predictions))
This generates:
              precision    recall  f1-score   support

           0       1.00      0.84      0.91        67
           1       0.90      1.00      0.95       104

    accuracy                           0.94       171
   macro avg       0.95      0.92      0.93       171
weighted avg       0.94      0.94      0.93       171
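To interpret these numbers: precision is the fraction of predicted members of a class that actually belong to it, recall is the fraction of actual members of a class that the model found, and f1-score is the harmonic mean of the two. For example, the recall of 0.84 for class 0 (malignant) means the model correctly identified 84% of the malignant tumors in the test set, and the support column shows how many test observations belong to each class (67 and 104 here). Because the train/test split is random, your exact numbers will differ.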
Next let's generate our confusion matrix:
print(confusion_matrix(y_test_data, predictions))
This generates:
[[ 56  11]
 [  0 104]]
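In scikit-learn's confusion matrix, rows correspond to the true classes and columns to the predicted classes. The output above therefore reads: 56 malignant tumors classified correctly, 11 malignant tumors misclassified as benign, 0 benign tumors misclassified as malignant, and 104 benign tumors classified correctly. Since seaborn is already imported, you could also visualize the matrix as a heatmap; this is an optional sketch, with the styling arguments chosen arbitrarily:
sns.heatmap(confusion_matrix(y_test_data, predictions), annot = True, fmt = 'd')
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()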
The Full Code For This Tutorial
You can view the full code for this tutorial in this GitHub repository. It is also pasted below for your reference:
#Data imports
import pandas as pd
import numpy as np
#Visualization imports
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
#Import the data set from scikit-learn
from sklearn.datasets import load_breast_cancer
cancer_data = load_breast_cancer()
raw_data = pd.DataFrame(cancer_data['data'], columns = cancer_data['feature_names'])
# print(cancer_data['DESCR'])
#Split the data set into training data and test data
x = raw_data
y = cancer_data['target']
from sklearn.model_selection import train_test_split
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x, y, test_size = 0.3)
#Train the SVM model
from sklearn.svm import SVC
model = SVC()
model.fit(x_training_data, y_training_data)
#Make predictions with the model
predictions = model.predict(x_test_data)
#Measure the performance of our model
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(y_test_data, predictions))
print(confusion_matrix(y_test_data, predictions))
Final Thoughts
In this tutorial, you learned how to build Python support vector machines models.
Here is a brief summary of what was discussed in this tutorial:
- How to import and load the built-in breast cancer data set from scikit-learn
- How to print the descriptions of the built-in data sets included with scikit-learn
- How to split your data set into training data and test data using scikit-learn
- How to import the SVC model from scikit-learn's svm module
- How to train an SVM model
- How to make predictions with a support vector machines model in Python
- How to measure the performance of a support vector machines model using the classification_report and confusion_matrix functions