
Support Vector Machine: Learn to implement SVM in Python

In this tutorial, we learn about the Support Vector Machine, the types of SVM, and its implementation in Python from scratch.


Introduction to Support Vector Machine

In this machine learning series, we now move on to the fifth algorithm: the Support Vector Machine. The Support Vector Machine solves classification as well as regression problems. It belongs to the category of deterministic algorithms (given a particular input, it will always produce the same output).

Let us understand the algorithm with the help of an example.

A few days ago, my niece was buying fruit from a fruit seller. She was very confused about which fruit to buy because two of the fruits looked similar to her. She asked me, "Uncle, which one is the apple? They all look very similar to me." She was confused between apples and green pears because of their similar shapes. This kind of classification problem is exactly what a Support Vector Machine can solve.

A Support Vector Machine is mainly used to solve classification problems. The above example gives an intuition of what a Support Vector Machine does: it draws a boundary that separates the two classes. Let us represent the above example in a coordinate system.

[Figure: the two classes plotted in a coordinate space, with a red separating line and orange points near it]

Let us understand the technical terms behind the SVM.

1- The red line that separates the two classes is called a hyperplane.

2- The points nearest to the hyperplane (specifically, the orange points) are called support vectors.

Types of Support Vector Machine

There are basically two types of SVM.

1- Linear SVM- It creates a line (or, in higher dimensions, a hyperplane) that separates the data into classes. It is used when the dataset is linearly separable.

2- Non-linear SVM- It is used to classify a dataset that is not linearly separable, typically by mapping it into a higher-dimensional space (a short library-based sketch of both types follows).
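
For orientation only, here is a minimal scikit-learn sketch of how the two types are typically used in practice. This library is not used elsewhere in this tutorial (we implement a linear SVM from scratch with NumPy below), and the half-moons toy dataset here is made up purely for illustration.

from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC, SVC

# A small toy dataset that is not linearly separable (two interleaving half-moons)
X_train, y_train = make_moons(n_samples=200, noise=0.1, random_state=0)

# Linear SVM: fits a single separating hyperplane directly
linear_clf = LinearSVC(C=1.0, max_iter=10000)
linear_clf.fit(X_train, y_train)

# Non-linear (kernel) SVM: the RBF kernel implicitly maps the data into a
# higher-dimensional space where a separating hyperplane can be found
kernel_clf = SVC(kernel='rbf', C=1.0, gamma='scale')
kernel_clf.fit(X_train, y_train)

print(linear_clf.score(X_train, y_train), kernel_clf.score(X_train, y_train))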

What is linear SVM and how does it work

The goal of a linear SVM is to separate the dataset into two classes by creating a hyperplane. In two dimensions, this means finding a line that splits the data into two groups or classes.

So the workflow is as follows:

First, we draw all the possible lines that separate the two classes, and then we pick the line with the highest margin.

So the question arises: why is the above line the best split and not this one?

[Figure: an alternative separating line with a narrower margin]

This is because we choose only the line with the widest margin separating the two groups or classes.

In other words, an SVM model tries to maximize the distance between the two classes by creating a well-defined decision boundary.
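
To make "widest margin" concrete: for a hyperplane w·x + b = 0, the margin width is 2/||w||, and a training point x_i with label y_i lies on the correct side with a full margin when y_i(w·x_i + b) >= 1. The short sketch below uses made-up example weights (not values taken from this tutorial's data) just to show the calculation:

import numpy as np

def margin_width(w):
    # Distance between the two margin boundaries w.x + b = +1 and w.x + b = -1
    return 2 / np.linalg.norm(w)

def satisfies_margin(w, b, x, y):
    # True if the point is on the correct side of the hyperplane with a full margin
    return y * (np.dot(w, x) + b) >= 1

# Two hypothetical candidate hyperplanes: SVM prefers the one with the
# smaller ||w||, because that one has the wider margin
w_narrow, b_narrow = np.array([2.0, 2.0]), -6.0
w_wide, b_wide = np.array([0.5, 0.5]), -1.5

print(margin_width(w_narrow))  # ~0.71
print(margin_width(w_wide))    # ~2.83
print(satisfies_margin(w_wide, b_wide, np.array([4.0, 4.0]), +1))  # True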

What is Non-linear SVM and how to make the data separable

Non-linear SVM is also called kernel SVM; it maps the dataset into a higher-dimensional space where it becomes separable.

The graph below shows a dataset that is not linearly separable.

[Figure: a dataset that is not linearly separable]

So the question arises: how do we make this data separable?

There are two ways to solve this problem:

1- Map the dataset into some higher-dimensional feature space where the data points become separable.

[Figure: mapping the dataset into a higher dimension, where it becomes separable]

2- Add polynomial features or similarity features.

When we add polynomial features to the dataset, in some cases it becomes linearly separable.

In the figure below, the dataset is not separable along x1 alone, but if we add the feature x2 = (x1)^2, the data becomes linearly separable (a small sketch of this trick follows the figure).

[Figure: adding the feature x2 = (x1)^2 makes the data linearly separable]
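
As a small illustration of the x2 = (x1)^2 trick described above, the sketch below uses a made-up one-dimensional dataset (not the dataset from the figure): one class sits between the other on the x1 axis, so no single threshold separates them, but after adding the squared feature a horizontal line does.

import numpy as np
import matplotlib.pyplot as plt

# 1-D data: class -1 sits in the middle, class +1 on both sides,
# so no single threshold on x1 can separate them
x1 = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=float)
y  = np.array([ 1,  1,  1, -1, -1, -1, 1, 1, 1])

# Add the polynomial feature x2 = x1^2
x2 = x1 ** 2

# In the (x1, x2) plane the classes can now be split by a horizontal line,
# for example x2 = 2.5
plt.scatter(x1[y == 1], x2[y == 1], marker='+', s=120)
plt.scatter(x1[y == -1], x2[y == -1], marker='_', s=120)
plt.axhline(2.5, linestyle='--')
plt.xlabel('x1')
plt.ylabel('x2 = x1^2')
plt.show()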

Next in this SVM Tutorial, we will see the implementation of SVM in Python.

Implementation of SVM in Python from scratch

The steps involved in writing the Support Vector Machine (SVM) code are:

Step 1– We import all the required libraries

%matplotlib inline
import matplotlib.pyplot as plt  # to plot our data and model visually
import numpy as np               # to help us perform math operations

Step 2– Define our data. Each input row is of the form (x1, x2, bias), where the constant -1 acts as the bias term so that the bias is learned as part of the weight vector. After that, we define the output labels, which are either -1 or 1.

X = np.array([
    [-2,4,-1],
    [4,1,-1],
    [1, 6, -1],
    [2, 4, -1],
    [6, 2, -1],
])

y = np.array([-1,-1,1,1,1])

Step 3– Plot the data on a 2D graph.

#for each example
for d, sample in enumerate(X):
    # Plot the negative samples (the first 2)
    if d < 2:
        plt.scatter(sample[0], sample[1], s=120, marker='_', linewidths=2)
    # Plot the positive samples (the last 3)
    else:
        plt.scatter(sample[0], sample[1], s=120, marker='+', linewidths=2)

# Draw a possible hyperplane that separates the two classes.
# We pick two points and draw the line between them (a random guess).
plt.plot([-2,6],[6,0.5])

Step 4– We define a function named svm_plot that learns the separating hyperplane between the two classes using stochastic gradient descent on the hinge loss.

def svm_plot(X,Y):
    #Initialize SVMs weight vector with zeros 
    w = np.zeros(len(X[0]))
    #Now we set the learning rate
    eta = 1
    #how many iterations to train for
    epochs = 100000
    #store misclassifications so we can plot how they change over time
    errors = []
    
    # Loop over the epochs and perform stochastic gradient descent
    for epoch in range(1, epochs):
        error = 0
        for i, x in enumerate(X):
            # Misclassified or inside the margin: Y[i] * (X[i] . w) < 1
            if (Y[i] * np.dot(X[i], w)) < 1:
                # Update the weights using the hinge-loss gradient plus the regularization term
                w = w + eta * ((X[i] * Y[i]) + (-2 * (1/epoch) * w))
                error = 1
            else:
                # Correctly classified: apply only the regularization update
                w = w + eta * (-2 * (1/epoch) * w)
        errors.append(error)
        
    # Plot the rate of classification errors during training
    plt.plot(errors, '|')
    plt.ylim(0.5, 1.5)
    plt.gca().set_yticklabels([])
    plt.xlabel('Epoch')
    plt.ylabel('Misclassified')
    plt.show()
    
    return w

Step 5– After that, we call our svm_plot function and store the result in the w variable.

w = svm_plot(X,y)

Output Screenshot:

[Screenshot: plot of misclassifications per epoch during training]

Step 6– After that, we plot the final result along with two test samples.

for d, sample in enumerate(X):
    # Plot the negative samples
    if d < 2:
        plt.scatter(sample[0], sample[1], s=120, marker='_', linewidths=2)
    # Plot the positive samples
    else:
        plt.scatter(sample[0], sample[1], s=120, marker='+', linewidths=2)

# Add our test samples
plt.scatter(2, 2, s=120, marker='_', linewidths=2, color='yellow')
plt.scatter(4, 3, s=120, marker='+', linewidths=2, color='blue')

# Plot the hyperplane calculated by svm_plot()
# (note: this reuses the names X and Y for the quiver arguments)
x2 = [w[0], w[1], -w[1], w[0]]
x3 = [w[0], w[1], w[1], -w[0]]

x2x3 = np.array([x2, x3])
X, Y, U, V = zip(*x2x3)
ax = plt.gca()
ax.quiver(X, Y, U, V, scale=1, color='blue')

Output Screenshot:

[Screenshot: the final plot with the training samples, the two test samples, and the learned hyperplane]
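
The plot shows the learned hyperplane visually; to classify the two test samples numerically, we can take the sign of the dot product with the learned weight vector, remembering to append the same -1 bias term used for the training data. A minimal sketch, assuming w is the vector returned by svm_plot above:

# Append the bias term (-1) to each test sample, then take the sign of x . w
test_samples = np.array([
    [2, 2, -1],   # plotted with the '_' marker above (negative class)
    [4, 3, -1],   # plotted with the '+' marker above (positive class)
])

for x in test_samples:
    print(x[:2], '->', np.sign(np.dot(x, w)))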

Pros and Cons of using Support Vector Machine

Pros:

  1. Support Vector Machine works very well when there is a clear margin of separation between classes.
  2. It is effective in high-dimensional spaces.
  3. It is effective in cases where the number of dimensions is greater than the number of samples.
  4. It uses a subset of the training points in the decision function (the support vectors), so it is also memory efficient.

Cons:

  1. It doesn't perform well on large datasets because the required training time is high.
  2. It also doesn't perform well when the dataset is noisy, i.e. when the target classes overlap.
  3. It has high speed and memory requirements during both training and testing.

Real-life Applications of SVM

1- Image classification: for example, classifying alcoholic beverages versus non-alcoholic beverages.

2- Face detection: identifying faces in images by using a kernel function.

3- Document classification: deciding whether an article is sports news or tech news.

4- Spam detection: detecting whether an email is spam or not.

5- Handwriting recognition: distinguishing human handwriting from computer-printed characters.

6- Time series forecasting: forecasting next month's electricity bill.

7- Customer churn prediction: predicting whether a customer will cancel their membership plan in the future.

Wrapping up the Session

In this SVM tutorial, we learned what SVM is and how it works. After that, we learned about the types of Support Vector Machine, and then we implemented the SVM algorithm in Python from scratch. We also discussed some of the benefits and drawbacks of using SVM, and finally we looked at real-life applications of SVM.

If you liked this SVM tutorial, please share it with your friends and colleagues, and subscribe to our newsletter to keep up to date.

You can connect with me on LinkedIn, Twitter, and Instagram.

LinkedIn – https://www.linkedin.com/in/abhishek-kumar-singh-8a6326148

Twitter- https://twitter.com/Abhi007si

Instagram- www.instagram.com/dataspoof