
Chapter 1- PyTorch for Beginners: Basics


We are living in a tech-driven world where new technologies appear every year, so it is important for everyone to keep up to date with them. Artificial Intelligence is the most trending field of this era, and companies across industries are trying to integrate AI into their products and machinery.

Training a model to perform really well requires a large amount of data. This is where deep learning comes in: it is used in a variety of tasks such as image translation, image captioning, next-sentence prediction, generating new images, and many more.

PyTorch is an excellent choice when you are starting out with deep learning.

Taking into account all the advantages of knowing PyTorch, we have decided to write a series of blog posts on deep learning with PyTorch. We are starting with this first tutorial, which covers PyTorch basics.

What is PyTorch?

PyTorch is a deep learning framework and a scientific computing package; this is how the PyTorch team defines it. The original Torch library was built on the Lua programming language, and for ease of use it was reimplemented in Python by the Facebook AI Research team and many others.

It’s a Python-based scientific computing package targeted at two sets of audiences:

1- A replacement for NumPy to use the power of GPUs.

2- A deep learning research platform that provides maximum flexibility and speed.


PyTorch uses the tensor as its core data structure, which is similar to NumPy's ndarray. If you are wondering about this specific choice of data structure, the answer lies in the fact that, with the appropriate software and hardware, tensors can accelerate various mathematical operations. Since deep learning carries out these operations in huge numbers, they make a big difference in speed.

Why should I learn PyTorch?

In the previous section, we learned what PyTorch is; in this section, we look at why you should learn it.

There are many deep learning frameworks available other than PyTorch, such as Keras, TensorFlow, MXNet, Caffe, and many more. But what makes PyTorch different?

The goal of PyTorch is to offer maximum flexibility and speed in building our scientific algorithms while keeping the process extremely simple.

Some of the main features of PyTorch are:

1- PyTorch offers native support for Python and the use of all of its libraries, which makes it "Pythonic" in nature.

2- It is actively used by huge companies like Facebook, including its AI team and many of its subsidiaries.

3- PyTorch's APIs are easy to use, and because of this ease it is used by many Ph.D. scholars and researchers for their research.

4- The main feature of PyTorch is imperative programming, which means the program describes steps that change the state of the computer as they run. In this way it builds the computation graph at each step, making the graph dynamic in nature, as the sketch below shows.
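
To make this concrete, here is a minimal sketch of the imperative, define-by-run style (the variable names and values are ours, chosen purely for illustration):

import torch

# Each line executes immediately; the graph is built on the fly
x = torch.tensor(2.0, requires_grad=True)
y = x * 3        # the multiplication is recorded in the graph here
z = y ** 2       # the squaring is recorded here

# Backpropagate through the graph that was just built
z.backward()
print(x.grad)    # dz/dx = 18x, so tensor(36.) at x = 2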

Overview of PyTorch libraries

The most important PyTorch libraries are torch.nn, torch.optim, torch.utils.data, and torch.autograd.

Let us see what each of these libraries is used for.

1- Loading the dataset

The first step in any machine learning or deep learning project is to load and handle the dataset.

There are two important classes in torch.utils.data:

1- Dataset: It is used for loading custom datasets and built-in datasets. 

2- DataLoader: It is used for loading large datasets in parallel. It gives us options to shuffle the data, set the batch size, and choose the number of workers that load data in parallel, as the sketch after this list shows.
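
To make this concrete, here is a minimal sketch of a custom Dataset wrapped in a DataLoader (the class name and the random toy data are ours, for illustration only):

import torch
from torch.utils.data import Dataset, DataLoader

# A toy dataset serving (feature, label) pairs from in-memory tensors
class ToyDataset(Dataset):
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

dataset = ToyDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))

# Shuffle the data, batch it in groups of 10, load with 2 worker processes
loader = DataLoader(dataset, batch_size=10, shuffle=True, num_workers=2)

for batch_features, batch_labels in loader:
    print(batch_features.shape)   # torch.Size([10, 4])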

2- Defining the neural network

To define the neural network we use the torch.nn module. It helps to set up neural network layers such as fully connected layers, convolutional layers, activation and loss functions, etc.

After that, we have to update the weights and biases so that our neural network can learn. For this task we use the torch.optim module. We then perform a backward pass to compute the gradients of the scalar loss with respect to the network's tensors; this is easily done with the torch.autograd module.
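
Putting these three modules together, here is a minimal sketch of a single training step (the layer sizes, learning rate, and random data are ours, chosen for illustration):

import torch
import torch.nn as nn
import torch.optim as optim

# Define a small network with torch.nn
model = nn.Sequential(
    nn.Linear(4, 8),   # fully connected layer
    nn.ReLU(),         # activation function
    nn.Linear(8, 1),
)

criterion = nn.MSELoss()                             # loss function from torch.nn
optimizer = optim.SGD(model.parameters(), lr=0.01)   # optimizer from torch.optim

inputs = torch.randn(10, 4)
targets = torch.randn(10, 1)

optimizer.zero_grad()                     # clear old gradients
loss = criterion(model(inputs), targets)  # forward pass
loss.backward()                           # torch.autograd computes the gradients
optimizer.step()                          # update the weights and biases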

3- Performing inference and converting to other DL frameworks

Finally, we save the model using torch.save and later load it back to make further predictions. Running the trained model on new inputs is referred to as model inference.

You can also convert your PyTorch model into the ONNX format so that other deep learning frameworks, like MXNet, CNTK, and Caffe2, can use it.
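
A minimal sketch of saving, reloading, and exporting a model might look like this (we reuse the model from the previous snippet; the file names are ours):

# Save the learned parameters (the recommended state_dict approach)
torch.save(model.state_dict(), 'model.pth')

# Load them back into an identically defined model and run inference
model.load_state_dict(torch.load('model.pth'))
model.eval()                          # switch to inference mode
prediction = model(torch.randn(1, 4))

# Export to the ONNX format; a dummy input traces the model
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, 'model.onnx')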

Other PyTorch libraries

torchaudio: It is an audio library for PyTorch that is used to work with audio data, perform audio preprocessing, and deploy audio models to production.

Some examples are Cornell BirdCall Identification, UrbanSound8k, and many others.

torchtext: It is used for text data and is mainly used in natural language processing tasks. It provides many modules for text preprocessing.

Some examples are Sentiment Analysis, Question Answering, and many others.

torchvision: It deals with image data and its transformations. It is used in computer vision and deep learning (see the short sketch after this list).

Some examples are MNIST, COCO, CIFAR, and many more.

torchserve: It is used to deploy our machine learning model to production.
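
As a small taste of these libraries, here is a sketch of loading the built-in MNIST dataset with torchvision and a basic transform (the download path is ours):

import torchvision
import torchvision.transforms as transforms

# Convert images to tensors and normalize them
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),
])

# Download MNIST; the result plugs into the same DataLoader pattern as before
mnist = torchvision.datasets.MNIST(root='./data', train=True,
                                   download=True, transform=transform)
print(len(mnist))   # 60000 training images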

Introduction to tensors and their operations

Tensors are similar to NumPy’s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.

A tensor with one dimension is called a 1-D tensor; similarly, a tensor with two dimensions is a 2-D tensor, a tensor with three dimensions is a 3-D tensor, and in general a tensor with n dimensions is an n-D tensor.

Creating tensors

Let us create our first tensors using PyTorch. The first step is to import the torch library.

import torch

# Create tensors with custom values
x=torch.tensor(1.)
print(x)

# Create tensors with just ones in a column
y=torch.ones(5)
print(y)

# Create tensors with just zeros in a column
z=torch.zeros(5)
print(z)

# Create a tensor with multiple custom values
c = torch.tensor([21.0, 22.0, 23.0, 34.0, 55.0])
print(c)

In the above block of code, we have shown examples of 1-D tensors. In the next block, we will create tensors of more than one dimension.

# 2D tensors
m = torch.tensor([[5.0, 7.0],[2.0, 4.0]])
print(m)

# 3D tensors
n = torch.tensor([[[3., 4.], [5., 7.]],
                  [[15., 6.], [1., 2.]]])
print(n)

# To find out the shape of a tensor we use the shape attribute
print(m.shape)
print(n.shape)

Accessing elements in a tensor

To access an element in a tensor, we use square brackets [ ] and specify the index position or a range of indices inside them.

# Accessing a single element
# printing element at index 4
print(c[4])

# Accessing a range of values
print(c[1:4]) 

# Accessing element in 2D tensors
# Accessing element at row 1, column 0
print(m[1][0])   

# Accessing element in 3D tensors
print(n[1][0][0])

# All elements
print(m[:]) 

Converting NumPy arrays to tensors and vice versa

In the section above, we learned how to create tensors and access their elements; in this section, we will see how to convert a tensor to a NumPy array and vice versa.

import numpy as np

# Create a numpy array.
x = np.array([[1, 2], [3, 4]])

# Convert the numpy array to a torch tensor.
# (Note: the tensor and the array share the same memory.)
y = torch.from_numpy(x)

# Convert the torch tensor back to a numpy array.
z = y.numpy()

Performing Arithmetic Operations on tensors

There are some common operations that you can perform on tensors like addition, subtraction, multiplication, matrix multiplication, and division.

# Addition
t1 = torch.tensor([[5, 7, 4], [4, 9, 6]])
t2 = torch.tensor([[3, 2, -3], [-9, 3, 3]])
print(t1 + t2)
# or equivalently
print(torch.add(t1, t2))

# Subtraction
print(torch.sub(t1, t2))

# Element-wise multiplication
print(t1 * t2)

# Matrix multiplication (shapes must be compatible,
# so we multiply the 2x3 matrix t1 by the 3x2 transpose of t2)
print(torch.mm(t1, t2.t()))

# Division
print(t1 / t2)

# Some additional functions
a = torch.randn(4)
print(a)              # e.g. tensor([-2.0755,  1.0226,  0.0831,  0.4806])
print(torch.sqrt(a))  # e.g. tensor([    nan,  1.0112,  0.2883,  0.6933])
# (the square root of a negative number yields nan)

How to load tensors onto the CPU and GPU and vice versa

There are two versions of the PyTorch tensor implementation: one for the CPU and one for the GPU. The GPU is used when we have to perform massively parallel, fast computations.

To use the power of the GPU, we first have to move every tensor to the GPU; computations then run at a much faster rate.

If you don't have a GPU, you can use Google Colab or Kaggle kernels to get one. In our case, we are using Google Colab: go to the Runtime menu and change the runtime type from None to GPU.

Let's see the code:

# First we create a tensor on the CPU
tensor_cpu = torch.tensor([[3.0, 7.0], [5.0, 2.0]],
             device='cpu')

# Then we create a tensor on the GPU
tensor_gpu = torch.tensor([[3.0, 8.0], [4.0, 1.0]],
             device='cuda')

# Operations use CPU RAM and GPU memory respectively
tensor_cpu = tensor_cpu * 2
tensor_gpu = tensor_gpu * 2

# Move tensors between devices with .to()
tensor_cpu = tensor_cpu.to('cuda')   # CPU -> GPU
tensor_gpu = tensor_gpu.to('cpu')    # GPU -> CPU

# Use the command below (in Colab) to check GPU RAM and its consumption
!nvidia-smi
[Screenshot: checking GPU RAM and its consumption with nvidia-smi]

As we can see, we are using a P100 GPU, and the memory usage is 791MiB out of 16280MiB.

Wrapping up the session

So in this blog post, we covered what PyTorch is, why we should learn it, the PyTorch pipeline, the libraries in PyTorch, and an introduction to tensors and their operations.

In the next blog post, we will cover how to implement machine learning models using PyTorch. So stay tuned.