CNN Components

Convolutional Neural Network Components

Introduction

The goal of this module is to explore convolutional operations and the building blocks of convolutional neural networks (CNNs) before we dive into building CNNs using PyTorch. Fortunately, CNNs, especially simple architectures, rely on a small set of building blocks. As a result, understanding these components goes a long way toward understanding how CNNs work and are assembled.

In the code block below, I am importing torch, torch.nn, and torch.nn.functional, which is assigned the alias F. The torch.nn.functional subpackage provides functional implementations (i.e., they are coded and executed like functions) that complement many of the class-based layers and building blocks available in PyTorch. For the demonstrations in this module, the functional implementations are simpler to use since I don’t have to subclass nn.Module and create an instance.

import torch
import torch.nn
import torch.nn.functional as F

2D Convolution

We will begin by building a random tensor on which to perform operations. This tensor is 4D with a shape of (1,1,9,9) and is meant to represent a tensor with the following dimensions: (batch, channels, height, width). As you will see in later modules, this is a common tensor shape when feeding images into a CNN in batches. I have also rescaled the random values from the range 0 to 1 to the range -1 to 1.

Next, I create another tensor that is meant to represent a kernel or moving window. This moving window has a shape of (1,1,3,3), which corresponds to (output channels, input channels, kernel height, kernel width). So, it represents a single kernel that will pass over the single-channel input tensor to create a single feature map. The weights associated with this tensor would be learned during the training process to create a transformation of the input image or feature map that captures some useful information. However, here I am not actually building a CNN architecture or training it. Instead, I am just demonstrating convolutional operations using some random data of the correct shape. As a result, I have simply filled the tensor with 1s.

inT = (torch.rand((1,1,9,9))*2)-1
inT
tensor([[[[-0.1700,  0.6824, -0.9379,  0.7623, -0.5802, -0.3197, -0.5944,
           -0.2823,  0.3378],
          [-0.9101, -0.2737, -0.0789,  0.9319,  0.0469, -0.6607, -0.4745,
            0.1037,  0.9737],
          [-0.5054, -0.7446,  0.9377, -0.3681, -0.3094,  0.9986, -0.0977,
           -0.9672, -0.3601],
          [ 0.9141, -0.0173,  0.5228,  0.9288,  0.0704,  0.5076,  0.1044,
            0.0997, -0.4263],
          [ 0.8673, -0.7768,  0.4729,  0.9279,  0.7871,  0.8790,  0.1523,
            0.0539, -0.5975],
          [-0.3491, -0.1336,  0.5392,  0.4455,  0.1161,  0.9067,  0.9359,
           -0.2171,  0.3809],
          [-0.5335,  0.3503, -0.1585,  0.9505,  0.3940, -0.8488, -0.0352,
            0.7643, -0.3146],
          [-0.1068, -0.4457,  0.7698, -0.7586,  0.8501,  0.3596,  0.7482,
           -0.3678,  0.1423],
          [-0.0236, -0.9979,  0.7835, -0.4363, -0.0214,  0.4094,  0.0372,
            0.6866,  0.4809]]]])
inW = torch.ones((1,1,3,3))
inW
tensor([[[[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]]]])

In order to apply the kernel to the image, I use the functional version of conv2d(), which performs two-dimensional convolution. There are also functions for 1D and 3D convolution (conv1d() and conv3d()). The conv2d() function accepts an input tensor, a weight (kernel) tensor, and stride and padding arguments. The stride controls how far the kernel moves at each step as it passes over the image. Since the stride is set to 1, each pixel will be placed at the center of the 3x3 kernel as it passes over the image. The padding argument is set to “same”, which indicates that padding will be added such that the height and width of the resulting feature map will match those of the input image. Since a 3x3 kernel with a stride of 1 is being used here, “same” yields the same result as a padding of 1.

In the first example, the result is equivalent to summing all of the values in the 3x3 window around each cell. This is because each weight is set to 1, so each value is multiplied by 1 and then all the values are added together. In the next example, I define a new kernel in which only the center weight of the 3x3 kernel is 1 while all other weights are 0. This simply returns the center value, replicating the original values or “image”.

outT = F.conv2d(inT, inW, stride=1, padding="same")
outT
tensor([[[[-0.6713, -1.6881,  1.0861,  0.1441,  0.1805, -2.5826, -2.2278,
            0.0641,  1.1330],
          [-1.9214, -2.0004,  0.9112,  0.4043,  0.5015, -1.9911, -2.2942,
           -1.3610, -0.1944],
          [-1.5370, -0.1554,  1.8387,  2.6822,  2.1460,  0.1856, -0.3861,
           -1.0443, -0.5765],
          [-0.2627,  1.6708,  1.8833,  3.9701,  4.4219,  3.0923,  1.7305,
           -2.0386, -2.1976],
          [ 0.5047,  2.0396,  2.9094,  4.8106,  5.5691,  4.4595,  3.4224,
            0.4862, -0.7064],
          [-0.5753,  0.2783,  2.6173,  4.4745,  4.5578,  3.2869,  2.5909,
            1.1229,  0.0700],
          [-1.2183, -0.0679,  1.5588,  3.1479,  2.4149,  3.4264,  2.2457,
            2.0369,  0.3881],
          [-1.7572, -0.3624,  0.0570,  2.3730,  0.8983,  1.8930,  1.7534,
            2.1419,  1.3917],
          [-1.5740, -0.0207, -1.0852,  1.1871,  0.4027,  2.3830,  1.8732,
            1.7274,  0.9420]]]])
inW = torch.tensor([[[[0., 0., 0.],[0., 1., 0.],[0., 0., 0.]]]])
outT = F.conv2d(inT, inW, stride=1, padding="same")
outT
tensor([[[[-0.1700,  0.6824, -0.9379,  0.7623, -0.5802, -0.3197, -0.5944,
           -0.2823,  0.3378],
          [-0.9101, -0.2737, -0.0789,  0.9319,  0.0469, -0.6607, -0.4745,
            0.1037,  0.9737],
          [-0.5054, -0.7446,  0.9377, -0.3681, -0.3094,  0.9986, -0.0977,
           -0.9672, -0.3601],
          [ 0.9141, -0.0173,  0.5228,  0.9288,  0.0704,  0.5076,  0.1044,
            0.0997, -0.4263],
          [ 0.8673, -0.7768,  0.4729,  0.9279,  0.7871,  0.8790,  0.1523,
            0.0539, -0.5975],
          [-0.3491, -0.1336,  0.5392,  0.4455,  0.1161,  0.9067,  0.9359,
           -0.2171,  0.3809],
          [-0.5335,  0.3503, -0.1585,  0.9505,  0.3940, -0.8488, -0.0352,
            0.7643, -0.3146],
          [-0.1068, -0.4457,  0.7698, -0.7586,  0.8501,  0.3596,  0.7482,
           -0.3678,  0.1423],
          [-0.0236, -0.9979,  0.7835, -0.4363, -0.0214,  0.4094,  0.0372,
            0.6866,  0.4809]]]])
print(inT.shape)
torch.Size([1, 1, 9, 9])
print(outT.shape)
torch.Size([1, 1, 9, 9])

As the code blocks above demonstrate, when padding is set to “same”, the height and width of the resulting feature map will be the same as those of the input image or tensor. If the padding is set to 0, no padding will be added, and only cells that have a full set of neighbors within a 3x3 window will be processed. This results in dropping the outermost rows and columns to yield a tensor with a height and width of 7x7 as opposed to 9x9.

inW = torch.tensor([[[[0., 0., 0.],[0., 1., 0.],[0., 0., 0.]]]])
outT = F.conv2d(inT, inW, stride=1, padding=0)
outT
tensor([[[[-0.2737, -0.0789,  0.9319,  0.0469, -0.6607, -0.4745,  0.1037],
          [-0.7446,  0.9377, -0.3681, -0.3094,  0.9986, -0.0977, -0.9672],
          [-0.0173,  0.5228,  0.9288,  0.0704,  0.5076,  0.1044,  0.0997],
          [-0.7768,  0.4729,  0.9279,  0.7871,  0.8790,  0.1523,  0.0539],
          [-0.1336,  0.5392,  0.4455,  0.1161,  0.9067,  0.9359, -0.2171],
          [ 0.3503, -0.1585,  0.9505,  0.3940, -0.8488, -0.0352,  0.7643],
          [-0.4457,  0.7698, -0.7586,  0.8501,  0.3596,  0.7482, -0.3678]]]])
print(inT.shape)
torch.Size([1, 1, 9, 9])
print(outT.shape)
torch.Size([1, 1, 7, 7])
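
The change in the spatial dimensions can also be computed directly. As a quick check (my own helper function, not part of the module’s code), the standard formula floor((size + 2*padding - kernel) / stride) + 1 reproduces the 9x9 to 7x7 reduction and confirms that a padding of 1 preserves the input size.

def conv_out_size(in_size, kernel, stride=1, padding=0):
    # Standard output-size formula for a convolution along one spatial dimension
    return (in_size + 2 * padding - kernel) // stride + 1

# 9x9 input, 3x3 kernel, stride of 1, no padding -> 7x7, matching the shape printed above
print(conv_out_size(9, 3, stride=1, padding=0))
# "same" padding with a 3x3 kernel and a stride of 1 corresponds to a padding of 1, which preserves the 9x9 size
print(conv_out_size(9, 3, stride=1, padding=1))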

It is also possible to apply multiple kernels to an input. Below, I create three 3x3 kernels, all filled with weights of 1, by defining a weight tensor of shape (3,1,3,3). When these kernels are applied to the input tensor, three feature maps are generated.

inW = torch.ones((3,1,3,3))
outT = F.conv2d(inT, inW, stride=1, padding="same")
outT
tensor([[[[-0.6713, -1.6881,  1.0861,  0.1441,  0.1805, -2.5826, -2.2278,
            0.0641,  1.1330],
          [-1.9214, -2.0004,  0.9112,  0.4043,  0.5015, -1.9911, -2.2942,
           -1.3610, -0.1944],
          [-1.5370, -0.1554,  1.8387,  2.6822,  2.1460,  0.1856, -0.3861,
           -1.0443, -0.5765],
          [-0.2627,  1.6708,  1.8833,  3.9701,  4.4219,  3.0923,  1.7305,
           -2.0386, -2.1976],
          [ 0.5047,  2.0396,  2.9094,  4.8106,  5.5691,  4.4595,  3.4224,
            0.4862, -0.7064],
          [-0.5753,  0.2783,  2.6173,  4.4745,  4.5578,  3.2869,  2.5909,
            1.1229,  0.0700],
          [-1.2183, -0.0679,  1.5588,  3.1479,  2.4149,  3.4264,  2.2457,
            2.0369,  0.3881],
          [-1.7572, -0.3624,  0.0570,  2.3730,  0.8983,  1.8930,  1.7534,
            2.1419,  1.3917],
          [-1.5740, -0.0207, -1.0852,  1.1871,  0.4027,  2.3830,  1.8732,
            1.7274,  0.9420]],

         [[-0.6713, -1.6881,  1.0861,  0.1441,  0.1805, -2.5826, -2.2278,
            0.0641,  1.1330],
          [-1.9214, -2.0004,  0.9112,  0.4043,  0.5015, -1.9911, -2.2942,
           -1.3610, -0.1944],
          [-1.5370, -0.1554,  1.8387,  2.6822,  2.1460,  0.1856, -0.3861,
           -1.0443, -0.5765],
          [-0.2627,  1.6708,  1.8833,  3.9701,  4.4219,  3.0923,  1.7305,
           -2.0386, -2.1976],
          [ 0.5047,  2.0396,  2.9094,  4.8106,  5.5691,  4.4595,  3.4224,
            0.4862, -0.7064],
          [-0.5753,  0.2783,  2.6173,  4.4745,  4.5578,  3.2869,  2.5909,
            1.1229,  0.0700],
          [-1.2183, -0.0679,  1.5588,  3.1479,  2.4149,  3.4264,  2.2457,
            2.0369,  0.3881],
          [-1.7572, -0.3624,  0.0570,  2.3730,  0.8983,  1.8930,  1.7534,
            2.1419,  1.3917],
          [-1.5740, -0.0207, -1.0852,  1.1871,  0.4027,  2.3830,  1.8732,
            1.7274,  0.9420]],

         [[-0.6713, -1.6881,  1.0861,  0.1441,  0.1805, -2.5826, -2.2278,
            0.0641,  1.1330],
          [-1.9214, -2.0004,  0.9112,  0.4043,  0.5015, -1.9911, -2.2942,
           -1.3610, -0.1944],
          [-1.5370, -0.1554,  1.8387,  2.6822,  2.1460,  0.1856, -0.3861,
           -1.0443, -0.5765],
          [-0.2627,  1.6708,  1.8833,  3.9701,  4.4219,  3.0923,  1.7305,
           -2.0386, -2.1976],
          [ 0.5047,  2.0396,  2.9094,  4.8106,  5.5691,  4.4595,  3.4224,
            0.4862, -0.7064],
          [-0.5753,  0.2783,  2.6173,  4.4745,  4.5578,  3.2869,  2.5909,
            1.1229,  0.0700],
          [-1.2183, -0.0679,  1.5588,  3.1479,  2.4149,  3.4264,  2.2457,
            2.0369,  0.3881],
          [-1.7572, -0.3624,  0.0570,  2.3730,  0.8983,  1.8930,  1.7534,
            2.1419,  1.3917],
          [-1.5740, -0.0207, -1.0852,  1.1871,  0.4027,  2.3830,  1.8732,
            1.7274,  0.9420]]]])
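
Although the printed tensor above is long, the key point is its shape: one feature map is produced per kernel. A quick check of the shape (not shown in the original output) confirms this.

print(outT.shape)
# expected: torch.Size([1, 3, 9, 9]) -- three output channels, one per kernel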

If a stride greater than 1 is used, the resulting array will be smaller in the spatial dimensions than the input since the kernel will not be centered over every cell but will skip cells. This is one means of decreasing the size of the spatial dimensions of an array. However, it is not commonly used; instead, pooling operations are generally applied, which will be discussed below. When building CNNs, we will generally use a stride of 1. However, there are some applications where other strides are used. For example, when we discuss the ResNet architecture, you will see that it uses a stride of 2 as a means to decrease the size of the arrays at certain points in the network.

inW = torch.ones((1,1,3,3))
outT = F.conv2d(inT, inW, stride=2)
outT
tensor([[[[-2.0004,  0.4043, -1.9911, -1.3610],
          [ 1.6708,  3.9701,  3.0923, -2.0386],
          [ 0.2783,  4.4745,  3.2869,  1.1229],
          [-0.3624,  2.3730,  1.8930,  2.1419]]]])
print(inT.shape)
torch.Size([1, 1, 9, 9])
print(outT.shape)
torch.Size([1, 1, 4, 4])
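
Reusing the conv_out_size() helper sketched earlier, the 4x4 result can be confirmed: floor((9 + 0 - 3) / 2) + 1 = 4.

# No padding, 3x3 kernel, stride of 2 applied to a 9x9 input
print(conv_out_size(9, 3, stride=2, padding=0))
# expected: 4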

Pooling

In CNN architectures, pooling operations are commonly used to decrease the size of the array in the spatial dimensions, as opposed to accomplishing this by increasing the stride in the convolution operations. The most commonly used pooling operation is max pooling, which is demonstrated below. Here, an input tensor with a shape of (1,1,10,10) is transformed to a shape of (1,1,5,5) using max pooling. This is a very simple operation: within each 2x2 window, the largest or maximum value is returned. You can see that the max_pool2d() function accepts an input tensor, a window size, and a stride. By using a window size of 2x2, I essentially combine 4 cells into a single cell by returning the maximum of the 4 values. With a stride of 2, there is no overlap between the 2x2 windows.

inT = (torch.rand((1,1,10,10))*2)-1
inT
tensor([[[[-0.9355, -0.1541,  0.1897, -0.5308,  0.2893,  0.0389, -0.3414,
            0.9599, -0.1224,  0.0528],
          [-0.0761,  0.3269,  0.5637,  0.0958, -0.3872,  0.8240, -0.4473,
           -0.6271,  0.7073, -0.5689],
          [ 0.4157,  0.3850,  0.1815, -0.7801, -0.7153, -0.8076,  0.5288,
           -0.3641,  0.1044,  0.7395],
          [-0.2003, -0.8078,  0.5724,  0.2092,  0.0570,  0.3671,  0.7098,
           -0.7977,  0.4219, -0.9084],
          [-0.3119,  0.5318,  0.2227, -0.2706,  0.7025,  0.4496, -0.6996,
            0.9201, -0.3680, -0.2652],
          [-0.3106, -0.0950,  0.7198, -0.1344, -0.2667, -0.2776, -0.7113,
           -0.7244,  0.4504, -0.2942],
          [ 0.7442, -0.5639,  0.8534,  0.6825,  0.7097, -0.8962,  0.6471,
           -0.3142,  0.6381,  0.3449],
          [ 0.8096, -0.3563,  0.4900, -0.7922,  0.9684,  0.4300,  0.4769,
            0.7817,  0.3645,  0.9102],
          [ 0.7841,  0.0016, -0.7368, -0.8073, -0.9752,  0.0180, -0.9556,
           -0.2192, -0.5864,  0.1252],
          [-0.0528, -0.3762, -0.9329, -0.4520,  0.2297,  0.0297, -0.3944,
            0.9947,  0.3036, -0.1026]]]])
outT = F.max_pool2d(inT, (2,2), stride=2)
outT
tensor([[[[ 0.3269,  0.5637,  0.8240,  0.9599,  0.7073],
          [ 0.4157,  0.5724,  0.3671,  0.7098,  0.7395],
          [ 0.5318,  0.7198,  0.7025,  0.9201,  0.4504],
          [ 0.8096,  0.8534,  0.9684,  0.7817,  0.9102],
          [ 0.7841, -0.4520,  0.2297,  0.9947,  0.3036]]]])
print(inT.shape)
torch.Size([1, 1, 10, 10])
print(outT.shape)
torch.Size([1, 1, 5, 5])
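
As a quick sanity check (not part of the original code), the maximum of the top-left 2x2 window of the input should equal the top-left cell of the pooled output.

# Maximum of the top-left 2x2 window of the input...
print(inT[0, 0, 0:2, 0:2].max())
# ...should match the top-left value of the max pooled output
print(outT[0, 0, 0, 0])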

Although there are other pooling operations, max pooling tends to be used most often. This is because the maximum value in the window is generally associated with the most defining feature or the largest activation. However, this is just a simple interpretation and may not hold true in all cases.

Below, I demonstrate another pooling operation: average pooling. This is the same as max pooling except that the average is returned as opposed to the maximum. You will see this applied when we discuss ResNet.

Similar to convolutional operations, there are 1D and 3D versions of pooling operations.

outT = F.avg_pool2d(inT, (2,2), stride=2)
outT
tensor([[[[-0.2097,  0.0796,  0.1912, -0.1140,  0.0172],
          [-0.0518,  0.0457, -0.2747,  0.0192,  0.0893],
          [-0.0464,  0.1344,  0.1519, -0.3038, -0.1192],
          [ 0.1584,  0.3084,  0.3030,  0.3979,  0.5644],
          [ 0.0892, -0.7322, -0.1744, -0.1436, -0.0650]]]])

Activation Functions

One of the issues with applying only convolution operations or fully connected layers is that the network can represent only linear transformations of the values. In order to introduce non-linearity into the CNN, we need to incorporate activation functions. Below, I have implemented a few activation functions. Currently, one of the most commonly used activation functions is the rectified linear unit (ReLU). This operation simply converts any negative values to 0 and does not alter positive values. Leaky ReLU is an alternative version of ReLU that maintains negative values but reduces their magnitude using a slope term. This is sometimes useful to combat the “dying ReLU” problem.

Some activation functions have an inplace parameter, which allows you to modify the values in place as opposed to saving the result to a new location in memory.

inT = torch.rand(1,2,5,5)*2-1
inT
tensor([[[[ 0.5639, -0.5058, -0.3127,  0.5489,  0.9305],
          [ 0.9206,  0.8311,  0.3708,  0.6857, -0.1170],
          [ 0.9017,  0.0922, -0.8883,  0.8020, -0.8508],
          [ 0.5480, -0.0435, -0.1898,  0.7492,  0.3219],
          [-0.2502,  0.1252, -0.2704,  0.1221, -0.5676]],

         [[-0.3846, -0.1610, -0.2821,  0.0106,  0.3561],
          [ 0.5461,  0.2997,  0.5303,  0.7464,  0.8616],
          [ 0.7897,  0.4739,  0.0751, -0.0444,  0.5533],
          [-0.2143,  0.6001, -0.8247, -0.0565, -0.5706],
          [-0.6607, -0.7264, -0.7638, -0.2042,  0.8909]]]])
outT = F.relu(inT)
outT
tensor([[[[0.5639, 0.0000, 0.0000, 0.5489, 0.9305],
          [0.9206, 0.8311, 0.3708, 0.6857, 0.0000],
          [0.9017, 0.0922, 0.0000, 0.8020, 0.0000],
          [0.5480, 0.0000, 0.0000, 0.7492, 0.3219],
          [0.0000, 0.1252, 0.0000, 0.1221, 0.0000]],

         [[0.0000, 0.0000, 0.0000, 0.0106, 0.3561],
          [0.5461, 0.2997, 0.5303, 0.7464, 0.8616],
          [0.7897, 0.4739, 0.0751, 0.0000, 0.5533],
          [0.0000, 0.6001, 0.0000, 0.0000, 0.0000],
          [0.0000, 0.0000, 0.0000, 0.0000, 0.8909]]]])
outT = F.leaky_relu(inT, negative_slope=.1)
outT
tensor([[[[ 0.5639, -0.0506, -0.0313,  0.5489,  0.9305],
          [ 0.9206,  0.8311,  0.3708,  0.6857, -0.0117],
          [ 0.9017,  0.0922, -0.0888,  0.8020, -0.0851],
          [ 0.5480, -0.0043, -0.0190,  0.7492,  0.3219],
          [-0.0250,  0.1252, -0.0270,  0.1221, -0.0568]],

         [[-0.0385, -0.0161, -0.0282,  0.0106,  0.3561],
          [ 0.5461,  0.2997,  0.5303,  0.7464,  0.8616],
          [ 0.7897,  0.4739,  0.0751, -0.0044,  0.5533],
          [-0.0214,  0.6001, -0.0825, -0.0057, -0.0571],
          [-0.0661, -0.0726, -0.0764, -0.0204,  0.8909]]]])
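
As a small illustration of the inplace option mentioned earlier, the functional ReLU can overwrite its input rather than allocating a new tensor. This is a quick sketch, not part of the original code; in-place operations should be used with some care since they can interfere with autograd.

tmp = inT.clone()
F.relu(tmp, inplace=True)  # modifies tmp directly instead of returning a new tensor
print(torch.equal(tmp, F.relu(inT)))
# expected: True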

The sigmoid activation function can be used to convert values to a range between 0 and 1. This is a common activation function after the last linear layer in a fully connected neural network or a CNN that is performing a binary classification. For binary classification, the last fully connected layer will have an output size of 1. The raw logits can be passed through a sigmoid activation function in order to scale them from 0 to 1. In the example below, I generate a 1D array of logits representing results for four separate data points. Each data point is then rescaled separately, or element-wise, using the sigmoid function to be between 0 and 1.

inT = torch.tensor([4.1, -1.1, 3.3, 2.2])
print(torch.sigmoid(inT))
tensor([0.9837, 0.2497, 0.9644, 0.9002])
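
For reference, the sigmoid is simply 1/(1 + e^(-x)) applied element-wise, which can be verified manually (a quick check, not in the original code).

print(1 / (1 + torch.exp(-inT)))
# expected to match the torch.sigmoid() result above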

For multiclass classification, a softmax activation is commonly used instead of a sigmoid activation. When performing a multiclass classification, the last layer will output one node for each class. As a result, raw logits will be returned for each class, and the prediction will correspond to the class with the highest logit. In order to convert these raw logits to probabilities that sum to 1, the softmax activation is applied. This requires defining a dimension over which the probabilities should sum to 1. In the example, the 2nd dimension (i.e., index 1) represents all the predicted logits for a single observation.

inT = torch.tensor([[4.1, -1.1, 3.3, 2.2], [3.6, 2.3, -0.5, -1.2], [1.7, 3.3, .7, .2]])
print(torch.softmax(inT, dim=1))
tensor([[0.6233, 0.0034, 0.2801, 0.0932],
        [0.7708, 0.2101, 0.0128, 0.0063],
        [0.1528, 0.7569, 0.0562, 0.0341]])
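
The softmax divides the exponential of each logit by the sum of the exponentials over the chosen dimension, so the probabilities for each observation sum to 1. A quick check (not in the original code):

probs = torch.softmax(inT, dim=1)
print(probs.sum(dim=1))
# each row should sum to 1
print(torch.exp(inT) / torch.exp(inT).sum(dim=1, keepdim=True))
# expected to match the torch.softmax() result above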

It should be noted that some loss functions may expect raw logits while others may expect the data to be passed through a sigmoid or softmax activation beforehand. Here are a few tips.

  • nn.CrossEntropyLoss() for multiclass problems expects logits

  • nn.NLLLoss() for multiclass problems expects log-probabilities (i.e., logits passed through a log softmax)

  • nn.BCELoss() for binary classification expects probabilities

  • nn.BCEWithLogitsLoss() for binary classification expects logits (this combines the sigmoid and BCE loss operations)

Given the above tips, when using PyTorch for a multiclass classification, we generally do not incorporate a softmax activation at the end of the network if we are using cross entropy loss since the PyTorch implementation of this loss metric expects raw logits. If you are using an alternative loss metric, such as Dice or Tversky, you will need to investigate the implementation and make sure you are providing it with the expected data: either raw logits or probabilities (i.e., logits that have been passed through a softmax activation).

For binary classification, if you use nn.BCELoss(), you must pass the logits through a sigmoid activation beforehand. If you use nn.BCEWithLogitsLoss(), you should provide the raw logits since this loss incorporates the sigmoid activation. If you are using an alternative loss metric, such as Dice or Tversky, you will need to explore the implementation and make sure you are providing it with the expected data: either raw logits or probabilities (i.e., logits that have been passed through a sigmoid activation).

Finally, it is possible to treat any binary classification problem as a multiclass classification problem in which the last layer outputs two logits, one for each class, as opposed to one for the positive case. If you treat a binary classification problem the same as a multiclass classification problem, then you would want to make use of nn.CrossEntropyLoss(). If you use a loss that expects probabilities, you would want to use softmax as opposed to a sigmoid activation.
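
To make these pairings concrete, here is a small sketch (using the logits from the softmax example above and arbitrary, made-up targets) showing that nn.CrossEntropyLoss() applied to raw logits matches nn.NLLLoss() applied to log softmax output, and that nn.BCEWithLogitsLoss() applied to raw logits matches nn.BCELoss() applied to sigmoid output.

import torch.nn as nn

# Multiclass case: three observations, four classes (targets are arbitrary class indices)
logits = torch.tensor([[4.1, -1.1, 3.3, 2.2], [3.6, 2.3, -0.5, -1.2], [1.7, 3.3, 0.7, 0.2]])
targets = torch.tensor([0, 1, 2])
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))
# expected: True

# Binary case: four observations (targets are arbitrary 0/1 labels)
blogits = torch.tensor([4.1, -1.1, 3.3, 2.2])
btargets = torch.tensor([1., 0., 1., 0.])
bce_logits = nn.BCEWithLogitsLoss()(blogits, btargets)
bce = nn.BCELoss()(torch.sigmoid(blogits), btargets)
print(torch.allclose(bce_logits, bce))
# expected: True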

Deconvolution and Upsampling

For regular CNNs in which the goal is to label an entire image, as opposed to labeling or predicting each individual pixel in the image, deconvolution is not required. However, when performing pixel-level classification (i.e., semantic segmentation) or regression, the decoder component must have a means to scale up the feature maps in the spatial dimensions in order to regenerate the original spatial dimensions of the scene.

This upscaling or upsampling can be performed using a couple of different methods. The upsample() function (deprecated in newer versions of PyTorch in favor of interpolate(), as the warning below notes) operates similarly to resampling methods that we use in the geospatial sciences: new cell values are estimated mathematically from nearby original values using some interpolation method, such as nearest neighbor, bilinear, or bicubic. This form of upsampling is not really deconvolution since no kernels and associated weights are being applied.

As examples below, I have performed upsampling on a tensor with spatial dimensions of 4x4 using the nearest neighbor and bilinear interpolation methods. When using nearest neighbor, the nearest cell value is used, and you can see that values have been replicated in the result. In contrast, bilinear interpolation performs a distance-weighted averaging of the 4 nearest original cells to the output cell.

inT = torch.rand(1,2,4,4)*2-1
inT
tensor([[[[-0.2509, -0.4521, -0.0384,  0.1119],
          [-0.1733,  0.4236,  0.6165, -0.2086],
          [ 0.7386,  0.0091, -0.0638, -0.5912],
          [-0.5811,  0.7182,  0.3243, -0.3356]],

         [[ 0.5619,  0.4559,  0.9062, -0.2723],
          [-0.2521, -0.9229, -0.6650, -0.0866],
          [ 0.4503, -0.9432, -0.2611, -0.9642],
          [-0.7973, -0.1293,  0.9353, -0.3350]]]])
outT = F.upsample(inT, size=(8,8), mode="nearest")
C:\Users\vidcg\ANACON~1\envs\torchENV\lib\site-packages\torch\nn\functional.py:3734: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
outT
tensor([[[[-0.2509, -0.2509, -0.4521, -0.4521, -0.0384, -0.0384,  0.1119,
            0.1119],
          [-0.2509, -0.2509, -0.4521, -0.4521, -0.0384, -0.0384,  0.1119,
            0.1119],
          [-0.1733, -0.1733,  0.4236,  0.4236,  0.6165,  0.6165, -0.2086,
           -0.2086],
          [-0.1733, -0.1733,  0.4236,  0.4236,  0.6165,  0.6165, -0.2086,
           -0.2086],
          [ 0.7386,  0.7386,  0.0091,  0.0091, -0.0638, -0.0638, -0.5912,
           -0.5912],
          [ 0.7386,  0.7386,  0.0091,  0.0091, -0.0638, -0.0638, -0.5912,
           -0.5912],
          [-0.5811, -0.5811,  0.7182,  0.7182,  0.3243,  0.3243, -0.3356,
           -0.3356],
          [-0.5811, -0.5811,  0.7182,  0.7182,  0.3243,  0.3243, -0.3356,
           -0.3356]],

         [[ 0.5619,  0.5619,  0.4559,  0.4559,  0.9062,  0.9062, -0.2723,
           -0.2723],
          [ 0.5619,  0.5619,  0.4559,  0.4559,  0.9062,  0.9062, -0.2723,
           -0.2723],
          [-0.2521, -0.2521, -0.9229, -0.9229, -0.6650, -0.6650, -0.0866,
           -0.0866],
          [-0.2521, -0.2521, -0.9229, -0.9229, -0.6650, -0.6650, -0.0866,
           -0.0866],
          [ 0.4503,  0.4503, -0.9432, -0.9432, -0.2611, -0.2611, -0.9642,
           -0.9642],
          [ 0.4503,  0.4503, -0.9432, -0.9432, -0.2611, -0.2611, -0.9642,
           -0.9642],
          [-0.7973, -0.7973, -0.1293, -0.1293,  0.9353,  0.9353, -0.3350,
           -0.3350],
          [-0.7973, -0.7973, -0.1293, -0.1293,  0.9353,  0.9353, -0.3350,
           -0.3350]]]])
outT = F.upsample(inT, size=(8,8), mode="bilinear")
outT
tensor([[[[-0.2509, -0.3012, -0.4018, -0.3487, -0.1418, -0.0008,  0.0743,
            0.1119],
          [-0.2315, -0.2319, -0.2328, -0.1436,  0.0357,  0.1019,  0.0551,
            0.0318],
          [-0.1927, -0.0934,  0.1053,  0.2667,  0.3907,  0.3074,  0.0168,
           -0.1285],
          [ 0.0546,  0.1210,  0.2536,  0.3516,  0.4148,  0.2587, -0.1166,
           -0.3042],
          [ 0.5106,  0.4111,  0.2122,  0.1111,  0.1079, -0.0442, -0.3451,
           -0.4955],
          [ 0.4087,  0.3531,  0.2419,  0.1481,  0.0715, -0.1069, -0.3872,
           -0.5273],
          [-0.2512, -0.0532,  0.3429,  0.4625,  0.3057,  0.0706, -0.2428,
           -0.3995],
          [-0.5811, -0.2563,  0.3933,  0.6197,  0.4228,  0.1593, -0.1706,
           -0.3356]],

         [[ 0.5619,  0.5354,  0.4824,  0.5685,  0.7936,  0.6116,  0.0224,
           -0.2723],
          [ 0.3584,  0.2966,  0.1730,  0.2118,  0.4129,  0.3286, -0.0410,
           -0.2258],
          [-0.0486, -0.1810, -0.4458, -0.5017, -0.3487, -0.2374, -0.1678,
           -0.1330],
          [-0.0765, -0.2894, -0.7151, -0.8370, -0.6550, -0.4995, -0.3705,
           -0.3060],
          [ 0.2747, -0.0285, -0.6349, -0.7941, -0.5061, -0.4578, -0.6491,
           -0.7448],
          [ 0.1384, -0.0811, -0.5202, -0.5453, -0.1565, -0.1733, -0.5957,
           -0.8069],
          [-0.4854, -0.4472, -0.3709, -0.0905,  0.3939,  0.3540, -0.2102,
           -0.4923],
          [-0.7973, -0.6303, -0.2963,  0.1369,  0.6691,  0.6177, -0.0174,
           -0.3350]]]])
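
As the deprecation warning above indicates, newer versions of PyTorch recommend interpolate() in place of upsample(). A small sketch of the equivalent calls (setting align_corners=False explicitly, which appears to be the behavior behind the bilinear result above):

outNN = F.interpolate(inT, size=(8,8), mode="nearest")
outBL = F.interpolate(inT, size=(8,8), mode="bilinear", align_corners=False)
print(torch.allclose(outBL, outT))
# expected: True -- interpolate() should reproduce the deprecated upsample() result above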

As noted above, the upsampling method is not a convolutional, or “deconvolutional”, operation since no kernels are applied. Since there are no kernels and associated weights, there are no trainable parameters associated with upsampling.

The conv_transpose2d() function allows for upsampling a tensor in the spatial dimensions using kernels that have trainable weights. If transpose convolution is applied with a kernel size of 2x2 and a stride of 2, the result is a doubling of the size of the tensor in the spatial dimensions. Practically, this operation can restore the spatial dimensions a tensor had before max pooling with a 2x2 kernel and a stride of 2 was applied. To be clear, transpose convolution does not undo prior convolutional operations. Instead, it learns new weights as it scales up the data.

As demonstrated in the code below, 2D transpose convolution with a kernel size of 2x2 and a stride of 2 converts a tensor of shape (1,1,9,9) to a shape of (1,1,18,18). Since weights are applied, both an input tensor and a kernel must be provided.

inT = torch.rand(1,1,9,9)
inT
tensor([[[[3.0494e-01, 6.6059e-03, 1.5180e-01, 1.0452e-01, 9.5771e-02,
           5.9101e-01, 8.7551e-01, 5.4813e-01, 2.3709e-01],
          [8.0917e-01, 4.4598e-01, 1.6671e-01, 3.0333e-04, 5.4100e-01,
           2.6351e-01, 2.3562e-01, 3.5333e-01, 3.2555e-03],
          [7.2846e-01, 4.3116e-01, 3.5533e-01, 5.9743e-01, 1.4766e-01,
           7.7174e-01, 8.8454e-01, 3.1102e-01, 2.5577e-01],
          [2.4108e-01, 8.9597e-01, 1.9065e-01, 7.7841e-02, 9.9905e-01,
           4.9544e-01, 5.3307e-01, 4.0326e-01, 8.7396e-01],
          [5.8937e-01, 6.3878e-01, 3.3665e-01, 6.3055e-02, 3.4877e-01,
           5.7652e-01, 9.6403e-01, 9.1441e-01, 3.0954e-02],
          [4.4115e-01, 8.3246e-01, 6.6964e-01, 4.3413e-01, 5.8948e-01,
           5.6783e-01, 1.5777e-01, 9.8821e-02, 3.3986e-01],
          [6.6099e-01, 4.4053e-01, 5.2903e-01, 2.7643e-01, 9.7024e-01,
           4.5704e-01, 5.4045e-01, 9.3200e-01, 8.0507e-03],
          [1.1022e-02, 2.9857e-01, 4.6856e-01, 6.8943e-01, 2.0012e-01,
           3.3718e-04, 3.6202e-01, 3.5228e-01, 2.8393e-01],
          [7.9934e-01, 7.0670e-02, 4.6713e-01, 8.2546e-01, 3.3520e-01,
           1.2836e-01, 2.3002e-01, 7.2869e-03, 8.4285e-01]]]])
inW = torch.rand(1,1,2,2)
inW
tensor([[[[0.5529, 0.8481],
          [0.3952, 0.2295]]]])
outT = F.conv_transpose2d(inT, inW, stride=2, padding=0)
outT.shape
torch.Size([1, 1, 18, 18])
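
The output size of a transpose convolution follows a different formula than regular convolution: (size - 1) * stride - 2 * padding + kernel (ignoring dilation and output padding). For this example, (9 - 1) * 2 - 0 + 2 = 18, which matches the shape printed above. A small helper for illustration (my own, not part of the module’s code):

def conv_transpose_out_size(in_size, kernel, stride=1, padding=0):
    # Output size of a transpose convolution along one spatial dimension
    return (in_size - 1) * stride - 2 * padding + kernel

print(conv_transpose_out_size(9, 2, stride=2, padding=0))
# expected: 18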

In the example below, I have created a new 2x2 kernel in which the bottom-right value is 1 and all other values are 0. When this kernel is applied, the array is expanded by interleaving rows and columns of zeros, with each original value placed in the bottom-right corner of its 2x2 block. When the weights are not set to zero, these added rows and columns can hold non-zero values. The kernel weights are trainable, resulting in the model learning how to upsample the image as opposed to simply applying an interpolation algorithm.

In the next example, I have performed 2D transpose convolution using a 2x2 kernel in which all weights are set to 1. This result is equivalent to performing nearest neighbor interpolation.

In later modules, you will see examples of deconvolution in action in the context of semantic segmentation. I will primarily rely on 2D transpose convolution as opposed to upsampling.

inW = torch.tensor([[[[0., 0.], [0., 1.]]]])
outT = F.conv_transpose2d(inT, inW, stride=2, padding=0)
outT.shape
torch.Size([1, 1, 18, 18])
outT
tensor([[[[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 3.0494e-01, 0.0000e+00, 6.6059e-03, 0.0000e+00,
           1.5180e-01, 0.0000e+00, 1.0452e-01, 0.0000e+00, 9.5771e-02,
           0.0000e+00, 5.9101e-01, 0.0000e+00, 8.7551e-01, 0.0000e+00,
           5.4813e-01, 0.0000e+00, 2.3709e-01],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 8.0917e-01, 0.0000e+00, 4.4598e-01, 0.0000e+00,
           1.6671e-01, 0.0000e+00, 3.0333e-04, 0.0000e+00, 5.4100e-01,
           0.0000e+00, 2.6351e-01, 0.0000e+00, 2.3562e-01, 0.0000e+00,
           3.5333e-01, 0.0000e+00, 3.2555e-03],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 7.2846e-01, 0.0000e+00, 4.3116e-01, 0.0000e+00,
           3.5533e-01, 0.0000e+00, 5.9743e-01, 0.0000e+00, 1.4766e-01,
           0.0000e+00, 7.7174e-01, 0.0000e+00, 8.8454e-01, 0.0000e+00,
           3.1102e-01, 0.0000e+00, 2.5577e-01],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 2.4108e-01, 0.0000e+00, 8.9597e-01, 0.0000e+00,
           1.9065e-01, 0.0000e+00, 7.7841e-02, 0.0000e+00, 9.9905e-01,
           0.0000e+00, 4.9544e-01, 0.0000e+00, 5.3307e-01, 0.0000e+00,
           4.0326e-01, 0.0000e+00, 8.7396e-01],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 5.8937e-01, 0.0000e+00, 6.3878e-01, 0.0000e+00,
           3.3665e-01, 0.0000e+00, 6.3055e-02, 0.0000e+00, 3.4877e-01,
           0.0000e+00, 5.7652e-01, 0.0000e+00, 9.6403e-01, 0.0000e+00,
           9.1441e-01, 0.0000e+00, 3.0954e-02],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 4.4115e-01, 0.0000e+00, 8.3246e-01, 0.0000e+00,
           6.6964e-01, 0.0000e+00, 4.3413e-01, 0.0000e+00, 5.8948e-01,
           0.0000e+00, 5.6783e-01, 0.0000e+00, 1.5777e-01, 0.0000e+00,
           9.8821e-02, 0.0000e+00, 3.3986e-01],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 6.6099e-01, 0.0000e+00, 4.4053e-01, 0.0000e+00,
           5.2903e-01, 0.0000e+00, 2.7643e-01, 0.0000e+00, 9.7024e-01,
           0.0000e+00, 4.5704e-01, 0.0000e+00, 5.4045e-01, 0.0000e+00,
           9.3200e-01, 0.0000e+00, 8.0507e-03],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 1.1022e-02, 0.0000e+00, 2.9857e-01, 0.0000e+00,
           4.6856e-01, 0.0000e+00, 6.8943e-01, 0.0000e+00, 2.0012e-01,
           0.0000e+00, 3.3718e-04, 0.0000e+00, 3.6202e-01, 0.0000e+00,
           3.5228e-01, 0.0000e+00, 2.8393e-01],
          [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
           0.0000e+00, 0.0000e+00, 0.0000e+00],
          [0.0000e+00, 7.9934e-01, 0.0000e+00, 7.0670e-02, 0.0000e+00,
           4.6713e-01, 0.0000e+00, 8.2546e-01, 0.0000e+00, 3.3520e-01,
           0.0000e+00, 1.2836e-01, 0.0000e+00, 2.3002e-01, 0.0000e+00,
           7.2869e-03, 0.0000e+00, 8.4285e-01]]]])
inW = torch.tensor([[[[1., 1.], [1., 1.]]]])
outT = F.conv_transpose2d(inT, inW, stride=2, padding=0)
outT.shape
torch.Size([1, 1, 18, 18])
outT
tensor([[[[3.0494e-01, 3.0494e-01, 6.6059e-03, 6.6059e-03, 1.5180e-01,
           1.5180e-01, 1.0452e-01, 1.0452e-01, 9.5771e-02, 9.5771e-02,
           5.9101e-01, 5.9101e-01, 8.7551e-01, 8.7551e-01, 5.4813e-01,
           5.4813e-01, 2.3709e-01, 2.3709e-01],
          [3.0494e-01, 3.0494e-01, 6.6059e-03, 6.6059e-03, 1.5180e-01,
           1.5180e-01, 1.0452e-01, 1.0452e-01, 9.5771e-02, 9.5771e-02,
           5.9101e-01, 5.9101e-01, 8.7551e-01, 8.7551e-01, 5.4813e-01,
           5.4813e-01, 2.3709e-01, 2.3709e-01],
          [8.0917e-01, 8.0917e-01, 4.4598e-01, 4.4598e-01, 1.6671e-01,
           1.6671e-01, 3.0333e-04, 3.0333e-04, 5.4100e-01, 5.4100e-01,
           2.6351e-01, 2.6351e-01, 2.3562e-01, 2.3562e-01, 3.5333e-01,
           3.5333e-01, 3.2555e-03, 3.2555e-03],
          [8.0917e-01, 8.0917e-01, 4.4598e-01, 4.4598e-01, 1.6671e-01,
           1.6671e-01, 3.0333e-04, 3.0333e-04, 5.4100e-01, 5.4100e-01,
           2.6351e-01, 2.6351e-01, 2.3562e-01, 2.3562e-01, 3.5333e-01,
           3.5333e-01, 3.2555e-03, 3.2555e-03],
          [7.2846e-01, 7.2846e-01, 4.3116e-01, 4.3116e-01, 3.5533e-01,
           3.5533e-01, 5.9743e-01, 5.9743e-01, 1.4766e-01, 1.4766e-01,
           7.7174e-01, 7.7174e-01, 8.8454e-01, 8.8454e-01, 3.1102e-01,
           3.1102e-01, 2.5577e-01, 2.5577e-01],
          [7.2846e-01, 7.2846e-01, 4.3116e-01, 4.3116e-01, 3.5533e-01,
           3.5533e-01, 5.9743e-01, 5.9743e-01, 1.4766e-01, 1.4766e-01,
           7.7174e-01, 7.7174e-01, 8.8454e-01, 8.8454e-01, 3.1102e-01,
           3.1102e-01, 2.5577e-01, 2.5577e-01],
          [2.4108e-01, 2.4108e-01, 8.9597e-01, 8.9597e-01, 1.9065e-01,
           1.9065e-01, 7.7841e-02, 7.7841e-02, 9.9905e-01, 9.9905e-01,
           4.9544e-01, 4.9544e-01, 5.3307e-01, 5.3307e-01, 4.0326e-01,
           4.0326e-01, 8.7396e-01, 8.7396e-01],
          [2.4108e-01, 2.4108e-01, 8.9597e-01, 8.9597e-01, 1.9065e-01,
           1.9065e-01, 7.7841e-02, 7.7841e-02, 9.9905e-01, 9.9905e-01,
           4.9544e-01, 4.9544e-01, 5.3307e-01, 5.3307e-01, 4.0326e-01,
           4.0326e-01, 8.7396e-01, 8.7396e-01],
          [5.8937e-01, 5.8937e-01, 6.3878e-01, 6.3878e-01, 3.3665e-01,
           3.3665e-01, 6.3055e-02, 6.3055e-02, 3.4877e-01, 3.4877e-01,
           5.7652e-01, 5.7652e-01, 9.6403e-01, 9.6403e-01, 9.1441e-01,
           9.1441e-01, 3.0954e-02, 3.0954e-02],
          [5.8937e-01, 5.8937e-01, 6.3878e-01, 6.3878e-01, 3.3665e-01,
           3.3665e-01, 6.3055e-02, 6.3055e-02, 3.4877e-01, 3.4877e-01,
           5.7652e-01, 5.7652e-01, 9.6403e-01, 9.6403e-01, 9.1441e-01,
           9.1441e-01, 3.0954e-02, 3.0954e-02],
          [4.4115e-01, 4.4115e-01, 8.3246e-01, 8.3246e-01, 6.6964e-01,
           6.6964e-01, 4.3413e-01, 4.3413e-01, 5.8948e-01, 5.8948e-01,
           5.6783e-01, 5.6783e-01, 1.5777e-01, 1.5777e-01, 9.8821e-02,
           9.8821e-02, 3.3986e-01, 3.3986e-01],
          [4.4115e-01, 4.4115e-01, 8.3246e-01, 8.3246e-01, 6.6964e-01,
           6.6964e-01, 4.3413e-01, 4.3413e-01, 5.8948e-01, 5.8948e-01,
           5.6783e-01, 5.6783e-01, 1.5777e-01, 1.5777e-01, 9.8821e-02,
           9.8821e-02, 3.3986e-01, 3.3986e-01],
          [6.6099e-01, 6.6099e-01, 4.4053e-01, 4.4053e-01, 5.2903e-01,
           5.2903e-01, 2.7643e-01, 2.7643e-01, 9.7024e-01, 9.7024e-01,
           4.5704e-01, 4.5704e-01, 5.4045e-01, 5.4045e-01, 9.3200e-01,
           9.3200e-01, 8.0507e-03, 8.0507e-03],
          [6.6099e-01, 6.6099e-01, 4.4053e-01, 4.4053e-01, 5.2903e-01,
           5.2903e-01, 2.7643e-01, 2.7643e-01, 9.7024e-01, 9.7024e-01,
           4.5704e-01, 4.5704e-01, 5.4045e-01, 5.4045e-01, 9.3200e-01,
           9.3200e-01, 8.0507e-03, 8.0507e-03],
          [1.1022e-02, 1.1022e-02, 2.9857e-01, 2.9857e-01, 4.6856e-01,
           4.6856e-01, 6.8943e-01, 6.8943e-01, 2.0012e-01, 2.0012e-01,
           3.3718e-04, 3.3718e-04, 3.6202e-01, 3.6202e-01, 3.5228e-01,
           3.5228e-01, 2.8393e-01, 2.8393e-01],
          [1.1022e-02, 1.1022e-02, 2.9857e-01, 2.9857e-01, 4.6856e-01,
           4.6856e-01, 6.8943e-01, 6.8943e-01, 2.0012e-01, 2.0012e-01,
           3.3718e-04, 3.3718e-04, 3.6202e-01, 3.6202e-01, 3.5228e-01,
           3.5228e-01, 2.8393e-01, 2.8393e-01],
          [7.9934e-01, 7.9934e-01, 7.0670e-02, 7.0670e-02, 4.6713e-01,
           4.6713e-01, 8.2546e-01, 8.2546e-01, 3.3520e-01, 3.3520e-01,
           1.2836e-01, 1.2836e-01, 2.3002e-01, 2.3002e-01, 7.2869e-03,
           7.2869e-03, 8.4285e-01, 8.4285e-01],
          [7.9934e-01, 7.9934e-01, 7.0670e-02, 7.0670e-02, 4.6713e-01,
           4.6713e-01, 8.2546e-01, 8.2546e-01, 3.3520e-01, 3.3520e-01,
           1.2836e-01, 1.2836e-01, 2.3002e-01, 2.3002e-01, 7.2869e-03,
           7.2869e-03, 8.4285e-01, 8.4285e-01]]]])

Concluding Remarks

Now that we have investigated the common building blocks used in CNNs, we will move on to actually creating them. In the next section, on CNNs for scene labeling, we will primarily make use of nn.Conv2d() and nn.MaxPool2d(), along with layers we investigated while implementing fully connected neural networks: nn.Linear(), nn.ReLU(), and nn.BatchNorm2d(). In later modules, you will see applications of nn.ConvTranspose2d() for upsampling.