Working with Tensors

Introduction

You can now define a simple neural network using PyTorch by subclassing the nn.Module class. Before we discuss neural networks further, in this section we will explore tensors and how they are defined and manipulated using PyTorch. Prior knowledge of numpy and its associated data types, methods, and functions is useful.

As you probably already know, the numpy package allows for working with data arrays in Python, and this package is central to using Python to work with and analyze data. In fact, many other data science packages in Python, such as pandas, rely on numpy.

Since numpy already allows for working with multidimensional arrays, why is it necessary to define a new data model as part of PyTorch? First, numpy arrays cannot be moved between devices or processed on GPUs. PyTorch tensors allow for GPU computation, which is very important for implementing deep learning. Second, numpy arrays do not allow for automatic differentiation and calculation of gradients. In order to perform backpropagation, we must be able to calculate the gradient of the loss with respect to each learned parameter using derivatives and the chain rule. PyTorch tensors allow for storing a computational graph and support automatic differentiation. This functionality is central to training models and updating weights.
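
As a quick preview of this functionality (a minimal sketch; we will work with automatic differentiation properly when training models), setting requires_grad=True tells PyTorch to track the operations applied to a tensor so that gradients can be computed with backward():

x = torch.tensor(2.0, requires_grad=True)
y = x**2 + 3*x
y.backward()
print(x.grad)
tensor(7.)

Here, the derivative of x**2 + 3*x with respect to x is 2*x + 3, which evaluates to 7 at x = 2.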

Tensors are multidimensional arrays, and a wide variety of data types can be represented using this data model. This allows us to apply deep learning to a variety of problems across domains. In later modules we will explore how raw data can be converted into PyTorch tensors so that they can be used as input to deep learning models.

import torch 
import torch.nn
import numpy as np

Lists, numpy Arrays, and torch Tensors

Let’s begin with a review of generating array or array-like data in Python using lists and numpy. First, I am importing torch, torch.nn, and numpy. In the next code block, I am creating a list of lists. Remember that a list is a built-in Python data type that stores a sequence of values; nesting lists, as done here, allows you to represent multiple dimensions. Lists are defined using square brackets. Using the type() function, we can see that the data type of lst1 is a list.

In the next code block, I convert the list to a numpy array using the np.array() function. I then print some information about the array. Here is a quick review:

  • type returns the data type, in this case numpy.ndarray

  • shape returns the shape of the array, in this case it is two dimensional with a length of three in each dimension; the shape is returned as a tuple

  • dtype returns the data type of the elements in the array; here the default integer type is int32 (the numpy default integer type is platform dependent; it is int64 on most Linux and macOS systems)

  • the len() function when applied to a numpy array returns the length of the first dimension, in this case 3

  • ndim returns the number of dimensions, in this case 2

  • size returns the number of elements in the array

lst1 = [[123,117,91],[201,195,217],[81, 77, 101]]
type(lst1)
<class 'list'>
arr1 = np.array(lst1)
print(type(arr1))
<class 'numpy.ndarray'>
print(arr1.shape)
(3, 3)
print(arr1.dtype)
int32
print(len(arr1))
3
print(arr1.ndim)
2
print(arr1.size)
9

One very convenient aspect of PyTorch tensors is that they tend to have properties and methods similar to numpy arrays, making the transition from numpy to PyTorch pretty easy and intuitive.

You can create a PyTorch tensor using torch.tensor(). Below, I have created three tensors. In the first example, a tensor is created from a list. In the second example, a tensor is created from a numpy array. The last example demonstrates using the from_numpy() function to convert a numpy array to a torch tensor. Note that torch.tensor() copies the data, while torch.from_numpy() returns a tensor that shares memory with the source array; a short demonstration follows the examples below.

The type() and len() functions return the same kind of result when they are applied to a torch tensor as when they are applied to a numpy array; they return the object's type and the length of the first dimension, respectively. The shape, dtype, and ndim properties are also available for a PyTorch tensor. Note that the data type or class of a PyTorch tensor is torch.Tensor. Also note the element data types in the output below: a tensor created from a list defaults to a 64-bit integer (torch.int64), while tensors created from the numpy array inherit its data type (torch.int32 here).

Lastly, the numel() function returns the number of elements in the tensor, similar to the size property of a numpy array.

t1 = torch.tensor(lst1)
t2 = torch.tensor(arr1)
t3 = torch.from_numpy(arr1)

print(type(t1))
<class 'torch.Tensor'>
print(t1.shape)
torch.Size([3, 3])
print(t1.dtype)
torch.int64
print(len(t1))
3
print(t1.ndim)
2
print(torch.numel(t1))
9
print(type(t2))
<class 'torch.Tensor'>
print(t2.shape)
torch.Size([3, 3])
print(t2.dtype)
torch.int32
print(len(t2))
3
print(t2.ndim)
2
print(torch.numel(t2))
9
print(type(t3))
<class 'torch.Tensor'>
print(t3.shape)
torch.Size([3, 3])
print(t3.dtype)
torch.int32
print(len(t3))
3
print(t3.ndim)
2
print(torch.numel(t3))
9
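
To make the difference between torch.tensor() and torch.from_numpy() concrete, here is a small sketch using a throwaway array (arr_demo and the other names here are introduced just for this illustration):

arr_demo = np.array([1, 2, 3])
t_copy = torch.tensor(arr_demo)
t_shared = torch.from_numpy(arr_demo)
arr_demo[0] = 99
print(t_copy[0].item())
1
print(t_shared[0].item())
99

Because t_copy holds its own copy of the data, it is unaffected by the change to arr_demo, while t_shared reflects it.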

Creating torch Tensors

Also similar to numpy, PyTorch provides several functions for defining tensors.

  • torch.zeros: array of zeros

  • torch.ones: array of ones

  • torch.arange: values defined using a start value, an end value (exclusive), and a step size

  • torch.linspace: a specified number of evenly spaced values between two values (both endpoints included)

For torch.zeros() and torch.ones(), the shape of the tensor is defined using a tuple; torch.arange() and torch.linspace() return 1D tensors.

t4 = torch.zeros(size=(3, 12, 12))
print(t4)
tensor([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],

        [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],

        [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]])
t5 = torch.ones(size=(2,4,4))
print(t5)
tensor([[[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]]])
t6 = torch.arange(start=10, end=100, step=5)
print(t6)
tensor([10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95])
t7 = torch.linspace(start=16, end=40, steps=4)
print(t7)
tensor([16., 24., 32., 40.])

The torch.concat() function is used to merge or concatenate tensors along an axis or dimension. In the example below, I am concatenating two 1D tensors along the first dimension (0). Dimension indexing starts at 0 as opposed to 1. If you have a multidimensional tensor, you can concatenate along different dimensions as long as the shapes are consistent. For example, you could concatenate tensors with the following shapes along the third dimension since the other two dimensions have the same lengths: (256, 256, 4) and (256, 256, 3). The result would be a tensor with shape (256, 256, 7).

t8 = torch.concat([t6, t7], dim=0)
print(t8)
tensor([10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60., 65., 70., 75.,
        80., 85., 90., 95., 16., 24., 32., 40.])
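
To make the multidimensional case described above concrete, here is a quick sketch using tensors of ones with those same shapes (a and b are names introduced just for this illustration):

a = torch.ones(size=(256, 256, 4))
b = torch.ones(size=(256, 256, 3))
print(torch.concat([a, b], dim=2).shape)
torch.Size([256, 256, 7])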

The torch.rand() function creates uniformly distributed random floating point values in the interval [0, 1). Again, the shape of the tensor is defined using a tuple.

In contrast to torch.rand(), torch.randint() returns random integers from a defined low value (inclusive) up to, but not including, a high value. Lastly, torch.randn() returns random values selected from a normal or Gaussian distribution with a mean of 0 and a standard deviation of 1.

t9 = torch.rand(size=(2,2,2))
print(t9)
tensor([[[0.5088, 0.6332],
         [0.4100, 0.1202]],

        [[0.1760, 0.0839],
         [0.6496, 0.0463]]])
t10 = torch.randint(low=0, high=256, size=(3,6,6))
print(t10)
tensor([[[ 30, 138, 236, 212, 242, 225],
         [ 55,  44, 253, 216, 202, 163],
         [  9,  53, 160,  55, 208, 250],
         [133,  38, 110, 177, 146, 100],
         [215,  12, 107,  22,  40, 144],
         [189, 100, 120, 142, 125,  75]],

        [[ 81, 127, 167, 121, 197, 103],
         [230,  20, 177, 251, 216, 108],
         [181,  15,  27, 143, 251, 164],
         [ 99, 127,  55,  52, 103, 165],
         [ 79,  24,  54,  40, 205,  62],
         [150, 128, 156, 207,   7, 226]],

        [[ 60,  24, 230, 198,  94,  86],
         [ 81, 229,  44,  74, 243, 169],
         [223,  30, 123,  78,  36,  59],
         [ 50,  57, 124, 169,  77,  54],
         [ 55,  61, 179, 242,  74, 232],
         [168, 201, 133, 116, 100, 164]]])
t11 = torch.randn(size=(2,4,5))
print(t11)
tensor([[[ 1.4028,  1.6869,  0.6613,  0.3316,  0.4455],
         [ 1.5748,  1.7529,  1.4220,  0.8112,  1.3022],
         [-0.9498, -0.9153,  0.1527, -0.3553, -1.1981],
         [-0.2188,  0.3268, -0.6541,  1.0378, -1.1947]],

        [[ 2.2796,  0.5987,  0.3648,  0.4837, -0.1495],
         [-0.2999, -0.2634, -0.2026, -1.9113, -0.0959],
         [-0.8716,  0.5107,  0.6094,  1.7372, -1.1832],
         [ 0.6302,  1.2067,  0.7124, -1.9014, -0.4008]]])

torch Data Types

Similar to numpy, PyTorch has a set of defined numeric data types, which can be specified using the dtype parameter. Below, I have provided examples of some commonly used numeric data types. The default data type is 32-bit float. Other options include 64-bit float, 16-bit float, unsigned 8-bit integer, and signed 8-bit integer. Throughout the later modules, I will make notes about numeric data types. For example, some operations may expect float data while others may expect integer data.

t12 = torch.ones(size=(2,4,4))
print(t12.dtype)
torch.float32
t13 = torch.ones(size=(2,4,4), dtype=torch.float64)
print(t13.dtype)
torch.float64
t14 = torch.ones(size=(2,4,4), dtype=torch.float16)
print(t14.dtype)
torch.float16
t15 = torch.ones(size=(2,4,4), dtype=torch.uint8)
print(t15.dtype)
torch.uint8
t16 = torch.ones(size=(2,4,4), dtype=torch.int8)
print(t16.dtype)
torch.int8
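
You can also convert the data type of an existing tensor rather than specifying it at creation, using the to() method. A small sketch applied to the int8 tensor from above (t_cast is a name introduced just for this illustration):

t_cast = t16.to(torch.float32)
print(t_cast.dtype)
torch.float32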

Reshaping Tensors

We will now explore methods for reshaping tensors. First, I create a 1D numpy array of 32 ones and convert it to a tensor. The torch.reshape() function can be used to reshape a tensor as long as the dimensions of the new tensor can be filled with the values from the original tensor. For example, our original (32,) tensor could be reshaped into any of the following shapes since the total number of elements is still 32: (2, 16), (8, 4), (4, 8), (2, 4, 4), (2, 2, 8), and (2, 2, 2, 2, 2). In the second code block below, I have reshaped the 1D tensor into a (2, 4, 4) 3D tensor.

arr2 = np.ones(32)
t17 = torch.tensor(arr2)
print(t17)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
       dtype=torch.float64)
t18 = torch.reshape(t17, (2, 4, 4))
print(t18)
tensor([[[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]]], dtype=torch.float64)

You can use -1 in the tensor shape tuple if the length of the tensor in that dimension is unknown. PyTorch will then calculate the required length of that dimension to accommodate all of the elements. An error will be raised if there is no length that will accommodate all of the elements.

t19 = torch.reshape(t17, (-1, 4, 4))
print(t19)
tensor([[[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]]], dtype=torch.float64)
t20 = torch.reshape(t17, (2, 4, -1))
print(t20)
tensor([[[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]]], dtype=torch.float64)

The torch.flatten() function will convert any tensor into a 1D tensor. In other words, all dimensions will be combined into a single dimension to generate a vector.

t21 = torch.flatten(t17)
print(t21)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
       dtype=torch.float64)

The view() method allows for representing the values stored in a tensor using a different shape. This is memory efficient since the original tensor and the view of the tensor reference the same values in memory. Note that view() requires the new shape to be compatible with how the tensor is laid out in memory (for example, the tensor must be contiguous); torch.reshape() will instead fall back to copying the data when a view is not possible.

t22 = t17.view((2,16))
print(t22)
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]],
       dtype=torch.float64)
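
A small demonstration of the shared memory: modifying an element through the view changes the original tensor (the value is reset afterwards so the example has no side effects):

t22[0, 0] = 5.
print(t17[0].item())
5.0
t22[0, 0] = 1.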

Subsetting Tensors

Subsetting or slicing out values from a larger tensor uses the same indexing and bracket notation that is used with numpy and base Python lists. Here is a review of indexing rules:

  1. Indexing starts at 0 as opposed to 1

  2. A colon is used to indicate a range of indices (3:8)

  3. The first index is included while the last index is not

  4. For multidimensional arrays, you can provide indices to subset within each dimension

  5. A colon means to select all elements in that dimension

  6. A number followed by a colon means to select from that index and up to the last element in that dimension

  7. A colon followed by a number indicates to select all elements up to but not including that index

Work through the examples below and make sure you understand why each specific subset of the original array was returned.

t23 = torch.randint(low=0, high=256, size = (8,8))
print(t23)
tensor([[243,  53, 219,  13, 134, 221, 253, 144],
        [139, 246, 162, 184,  12, 156,  41,  28],
        [231,  16, 135,  42, 191,   3,  36, 198],
        [122,   2, 232, 167, 245,  52, 146, 120],
        [ 28,  92,  12,  16,  58,  67, 199, 111],
        [253, 163, 189,  77, 185, 114,  65,  67],
        [183, 136,   6, 139,   5, 201, 157, 181],
        [112,  20,  88, 252, 177, 157,  35, 149]])
t24 = t23[:,0:4]
print(t24)
tensor([[243,  53, 219,  13],
        [139, 246, 162, 184],
        [231,  16, 135,  42],
        [122,   2, 232, 167],
        [ 28,  92,  12,  16],
        [253, 163, 189,  77],
        [183, 136,   6, 139],
        [112,  20,  88, 252]])
t25 = t23[0:4,:]
print(t25)
tensor([[243,  53, 219,  13, 134, 221, 253, 144],
        [139, 246, 162, 184,  12, 156,  41,  28],
        [231,  16, 135,  42, 191,   3,  36, 198],
        [122,   2, 232, 167, 245,  52, 146, 120]])
t26 = t23[0:4,0:4]
print(t26)
tensor([[243,  53, 219,  13],
        [139, 246, 162, 184],
        [231,  16, 135,  42],
        [122,   2, 232, 167]])
t27 = t23[:,0:4]
print(t27)
tensor([[243,  53, 219,  13],
        [139, 246, 162, 184],
        [231,  16, 135,  42],
        [122,   2, 232, 167],
        [ 28,  92,  12,  16],
        [253, 163, 189,  77],
        [183, 136,   6, 139],
        [112,  20,  88, 252]])
t28 = t23[:,3:]
print(t28)
tensor([[ 13, 134, 221, 253, 144],
        [184,  12, 156,  41,  28],
        [ 42, 191,   3,  36, 198],
        [167, 245,  52, 146, 120],
        [ 16,  58,  67, 199, 111],
        [ 77, 185, 114,  65,  67],
        [139,   5, 201, 157, 181],
        [252, 177, 157,  35, 149]])

Squeeze and Unsqueeze

The squeeze() and unsqueeze() methods are also useful for reshaping tensors; squeeze() is used to remove dimensions with a length of 1. For example, a tensor with shape (1, 3, 256, 256) could be converted to (3, 256, 256). Note that both of these shapes have the same number of elements.

In contrast, unsqueeze() can add a dimension with a length of 1 at a defined position. In the last two examples, I have added a new dimension with a length of 1 at the end and then at the beginning of the original tensor by specifying the dim argument.

In later modules you will see some uses of these methods, such as changing the representation of a grayscale image between a 2D and 3D tensor and adding or removing the batch dimension.

t29 = torch.ones((1,4,4))
print(t29.shape)
torch.Size([1, 4, 4])
t30 = t29.squeeze()
print(t30.shape)
torch.Size([4, 4])
t31 = t30.unsqueeze(dim=2)
print(t31.shape)
torch.Size([4, 4, 1])
t32 = t30.unsqueeze(dim=0)
print(t32.shape)
torch.Size([1, 4, 4])
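
Note that squeeze() with no arguments removes every dimension with a length of 1; passing the dim argument removes only the specified dimension. A small sketch (t_sq is a name introduced just for this illustration):

t_sq = torch.ones((1, 4, 1, 4))
print(t_sq.squeeze().shape)
torch.Size([4, 4])
print(t_sq.squeeze(dim=0).shape)
torch.Size([4, 1, 4])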

Reordering Dimensions

The torch.permute() function can be used to reorder the dimensions of a tensor. One common application of this method is to convert a tensor representing a multiband image between channels-first and channels-last configurations: (Channels, Height, Width) vs. (Height, Width, Channels). This is required since not all Python packages represent images as arrays or tensors using the same ordering.

t33 = torch.randint(low=0, high=256, size = (3, 8, 8))
print(t33)
tensor([[[103, 140,  16,  69, 109,  75, 107, 167],
         [ 65, 150, 216, 145, 187, 223,  99,  59],
         [ 50,  99, 133, 188, 153, 107,  39, 119],
         [141, 137, 128, 175, 184,  66,  19, 146],
         [ 77,  41,  60,  57, 120,  88, 245, 238],
         [239,  87,  73, 140, 121, 185,  87,  38],
         [  8, 249, 103, 123, 153, 179,  99, 163],
         [243,  44, 153, 223, 179,  70,  41, 210]],

        [[ 97, 118, 222,  39,  96, 128, 207, 118],
         [152,  34,   5,  80, 231,   0, 195,  53],
         [147,  27, 114, 177, 176,  84, 162,  86],
         [156, 113,  25, 169, 128, 119,   6,  46],
         [158, 245,  81, 223, 165, 142, 180, 181],
         [ 94, 143, 236, 229, 184, 160, 219,  55],
         [ 46, 241,  35,  33,   6,  22, 153,  76],
         [242, 247,  74, 185, 220, 200, 251, 100]],

        [[231, 129,  35,  58, 180,   2,  93,  36],
         [178,   6,  46, 208, 242, 152, 194,  43],
         [172,  44,  95,   9, 203, 171, 108, 242],
         [182, 164, 221, 107,  20,  13, 209, 181],
         [ 56, 106,  94, 165, 132, 160,  24,  27],
         [219, 195, 173,  76, 156,   2,  63,  37],
         [189, 174, 203, 152,  67,  65, 229,  34],
         [ 76,  65, 217, 245,  14,  82,   5,  58]]])
t34 = torch.permute(t33, (1,2,0))
t34
tensor([[[103,  97, 231],
         [140, 118, 129],
         [ 16, 222,  35],
         [ 69,  39,  58],
         [109,  96, 180],
         [ 75, 128,   2],
         [107, 207,  93],
         [167, 118,  36]],

        [[ 65, 152, 178],
         [150,  34,   6],
         [216,   5,  46],
         [145,  80, 208],
         [187, 231, 242],
         [223,   0, 152],
         [ 99, 195, 194],
         [ 59,  53,  43]],

        [[ 50, 147, 172],
         [ 99,  27,  44],
         [133, 114,  95],
         [188, 177,   9],
         [153, 176, 203],
         [107,  84, 171],
         [ 39, 162, 108],
         [119,  86, 242]],

        [[141, 156, 182],
         [137, 113, 164],
         [128,  25, 221],
         [175, 169, 107],
         [184, 128,  20],
         [ 66, 119,  13],
         [ 19,   6, 209],
         [146,  46, 181]],

        [[ 77, 158,  56],
         [ 41, 245, 106],
         [ 60,  81,  94],
         [ 57, 223, 165],
         [120, 165, 132],
         [ 88, 142, 160],
         [245, 180,  24],
         [238, 181,  27]],

        [[239,  94, 219],
         [ 87, 143, 195],
         [ 73, 236, 173],
         [140, 229,  76],
         [121, 184, 156],
         [185, 160,   2],
         [ 87, 219,  63],
         [ 38,  55,  37]],

        [[  8,  46, 189],
         [249, 241, 174],
         [103,  35, 203],
         [123,  33, 152],
         [153,   6,  67],
         [179,  22,  65],
         [ 99, 153, 229],
         [163,  76,  34]],

        [[243, 242,  76],
         [ 44, 247,  65],
         [153,  74, 217],
         [223, 185, 245],
         [179, 220,  14],
         [ 70, 200,  82],
         [ 41, 251,   5],
         [210, 100,  58]]])
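
A quick check of the shapes confirms that the channels dimension moved from the first to the last position:

print(t33.shape)
torch.Size([3, 8, 8])
print(t34.shape)
torch.Size([8, 8, 3])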

Operations on Tensors

A variety of mathematical operations can be performed on tensors. A series of examples are provided below using the same input data. The first few examples demonstrate adding a constant value to each element, clamping values between defined minimum and maximum values, calculating the cosine, and calculating the square root. If you need to perform a specific mathematical operation, please reference the PyTorch documentation for the required syntax.

t35 = torch.randint(low=0, high=256, size=(3,6,6))
print(t35)
tensor([[[ 45, 178, 167, 188, 104, 184],
         [ 97, 174, 230,  58, 143,  74],
         [112,  69, 219, 235, 185, 226],
         [ 17,  70,   7, 108,  36, 100],
         [224, 153, 167, 187, 116, 164],
         [241, 233,  15, 113, 109, 171]],

        [[ 61, 206, 180, 102, 153, 208],
         [110, 209, 166, 216,   2, 123],
         [232, 192, 123,  68,  96,  96],
         [173,  38, 109, 154, 167, 134],
         [ 95, 167,  95, 228,  20, 100],
         [245, 204,  78, 204,  79, 239]],

        [[  2, 193,  77, 124,  24, 122],
         [197, 107,  46,  16, 199,  47],
         [ 76, 185, 180,  66,  66, 223],
         [ 29,  22,  52, 140, 131,  37],
         [145, 195,  30, 183, 214,  86],
         [ 51, 253, 131, 125, 139, 178]]])
t36 = torch.add(t35, 4)
print(t36)
tensor([[[ 49, 182, 171, 192, 108, 188],
         [101, 178, 234,  62, 147,  78],
         [116,  73, 223, 239, 189, 230],
         [ 21,  74,  11, 112,  40, 104],
         [228, 157, 171, 191, 120, 168],
         [245, 237,  19, 117, 113, 175]],

        [[ 65, 210, 184, 106, 157, 212],
         [114, 213, 170, 220,   6, 127],
         [236, 196, 127,  72, 100, 100],
         [177,  42, 113, 158, 171, 138],
         [ 99, 171,  99, 232,  24, 104],
         [249, 208,  82, 208,  83, 243]],

        [[  6, 197,  81, 128,  28, 126],
         [201, 111,  50,  20, 203,  51],
         [ 80, 189, 184,  70,  70, 227],
         [ 33,  26,  56, 144, 135,  41],
         [149, 199,  34, 187, 218,  90],
         [ 55, 257, 135, 129, 143, 182]]])
t37 = torch.clamp(t35, min=100,max=200)
print(t37)
tensor([[[100, 178, 167, 188, 104, 184],
         [100, 174, 200, 100, 143, 100],
         [112, 100, 200, 200, 185, 200],
         [100, 100, 100, 108, 100, 100],
         [200, 153, 167, 187, 116, 164],
         [200, 200, 100, 113, 109, 171]],

        [[100, 200, 180, 102, 153, 200],
         [110, 200, 166, 200, 100, 123],
         [200, 192, 123, 100, 100, 100],
         [173, 100, 109, 154, 167, 134],
         [100, 167, 100, 200, 100, 100],
         [200, 200, 100, 200, 100, 200]],

        [[100, 193, 100, 124, 100, 122],
         [197, 107, 100, 100, 199, 100],
         [100, 185, 180, 100, 100, 200],
         [100, 100, 100, 140, 131, 100],
         [145, 195, 100, 183, 200, 100],
         [100, 200, 131, 125, 139, 178]]])
t38 = torch.cos(t35)
print(t38)
tensor([[[ 0.5253, -0.4794, -0.8797,  0.8797, -0.9469, -0.2151],
         [-0.9251, -0.3508, -0.7877,  0.1192,  0.0575,  0.1717],
         [ 0.4560,  0.9934,  0.6126, -0.8142, -0.9380,  0.9811],
         [-0.2752,  0.6333,  0.7539,  0.3755, -0.1280,  0.8623],
         [-0.5842, -0.5914, -0.8797,  0.0752, -0.9716,  0.8038],
         [-0.6195,  0.8668, -0.7597,  0.9953, -0.5770,  0.2151]],

        [[-0.2581,  0.2238, -0.5985,  0.1016, -0.5914,  0.7931],
         [-0.9990, -0.0840, -0.8755, -0.7180, -0.4161, -0.8880],
         [ 0.8880, -0.9349, -0.8880,  0.4401, -0.1804, -0.1804],
         [-0.9775,  0.9551, -0.5770, -0.9981, -0.8797, -0.4638],
         [ 0.7302, -0.8797,  0.7302, -0.2324,  0.4081,  0.8623],
         [ 0.9990, -0.9794, -0.8578, -0.9794, -0.8960,  0.9716]],

        [[-0.4161, -0.2065, -0.0310, -0.0928,  0.4242, -0.8668],
         [-0.6056,  0.9828, -0.4322, -0.9577, -0.4716, -0.9923],
         [ 0.8243, -0.9380, -0.5985, -0.9996, -0.9996, -0.9986],
         [-0.7481, -1.0000, -0.1630, -0.1978,  0.5842,  0.7654],
         [ 0.8839,  0.9756,  0.1543,  0.7055,  0.9317, -0.3837],
         [ 0.7422, -0.1016,  0.5842,  0.7877,  0.7180, -0.4794]]])
t39 = torch.sqrt(t35)
print(t39)
tensor([[[ 6.7082, 13.3417, 12.9228, 13.7113, 10.1980, 13.5647],
         [ 9.8489, 13.1909, 15.1658,  7.6158, 11.9583,  8.6023],
         [10.5830,  8.3066, 14.7986, 15.3297, 13.6015, 15.0333],
         [ 4.1231,  8.3666,  2.6458, 10.3923,  6.0000, 10.0000],
         [14.9666, 12.3693, 12.9228, 13.6748, 10.7703, 12.8062],
         [15.5242, 15.2643,  3.8730, 10.6301, 10.4403, 13.0767]],

        [[ 7.8102, 14.3527, 13.4164, 10.0995, 12.3693, 14.4222],
         [10.4881, 14.4568, 12.8841, 14.6969,  1.4142, 11.0905],
         [15.2315, 13.8564, 11.0905,  8.2462,  9.7980,  9.7980],
         [13.1529,  6.1644, 10.4403, 12.4097, 12.9228, 11.5758],
         [ 9.7468, 12.9228,  9.7468, 15.0997,  4.4721, 10.0000],
         [15.6525, 14.2829,  8.8318, 14.2829,  8.8882, 15.4596]],

        [[ 1.4142, 13.8924,  8.7750, 11.1355,  4.8990, 11.0454],
         [14.0357, 10.3441,  6.7823,  4.0000, 14.1067,  6.8557],
         [ 8.7178, 13.6015, 13.4164,  8.1240,  8.1240, 14.9332],
         [ 5.3852,  4.6904,  7.2111, 11.8322, 11.4455,  6.0828],
         [12.0416, 13.9642,  5.4772, 13.5277, 14.6287,  9.2736],
         [ 7.1414, 15.9060, 11.4455, 11.1803, 11.7898, 13.3417]]])

You can perform element-wise addition, subtraction, multiplication, and division between two tensors as long as they have the same shape. It is also possible to perform these operations on tensors of different shapes as long as they adhere to broadcasting rules; however, we will not explore broadcasting rules here. In the example below, I add the cosine values (t38) back to the original tensor (t35).

t41 = t38 + t35
print(t41)
tensor([[[ 45.5253, 177.5206, 166.1203, 188.8797, 103.0531, 183.7849],
         [ 96.0749, 173.6492, 229.2123,  58.1192, 143.0575,  74.1717],
         [112.4560,  69.9934, 219.6126, 234.1858, 184.0620, 226.9811],
         [ 16.7248,  70.6333,   7.7539, 108.3755,  35.8720, 100.8623],
         [223.4158, 152.4086, 166.1203, 187.0752, 115.0284, 164.8038],
         [240.3805, 233.8667,  14.2403, 113.9953, 108.4230, 171.2151]],

        [[ 60.7419, 206.2238, 179.4015, 102.1016, 152.4086, 208.7931],
         [109.0010, 208.9160, 165.1245, 215.2820,   1.5839, 122.1120],
         [232.8880, 191.0651, 122.1120,  68.4401,  95.8196,  95.8196],
         [172.0225,  38.9551, 108.4230, 153.0019, 166.1203, 133.5362],
         [ 95.7302, 166.1203,  95.7302, 227.7676,  20.4081, 100.8623],
         [245.9990, 203.0206,  77.1422, 203.0206,  78.1040, 239.9716]],

        [[  1.5839, 192.7935,  76.9690, 123.9072,  24.4242, 121.1332],
         [196.3945, 107.9828,  45.5678,  15.0423, 198.5284,  46.0077],
         [ 76.8243, 184.0620, 179.4015,  65.0004,  65.0004, 222.0014],
         [ 28.2519,  21.0000,  51.8370, 139.8022, 131.5842,  37.7654],
         [145.8839, 195.9756,  30.1543, 183.7055, 214.9317,  85.6163],
         [ 51.7422, 252.8984, 131.5842, 125.7877, 139.7180, 177.5206]]])

You can also add, subtract, multiply, or divide by a constant value on an element-wise basis, as demonstrated in the code block below.

t42 = t35/2
print(t42)
tensor([[[ 22.5000,  89.0000,  83.5000,  94.0000,  52.0000,  92.0000],
         [ 48.5000,  87.0000, 115.0000,  29.0000,  71.5000,  37.0000],
         [ 56.0000,  34.5000, 109.5000, 117.5000,  92.5000, 113.0000],
         [  8.5000,  35.0000,   3.5000,  54.0000,  18.0000,  50.0000],
         [112.0000,  76.5000,  83.5000,  93.5000,  58.0000,  82.0000],
         [120.5000, 116.5000,   7.5000,  56.5000,  54.5000,  85.5000]],

        [[ 30.5000, 103.0000,  90.0000,  51.0000,  76.5000, 104.0000],
         [ 55.0000, 104.5000,  83.0000, 108.0000,   1.0000,  61.5000],
         [116.0000,  96.0000,  61.5000,  34.0000,  48.0000,  48.0000],
         [ 86.5000,  19.0000,  54.5000,  77.0000,  83.5000,  67.0000],
         [ 47.5000,  83.5000,  47.5000, 114.0000,  10.0000,  50.0000],
         [122.5000, 102.0000,  39.0000, 102.0000,  39.5000, 119.5000]],

        [[  1.0000,  96.5000,  38.5000,  62.0000,  12.0000,  61.0000],
         [ 98.5000,  53.5000,  23.0000,   8.0000,  99.5000,  23.5000],
         [ 38.0000,  92.5000,  90.0000,  33.0000,  33.0000, 111.5000],
         [ 14.5000,  11.0000,  26.0000,  70.0000,  65.5000,  18.5000],
         [ 72.5000,  97.5000,  15.0000,  91.5000, 107.0000,  43.0000],
         [ 25.5000, 126.5000,  65.5000,  62.5000,  69.5000,  89.0000]]])

You can also use comparison operators to perform logical tests. The result will be a Boolean (True/False) tensor with the torch.bool data type.

t43 = t35 > 180
print(t43)
tensor([[[False, False, False,  True, False,  True],
         [False, False,  True, False, False, False],
         [False, False,  True,  True,  True,  True],
         [False, False, False, False, False, False],
         [ True, False, False,  True, False, False],
         [ True,  True, False, False, False, False]],

        [[False,  True, False, False, False,  True],
         [False,  True, False,  True, False, False],
         [ True,  True, False, False, False, False],
         [False, False, False, False, False, False],
         [False, False, False,  True, False, False],
         [ True,  True, False,  True, False,  True]],

        [[False,  True, False, False, False, False],
         [ True, False, False, False,  True, False],
         [False,  True, False, False, False,  True],
         [False, False, False, False, False, False],
         [False,  True, False,  True,  True, False],
         [False,  True, False, False, False, False]]])
print(t43.dtype)
torch.bool
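
One common use of a Boolean tensor is as a mask to pull out the elements that satisfy the test. A small sketch with made-up values (t_demo is a name introduced just for this illustration):

t_demo = torch.tensor([[1, 200, 50], [181, 7, 255]])
print(t_demo[t_demo > 180])
tensor([200, 181, 255])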

Similar to numpy, it is possible to perform linear algebra operations on tensors. This is very important for deep learning, machine learning, and statistical analysis, since they rely on linear algebra. Below, I have provided an example of obtaining a dot product, which consists of element-wise multiplication followed by summation to obtain a single value. Note that torch.dot() operates on 1D tensors, which is why the 3D tensor is flattened first.

As an example, the dot product is used to calculate the weighted sum at each node by performing an element-wise multiplication of each input value and its associated weight and then adding the results (see the sketch after the example below).

t44 = torch.dot(t35.flatten(), t35.flatten())
t44
tensor(2342724)
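
As a concrete sketch of the node calculation described above (the input values, weights, and bias here are made up for illustration):

x = torch.tensor([0.5, -1.0, 2.0])
w = torch.tensor([0.1, 0.2, 0.3])
b = 0.05
z = torch.dot(x, w) + b
print(z)
tensor(0.5000)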

Reducing Tensors

In this last section, I will provide some examples of how tensors can be reduced. Generally, this consists of calculating a summary statistic across a defined dimension.

After creating a random tensor with shape (5, 5), I apply torch.argmax(). This function returns the index of the highest value along a specified dimension. With dim=0, I am finding the index of the row with the highest value within each column. For example, the largest value in the first column is 0.9649, which is in the second row (index 1).

In the next code block, I have changed the dim argument to 1 so that I am now finding which column has the highest value within each row.

You will see this applied when you want to find which class has the highest predicted logit or probability, since argmax() offers a means to find the index, which corresponds to a class, associated with the highest logit or probability. This allows for obtaining a “hard” classification from a set of class probabilities (see the small sketch after the examples below).

t45 = torch.rand(size=(5,5))
print(t45)
tensor([[0.4763, 0.9678, 0.9774, 0.7022, 0.7488],
        [0.9649, 0.3798, 0.9918, 0.7301, 0.0228],
        [0.0141, 0.6280, 0.3133, 0.6962, 0.2413],
        [0.4008, 0.8181, 0.8981, 0.2644, 0.5129],
        [0.8336, 0.4729, 0.4717, 0.6492, 0.6132]])
t46 = torch.argmax(t45, dim=0)
print(t46)
tensor([1, 0, 1, 1, 0])
t47 = torch.argmax(t45, dim=1)
print(t47)
tensor([2, 2, 3, 2, 0])
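
As referenced above, here is a small sketch of recovering a hard class prediction from made-up class probabilities for two samples (probs is a name introduced just for this illustration):

probs = torch.tensor([[0.1, 0.7, 0.2],
                      [0.6, 0.3, 0.1]])
print(torch.argmax(probs, dim=1))
tensor([1, 0])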

In contrast to argmax(), amax() returns the largest value along a specified dimension, as opposed to its associated index. Functions are also available that return a single statistic for the entire tensor, such as max() and mean().

t48 = torch.amax(t45, dim=0)
print(t48)
tensor([0.9649, 0.9678, 0.9918, 0.7301, 0.7488])
t49 = torch.max(t45)
print(t49)
tensor(0.9918)
t50 = torch.mean(t45)
print(t50)
tensor(0.5916)

Concluding Remarks

Now that you have a better understanding of tensors, which serve as a means to represent a wide variety of data types as inputs to neural networks, we can continue our discussion of neural networks. In the next section, we will implement linear regression using PyTorch as a means to explain stochastic gradient descent and how to train a neural network to create a model.