Tensors in PyTorch: Part 2


Overview of Tensors in PyTorch

In our previous blog, "Understanding Tensors in PyTorch - Part 1", we explored the basics of tensors, including their importance, the various ways to create them, and the different data types they support. In this blog, we delve deeper into tensor operations. We will cover essential mathematical operations, along with the concepts of vectorization and broadcasting, which enhance the computational power of tensors. We will also demonstrate how to seamlessly switch between CPU and GPU in PyTorch, further optimizing your tensor computations. Let's continue our journey into the world of PyTorch tensors!

Maths and Logic with PyTorch Tensors

Now that you know how to create tensors, let's dive into what you can actually do with them. The fun starts with basic arithmetic operations and how tensors interact with simple scalars.

Vectorization

Let's begin with some fundamental operations:

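The original code screenshot isn't reproduced here, but a minimal sketch of these scalar operations (values chosen purely for illustration) might look like this:

```python
import torch

ones = torch.ones(2, 2)                    # a 2x2 tensor of 1s
twos = ones * 2                            # every element multiplied by 2
threes = (torch.ones(2, 2) * 7 - 1) / 2    # operations chain with normal precedence
fours = twos ** 2                          # element-wise exponentiation

print(twos)    # tensor([[2., 2.], [2., 2.]])
print(threes)  # tensor([[3., 3.], [3., 3.]])
print(fours)   # tensor([[4., 4.], [4., 4.]])
```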

As shown, arithmetic operations like addition, subtraction, multiplication, division, and exponentiation apply element-wise to the tensor. The result of each operation is a new tensor, which means you can chain operations together, following the usual operator precedence rules.

But what about operations between two tensors? They behave just as you would expect:

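A sketch of what such tensor-to-tensor operations could look like (again, illustrative values):

```python
import torch

a = torch.ones(2, 2) * 2                  # a 2x2 tensor of 2s
b = torch.tensor([[1., 2.], [3., 4.]])

print(a + b)   # tensor([[3., 4.], [5., 6.]])
print(a * b)   # tensor([[2., 4.], [6., 8.]])
print(a ** b)  # tensor([[ 2.,  4.], [ 8., 16.]])
```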

Notice that all tensors in these examples share the same shape. But what happens if we try to perform operations on tensors of different shapes?

Here's a hint: it doesn't go well. The following example throws a runtime error on purpose:

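A sketch of the kind of code that fails (shapes chosen for illustration):

```python
import torch

a = torch.rand(2, 3)
b = torch.rand(3, 2)   # same number of elements, different shape

try:
    c = a * b          # raises a RuntimeError: the shapes are incompatible
except RuntimeError as e:
    print(e)
```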

In general, you cannot perform binary operations on tensors with different shapes, even if they contain the same number of elements. This concept is crucial for understanding tensor operations and avoiding common pitfalls in your PyTorch projects.

Note: For vectorized element-wise operations, the two tensors must have the same shape.

Tensor Broadcasting

The exception to the same-shapes rule is tensor broadcasting. Here’s an example:

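A sketch of a 2x4 tensor multiplied by a 1x4 tensor, the case discussed below (values are illustrative):

```python
import torch

rand = torch.rand(2, 4)                    # shape (2, 4)
doubled = rand * (torch.ones(1, 4) * 2)    # shape (1, 4) is broadcast over both rows

print(rand)
print(doubled)   # each row of `rand` multiplied element-wise by the row of 2s
```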

What is the trick here? How did we multiply a 2x4 tensor by a 1x4 tensor?

Broadcasting allows you to perform operations between tensors that have compatible shapes. In the example above, the one-row, four-column tensor is multiplied by both rows of the two-row, four-column tensor.

Broadcasting is essential in deep learning. A common case is multiplying a tensor of learned weights by a batch of input tensors: the operation is applied to each instance in the batch separately and returns a tensor of the same shape as the batch.

The rules for broadcasting are:

1. Each tensor must have at least one dimension—no empty tensors.

2. When comparing dimension sizes of the two tensors from last to first:

  • Each dimension must be equal, or

  • One of the dimensions must be of size 1, or

  • The dimension does not exist in one of the tensors


Tensors of identical shape, of course, are trivially “broadcastable,” as seen earlier.
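To make these rules concrete, here is a small sketch (shapes chosen for illustration) of combinations that do and do not broadcast:

```python
import torch

a = torch.ones(4, 3, 2)

b = a * torch.rand(   3, 2)    # dims 3 and 2 match; the missing leading dim is allowed
c = a * torch.rand(   3, 1)    # the size-1 dim is broadcast over the last dim; 3 matches
d = a * torch.rand(1, 1, 2)    # size-1 dims are broadcast; 2 matches

# e = a * torch.rand(4, 3)     # would fail: comparing from last to first, 3 != 2
print(b.shape, c.shape, d.shape)   # all torch.Size([4, 3, 2])
```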

More Maths with Tensors

PyTorch provides over three hundred operations that can be performed on tensors, including common mathematical functions, comparisons, reductions such as max and mean, and linear algebra routines like matrix multiplication.

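The full tables from the original images aren't reproduced here, but a small sampler of the kinds of functions available (the specific functions below are illustrative) might look like this:

```python
import torch

a = torch.rand(2, 4) * 2 - 1         # random values roughly in [-1, 1)

# common functions
print(torch.abs(a))                   # absolute value, element-wise
print(torch.clamp(a, -0.5, 0.5))      # clip every value into a range

# reductions
print(torch.max(a))                   # single largest value
print(torch.mean(a))                  # mean of all elements
print(torch.std(a))                   # standard deviation

# comparisons and linear algebra
print(torch.eq(a, a))                 # element-wise equality, returns a bool tensor
print(torch.matmul(a, a.T))           # matrix multiplication, result shape (2, 2)
```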

Altering Tensors in Place

When you perform operations on tensors, they usually create new tensors. For example, if you have c = a * b (where a and b are tensors), c will be a new tensor stored in a different memory location from a and b.
However, sometimes you may want to modify a tensor directly, especially if you don't need to keep the original values. PyTorch provides in-place operations for this purpose. These functions have an underscore (_) at the end of their names.

Here's an example:

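A minimal sketch using the in-place add_() and mul_() methods (values are illustrative):

```python
import torch

a = torch.ones(2, 2)
b = torch.rand(2, 2)

c = a + b        # out-of-place: `c` is a brand-new tensor, `a` is unchanged
a.add_(b)        # in-place: `a` itself is overwritten with the sum
a.mul_(2)        # in-place multiplication by a scalar
print(a)
```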

These in-place arithmetic functions are methods of the torch.Tensor object, not the torch module; for example, a.add_(b) modifies a directly. Another way to place the result of a computation in an existing tensor is the out argument, which many PyTorch functions and methods, including tensor creation functions, accept.

Here's an example:

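A sketch of the out argument, here using torch.matmul (the choice of function is illustrative):

```python
import torch

a = torch.rand(2, 2)
b = torch.rand(2, 2)
c = torch.zeros(2, 2)

old_id = id(c)
torch.matmul(a, b, out=c)    # the result is written into the existing tensor `c`
print(id(c) == old_id)       # True: `c` was reused, not reallocated
```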

Using in-place operations and the out argument helps manage memory more efficiently by reusing existing tensors. This can be especially important in deep learning applications where memory usage is critical.

Copying Tensors

When you assign a tensor to a variable in Python, you are not creating a new copy of the tensor; you are just creating a new reference to the same tensor. For example:

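A sketch of how plain assignment only copies the reference (the value 561 is arbitrary):

```python
import torch

a = torch.ones(2, 2)
b = a                  # `b` is just another name for the same tensor

a[0][1] = 561          # change `a`...
print(b)               # ...and `b` shows the change too, since they are one object
```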

In this case, modifying a also changes b because both variables point to the same tensor. But what if you need an actual copy? That's where the clone() method comes in:

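A sketch of the same scenario using clone():

```python
import torch

a = torch.ones(2, 2)
b = a.clone()          # `b` is a separate tensor with its own copy of the data

a[0][1] = 561
print(b)               # still all ones; changing `a` does not affect `b`
```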

Using clone(), b is a separate tensor with the same data as a, but changes to a don't affect b.

Important Note on clone() and Autograd

If your source tensor has autograd enabled, the cloned tensor will also have autograd enabled. This can be useful if both the original tensor and its clone participate in a model's forward pass and both need to track gradients for learning. However, there are situations where you might not want the clone to track gradients, for example to avoid unnecessary autograd overhead. For this, you can use the detach() method.

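A minimal sketch of detaching before cloning (requires_grad=True stands in for a tensor that autograd is already tracking):

```python
import torch

a = torch.rand(2, 2, requires_grad=True)   # autograd is tracking operations on `a`

b = a.clone()             # `b` inherits the autograd tracking
c = a.detach().clone()    # detach first: `c` is a plain copy with no gradient tracking

print(b.requires_grad)    # True
print(c.requires_grad)    # False
print(a.requires_grad)    # True -- detach() did not change `a` itself
```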

The detach() method effectively removes the tensor from its computation history, allowing you to perform operations without tracking gradients, which can improve performance in certain cases.

Using GPU with PyTorch

One of the key benefits of PyTorch is its ability to use Nvidia GPUs for faster computation. Nvidia GPUs use a technology called CUDA (Compute Unified Device Architecture) to handle many calculations at once, making them much faster than CPUs for certain tasks.

Here’s how you can take advantage of this:

1. Check for GPU Availability

Before you can use a GPU, you need to check whether your system has one available. You can do this with torch.cuda.is_available():

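A minimal sketch of that check:

```python
import torch

if torch.cuda.is_available():
    print('We have a GPU!')
else:
    print('Sorry, CPU only.')
```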

2. Creating Tensors on the GPU

By default, PyTorch creates tensors on the CPU. To create a tensor directly on the GPU, you can specify the device argument:

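A sketch of creating a tensor directly on the GPU when one is present:

```python
import torch

if torch.cuda.is_available():
    gpu_rand = torch.rand(2, 2, device='cuda')   # created directly on the GPU
    print(gpu_rand)
else:
    print('Sorry, CPU only.')
```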

3. Handling Multiple GPUs

If you have more than one GPU, you can select which one to use by specifying its index. Use torch.cuda.device_count() to check how many GPUs are available:

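A sketch of selecting a specific GPU by index (the second device is commented out because it may not exist on your machine):

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.device_count())           # number of visible GPUs
    x = torch.rand(2, 2, device='cuda:0')      # explicitly on the first GPU
    # y = torch.rand(2, 2, device='cuda:1')    # second GPU, if your system has one
```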

4. Using a Device Handle

Instead of using hardcoded strings, it is better to use a device handle. This way, your code can automatically use the GPU if available, or fall back to the CPU if not:

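A sketch of the device-handle pattern:

```python
import torch

my_device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print('Device:', my_device)

x = torch.rand(2, 2, device=my_device)   # lands on the GPU when available, else the CPU
```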

5. Moving Existing Tensors to a Different Device

If you already have a tensor and want to move it to a different device (e.g., from CPU to GPU), you can use the to() method:

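A sketch of moving a tensor with to():

```python
import torch

y = torch.rand(2, 2)         # created on the CPU by default
if torch.cuda.is_available():
    y = y.to('cuda')         # returns a copy of the tensor on the GPU
    y = y.to('cpu')          # ...and a copy back on the CPU
```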

6. Ensure Tensors Are on the Same Device

When performing operations with multiple tensors, they must all be on the same device. Otherwise, you’ll get an error:

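A sketch of the mismatch that triggers the error:

```python
import torch

if torch.cuda.is_available():
    x = torch.rand(2, 2)                  # on the CPU
    y = torch.rand(2, 2, device='cuda')   # on the GPU
    try:
        z = x + y                         # RuntimeError: tensors are on different devices
    except RuntimeError as e:
        print(e)
```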

Changing Tensor Shape in PyTorch

Sometimes you need to change the shape of a tensor to fit your needs. Here are some common scenarios and how to handle them:

Adding and Removing Dimensions

Adding Dimensions:

PyTorch models often expect inputs in batches. For instance, if your model works with images of shape (3, 226, 226) (3 color channels, 226x226 pixels), it expects the input shape to be (N, 3, 226, 226), where N is the number of images in the batch. To create a batch with just one image, you need to add an extra dimension:

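A sketch of adding the batch dimension with unsqueeze():

```python
import torch

a = torch.rand(3, 226, 226)   # a single image: channels x height x width
b = a.unsqueeze(0)            # insert a new dimension of size 1 at position 0

print(a.shape)   # torch.Size([3, 226, 226])
print(b.shape)   # torch.Size([1, 3, 226, 226]) -- a batch containing one image
```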

Removing Dimensions:

If you have a tensor with unnecessary dimensions of size 1, you can remove them with the squeeze() method:

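A sketch of removing a size-1 dimension with squeeze() (shapes are illustrative):

```python
import torch

a = torch.rand(1, 20)
b = a.squeeze(0)        # remove the size-1 dimension at position 0

print(a.shape)   # torch.Size([1, 20])
print(b.shape)   # torch.Size([20])
```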

If you call squeeze() on a dimension that is not of size 1, it won't change the shape:

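A quick sketch of that behavior:

```python
import torch

c = torch.rand(2, 2)
d = c.squeeze(0)        # dimension 0 has size 2, so nothing is removed

print(d.shape)   # torch.Size([2, 2]) -- unchanged
```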

Reshaping Tensors

To change a tensor's shape while keeping the same number of elements, use reshape():

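A sketch of reshape(), including the view-sharing behavior described below (the shapes are chosen for illustration):

```python
import torch

output3d = torch.rand(6, 20, 20)
input1d = output3d.reshape(6 * 20 * 20)   # flatten to a 1-D tensor of 2400 elements
print(input1d.shape)                      # torch.Size([2400])

# reshape() returns a view when it can, so the two tensors share memory:
output3d[0, 0, 0] = 99.0
print(input1d[0])                         # also 99.0
# use .reshape(...).clone() if you need an independent copy
```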

When using reshape(), PyTorch tries to return a view of the original tensor if possible. This means changes to the original tensor will affect the reshaped tensor. To avoid this, you can use clone() to make a copy.

Conclusion

Tensors are the fundamental building blocks in PyTorch, offering a versatile way to handle and manipulate data. From understanding basic operations to managing tensor shapes, mastering tensors is crucial for efficient and effective use of PyTorch in your deep learning projects.

Key Takeaways:

  1. Creating Tensors: PyTorch provides flexible methods to create tensors with different dimensions and data types, allowing you to start building your models right away.
  2. Manipulating Shapes: Techniques like unsqueeze() and squeeze() help you adjust tensor dimensions to meet the requirements of your models or specific computations. Using reshape() enables you to transform tensors while preserving their data.
  3. Tensor Operations: PyTorch supports a wide range of operations, from simple arithmetic to complex transformations. Knowing how to perform these operations, both in-place and out-of-place, is essential for efficient computation and memory management.
  4. GPU Acceleration: PyTorch seamlessly integrates with CUDA-enabled GPUs to accelerate tensor computations. Ensuring your tensors are on the appropriate device helps leverage the full power of your hardware.

By understanding and utilizing these tensor operations, you can optimize your workflow, debug more effectively, and enhance the performance of your deep learning models. Continue to explore and experiment with tensors in PyTorch to fully harness their capabilities and elevate your machine learning projects.

Stay tuned for our next installment, where we will introduce you to the basics of building a Neural Network from scratch in our upcoming topic, “Neural Network from Scratch"!


Written By

Impetus Ai Solutions

Impetus is a pioneer in AI and ML, specializing in developing cutting-edge solutions that drive innovation and efficiency. Our expertise extends to product engineering, warranty management, and building robust cloud infrastructures. We leverage advanced AI and ML techniques to provide state-of-the-art technological and IT-related services, ensuring our clients stay ahead in the digital era.
