

Slice Syntax

  • [n] - Single row n
  • [start:stop] - Rows from start to stop-1
  • [::step] - Every step-th row
  • [:, n] - Single column n
  • [rows, cols] - Combine row and column slices

Slice Notation

\text{tensor}[\text{rows}, \text{cols}]

Shape Transformation

(m, n) \xrightarrow{\text{slice}} (m', n')

View vs Copy

T \xrightarrow{\text{view}} T' \quad \text{vs} \quad T' = \text{clone}(T)

What is Tensor Slicing?

Tensor slicing extracts a subset of elements using index notation, similar to NumPy arrays. It's fundamental for data manipulation in PyTorch.

Views vs Copies

Basic slicing operations return views that share memory with the original tensor. Modifications to a view affect the original. Use .clone() for independent copies.
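Because a basic slice shares storage with its source, an in-place write through the slice is visible in the original tensor. A quick sketch:

```python
import torch

x = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]])

row = x[0]           # view: shares memory with x
row[0] = 100         # in-place edit through the view
print(x[0, 0])       # original is modified: tensor(100)

safe = x[1].clone()  # independent copy
safe[0] = -1         # edit does not touch x
print(x[1, 0])       # original unchanged: tensor(5)
```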

Gradient Flow

Slice operations maintain the computational graph. Gradients propagate through slices during backpropagation, enabling gradient-based optimization on tensor subsets.
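For instance, a loss computed on a slice produces gradients only for the sliced elements; the rest of the gradient tensor stays zero:

```python
import torch

x = torch.ones(3, 4, requires_grad=True)
loss = (x[0:2, 1:3] * 2).sum()  # operate on a slice only
loss.backward()

# gradient is 2 inside the sliced region, 0 everywhere else
print(x.grad)
```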

Memory Contiguity

Slicing can create non-contiguous tensors, whose elements are no longer laid out sequentially in memory. Some operations (such as .view()) require contiguous memory; call .contiguous() to obtain a compact copy when needed.
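A column slice is a common example: it strides across rows rather than walking memory in order, so it reports as non-contiguous until copied:

```python
import torch

x = torch.arange(12).reshape(3, 4)

col = x[:, 1]                 # strided view over rows: non-contiguous
print(col.is_contiguous())    # False

col_c = col.contiguous()      # copies into compact, contiguous memory
print(col_c.is_contiguous())  # True

# .view() requires contiguous input; .reshape() handles either case
```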

import torch

x = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]])

# Row slicing
x[0]        # First row
x[1:3]      # Rows 1-2

# Column slicing
x[:, 0]     # First column
x[:, 1:3]   # Columns 1-2

# Combined slicing
x[0:2, 1:3] # Rows 0-1, cols 1-2

# Views vs copies
view = x[0]           # Shares memory with x
copy = x[0].clone()   # Independent copy