
PyTorch tensor operation basics


Tensors are PyTorch's basic data type, so it is worth a quick review of how to create and manipulate them.

1: Create tensors directly with torch.tensor

import torch

b = torch.tensor([[11, 22, 33, 66], [77, 44, 88, 99]])
print(b)
b = torch.tensor([11, 22, 33, 66, 77, 44, 88, 99])
print(b)

The output is:
tensor([[11, 22, 33, 66],
[77, 44, 88, 99]])
tensor([11, 22, 33, 66, 77, 44, 88, 99])

2: Convert from a NumPy array

import numpy as np

a = np.array([2, 5.5])
print(a)
b = torch.from_numpy(a)
print(b)

a = np.array([[2, 5.5], [1, 3]])
print(a)
b = torch.from_numpy(a)
print(b)

The output is:
[2. 5.5]
tensor([2.0000, 5.5000], dtype=torch.float64)
[[2. 5.5]
[1. 3. ]]
tensor([[2.0000, 5.5000],
[1.0000, 3.0000]], dtype=torch.float64)

3: Create from random distributions

a = torch.randn(3, 3)  # standard normal distribution N(0, 1)
print(a)
a = torch.rand(3, 3)  # uniform distribution on [0, 1)
print(a)
b = torch.rand_like(a)  # uniform tensor with the same shape as a
print(b)
c = torch.randint(1, 10, (3, 3))  # 3x3 integers in [1, 10)
print(c)

The output is:
tensor([[-0.1965, -1.2576, 0.9081],
[ 0.2345, 1.3528, 1.5679],
[-1.2760, -0.9808, 0.5914]])
tensor([[0.6441, 0.5676, 0.7393],
[0.0952, 0.9967, 0.8745],
[0.1752, 0.0379, 0.0169]])
tensor([[0.7620, 0.1802, 0.1150],
[0.5347, 0.6680, 0.5420],
[0.3381, 0.3097, 0.4028]])
tensor([[2, 8, 3],
[9, 1, 9],
[7, 4, 4]])

4: Fill with a single value

a = torch.full([2, 3], 7)  # 2 rows, 3 columns, all filled with 7
print(a)

The result is:
tensor([[7, 7, 7],
[7, 7, 7]])

5: Values in a range, with a step or a fixed number of points

a = torch.arange(0, 10, 2)  # from 0 (inclusive) to 10 (exclusive), step 2
print(a)
a = torch.linspace(0, 10, steps=5)  # 5 evenly spaced points from 0 to 10 (inclusive)
print(a)

The result is:
tensor([0, 2, 4, 6, 8])
tensor([ 0.0000, 2.5000, 5.0000, 7.5000, 10.0000])

6: All zeros, all ones, identity matrix

a = torch.ones(3, 3)  # 3x3 matrix of ones
print(a)
a = torch.zeros(3, 3)  # 3x3 matrix of zeros
print(a)
a = torch.eye(3, 3)  # 3x3 identity matrix
print(a)

The result is:
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])

7: Index and slice a tensor; ... stands for all remaining dimensions

a = torch.rand(5, 3, 28, 28)
print(a.shape)
b = a[:, :, 0:28:2, 0:28:2]  # slice dimensions 2 and 3 with step 2
print(b.shape)
c = a.index_select(2, torch.arange(8))  # select indices 0-7 along dimension 2
print(c.shape)
d = a[..., 0:28:2]  # slice only the last dimension with step 2
print(d.shape)
e = a[2:5, ..., 0:28:2]  # take indices 2:5 of dimension 0 and slice the last dimension with step 2
print(e.shape)

The result is:
torch.Size([5, 3, 28, 28])
torch.Size([5, 3, 14, 14])
torch.Size([5, 3, 8, 28])
torch.Size([5, 3, 28, 14])
torch.Size([3, 3, 28, 14])

8: Change the shape of a tensor

a = torch.rand(5, 3, 4, 4)
print(a.shape)
b = a.view(-1)  # flatten; -1 lets PyTorch infer the size
print(b.shape)
b = a.view(5, -1)
print(b.shape)
b = a.view(5, 3, -1)
print(b.shape)
b = a.view(5, -1, 4)
print(b.shape)

The result is:
torch.Size([5, 3, 4, 4])
torch.Size([240])
torch.Size([5, 48])
torch.Size([5, 3, 16])
torch.Size([5, 12, 4])

9: Tensor addition and multiplication

a = torch.tensor([[1., 2., 3.], [6., 5., 4.]])
print(a)
b = torch.tensor([[4., 1., 3.], [9., 7., 2.]])
print(b)
print(a.add(b))  # element-wise addition
print(a + b)  # element-wise addition
print(a.mm(b.T))  # matrix multiplication: (2, 3) x (3, 2)

The result is:
tensor([[1., 2., 3.],
[6., 5., 4.]])
tensor([[4., 1., 3.],
[9., 7., 2.]])
tensor([[ 5., 3., 6.],
[15., 12., 6.]])
tensor([[ 5., 3., 6.],
[15., 12., 6.]])
tensor([[15., 29.],
[41., 97.]])

Division (div) and subtraction (sub) work in the same way.
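
A quick sketch, reusing a and b from the example above (both operations are element-wise, like add):

print(a.sub(b))  # element-wise subtraction, same as a - b
print(a.div(b))  # element-wise division, same as a / b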

10: Reduction operations
max/min: take the maximum/minimum along a dimension; dim=0 reduces down the columns (one result per column), dim=1 reduces across the rows (one result per row). Both return the values and their indices.
sum: sums along a dimension; dim=0 sums each column, dim=1 sums each row.

a = torch.Tensor([[1, 2, 3], [6, 5, 4]])
print(a.shape)
min_val, min_idx = torch.min(a, dim=0)  # column-wise minimum
print(min_val)
print(min_idx)
max_val, max_idx = torch.max(a, dim=1)  # row-wise maximum
print(max_val)
print(max_idx)
sum_idx = torch.sum(a, dim=0)  # column-wise sum
print(sum_idx)
sum_idx = torch.sum(a, dim=1)  # row-wise sum
print(sum_idx)

The result is:
torch.Size([2, 3])
tensor([1., 2., 3.])
tensor([0, 0, 0])
tensor([3., 6.])
tensor([2, 0])
tensor([7., 7., 7.])
tensor([ 6., 15.])

11: Tensor manipulation functions
1) cat: concatenates tensors along dimension dim
• tensors: a sequence of tensors • dim: the dimension to concatenate along

low_layer_features = torch.rand(2, 3, 3, 2)
deep_layer_features = torch.rand(2, 3, 3, 2)
y = torch.cat([low_layer_features, deep_layer_features], dim=0)
print(y.shape)
y = torch.cat([low_layer_features, deep_layer_features], dim=1)
print(y.shape)
y = torch.cat([low_layer_features, deep_layer_features], dim=2)
print(y.shape)
y = torch.cat([low_layer_features, deep_layer_features], dim=3)
print(y.shape)

The result is:
torch.Size([4, 3, 3, 2])
torch.Size([2, 6, 3, 2])
torch.Size([2, 3, 6, 2])
torch.Size([2, 3, 3, 4])

2) chunk: splits a tensor into equal-sized chunks along dimension dim
Returns: a tuple of tensors
Note: if the size is not evenly divisible, the last chunk is smaller than the others
• input: the tensor to split
• chunks: the number of chunks; each chunk's size is the ceiling of size / chunks
• dim: the dimension to split along

a = torch.ones((2, 7))
list_of_tensors = torch.chunk(a, dim=1, chunks=3)  # split dimension 1 into 3 chunks
for idx, t in enumerate(list_of_tensors):
    print("chunk {} shape is {}".format(idx + 1, t.shape))

The result is:
chunk 1 shape is torch.Size([2, 3])
chunk 2 shape is torch.Size([2, 3])
chunk 3 shape is torch.Size([2, 1])

3) split: splits a tensor along dimension dim, similar to chunk
Returns: a tuple of tensors
• tensor: the tensor to split
• split_size_or_sections: an int gives the length of every piece; a list gives each piece's length individually (see the sketch after the example below)
• dim: the dimension to split along

t = torch.ones((2, 5))
list_of_tensors = torch.split(t, split_size_or_sections=2, dim=1)  # int: every piece has length 2
for idx, t in enumerate(list_of_tensors):
    print("piece {}: shape is {}".format(idx + 1, t.shape))

The result is:
piece 1: shape is torch.Size([2, 2])
piece 2: shape is torch.Size([2, 2])
piece 3: shape is torch.Size([2, 1])
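
The list form of split_size_or_sections can be sketched the same way; the piece widths [2, 1, 2] below are just an illustrative choice that sums to 5:

t = torch.ones((2, 5))
list_of_tensors = torch.split(t, split_size_or_sections=[2, 1, 2], dim=1)  # list: explicit piece widths
for idx, t in enumerate(list_of_tensors):
    print("piece {}: shape is {}".format(idx + 1, t.shape))

This prints the shapes torch.Size([2, 2]), torch.Size([2, 1]) and torch.Size([2, 2]).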