PyTorch In-Depth Learning: Stage Two
1. Automatic Differentiation
Training a neural network involves two steps:
- Forward propagation
- Backward propagation: the network adjusts its parameters; the prediction error is computed with a loss function, and the parameters are then updated by an optimizer.
import torch, torchvision

model = torchvision.models.resnet18(pretrained=True)
data = torch.rand(1, 3, 64, 64)   # a random batch: 1 image, 3 channels, 64x64
labels = torch.rand(1, 1000)      # random targets over 1000 classes
prediction = model(data)          # forward pass
After the forward pass, the error is computed against the true labels (cross-entropy is the most common choice). The next step is to propagate this error backward through the network's parameters: autograd computes the gradient for each parameter and stores it in the parameter's .grad attribute.
loss = (prediction - labels).sum()
loss.backward() # backward pass
>>> loss
tensor(-491.3782, grad_fn=<SumBackward0>)
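After the backward pass, every parameter with requires_grad=True holds a gradient in its .grad attribute. A quick check (the shapes in the comments assume torchvision's resnet18, whose first convolution has 64 filters of size 3x7x7):

print(model.conv1.weight.grad.shape)   # torch.Size([64, 3, 7, 7])
print(model.fc.weight.grad.shape)      # torch.Size([1000, 512])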
Next we define the optimizer. Here we use stochastic gradient descent (SGD) with a learning rate of 0.01 and momentum of 0.9.
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
>>> optim
SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0.9
    nesterov: False
    weight_decay: 0
)
Finally, we call .step() to perform one step of gradient descent; the optimizer updates each parameter using the gradient stored in its .grad attribute.
optim.step() #gradient descent
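Because autograd accumulates gradients into .grad by default, each iteration should clear the old gradients before the next backward pass. A minimal sketch of a complete iteration, reusing model, data, labels and optim from above:

for _ in range(3):                        # a few toy iterations
    optim.zero_grad()                     # clear gradients left over from the previous step
    prediction = model(data)              # forward pass
    loss = (prediction - labels).sum()    # toy loss, as above
    loss.backward()                       # backward pass: populate .grad
    optim.step()                          # gradient descent: update the parameters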
(1) Differentiation in Autograd
How does autograd collect gradients?
Consider the following example:
import torch
a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
\[Q = 3a^3 - b^2\]
Assume a and b are parameters of a NN and Q is the loss. During training we need the gradients of the loss with respect to the parameters:
\[\frac{\partial Q}{\partial a} = 9a^2\]
\[\frac{\partial Q}{\partial b} = -2b\]
When .backward() is called on Q, autograd computes these gradients and stores them in the respective tensors' .grad attribute.
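Continuing the example: because Q is a vector, .backward() needs an explicit gradient argument; passing a vector of ones is equivalent to summing Q first and backpropagating from the scalar. A minimal sketch:

Q = 3 * a**3 - b**2

external_grad = torch.tensor([1., 1.])    # dQ/dQ = 1 for each element of Q
Q.backward(gradient=external_grad)

# check that the collected gradients match the formulas above
print(a.grad)                 # tensor([36., 81.]) == 9*a**2
print(b.grad)                 # tensor([-12., -8.]) == -2*b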
In an IDE debugger you can see that .grad is empty before the backward pass; after autograd has run, a.grad holds the values computed above, and b.grad is obtained the same way.
(2) Vector calculus with autograd
In mathematics, the Jacobian matrix stores the derivatives of a vector-valued function, and autograd works with it as well; strictly speaking, autograd computes vector-Jacobian products rather than the full Jacobian, as sketched below.
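For reference, the Jacobian of a vector-valued function y = f(x) collects all first-order partial derivatives:
\[J = \begin{pmatrix} \frac{\partial y_1}{\partial x_1} & \cdots & \frac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_m}{\partial x_1} & \cdots & \frac{\partial y_m}{\partial x_n} \end{pmatrix}\]
Given a vector v, calling backward(gradient=v) on y computes the vector-Jacobian product J^T · v. A small illustration (y = 2x has Jacobian 2I, so the result is simply 2*v):

x = torch.randn(3, requires_grad=True)
y = 2 * x                                  # Jacobian of y with respect to x is 2*I
v = torch.tensor([0.1, 1.0, 0.0001])       # the vector in the vector-Jacobian product
y.backward(gradient=v)
print(x.grad)                              # equals 2 * v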
(3) Computational graphs
The computational graph is how autograd keeps track of computation: it records operations in a directed acyclic graph (DAG) whose leaves are the input tensors and whose roots are the output tensors. By tracing this graph from roots to leaves, the gradients can be computed automatically with the chain rule.
During the forward pass, autograd does two things simultaneously:
- runs the requested operation to compute the resulting tensor
- records the operation's gradient function (grad_fn) in the DAG
The backward pass starts when .backward() is called on the root of the DAG. Autograd then:
- computes the gradients from each .grad_fn
- accumulates them in the respective tensor's .grad attribute
- propagates all the way to the leaf tensors, using the chain rule
The DAG tracks operations on a tensor only when its requires_grad flag is set to True; otherwise they are not recorded. In a NN, parameters that are not updated are called frozen parameters. Freezing is useful when you know in advance that certain parameters will not need gradient updates.
import torchvision
from torch import nn, optim

model = torchvision.models.resnet18(pretrained=True)

# Freeze all the parameters in the network
for param in model.parameters():
    param.requires_grad = False
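In the usual fine-tuning recipe the frozen backbone is paired with a fresh, trainable classifier. A sketch assuming a 10-class task (the layer name fc and its 512 input features are specific to resnet18):

# the new layer's parameters have requires_grad=True by default
model.fc = nn.Linear(512, 10)

# only the parameters of model.fc receive gradients and get updated
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)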
2. Neural Networks
A typical training procedure for a neural network:
- define the network (with some learnable parameters)
- iterate over a dataset of inputs
- compute the loss
- propagate the gradients back into the network's parameters
- update the weights, typically with a simple rule:
weight = weight - learning_rate * gradient
(1) Define the network
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 5*5 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square, you can specify it with a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, 1)  # flatten all dimensions except the batch dimension
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
print(net)
>>>Net(
(conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
forward must be defined, but backward does not need to be, because it is defined automatically by autograd. Any tensor operation can be used inside forward.
Note:
torch.nn only supports mini-batches. For example, nn.Conv2d takes a 4-dimensional tensor of shape nSamples x nChannels x Height x Width. If you have a single sample, use input.unsqueeze(0) to add a dummy batch dimension.
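The learnable parameters are returned by net.parameters(), and this network expects 32x32 inputs (two 5x5 convolutions plus two pooling layers reduce 32x32 to the 5x5 maps that fc1 assumes). A quick check with a random input:

params = list(net.parameters())
print(len(params))          # 10: a weight and a bias for each of the 5 layers
print(params[0].size())     # conv1's weights: torch.Size([6, 1, 5, 5])

input = torch.randn(1, 1, 32, 32)   # a dummy batch with one 32x32 single-channel image
out = net(input)
print(out.shape)                    # torch.Size([1, 10])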
Recap:
- torch.Tensor: a multi-dimensional array with autograd support
- nn.Module: a neural network module; it conveniently encapsulates parameters and provides helpers for moving them to the GPU
- nn.Parameter: a kind of tensor that is automatically registered as a parameter when assigned as an attribute of a Module
- autograd.Function: implements the forward and backward definitions of an autograd operation; every tensor operation creates at least one Function node, which connects to the functions that created the tensor and encodes its history
(2) Loss functions
A loss function takes the (output, target) pair as input and computes a value that estimates how far the output is from the target.
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
>>>
tensor(0.6493, grad_fn=<MseLossBackward0>)
If we follow loss in the backward direction, using its .grad_fn attribute, we can see the graph of computations:
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> flatten -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss
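You can walk a few steps back along this graph through grad_fn and its next_functions, for example:

print(loss.grad_fn)                                            # MSELoss
print(loss.grad_fn.next_functions[0][0])                       # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU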
(3) Updating the weights
The simplest update rule is stochastic gradient descent (SGD):
weight = weight - learning_rate * gradient
In code:
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
More update rules are provided in the torch.optim package.
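With torch.optim the same update becomes a call to step(); note the explicit zero_grad(), because gradients accumulate across backward passes. A minimal sketch reusing net, input, criterion and target from above:

import torch.optim as optim

optimizer = optim.SGD(net.parameters(), lr=0.01)

# in the training loop:
optimizer.zero_grad()               # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()                    # perform the update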
3. Training a CNN on FashionMNIST
import torch
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets
from torchvision.transforms import ToTensor
torch.manual_seed(1)
# hyper parameters
Epoch = 10
Batch_size = 64
Learning_rate = 0.001
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)
# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=Batch_size)
test_dataloader = DataLoader(test_data, batch_size=Batch_size)
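The loaders above keep the sample order fixed; for the training set it is common to shuffle each epoch, for example:

train_dataloader = DataLoader(training_data, batch_size=Batch_size, shuffle=True)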
for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(            # input shape (1, 28, 28)
            nn.Conv2d(in_channels=1,           # input channels
                      out_channels=16,         # number of filters
                      kernel_size=5,           # filter size
                      stride=1,                # filter step
                      padding=2                # keep the spatial size unchanged
                      ),                       # output shape (16, 28, 28)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2)        # 2x2 downsampling, output shape (16, 14, 14)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 5, 1, 2),        # output shape (32, 14, 14)
            nn.ReLU(),
            nn.MaxPool2d(2)                    # output shape (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)              # flatten to (batch_size, 32*7*7)
        output = self.out(x)
        return output
cnn = CNN().to(device)
print(cnn)
# optimizer
optimizer = torch.optim.Adam(cnn.parameters(), lr=Learning_rate)
# loss_fun
loss_func = nn.CrossEntropyLoss()
writer = SummaryWriter('logs')
def train(dataloader, cnn, loss_fn, optimizer, epoch):
    size = len(dataloader.dataset)
    cnn.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = cnn(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # log the loss against a global step so TensorBoard draws a proper curve
        writer.add_scalar("train_loss", loss.item(), epoch * len(dataloader) + batch)

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test(dataloader, cnn, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    cnn.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = cnn(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
print("------start training------")
for epoch in range(Epoch):
print(f"Epoch {epoch + 1}\n-------------------------------")
train(train_dataloader, cnn, loss_func, optimizer)
test(test_dataloader, cnn, loss_func)
print("Done!")
Output:
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
Using cuda device
CNN(
(conv1): Sequential(
(0): Conv2d(1, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(1): ReLU()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv2): Sequential(
(0): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(1): ReLU()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(out): Linear(in_features=1568, out_features=10, bias=True)
)
------start training------
Epoch 1
-------------------------------
loss: 2.307641 [ 0/60000]
loss: 0.737086 [ 6400/60000]
loss: 0.368927 [12800/60000]
loss: 0.530034 [19200/60000]
loss: 0.556181 [25600/60000]
loss: 0.511927 [32000/60000]
loss: 0.382789 [38400/60000]
loss: 0.543811 [44800/60000]
loss: 0.516559 [51200/60000]
loss: 0.427986 [57600/60000]
Test Error:
Accuracy: 85.4%, Avg loss: 0.404072
After the run, enter the following command in a terminal to view the training curve in TensorBoard:
tensorboard --logdir=logs --port=8080