【580】A PyTorch CNN Example
阿新 · Published: 2021-06-25
Reference: PyTorch Neural Networks
We will implement the following network:
- Layer 1: 5×5 convolution with 6 output channels → ReLU → max pooling
- Layer 2: 5×5 convolution with 16 output channels → ReLU → max pooling
- Layer 3: flatten → fully connected (Linear) layer
- Layer 4: fully connected (Linear) layer
- Layer 5: fully connected (Linear) layer
This is a simple feed-forward network: it takes the input, passes it through one layer after the next, and finally produces an output. The short walk-through below shows how the layer sizes line up.
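To see where the first fully connected layer's input size of 16 * 5 * 5 comes from, here is a minimal shape walk-through, assuming a 1×32×32 input as in the LeNet-style tutorial this example follows:

```python
def conv_out(size, kernel, stride=1):
    # Spatial output size of an unpadded convolution or pooling window.
    return (size - kernel) // stride + 1

s = 32                   # assumed input height/width (LeNet-style)
s = conv_out(s, 5)       # conv1 5x5: 32 -> 28
s = conv_out(s, 2, 2)    # 2x2 max pool: 28 -> 14
s = conv_out(s, 5)       # conv2 5x5: 14 -> 10
s = conv_out(s, 2, 2)    # 2x2 max pool: 10 -> 5
print(16 * s * s)        # 400 -> in_features of the first Linear layer
```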
A typical training procedure for a neural network involves the following steps:
- Define a neural network with trainable parameters
- Iterate over a dataset of inputs
- Process each input through the network
- Compute the loss
- Propagate the gradients back into the network's parameters
- Update the network's weights, typically with a simple rule: weight = weight - learning_rate * gradient (a minimal sketch follows this list)
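As a sketch of that update rule, this is plain SGD without momentum, roughly what optim.SGD's step() does under the hood. Here `model` is a hypothetical stand-in for any nn.Module whose parameters already hold gradients from a backward pass:

```python
import torch

learning_rate = 0.01
with torch.no_grad():
    # In-place update of each parameter; no_grad() keeps the update
    # itself out of the autograd graph.
    for param in model.parameters():
        param -= learning_rate * param.grad
```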
Defining the network:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Layer 1: 1 input image channel, 6 output channels, 5x5 convolution kernel
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        # Layer 2
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)
        # Affine operations: y = Wx + b
        # Layer 3
        self.fc1 = nn.Linear(in_features=16 * 5 * 5, out_features=120)
        # Layer 4
        self.fc2 = nn.Linear(in_features=120, out_features=84)
        # Layer 5
        self.fc3 = nn.Linear(in_features=84, out_features=10)

    def forward(self, x):
        # Layer 1 (conv1 -> relu -> max pooling)
        x = self.conv1(x)
        x = F.relu(x)
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(x, (2, 2))
        # Layer 2 (conv2 -> relu -> max pooling)
        x = self.conv2(x)
        x = F.relu(x)
        # If the size is a square you can specify a single number
        x = F.max_pool2d(x, 2)
        # Layer 3 (flatten -> fc -> relu)
        x = x.view(-1, self.num_flat_features(x))
        x = self.fc1(x)
        x = F.relu(x)
        # Layer 4 (fc -> relu)
        x = self.fc2(x)
        x = F.relu(x)
        # Layer 5 (fc only): return raw logits. No ReLU here, since
        # nn.CrossEntropyLoss used below expects unnormalized scores.
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
print(net)
```
Output:
```
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
```
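A quick smoke test of the freshly constructed network, assuming the same 1×32×32 input size as in the walk-through above:

```python
input = torch.randn(1, 1, 32, 32)  # batch of 1, 1 channel, 32x32 pixels
out = net(input)
print(out.shape)  # torch.Size([1, 10]) -- one raw score per class
```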
Training a model in PyTorch involves the following steps:
- Zero the parameter gradients at the start of each batch
- Forward pass and compute the loss
- Backward pass
- Update the weights
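The loop below assumes a trainloader that yields (inputs, labels) batches, which the post does not define. A minimal stand-in built from random tensors (hypothetical data, just so the loop runs end to end):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 64 fake 1x32x32 images with random class labels in [0, 10)
dummy_images = torch.randn(64, 1, 32, 32)
dummy_labels = torch.randint(0, 10, (64,))
trainloader = DataLoader(TensorDataset(dummy_images, dummy_labels), batch_size=4)
```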
```python
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
```
The general pattern:
```python
for epoch in range(2):  # loop over the dataset multiple times
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # (1) zero the parameter gradients
        optimizer.zero_grad()

        # (2) forward pass and compute the loss
        outputs = net(inputs)
        loss = criterion(outputs, labels)

        # (3) backward pass
        loss.backward()

        # (4) update the weights
        optimizer.step()
```
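Step (1) matters because PyTorch accumulates gradients across backward() calls rather than overwriting them; a small demonstration:

```python
import torch

w = torch.ones(2, requires_grad=True)
(w * 3).sum().backward()
print(w.grad)   # tensor([3., 3.])
(w * 3).sum().backward()
print(w.grad)   # tensor([6., 6.]) -- accumulated, not replaced
w.grad.zero_()  # this is what optimizer.zero_grad() does for each parameter
```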