Code Training 2: Image Classification Test Code
阿新 • Published: 2021-06-16
General-purpose test code for image classification
Loss function
nn.CrossEntropyLoss
nn.CrossEntropyLoss() combines nn.LogSoftmax() and nn.NLLLoss() into a single operation, so it can directly replace those two steps in a network; it is suitable for multi-class classification. Key parameters: weight (Tensor, optional): if provided, must be a 1-D tensor of length C (the number of classes), where each value is the weight for the corresponding class. reduction (string, optional): specifies how the per-sample losses are reduced to the final output; the default is 'mean'.
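The equivalence described above can be checked directly. The sketch below (with made-up logits and labels) verifies that nn.CrossEntropyLoss matches nn.LogSoftmax followed by nn.NLLLoss, and shows the weight parameter as a 1-D tensor of length C:

```python
import torch
import torch.nn as nn

# toy batch: 2 samples, 3 classes
logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5, 0.3]])
targets = torch.tensor([0, 1])

# CrossEntropyLoss is LogSoftmax + NLLLoss in one step
ce = nn.CrossEntropyLoss()
log_probs = nn.LogSoftmax(dim=1)(logits)
composed = nn.NLLLoss()(log_probs, targets)
assert torch.allclose(ce(logits, targets), composed)

# weight: a 1-D tensor of length C, one weight per class
weighted_ce = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 0.5]))
print(ce(logits, targets).item(), weighted_ce(logits, targets).item())
```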
Optimizers
Four optimizers commonly used in PyTorch: SGD, Momentum, RMSprop, and Adam.
opt_SGD = torch.optim.SGD(net_SGD.parameters(), lr=LR)
opt_Momentum = torch.optim.SGD(net_Momentum.parameters(), lr=LR, momentum=0.8)
opt_RMSprop = torch.optim.RMSprop(net_RMSprop.parameters(), lr=LR, alpha=0.9)
opt_Adam = torch.optim.Adam(net_Adam.parameters(), lr=LR, betas=(0.9, 0.99))
SGD is the most basic optimizer and has no acceleration mechanism. Momentum is an improved version of SGD that adds a momentum term; RMSprop in turn builds on Momentum, and Adam builds on RMSprop. That said, a more advanced optimizer does not always yield better results.
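The four constructors above can be compared side by side. This is a minimal, illustrative sketch (not from the original post): each optimizer trains the same tiny linear model on y = 2x for a fixed number of steps, so you can see that all four converge but at different rates:

```python
import torch

torch.manual_seed(0)
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 2 * x  # target: y = 2x

LR = 0.01
optimizers = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=LR),
    "Momentum": lambda p: torch.optim.SGD(p, lr=LR, momentum=0.8),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=LR, alpha=0.9),
    "Adam":     lambda p: torch.optim.Adam(p, lr=LR, betas=(0.9, 0.99)),
}

results = {}
for name, make_opt in optimizers.items():
    net = torch.nn.Linear(1, 1)        # fresh model per optimizer
    opt = make_opt(net.parameters())
    for _ in range(100):
        loss = torch.nn.functional.mse_loss(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    results[name] = loss.item()        # final loss after 100 steps
print(results)
```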
The optimizer used here is optim.Adam.
Network training
Each epoch runs one training pass and one validation pass. For training, call net.train() to put the network into training mode, then initialize the running loss and record the start time.
net.train()
running_loss = 0.0
t1 = time.perf_counter()
for step, data in enumerate(train_loader, start=0):
    images, labels = data
    optimizer.zero_grad()
    outputs = net(images.to(device))
    loss = loss_function(outputs, labels.to(device))
    loss.backward()
    optimizer.step()
    running_loss += loss.item()
    rate = (step + 1) / len(train_loader)
    a = "*" * int(rate * 50)
    b = "." * int((1 - rate) * 50)
    print("\rtrain loss: {:^3.0f}%[{}->{}]{:.3f}".format(int(rate * 100), a, b, loss), end="")
print()
print(time.perf_counter() - t1)
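The net.train()/net.eval() toggle matters because layers such as Dropout and BatchNorm behave differently in the two modes. A minimal sketch (using a made-up two-layer model) showing that dropout is disabled in eval mode, so outputs become deterministic:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

net.train()   # training mode: dropout randomly zeroes activations
net.eval()    # eval mode: dropout is a no-op, outputs are deterministic
with torch.no_grad():
    out1 = net(x)
    out2 = net(x)
assert torch.equal(out1, out2)  # identical outputs in eval mode
```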
Full code:
import time
import torch
import torch.nn as nn
import torch.optim as optim

# assumes AlexNet, device, train_loader, validate_loader and val_num
# are defined earlier in the script
net = AlexNet(num_classes=5, init_weights=True)
net.to(device)
loss_function = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.0002)
save_path = './AlexNet.pth'
best_acc = 0.0
for epoch in range(10):
    # train
    net.train()
    running_loss = 0.0
    t1 = time.perf_counter()
    for step, data in enumerate(train_loader, start=0):
        images, labels = data
        optimizer.zero_grad()
        outputs = net(images.to(device))
        loss = loss_function(outputs, labels.to(device))
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # text progress bar
        rate = (step + 1) / len(train_loader)
        a = "*" * int(rate * 50)
        b = "." * int((1 - rate) * 50)
        print("\rtrain loss: {:^3.0f}%[{}->{}]{:.3f}".format(int(rate * 100), a, b, loss), end="")
    print()
    print(time.perf_counter() - t1)

    # validate
    net.eval()
    acc = 0.0
    with torch.no_grad():
        for val_data in validate_loader:
            val_images, val_labels = val_data
            outputs = net(val_images.to(device))
            predict_y = torch.max(outputs, dim=1)[1]
            acc += (predict_y == val_labels.to(device)).sum().item()
        val_accurate = acc / val_num
        if val_accurate > best_acc:
            best_acc = val_accurate
            torch.save(net.state_dict(), save_path)
        print('[epoch %d] train_loss: %.3f  test_accuracy: %.3f' %
              (epoch + 1, running_loss / (step + 1), val_accurate))
print('Finished Training')
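The validation loop's torch.max(outputs, dim=1)[1] is worth unpacking: torch.max with a dim argument returns a (values, indices) pair, and [1] takes the indices, i.e. the predicted class per sample. A small sketch with made-up logits:

```python
import torch

# 2 samples, 3 classes of raw logits
outputs = torch.tensor([[0.1, 2.0, -0.5],
                        [1.5, 0.2,  0.3]])
labels = torch.tensor([1, 2])

# torch.max(..., dim=1) -> (max values, argmax indices); [1] keeps the indices
predict_y = torch.max(outputs, dim=1)[1]   # predicted classes: [1, 0]
acc = (predict_y == labels).sum().item()   # number of correct predictions
print(predict_y.tolist(), acc)
```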