PyTorch Visdom Visualization Tool
Compared with TensorBoardX, Visdom is simpler and more convenient (for example, images can be visualized directly from a Tensor, without first moving it to the CPU and converting it to numpy), and it refreshes faster.
1. Install visdom
pip install visdom
2. Start the listening process
Visdom is essentially a web server. Only after the web server is started can the program push data to it; the server then renders that data into the web page.
python -m visdom.server
Unfortunately this failed with: ERROR:root:Error [Errno 2] No such file or directory while downloading https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_SVG. So start over: first run pip uninstall visdom to remove it, then install it manually.
- Download the visdom source from https://github.com/facebookresearch/visdom and unpack it
- In a command prompt, cd into the visdom directory; in my case cd F:\Chrome_Download\visdom-master
- Inside that directory, run pip install -e .
- Once that succeeds, go back to the user directory and re-run python -m visdom.server as above
- It failed again, repeatedly printing Downloading scripts, this may take a little while; a workaround is described at https://github.com/casuallyName/document-sharing/tree/master/static
- When the output looks like the figure below, the server has started successfully
3. Access
Visit the URL http://localhost:8097 in the Chrome browser.
Unexpectedly it failed yet again: the page would not load (a blank blue page, shown below).
In the visdom installation directory (mine is F:\Anaconda\Lib\site-packages\visdom), replace the static folder with the one downloadable here:
Link: https://pan.baidu.com/s/1fZb-3GSZvk0kRpL73MBgcw
Extraction code: np04
Once the blue navigation bar appears, visdom is ready to use.
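Before opening the browser it can be handy to confirm that something is actually listening on visdom's default port (8097). A minimal sketch using only the standard library; is_visdom_up is a helper name made up for this example, not part of visdom:

```python
import socket

def is_visdom_up(host: str = "localhost", port: int = 8097, timeout: float = 1.0) -> bool:
    """Return True if something is listening on host:port (the visdom server by default)."""
    try:
        # create_connection raises OSError (e.g. ConnectionRefusedError) when nothing listens
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_visdom_up())  # prints True only while `python -m visdom.server` is running
```

This only checks TCP reachability; a blank page can still appear if the static assets are broken, as described above.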
4. Visualizing training
Add Visdom visualization on top of the network defined earlier (see the previous section).
- Before the train/test loop, create two line charts; points are then appended to them during training and testing so the curves grow as training proceeds:
from visdom import Visdom

viz = Visdom()
viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))
viz.line([[0.0, 0.0]], [0.], win='test',
         opts=dict(title='test loss&acc.', legend=['loss', 'acc.']))
The Visdom(env="xxx") constructor takes an env argument naming the environment window; nothing is passed here, so everything goes to the default main environment.
The first two arguments of viz.line are the curve's Y and X coordinates (Y first, then X); calls with different win arguments are drawn in separate windows.
The second viz.line call defines the test set's loss and acc curves, so it supplies two initial Y values at X = 0.
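The shapes expected by those two calls can be made explicit with plain lists (a sketch of the data layout only, not a visdom API call):

```python
# Single curve: each update carries one Y value and one X value.
y_train, x_train = [0.0], [0.0]

# Two curves in one window: each update carries one X value and a
# pair of Y values, one per curve, in the order given by legend=['loss', 'acc.'].
y_test, x_test = [[0.0, 0.0]], [0.0]

assert len(y_train) == 1 and len(x_train) == 1
assert len(y_test) == 1 and len(y_test[0]) == 2
```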
- To keep track of how many batches have been trained, set up a global counter:
global_step = 0
- After each batch finishes training, append a point to the training curve so it grows in real time:
global_step += 1
viz.line([loss.item()], [global_step], win='train_loss', update='append')
Here the win argument selects which window's curve to update, and update='append' adds the new point to it; as before, the Y coordinate comes first, then X.
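The running counter is equivalent to deriving the step from the epoch and batch index. A small arithmetic sketch, assuming the values used in this post (60,000 training images, batch_size=200); the step helper is hypothetical:

```python
# 60,000 images / batch_size 200 = 300 batches per epoch.
batches_per_epoch = 60000 // 200  # 300

def step(epoch: int, batch_idx: int) -> int:
    # global_step is incremented once per batch, so after the batch at
    # (epoch, batch_idx) it equals epoch * batches_per_epoch + batch_idx + 1.
    return epoch * batches_per_epoch + batch_idx + 1

assert step(0, 0) == 1        # after the very first batch
assert step(1, 0) == 301      # first batch of the second epoch
assert step(9, 299) == 3000   # last batch of a 10-epoch run
```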
- After each test pass, update the test curves, and display the input images (.images) and the predicted labels (.text) in two additional windows (again chosen with win):
viz.line([[test_loss, correct / len(test_loader.dataset)]],
         [global_step], win='test', update='append')
viz.images(data.view(-1, 1, 28, 28), win='x')
viz.text(str(pred.detach().numpy()), win='pred',
         opts=dict(title='pred'))
The complete code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from visdom import Visdom

# Hyperparameters
batch_size = 200
learning_rate = 0.01
epochs = 10

# Training data
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,  # train=True gives the training set
                   transform=transforms.Compose([         # transform preprocesses the data
                       transforms.ToTensor(),             # convert to Tensor
                       transforms.Normalize((0.1307,), (0.3081,))  # standardize: subtract mean, divide by std
                   ])),
    batch_size=batch_size, shuffle=True)  # batch dimension first; shuffle=True randomizes order

# Test data
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=batch_size, shuffle=True)


class MLP(nn.Module):

    def __init__(self):
        super(MLP, self).__init__()

        self.model = nn.Sequential(  # define the layers of the network
            nn.Linear(784, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 10),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.model(x)
        return x


net = MLP()
# SGD optimizer, given the parameters to optimize and the learning rate;
# net.parameters() yields this network's parameters [w1, b1, w2, b2, ...]
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss()

viz = Visdom()
viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))
viz.line([[0.0, 0.0]], [0.], win='test', opts=dict(title='test loss&acc.',
                                                   legend=['loss', 'acc.']))
global_step = 0


for epoch in range(epochs):

    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)  # flatten the 2-D images to [batch, 784]

        logits = net(data)              # forward pass
        loss = criteon(logits, target)  # nn.CrossEntropyLoss() applies Softmax itself

        optimizer.zero_grad()  # clear old gradient information
        loss.backward()        # backward pass to obtain gradients
        optimizer.step()       # optimizer update

        global_step += 1
        viz.line([loss.item()], [global_step], win='train_loss', update='append')

        if batch_idx % 100 == 0:  # print progress every 100 batches
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    test_loss = 0
    correct = 0  # number of correctly classified samples
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        logits = net(data)
        test_loss += criteon(logits, target).item()  # accumulate the scalar loss value

        pred = logits.data.max(dim=1)[1]  # can also be written pred = logits.argmax(dim=1)
        # .item() turns the 0-dim tensor into a Python int, which avoids the
        # deprecated integer-tensor-division warning in correct / len(...) below
        correct += pred.eq(target.data).sum().item()

    viz.line([[test_loss, correct / len(test_loader.dataset)]],
             [global_step], win='test', update='append')
    viz.images(data.view(-1, 1, 28, 28), win='x')
    viz.text(str(pred.detach().numpy()), win='pred',
             opts=dict(title='pred'))

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
Train Epoch: 0 [0/60000 (0%)] Loss: 2.307811
Train Epoch: 0 [20000/60000 (33%)] Loss: 2.051105
Train Epoch: 0 [40000/60000 (67%)] Loss: 1.513345
..\aten\src\ATen\native\BinaryOps.cpp:81: UserWarning: Integer division of tensors using div or / is deprecated, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
Test set: Average loss: 0.0051, Accuracy: 7476/10000 (75%)
Train Epoch: 1 [0/60000 (0%)] Loss: 1.056841
Train Epoch: 1 [20000/60000 (33%)] Loss: 0.721334
Train Epoch: 1 [40000/60000 (67%)] Loss: 0.790637
Test set: Average loss: 0.0033, Accuracy: 8069/10000 (81%)
Train Epoch: 2 [0/60000 (0%)] Loss: 0.680886
Train Epoch: 2 [20000/60000 (33%)] Loss: 0.629937
Train Epoch: 2 [40000/60000 (67%)] Loss: 0.627497
Test set: Average loss: 0.0021, Accuracy: 8971/10000 (90%)
Train Epoch: 3 [0/60000 (0%)] Loss: 0.410005
Train Epoch: 3 [20000/60000 (33%)] Loss: 0.332373
Train Epoch: 3 [40000/60000 (67%)] Loss: 0.293972
Test set: Average loss: 0.0016, Accuracy: 9104/10000 (91%)
Train Epoch: 4 [0/60000 (0%)] Loss: 0.318976
Train Epoch: 4 [20000/60000 (33%)] Loss: 0.325024
Train Epoch: 4 [40000/60000 (67%)] Loss: 0.279787
Test set: Average loss: 0.0014, Accuracy: 9171/10000 (92%)
Train Epoch: 5 [0/60000 (0%)] Loss: 0.237663
Train Epoch: 5 [20000/60000 (33%)] Loss: 0.272126
Train Epoch: 5 [40000/60000 (67%)] Loss: 0.182882
Test set: Average loss: 0.0013, Accuracy: 9227/10000 (92%)
Train Epoch: 6 [0/60000 (0%)] Loss: 0.280532
Train Epoch: 6 [20000/60000 (33%)] Loss: 0.239808
Train Epoch: 6 [40000/60000 (67%)] Loss: 0.372246
Test set: Average loss: 0.0012, Accuracy: 9297/10000 (93%)
Train Epoch: 7 [0/60000 (0%)] Loss: 0.291511
Train Epoch: 7 [20000/60000 (33%)] Loss: 0.225020
Train Epoch: 7 [40000/60000 (67%)] Loss: 0.265182
Test set: Average loss: 0.0012, Accuracy: 9321/10000 (93%)
Train Epoch: 8 [0/60000 (0%)] Loss: 0.227891
Train Epoch: 8 [20000/60000 (33%)] Loss: 0.270453
Train Epoch: 8 [40000/60000 (67%)] Loss: 0.191862
Test set: Average loss: 0.0011, Accuracy: 9361/10000 (94%)
Train Epoch: 9 [0/60000 (0%)] Loss: 0.188959
Train Epoch: 9 [20000/60000 (33%)] Loss: 0.161353
Train Epoch: 9 [40000/60000 (67%)] Loss: 0.293424
Test set: Average loss: 0.0011, Accuracy: 9374/10000 (94%)
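The accuracy figures above come from counting how often the argmax of the logits matches the label. The bookkeeping can be sketched in plain Python; accuracy is a hypothetical helper for illustration, not part of the script:

```python
def accuracy(logits, targets):
    # prediction = index of the max score per row, mirroring
    # pred = logits.argmax(dim=1) in the training script.
    pred = [row.index(max(row)) for row in logits]
    correct = sum(p == t for p, t in zip(pred, targets))
    return correct / len(targets)

logits = [[0.1, 2.0, 0.3],   # predicts class 1
          [1.5, 0.2, 0.1],   # predicts class 0
          [0.0, 0.1, 3.0]]   # predicts class 2
print(accuracy(logits, [1, 0, 1]))  # prints 0.6666666666666666 (2 of 3 correct)
```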
However, the images from the viz.images() call did not display, and I could not find the cause; I will set it aside for now.
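One thing that may be worth checking (an assumption, not a confirmed fix): data was normalized with mean 0.1307 and std 0.3081, so its pixel values are no longer in [0, 1], and some image renderers expect that range. Undoing the normalization before display can be sketched as:

```python
# Invert Normalize((0.1307,), (0.3081,)): x_norm = (x - mean) / std  =>  x = x_norm * std + mean
MEAN, STD = 0.1307, 0.3081

def denormalize(values):
    # hypothetical helper: map normalized pixel values back toward [0, 1]
    return [v * STD + MEAN for v in values]

# A normalized value of 0 maps back to the dataset mean, 0.1307.
restored = denormalize([0.0, (1.0 - MEAN) / STD])
```

In the script this would correspond to something like viz.images((data * 0.3081 + 0.1307).view(-1, 1, 28, 28), win='x'), though I have not verified that this is what was going wrong here.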