Visualizing the Network Structure of a PyTorch Model
Published: 2018-11-09
In Keras, model.summary() gives a nice visualization of the model structure, but PyTorch does not yet ship a built-in tool for visualizing networks.
This post summarizes two ways to visualize the structure of a PyTorch network.
Using TensorBoard to visualize a PyTorch network structure
GitHub: https://github.com/lanpa/tensorboard-pytorch
1. Download the visualization code:
git clone https://github.com/lanpa/tensorboard-pytorch.git
2. Install PyTorch 0.4 and torchvision 0.2.
3. Install TensorFlow and TensorBoard:
pip install tensorflow
pip install tensorboard==1.7.0
4. Install the visualization tool:
pip install tensorboardX
5. Run the test script demo_LeNet.py below:
import torch
import torch.nn as nn
from tensorboardX import SummaryWriter

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Sequential(                 # input_size = (1, 28, 28)
            nn.Conv2d(1, 6, 5, 1, 2),
            nn.ReLU(),                              # (6, 28, 28)
            nn.MaxPool2d(kernel_size=2, stride=2),  # output_size = (6, 14, 14)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(6, 16, 5),
            nn.ReLU(),                              # (16, 10, 10)
            nn.MaxPool2d(2, 2)                      # output_size = (16, 5, 5)
        )
        self.fc1 = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU()
        )
        self.fc2 = nn.Sequential(
            nn.Linear(120, 84),
            nn.ReLU()
        )
        self.fc3 = nn.Linear(84, 10)

    # Define the forward pass; the input is x
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        # nn.Linear() works on flat feature vectors, so flatten the
        # multi-dimensional tensor to (batch_size, features)
        x = x.view(x.size()[0], -1)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x

dummy_input = torch.rand(13, 1, 28, 28)  # pretend input: 13 images of 1*28*28
model = LeNet()
with SummaryWriter(comment='LeNet') as w:
    w.add_graph(model, (dummy_input,))
6. After the script above finishes, a folder named runs is created in the current directory; it holds the event logs that TensorBoard needs for visualization. Open a terminal (cmd) in the directory that contains runs (the path must not contain Chinese characters) and run:
tensorboard --logdir runs
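TensorBoard then serves the visualization locally (by default at http://localhost:6006); open that address in a browser and switch to the GRAPHS tab to inspect the network structure.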
Author: 以夢為馬_Sun
Source: CSDN
Original: https://blog.csdn.net/sunqiande88/article/details/80155925
Using pytorchviz to visualize the network structure
The pytorchviz package on GitHub can draw the network structure of a PyTorch model quite nicely.
Install graphviz and pytorchviz (both are needed):
sudo pip install graphviz
sudo pip install git+https://github.com/szagoruyko/pytorchviz
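Note that the graphviz Python package is only a binding: rendering also requires the Graphviz system binaries (e.g. apt-get install graphviz on Ubuntu, or brew install graphviz on macOS), otherwise view()/render() will fail to find the dot executable.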
Model visualization function - make_dot()
https://github.com/szagoruyko/pytorchviz/blob/master/torchviz/dot.py
import torch
from torch.autograd import Variable
from graphviz import Digraph


def make_dot(var, params=None):
    """Produce a Graphviz representation of the PyTorch autograd graph.

    Blue nodes are the Variables that require grad;
    orange nodes are the Tensors saved for the backward pass of a
    torch.autograd.Function.

    Args:
        var: output Variable
        params: dict of (name, Variable) to add names to nodes that
            require grad (TODO: make optional)
    """
    if params is not None:
        assert all(isinstance(p, Variable) for p in params.values())
        param_map = {id(v): k for k, v in params.items()}

    node_attr = dict(style='filled', shape='box', align='left',
                     fontsize='12', ranksep='0.1', height='0.2')
    dot = Digraph(node_attr=node_attr, graph_attr=dict(size="12,12"))
    seen = set()

    def size_to_str(size):
        return '(' + (', ').join(['%d' % v for v in size]) + ')'

    output_nodes = (var.grad_fn,) if not isinstance(var, tuple) else tuple(v.grad_fn for v in var)

    def add_nodes(var):
        if var not in seen:
            if torch.is_tensor(var):
                # note: this used to show .saved_tensors in pytorch0.2, but stopped
                # working as it was moved to ATen and Variable-Tensor merged
                dot.node(str(id(var)), size_to_str(var.size()), fillcolor='orange')
            elif hasattr(var, 'variable'):
                u = var.variable
                name = param_map[id(u)] if params is not None else ''
                node_name = '%s\n %s' % (name, size_to_str(u.size()))
                dot.node(str(id(var)), node_name, fillcolor='lightblue')
            elif var in output_nodes:
                dot.node(str(id(var)), str(type(var).__name__), fillcolor='darkolivegreen1')
            else:
                dot.node(str(id(var)), str(type(var).__name__))
            seen.add(var)
            if hasattr(var, 'next_functions'):
                for u in var.next_functions:
                    if u[0] is not None:
                        dot.edge(str(id(u[0])), str(id(var)))
                        add_nodes(u[0])
            if hasattr(var, 'saved_tensors'):
                for t in var.saved_tensors:
                    dot.edge(str(id(t)), str(id(var)))
                    add_nodes(t)

    # handle multiple outputs
    if isinstance(var, tuple):
        for v in var:
            add_nodes(v.grad_fn)
    else:
        add_nodes(var.grad_fn)

    resize_graph(dot)
    return dot
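make_dot() ends by calling resize_graph(), a helper defined further down in the same dot.py that simply grows the Graphviz canvas with the number of nodes. So that the excerpt above runs standalone, here is that helper, following the repository's implementation:

def resize_graph(dot, size_per_element=0.15, min_size=12):
    """Resize the graph according to how much content it contains.

    Modifies the graph in place.
    """
    num_rows = len(dot.body)
    content_size = num_rows * size_per_element
    size = max(min_size, content_size)
    size_str = str(size) + "," + str(size)
    dot.graph_attr.update(size=size_str)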
Demo - MLP
https://github.com/szagoruyko/pytorchviz/blob/master/examples.ipynb
(Python 2.7)
import torch
from torch import nn
from torchviz import make_dot
model = nn.Sequential()
model.add_module('W0', nn.Linear(8, 16))
model.add_module('tanh', nn.Tanh())
model.add_module('W1', nn.Linear(16, 1))
x = torch.randn(1,8)
vis_graph = make_dot(model(x), params=dict(model.named_parameters()))
vis_graph.view()
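view() writes the rendered graph to a file and opens it in the default viewer. A minimal sketch for saving it to disk instead (the filename mlp_graph is just an example):

vis_graph.format = 'png'       # default output format is PDF
vis_graph.render('mlp_graph')  # writes the DOT source to mlp_graph and the image to mlp_graph.png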
Demo - AlexNet
import torch
from torch import nn
from torchviz import make_dot
from torchvision.models import AlexNet
model = AlexNet()
x = torch.randn(1, 3, 227, 227).requires_grad_(True)
y = model(x)
vis_graph = make_dot(y, params=dict(list(model.named_parameters()) + [('x', x)]))
vis_graph.view()
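Because x is created with requires_grad_(True), it appears as a leaf in the autograd graph; adding ('x', x) to the params dict labels that leaf node in the rendered graph (and keeps the name lookup inside make_dot from failing on it).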
Printing the model parameters
import torch
from torch import nn
from torchviz import make_dot
from torchvision.models import AlexNet

model = AlexNet()
x = torch.randn(1, 3, 227, 227).requires_grad_(True)
y = model(x)

params = list(model.parameters())
k = 0
for i in params:
    l = 1
    print("structure of this layer: " + str(list(i.size())))
    for j in i.size():
        l *= j
    print("parameters in this layer: " + str(l))
    k = k + l
print("total number of parameters: " + str(k))
The output:
structure of this layer: [64, 3, 11, 11]
parameters in this layer: 23232
structure of this layer: [64]
parameters in this layer: 64
structure of this layer: [192, 64, 5, 5]
parameters in this layer: 307200
structure of this layer: [192]
parameters in this layer: 192
structure of this layer: [384, 192, 3, 3]
parameters in this layer: 663552
structure of this layer: [384]
parameters in this layer: 384
structure of this layer: [256, 384, 3, 3]
parameters in this layer: 884736
structure of this layer: [256]
parameters in this layer: 256
structure of this layer: [256, 256, 3, 3]
parameters in this layer: 589824
structure of this layer: [256]
parameters in this layer: 256
structure of this layer: [4096, 9216]
parameters in this layer: 37748736
structure of this layer: [4096]
parameters in this layer: 4096
structure of this layer: [4096, 4096]
parameters in this layer: 16777216
structure of this layer: [4096]
parameters in this layer: 4096
structure of this layer: [1000, 4096]
parameters in this layer: 4096000
structure of this layer: [1000]
parameters in this layer: 1000
total number of parameters: 61100840