torch.nn.Sequential
阿新 · Published: 2018-11-01
nn.Sequential
A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.
In other words, nn.Sequential is an ordered container: the neural network modules passed to its constructor are added in that order and executed one after another; alternatively, an ordered dict whose values are modules can be passed in.
Let's look at an example:
import torch.nn as nn

# Example of using Sequential
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
from collections import OrderedDict

# Example of using Sequential with OrderedDict
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
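As a quick sanity check (a minimal sketch using a made-up dummy input shape, not part of the original example), calling the container runs the input through every layer in the order it was registered, and the OrderedDict version additionally lets you reach submodules by name:

import torch

x = torch.randn(1, 1, 28, 28)   # hypothetical batch of one 1-channel 28x28 image
out = model(x)                  # runs Conv2d -> ReLU -> Conv2d -> ReLU in order
print(out.shape)                # torch.Size([1, 64, 20, 20]) for this input size

print(model[0])                 # submodules are reachable by index...
print(model.conv1)              # ...and by name when built from an OrderedDict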
Next, let's look at the Sequential source code to see how it is implemented, focusing on the forward function:
https://pytorch.org/docs/stable/_modules/torch/nn/modules/container.html#Sequential
Because every module inherits from nn.Module, calling it goes through __call__, which dispatches to the module's forward method. Sequential's own forward therefore just iterates over its child modules in a for loop, calling each one in turn, and returns the result after the input has passed through every layer.
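Stripped of bookkeeping (a simplified sketch of that loop, not the verbatim library source), the container's forward amounts to:

def forward(self, input):
    # Feed the input through each child module in registration order.
    for module in self:
        input = module(input)
    return input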
Here is a simple example:
class Activation_Net(nn.Module):
    """A simple fully connected network whose layers are grouped with Sequential."""

    def __init__(self, in_dim, n_hidden_1, n_hidden_2, out_dim):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Linear(in_dim, n_hidden_1), nn.ReLU(True))
        self.layer2 = nn.Sequential(
            nn.Linear(n_hidden_1, n_hidden_2), nn.ReLU(True))
        # The last layer does not need an activation function
        self.layer3 = nn.Sequential(
            nn.Linear(n_hidden_2, out_dim))

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        return x
The code above uses Sequential to combine each linear layer with its activation function, so every block outputs the activated values of that layer; only the final layer is left without an activation.
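For instance (a minimal usage sketch with assumed dimensions: 784-dimensional flattened inputs, hidden sizes 300 and 100, and 10 output classes), the network defined above can be used like this:

net = Activation_Net(784, 300, 100, 10)
x = torch.randn(32, 784)        # a batch of 32 random flattened inputs
logits = net(x)                 # layer1 -> layer2 -> layer3
print(logits.shape)             # torch.Size([32, 10])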