PyTorch LSTM/GRU/RNN: getting the output at every step
By 阿新 • Published 2019-01-05
By default only the final state is returned, so one option is to feed the input one step at a time.
# coding=UTF-8
import torch
import torch.autograd as autograd  # automatic differentiation module (Variable is a no-op wrapper in PyTorch >= 0.4)
import torch.nn as nn              # neural-network module

torch.manual_seed(1)

# Both the input and output of the LSTM cell are 3-dimensional
lstm = nn.LSTM(input_size=3, hidden_size=3)

# Build a sequence of length 5 whose elements are 1x3 tensors;
# the 3 here matches input_size above
inputs = [autograd.Variable(torch.randn(1, 3))
          for _ in range(5)]

# Initialize the hidden state (h_0, c_0), each of shape (num_layers, batch, hidden_size)
hidden = (autograd.Variable(torch.randn(1, 1, 3)),
          autograd.Variable(torch.randn(1, 1, 3)))

# Feed one step at a time; out is that step's output, hidden the updated state
for i in inputs:
    out, hidden = lstm(i.view(1, 1, -1), hidden)
    print(out.size())
    print(hidden[0].size())
    print("--------")
print("-----------------------------------------------")

# Now feed all steps at once
inputs_stack = torch.stack(inputs)  # shape: (5, 1, 3) = (seq_len, batch, input_size)
out, hidden = lstm(inputs_stack, hidden)
print(out.size())
print(hidden[0].size())
Printed output:
(1L, 1L, 3L)
(1L, 1L, 3L)
--------
(1L, 1L, 3L)
(1L, 1L, 3L)
--------
(1L, 1L, 3L)
(1L, 1L, 3L)
--------
(1L, 1L, 3L)
(1L, 1L, 3L)
--------
(1L, 1L, 3L)
(1L, 1L, 3L)
--------
----------------------------------------------
(5L, 1L, 3L)
(1L, 1L, 3L)
Note that the LSTM definition never changes: however many steps you feed in at once, you get that many steps of output back, but hidden always holds only the final state.
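That claim can be checked directly. A minimal sketch, assuming PyTorch >= 0.4 (where plain tensors replace Variable and the initial state defaults to zeros when omitted): the batched `out` stacks every per-step output, and its last step equals the final hidden state `h_n`; GRU and vanilla RNN behave the same way, except their state is a single tensor rather than an (h, c) pair.

```python
import torch
import torch.nn as nn

torch.manual_seed(1)
lstm = nn.LSTM(input_size=3, hidden_size=3)

seq = torch.randn(5, 1, 3)       # (seq_len, batch, input_size)
out, (h_n, c_n) = lstm(seq)      # initial state defaults to zeros when omitted

print(out.shape)                 # one output per step: (5, 1, 3)
print(h_n.shape)                 # only the final state: (1, 1, 3)

# The last step of `out` is exactly the final hidden state of the (single) layer
assert torch.allclose(out[-1], h_n[-1])

# GRU works the same way, but its state is a single tensor instead of (h, c)
gru = nn.GRU(input_size=3, hidden_size=3)
out_g, h_g = gru(seq)
assert torch.allclose(out_g[-1], h_g[-1])
```

For multi-layer or bidirectional RNNs this equality only holds against the last layer's slice of `h_n`, since `out` exposes the top layer's outputs while `h_n` stacks the final state of every layer.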