Implementing a Bidirectional LSTM Classifier ---- Deep Learning with RNNs
阿新 • Published 2019-01-19
Bidirectional Recurrent Neural Networks (Bi-RNN) were first proposed by Schuster and Paliwal in 1997, the same year as LSTM. A Bi-RNN increases the information an RNN can exploit. An ordinary MLP is restricted to fixed-length inputs; an RNN can process time series of variable length, but at any step it can use only past inputs, never future ones. A Bi-RNN consumes both the history and the future of the sequence: two recurrent networks running in opposite temporal directions are connected to the same output layer, so the output layer sees past and future information simultaneously.
Language Modeling is a poor fit for a Bi-RNN: the goal is to predict the next word from the preceding text, so future text cannot be fed to the model. For classification tasks such as handwritten character recognition, machine translation, and protein structure prediction, a Bi-RNN improves results. Baidu's speech recognition system uses Bi-RNNs to integrate the surrounding context and raise accuracy.
The core of the Bi-RNN architecture is to split an ordinary unidirectional RNN into two directions, one running forward in time and one running backward, so that the output at the current time step draws on information from both directions. The two RNNs share no state: the forward RNN's state is passed only to the forward RNN, the backward RNN's state only to the backward RNN, and there is no direct connection between them. Each time step's input is fed to both RNNs, each produces an output from its own state, and the two outputs are connected together at the Bi-RNN output node, which combines them into the final output. Each direction's contribution to the output (or to the loss) emerges during training, as the parameters are driven to suitable values by gradient descent; the sketch below shows this wiring.
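To make the wiring concrete, here is a minimal NumPy sketch with vanilla RNN cells; the names rnn_step, W_x_fw and so on are hypothetical, and the weights are random rather than trained:

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h):
    # One vanilla RNN step: tanh(x_t W_x + h_prev W_h)
    return np.tanh(x_t @ W_x + h_prev @ W_h)

T, n_in, n_hid = 5, 4, 8
xs = np.random.randn(T, n_in)
W_x_fw, W_h_fw = np.random.randn(n_in, n_hid), np.random.randn(n_hid, n_hid)
W_x_bw, W_h_bw = np.random.randn(n_in, n_hid), np.random.randn(n_hid, n_hid)

# Forward RNN scans 1 -> T; backward RNN scans T -> 1. No shared state.
h_fw = np.zeros(n_hid)
fw_states = []
for t in range(T):
    h_fw = rnn_step(xs[t], h_fw, W_x_fw, W_h_fw)
    fw_states.append(h_fw)

h_bw = np.zeros(n_hid)
bw_states = [None] * T
for t in reversed(range(T)):
    h_bw = rnn_step(xs[t], h_bw, W_x_bw, W_h_bw)
    bw_states[t] = h_bw

# The output at step t sees both directions: concatenate the two states.
outputs = [np.concatenate([fw_states[t], bw_states[t]]) for t in range(T)]
print(outputs[0].shape)  # (2 * n_hid,)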
When training a Bi-RNN, the forward and backward RNNs have no interaction, so each can be unrolled into an ordinary feed-forward network and trained with BPTT (back-propagation through time); states and outputs cannot be updated in a single pass. The forward state at t=1 and the backward state at t=T, i.e. the states at the start of each direction, are unknown and must be set by hand (typically to zero, as in the sketch above). The derivative of the forward state at t=T and of the backward state at t=1, i.e. the state derivatives at the end of each direction, are likewise unknown and are set to 0, which amounts to declaring those boundary states unimportant for the parameter update.
Training then proceeds in three steps. First, a forward pass over the input (the inference step): compute the forward RNN states along 1->T, then the backward RNN states along T->1, and obtain the output. Second, a backward pass (differentiating the objective): differentiate the output first, then compute the forward RNN state derivatives along T->1 and the backward RNN state derivatives along 1->T. Third, update the model parameters with the gradients just computed, completing one training step.
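In the TensorFlow implementation later in this post, steps two and three are hidden inside optimizer.minimize; a toy graph (a stand-in for the Bi-RNN, with made-up shapes) makes the split visible:

import tensorflow as tf

# A toy graph standing in for the Bi-RNN; the mechanics are the same.
x = tf.placeholder(tf.float32, [None, 3])
w = tf.Variable(tf.random_normal([3, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

# Step 2: tf.gradients differentiates the graph in reverse topological order,
# which for a Bi-RNN means T->1 over forward states and 1->T over backward ones.
grads = tf.gradients(loss, [w])

# Step 3: apply the gradients; optimizer.minimize(loss) bundles both steps.
train_op = tf.train.AdamOptimizer(0.01).apply_gradients(zip(grads, [w]))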
Each unit in a Bi-RNN may be a plain RNN cell, an LSTM cell, or a GRU cell. Bi-RNN layers can also be stacked, the output of one Bi-RNN layer serving as the input to the next, to extract progressively more abstract features. For classification, the Bi-RNN's output sequence is fed into a fully connected layer, or into a Global Average Pooling layer, followed by a Softmax layer, exactly as in a convolutional network; a sketch follows.
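A minimal sketch of a two-layer stack with a pooled softmax head; I believe tf.contrib.rnn.stack_bidirectional_rnn is available in the same TF 1.x contrib module as the main code below, but the layer count and the pooling head are illustrative choices, not from the original post:

import tensorflow as tf

n_input, n_steps, n_hidden, n_classes = 28, 28, 256, 10
x = tf.placeholder(tf.float32, [None, n_steps, n_input])

# Same list-of-tensors input format as in the single-layer code below.
inputs = tf.unstack(tf.transpose(x, [1, 0, 2]))

# Two stacked Bi-LSTM layers: layer 2 consumes layer 1's concatenated outputs.
cells_fw = [tf.contrib.rnn.BasicLSTMCell(n_hidden) for _ in range(2)]
cells_bw = [tf.contrib.rnn.BasicLSTMCell(n_hidden) for _ in range(2)]
outputs, _, _ = tf.contrib.rnn.stack_bidirectional_rnn(
    cells_fw, cells_bw, inputs, dtype=tf.float32)

# Classification head: average over all steps (global average pooling),
# then a dense layer producing the softmax logits.
pooled = tf.reduce_mean(tf.stack(outputs), axis=0)  # (batch, 2*n_hidden)
logits = tf.layers.dense(pooled, n_classes)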
We now implement a Bidirectional LSTM Classifier in TensorFlow and test it on the MNIST dataset. Load TensorFlow, NumPy, and TensorFlow's bundled MNIST reader; input_data.read_data_sets downloads and reads the dataset.
Set the training parameters: a learning rate of 0.01 (kept low because the optimizer is Adam), a maximum of 400,000 training samples, a batch_size of 128, and a progress report every 10 training steps.
MNIST images are 28x28, so n_input is 28 (the image width) and n_steps, the number of unrolled steps of the LSTM, is also set to 28 (the image height), so that every pixel of the image is used. The network reads one row of 28 pixels per time step and the next row at the next step, as shown below. n_hidden (the number of LSTM hidden units) is set to 256 and n_classes (the number of MNIST classes) to 10.
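A small sketch of this reinterpretation, with random data standing in for a batch from mnist.train.next_batch:

import numpy as np

# A flattened MNIST batch as delivered by mnist.train.next_batch: (batch, 784).
batch = np.random.rand(128, 784).astype(np.float32)

# Reinterpret each image as a sequence of 28 time steps (rows) of 28 pixels.
seq = batch.reshape(128, 28, 28)   # (batch_size, n_steps, n_input)
print(seq[0, 0].shape)             # first row of the first image: (28,)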
Create placeholders for the input x and the training target y. Each input sample keeps a two-dimensional structure: a time series whose first dimension is the number of time steps n_steps and whose second dimension is the per-step data n_input. Define the Softmax layer's weights and biases, initialized with tf.random_normal. Because the bidirectional LSTM has both a forward and a backward LSTM cell, the weight matrix has twice the usual number of input features: 2*n_hidden.
Define the function that builds the Bidirectional LSTM network. The input of shape (batch_size, n_steps, n_input) must become a length-n_steps list whose elements have shape (batch_size, n_input). First, tf.transpose(x, [1, 0, 2]) swaps the first dimension, batch_size, with the second, n_steps; then tf.reshape flattens x to shape (n_steps*batch_size, n_input); finally, tf.split cuts x into a length-n_steps list of tensors of shape (batch_size, n_input), the input format the LSTM cells expect (the shapes are traced step by step below). tf.contrib.rnn.BasicLSTMCell creates the forward and backward LSTM cells, each with n_hidden hidden units and forget_bias set to 1. The forward lstm_fw_cell and backward lstm_bw_cell are passed, together with the input x, to the Bi-RNN interface tf.contrib.rnn.static_bidirectional_rnn, which builds the bidirectional LSTM. The bidirectional LSTM's output is then multiplied by the weight matrix and the bias is added, using the weights and biases defined earlier.
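The same three reshaping steps can be traced in NumPy to verify the shapes (a small batch size is used here just to keep the numbers readable):

import numpy as np

batch_size, n_steps, n_input = 4, 28, 28
x = np.random.rand(batch_size, n_steps, n_input)

# Step 1: swap batch and time axes -> (n_steps, batch_size, n_input).
x = np.transpose(x, [1, 0, 2])
# Step 2: merge the first two axes -> (n_steps * batch_size, n_input).
x = np.reshape(x, [-1, n_input])
# Step 3: split along axis 0 into n_steps pieces of (batch_size, n_input);
# thanks to the transpose, piece t holds time step t for the whole batch.
xs = np.split(x, n_steps)
print(len(xs), xs[0].shape)  # 28 (4, 28)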
The final output goes through tf.nn.softmax_cross_entropy_with_logits, which applies Softmax and computes the loss, and tf.reduce_mean averages it into the cost. The optimizer is Adam with learning rate learning_rate. tf.argmax extracts the predicted class, tf.equal checks whether each prediction is correct, and tf.reduce_mean yields the average accuracy.
Run the training and test steps. After initializing the parameters, a training loop runs while the total number of samples seen (iterations * batch_size) stays below the limit. Each iteration, mnist.train.next_batch fetches one batch and reshape converts it to the expected shape; a feed_dict carrying the input x and the target y is passed in and the training op runs, updating the model parameters. Whenever the iteration count is a multiple of display_step, the prediction accuracy and loss on the current batch are computed and displayed.
Once all training iterations are done, the trained model predicts on the full test set mnist.test.images and the accuracy is displayed.
After training on 400,000 samples, training-set accuracy is essentially 1, and the model reaches 0.983 accuracy on the 10,000-sample test set.
On MNIST the Bidirectional LSTM Classifier does not match a convolutional network. Bi-RNNs and bidirectional LSTMs shine on time-series classification tasks, where they can exploit both the history and the future of the sequence, combine the two contexts, and judge from the whole.
import tensorflow as tf
import numpy as np
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Parameters
learning_rate = 0.01
max_samples = 400000
batch_size = 128
display_step = 10
# Network Parameters
n_input = 28 # MNIST data input (img shape: 28*28)
n_steps = 28 # timesteps
n_hidden = 256 # hidden layer num of features
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
# Define weights
weights = {
    # Hidden layer weights => 2*n_hidden because of forward + backward cells
    'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}
def BiRNN(x, weights, biases):
    # Prepare data shape to match `static_bidirectional_rnn` requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)

    # Permute batch_size and n_steps
    x = tf.transpose(x, [1, 0, 2])
    # Reshape to (n_steps*batch_size, n_input)
    x = tf.reshape(x, [-1, n_input])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.split(x, n_steps)

    # Define lstm cells with tensorflow
    # Forward direction cell
    lstm_fw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Backward direction cell
    lstm_bw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)

    # Get lstm cell output
    outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
        lstm_fw_cell, lstm_bw_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = BiRNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    # Keep training until reaching the maximum number of samples
    while step * batch_size < max_samples:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 sequences of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step*batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print("Optimization Finished!")

    # Calculate accuracy for the 10000 MNIST test images
    test_len = 10000
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_data, y: test_label}))