TensorFlow learning notes (8): dropout
阿新 • Published: 2019-01-04
Dropout is well known to be effective against overfitting.
It is generally applied to the fully connected layers; the convolutional layers usually do without it, and the output layer never uses it. In other words, its applicable range is [input, output).
- tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
- tf.nn.rnn_cell.DropoutWrapper(rnn_cell, input_keep_prob=1.0, output_keep_prob=1.0)
Plain dropout
def dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
#x: the input tensor
#keep_prob: exactly what the name says, the probability of keeping each unit. It can be a tensor, so you can feed 0.5 during training and 1.0 at test time.
#return: x wrapped with dropout. Use it during training; at test time dropout is no longer needed.
# Example:
w = tf.get_variable("w1", shape=[size, out_size])
x = tf.placeholder(tf.float32, shape=[batch_size, size])
keep_prob = tf.placeholder(tf.float32)  # feed 0.5 when training, 1.0 when testing
x = tf.nn.dropout(x, keep_prob=keep_prob)
y = tf.matmul(x, w)
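What `tf.nn.dropout` does under the hood can be sketched in plain NumPy: this is "inverted dropout", where surviving units are scaled by 1/keep_prob so the expected activation is unchanged and nothing special is needed at test time. The helper below is an illustrative sketch, not the TensorFlow implementation:

```python
import numpy as np

def dropout(x, keep_prob, rng):
    # Keep each unit with probability keep_prob; zero out the rest.
    mask = rng.random(x.shape) < keep_prob
    # Scale survivors by 1/keep_prob so E[output] == E[input] (inverted dropout).
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
x = np.ones((4, 8))
y = dropout(x, 0.5, rng)
# Every entry of y is either 0.0 (dropped) or 2.0 (kept and rescaled).
```

Because of the rescaling, averaging over a large input keeps the mean close to the original value, which is why feeding keep_prob=1.0 at test time is all that is required.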
Dropout in RNNs
tf.nn.rnn_cell.DropoutWrapper(rnn_cell, input_keep_prob=1.0, output_keep_prob=1.0)
# Example
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(size, forget_bias=0.0, state_is_tuple=True)
lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=0.5)
# lstm_cell now has dropout applied to its output
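A key point of `output_keep_prob` is that dropout is applied only to the per-step output, not to the recurrent state passed between timesteps. A toy NumPy sketch can make this concrete; the vanilla-RNN cell and function names here are illustrative stand-ins, not TensorFlow API:

```python
import numpy as np

def step(h, x, W, U):
    # Toy vanilla RNN cell (stand-in for the wrapped LSTM above).
    return np.tanh(x @ W + h @ U)

def run_rnn(xs, W, U, output_keep_prob, rng):
    h = np.zeros(W.shape[1])
    outputs = []
    for x in xs:
        # The recurrent state h is updated WITHOUT dropout...
        h = step(h, x, W, U)
        # ...and dropout (inverted scaling) is applied only to the emitted output.
        mask = rng.random(h.shape) < output_keep_prob
        outputs.append(np.where(mask, h / output_keep_prob, 0.0))
    return np.stack(outputs)

rng = np.random.default_rng(1)
xs = rng.standard_normal((50, 3))
W = rng.standard_normal((3, 4))
U = rng.standard_normal((4, 4))
out = run_rnn(xs, W, U, 0.5, rng)   # roughly half the output entries are zeroed
```

Dropping the output rather than the state matters: zeroing the recurrent state itself would erase the cell's memory across timesteps, which is exactly what `DropoutWrapper` avoids.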