【tf.keras】Using optimizers defined in TensorFlow with tf.keras
My tensorflow + keras versions:
print(tf.VERSION) # '1.10.0'
print(tf.keras.__version__) # '2.1.6-tf'
tf.keras does not implement AdamW, i.e. Adam with weight decay. The paper "DECOUPLED WEIGHT DECAY REGULARIZATION" shows that when using Adam, weight decay is not equivalent to L2 regularization. For details, see 當前訓練神經網路最快的方式:AdamW優化演算法+超級收斂 or L2正則=Weight Decay?並不是這樣 (both listed in the References).
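To make the difference concrete, here is a minimal single-step NumPy sketch of the two schemes (bias correction omitted; all names and values are illustrative, not the paper's notation):

import numpy as np

# One simplified Adam step on weights w with gradient g.
eta, lam, beta1, beta2, eps = 1e-3, 1e-2, 0.9, 0.999, 1e-8
w = np.array([1.0, -2.0])
g = 2 * w                          # stand-in for the gradient of the loss
m, v = np.zeros_like(w), np.zeros_like(w)

# Adam + L2 regularization: lam * w is folded into the gradient, so the
# decay term gets rescaled by the adaptive denominator like everything else.
g_l2 = g + lam * w
m_l2 = beta1 * m + (1 - beta1) * g_l2
v_l2 = beta2 * v + (1 - beta2) * g_l2 ** 2
w_l2 = w - eta * m_l2 / (np.sqrt(v_l2) + eps)

# AdamW (decoupled weight decay): the decay is applied directly to the
# weights and bypasses the adaptive rescaling entirely.
m_a = beta1 * m + (1 - beta1) * g
v_a = beta2 * v + (1 - beta2) * g ** 2
w_adamw = w - eta * m_a / (np.sqrt(v_a) + eps) - eta * lam * w

print(w_l2)      # the two results differ ...
print(w_adamw)   # ... which is why weight decay != L2 for Adam

For plain SGD the two schemes coincide; it is exactly Adam's per-parameter adaptive scaling that drives them apart.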
Keras itself does not provide an AdamW optimizer, but TensorFlow does, so in tf.keras we can simply plug in the optimizer from TensorFlow, as shown below:
import tensorflow as tf
from tensorflow.contrib.opt import AdamWOptimizer

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

# adam = tf.train.AdamOptimizer()
# Adam with weight decay
adamw = AdamWOptimizer(weight_decay=1e-4)

model.compile(optimizer=adamw,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_split=0.1)
print(model.evaluate(x_test, y_test))
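Incidentally, tf.contrib.opt also ships extend_with_decoupled_weight_decay, which manufactures a decoupled-weight-decay variant of any tf.train optimizer. A minimal sketch, assuming the TF 1.10 tf.contrib.opt API:

import tensorflow as tf
from tensorflow.contrib.opt import extend_with_decoupled_weight_decay

# Build an AdamW-like class out of the stock Adam optimizer; the extended
# class takes weight_decay as its first constructor argument.
AdamW = extend_with_decoupled_weight_decay(tf.train.AdamOptimizer)
adamw = AdamW(weight_decay=1e-4, learning_rate=0.001)

AdamWOptimizer is essentially this extension applied to AdamOptimizer, so the two spellings should be interchangeable.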
If you only use AdamWOptimizer the way the first example does, everything works fine. But if you add certain elements from tf.keras.callbacks, such as tf.keras.callbacks.ReduceLROnPlateau(), you may hit the exception AttributeError: 'TFOptimizer' object has no attribute 'lr'.
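The reason: when tf.keras is handed a native TensorFlow optimizer, it silently wraps it in a TFOptimizer object, and ReduceLROnPlateau adjusts the learning rate through an lr attribute that this wrapper never defines. Roughly what the callback does at the end of an epoch once the monitored metric has stalled (simplified from the Keras callback source, not the verbatim implementation):

# inside ReduceLROnPlateau.on_epoch_end (simplified):
old_lr = float(K.get_value(self.model.optimizer.lr))  # AttributeError for TFOptimizer
new_lr = old_lr * self.factor
K.set_value(self.model.optimizer.lr, new_lr)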
The code below raises AttributeError: 'TFOptimizer' object has no attribute 'lr' precisely because tf.keras.callbacks.ReduceLROnPlateau() was added; the other two callbacks do not trigger the exception.
import tensorflow as tf
from tensorflow.contrib.opt import AdamWOptimizer

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

# Save the model weights whenever val_acc improves.
ck_callback = tf.keras.callbacks.ModelCheckpoint('checkpoints/weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5', monitor='val_acc', mode='max',
                                                 verbose=1, save_best_only=True, save_weights_only=True)
# Monitor training with TensorBoard.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir='logs')
# If the monitored val_loss has not improved for `patience` epochs,
# reduce the learning rate: lr_new = factor * lr_old.
lr_callback = tf.keras.callbacks.ReduceLROnPlateau(patience=3)

adam = tf.train.AdamOptimizer()
# Adam with weight decay
# adamw = AdamWOptimizer(weight_decay=1e-4)

model.compile(optimizer=adam,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_split=0.1, callbacks=[ck_callback, tb_callback, lr_callback])
print(model.evaluate(x_test, y_test))
The fix is shown below:
import tensorflow as tf
from tensorflow.contrib.opt import AdamWOptimizer
from tensorflow.keras import backend as K
from tensorflow.python.keras.optimizers import TFOptimizer

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

# Save the model weights whenever val_acc improves.
ck_callback = tf.keras.callbacks.ModelCheckpoint('checkpoints/weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5', monitor='val_acc', mode='max',
                                                 verbose=1, save_best_only=True, save_weights_only=True)
# Monitor training with TensorBoard.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir='logs')
# If the monitored val_loss has not improved for `patience` epochs,
# reduce the learning rate: lr_new = factor * lr_old.
lr_callback = tf.keras.callbacks.ReduceLROnPlateau(patience=3)

learning_rate = 0.001
learning_rate = K.variable(learning_rate)
# adam = tf.train.AdamOptimizer(learning_rate=learning_rate)
# # In tensorflow 1.10, TFOptimizer lives in tensorflow.python.keras.optimizers;
# # it is not available from tensorflow.keras.optimizers.
# adam = TFOptimizer(adam)
# adam.lr = learning_rate

# Adam with weight decay. Pass the Keras variable as the learning rate so
# that when ReduceLROnPlateau calls K.set_value on it, the change actually
# reaches the underlying TensorFlow optimizer.
adamw = AdamWOptimizer(weight_decay=1e-4, learning_rate=learning_rate)
adamw = TFOptimizer(adamw)
# Expose the same variable as the `lr` attribute the callback expects.
adamw.lr = learning_rate

model.compile(optimizer=adamw,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_split=0.1, callbacks=[ck_callback, tb_callback, lr_callback])
print(model.evaluate(x_test, y_test))
Wrapping the native optimizer in a TFOptimizer is all it takes; tf.keras.callbacks.ReduceLROnPlateau() then works without raising. Note that the learning-rate variable is both passed to AdamWOptimizer's constructor and assigned to the wrapper's lr attribute: the attribute is what the callback reads and writes, while the constructor argument is what makes the updated value actually take effect inside the optimizer.
When importing TFOptimizer, pay attention to where it lives. In tensorflow 1.10 there are two ways to import keras, tensorflow.keras and tensorflow.python.keras, which is a bit confusing, and TFOptimizer can only be found under the latter. (Oddly, tensorflow 1.14 seems to drop the first import path, while tensorflow 2.0 brings it back.)
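If the code has to run across versions, a tolerant import can hide the difference; a minimal sketch, assuming only these two candidate locations:

try:
    # public path, present in some releases
    from tensorflow.keras.optimizers import TFOptimizer
except ImportError:
    # TF 1.10: only found under the internal path
    from tensorflow.python.keras.optimizers import TFOptimizer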
References
當前訓練神經網路最快的方式:AdamW優化演算法+超級收斂 -- 機器之心
L2正則=Weight Decay?並不是這樣 -- 楊鎰銘
ReduceLROnPlateau with native optimizer: 'TFOptimizer' object has no attribute 'lr' -- GitHub issue #20