
tensorflow Optimizer.minimize() and gradient clipping


In TensorFlow, a model is typically trained with code like the following:

import tensorflow as tf

# lr, loss and sess are assumed to be defined elsewhere.
# Define the optimizer
opt = tf.train.AdamOptimizer(lr)
# Define the training op
train = opt.minimize(loss)

for i in range(100):
    sess.run(train)

train points to the training node in the tf.Graph. The call opt.minimize(loss) is rather opaque; it is equivalent to:

# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(loss, <list of variables>)

# grads_and_vars is a list of tuples (gradient, variable).
# Ask the optimizer to apply the gradients.
opt.apply_gradients(grads_and_vars)

That is, minimize() builds both the node that computes the gradients and the node in which the optimizer uses those gradients to update the variables.
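To make the equivalence concrete, here is a minimal runnable sketch (TF 1.x; the tiny quadratic loss, learning rate and all names are illustrative, not from the original post) that builds the training op in the explicit two-step form:

import tensorflow as tf

# A trivially small graph (illustrative only): minimize (w - 4)^2.
w = tf.Variable(0.0)
loss = tf.square(w - 4.0)

opt = tf.train.AdamOptimizer(0.1)

# Two-step form; opt.minimize(loss) would build an equivalent training op.
grads_and_vars = opt.compute_gradients(loss)   # [(gradient, variable), ...]
train = opt.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train)
    print(sess.run(w))  # approaches 4.0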

This makes it possible to modify the gradients before they are applied, for example:

grads_and_vars = opt.compute_gradients(loss, <list of variables>)
capped_grads_and_vars = [(MyCapper(grad), var)
                         for grad, var in grads_and_vars]
opt.apply_gradients(capped_grads_and_vars)

Here are two examples.

# tf.clip_by_value(
#     t,
#     clip_value_min,
#     clip_value_max,
#     name=None
# )

grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(grad, -1., 1.), var)
                         for grad, var in grads_and_vars]
opt.apply_gradients(capped_grads_and_vars)
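One practical caveat: compute_gradients() returns None for variables that do not contribute to the loss, and tf.clip_by_value(None, ...) raises an error, so the gradients are often guarded before clipping. A minimal sketch of that guarded variant:

capped_grads_and_vars = [
    (tf.clip_by_value(grad, -1., 1.), var) if grad is not None else (grad, var)
    for grad, var in grads_and_vars
]
opt.apply_gradients(capped_grads_and_vars)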
# tf.clip_by_global_norm(
#     t_list,
#     clip_norm,
#     use_norm=None,
#     name=None
# )
# Returns:
#     list_clipped: A list of Tensors of the same type as t_list.
#     global_norm: A 0-D (scalar) Tensor representing the global norm.

opt = tf.train.AdamOptimizer(lr)
# Unzip the (gradient, variable) pairs into separate tuples.
grads, variables = zip(*opt.compute_gradients(loss))
# Rescale the gradients jointly so that their global norm is at most 5.0.
grads, _ = tf.clip_by_global_norm(grads, 5.0)
train = opt.apply_gradients(zip(grads, variables))
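For completeness, an end-to-end runnable sketch of global-norm clipping (TF 1.x; the toy linear model, learning rate and clip threshold 5.0 are illustrative assumptions, not from the original post):

import numpy as np
import tensorflow as tf

# Toy data: fit y = 3x with a single weight (illustrative only).
x = tf.constant(np.random.rand(100, 1), dtype=tf.float32)
y = 3.0 * x
w = tf.Variable(tf.zeros([1, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

opt = tf.train.AdamOptimizer(0.1)
grads, variables = zip(*opt.compute_gradients(loss))
# clip_by_global_norm also returns the pre-clipping global norm,
# which is useful to monitor during training.
clipped_grads, global_norm = tf.clip_by_global_norm(grads, 5.0)
train = opt.apply_gradients(zip(clipped_grads, variables))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        _, norm = sess.run([train, global_norm])
    print(sess.run(w))  # approaches 3.0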
