A worked example of gradient computation with TensorFlow's tf.gradients
Without further ado, here is the code:
import tensorflow as tf  # TensorFlow 1.x API

w1 = tf.Variable([[1, 2]])
w2 = tf.Variable([[3, 4]])

# res = w1 x [[2], [1]] = [[1*2 + 2*1]] = [[4]]
res = tf.matmul(w1, [[2], [1]])
# Gradient of res with respect to w1
grads = tf.gradients(res, [w1])

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(res))
    print(sess.run(grads))
The output is:

[[4]]
[array([[2, 1]], dtype=int32)]
Here is how to read this: res depends on w1. Write the parameters of w1 as [a1, a2]; then:

res = 2*a1 + a2

Differentiating res with respect to a1 and a2 gives [[2, 1]], which is the gradient corresponding to w1.
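The same result can be reproduced on TensorFlow 2.x, where sessions are gone and gradients are usually taken with tf.GradientTape. Below is a minimal sketch under that assumption; note the variable must use a float dtype, since the tape only tracks floating-point tensors:

import tensorflow as tf  # assumes TensorFlow 2.x

# Float dtype: GradientTape only records floating-point variables.
w1 = tf.Variable([[1.0, 2.0]])

with tf.GradientTape() as tape:
    res = tf.matmul(w1, [[2.0], [1.0]])  # res = 2*a1 + 1*a2

grads = tape.gradient(res, w1)
print(res.numpy())    # [[4.]]
print(grads.numpy())  # [[2. 1.]]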
A second example feeds the gradients returned by tf.gradients into tf.clip_by_global_norm to cap their global norm:

import tensorflow as tf

def gradient_clip(gradients, max_gradient_norm):
    """Clip gradients to a maximum global norm."""
    clipped_gradients, gradient_norm = tf.clip_by_global_norm(
        gradients, max_gradient_norm)
    # Record the pre- and post-clipping norms for TensorBoard.
    gradient_norm_summary = [tf.summary.scalar("grad_norm", gradient_norm)]
    gradient_norm_summary.append(
        tf.summary.scalar("clipped_gradient",
                          tf.global_norm(clipped_gradients)))
    return clipped_gradients

w1 = tf.Variable([[3.0, 2.0]])
# w2 = tf.Variable([[3, 4]])
params = tf.trainable_variables()
res = tf.matmul(w1, [[3.0], [1.0]])
opt = tf.train.GradientDescentOptimizer(1.0)
grads = tf.gradients(res, [w1])
clipped_gradients = gradient_clip(grads, 2.0)
global_step = tf.Variable(0, name='global_step', trainable=False)
# update = opt.apply_gradients(zip(clipped_gradients, params),
#                              global_step=global_step)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(res))
    print(sess.run(grads))
    print(sess.run(clipped_gradients))
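To check the numbers by hand: res = 3*3 + 2*1 = 11, and the raw gradient with respect to w1 is [[3., 1.]], whose global norm is sqrt(3^2 + 1^2) = sqrt(10) ≈ 3.16. Since that exceeds max_gradient_norm = 2.0, tf.clip_by_global_norm rescales the gradient by 2.0/sqrt(10) ≈ 0.632, so the clipped gradient printed at the end should be approximately [[1.90, 0.63]].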
That is everything in this tf.gradients example. I hope it gives you a useful reference, and thank you for your support.