
TensorFlow Machine Learning Cookbook, Chapter 2 (Part 2)


Implementing Backpropagation in TensorFlow

This section starts with a simple regression example: draw 100 random samples from a normal distribution with mean 1 and standard deviation 0.1, multiply them by a variable A, and use the L2 norm as the loss function. In other words, we fit the function X * A = target, where X is the 100 random samples and target is 10, so the optimal value of A is close to 10.

The second example is a simple binary classification algorithm. Generate 100 samples from two normal distributions, N(-1,1) and N(3,1). All samples drawn from N(-1,1) are labeled as class 0, and all samples drawn from N(3,1) are labeled as class 1. The model maps these samples to class predictions through a sigmoid function; in other words, the model is sigmoid(x + A), where A is the variable to fit, and theoretically A = -1. If the two distributions have means m1 and m2, then the value of A that translates them to be equidistant from 0 is -(m1 + m2)/2; here that is -(-1 + 3)/2 = -1. Later we will see how TensorFlow arrives at this value.
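The classification example is not coded until later in the chapter, but the claim that A converges toward -(m1 + m2)/2 = -1 can be checked right away with a plain NumPy sketch (an illustrative aside, not the book's TensorFlow code): gradient descent on the mean cross-entropy of sigmoid(x + A) drives A toward -1 even from a deliberately bad starting point.

```python
import numpy as np

np.random.seed(0)
# 50 samples from each class: N(-1,1) labeled 0, N(3,1) labeled 1
x = np.concatenate([np.random.normal(-1, 1, 50), np.random.normal(3, 1, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

A = 10.0   # deliberately far from the optimum
lr = 0.1
for _ in range(5000):
    p = sigmoid(x + A)
    # gradient of the mean cross-entropy loss w.r.t. A is mean(p - y)
    A -= lr * np.mean(p - y)

print(A)  # should land near the theoretical optimum -1
```

The exact value depends on the sampled data, but with 100 balanced samples it stays close to -1.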

Choosing a suitable learning rate helps a machine learning algorithm converge. The optimizer type also needs to be specified; both examples here use standard gradient descent, implemented in TensorFlow by the GradientDescentOptimizer() function.
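The effect of the learning rate can be seen even without TensorFlow. In a minimal NumPy sketch of gradient descent on the loss (A - 10)^2 (i.e. the regression model with x fixed at 1; values chosen here purely for illustration), a small rate converges steadily while an overly large one overshoots further on every step:

```python
def run(lr, steps=100):
    # gradient descent on the scalar loss (A - 10)^2, starting from A = 0
    A = 0.0
    for _ in range(steps):
        grad = 2.0 * (A - 10.0)  # d/dA (A - 10)^2
        A -= lr * grad
    return A

print(run(0.02))  # approaches 10
print(run(1.5))   # diverges: |1 - 2*lr| > 1, so the error doubles each step
```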

Here is the regression example:

1. Import Python's numerical computation modules, numpy and tensorflow:

In [15]:
import numpy as np
import tensorflow as tf

2. Create a computational graph session:

In [16]:
sess = tf.Session()

3. Generate the data, and create the placeholders and the variable A:

In [17]:
x_vals = np.random.normal(1, 0.1, 100)
x_vals
Out[17]:
array([0.85473643, 0.96126079, 0.99987964, 0.94898843, 1.04117097,
       0.97721906, 1.17807261, 0.83860367, 1.28056141, 1.02976099,
       0.90844363, 1.05311543, 1.10732355, 0.94467634, 0.97918689,
       0.94916167, 0.87431717, 1.04365034, 0.9653559 , 0.9738876 ,
       0.94834554, 1.04800372, 0.97612144, 0.97875486, 1.08076762,
       0.89620432, 0.82966182, 1.01347914, 1.00655594, 1.00972554,
       1.0956883 , 1.01281699, 0.88992947, 1.04429882, 1.01027622,
       0.91045714, 1.10571857, 1.0064056 , 1.09069858, 0.91892655,
       0.99566244, 0.96414187, 1.10456956, 1.03746805, 1.05676228,
       1.05400922, 0.91619416, 1.00368318, 1.01889345, 1.01920683,
       0.9712843 , 0.99061975, 0.98477408, 1.02996796, 0.95895593,
       0.94575059, 0.89801272, 1.06555307, 0.85761454, 1.13257007,
       1.13296022, 0.96402961, 1.10022208, 0.99971843, 0.98802702,
       0.94654868, 1.08425381, 0.84186499, 0.95389053, 1.01410783,
       0.91944571, 1.1104405 , 1.04115229, 1.02436364, 1.03605459,
       1.06967948, 1.1200382 , 1.08068316, 0.89911599, 1.0328783 ,
       1.19179204, 1.10538897, 1.07498215, 1.13399276, 1.08425489,
       1.25083017, 1.05845486, 1.07359734, 1.18477225, 0.74738841,
       1.14564339, 1.071004  , 0.80177086, 0.88481168, 1.13268239,
       1.0493438 , 1.06613515, 0.89849749, 0.99410046, 0.99045711])
In [18]:
y_vals = np.repeat(10., 100)
y_vals
Out[18]:
array([10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
       10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
       10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
       10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
       10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
       10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
       10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
       10., 10., 10., 10., 10., 10., 10., 10., 10.])
In [19]:
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)
A = tf.Variable(tf.random_normal(shape=[1]))
A
Out[19]:
<tf.Variable 'Variable_2:0' shape=(1,) dtype=float32_ref>

4. Add the multiplication operation:

In [20]:
my_output = tf.multiply(x_data, A)

5. Add the L2 loss function:

In [21]:
loss = tf.square(my_output - y_target)
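For this single-parameter model, the gradient that TensorFlow's automatic differentiation will compute is d/dA (x*A - y)^2 = 2x(x*A - y). As a quick sanity check (an illustrative aside, not part of the book's code), the analytic gradient agrees with a central finite difference in plain NumPy:

```python
x, y, A = 1.05, 10.0, 3.0   # arbitrary sample values for the check

analytic = 2.0 * x * (x * A - y)   # d/dA (x*A - y)^2

eps = 1e-6
numeric = (((x * (A + eps) - y) ** 2) - ((x * (A - eps) - y) ** 2)) / (2 * eps)

print(analytic, numeric)  # the two agree to several decimal places
```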

6. Before running, initialize the variables:

In [22]:
init = tf.global_variables_initializer()
sess.run(init)

7. Now declare the optimizer for the variables, using standard gradient descent with a learning rate of 0.02:

In [23]:
my_opt = tf.train.GradientDescentOptimizer(learning_rate=0.02)
train_step = my_opt.minimize(loss)

8. The last step is to train the algorithm. Pick a random x and y and feed them into the computational graph. TensorFlow automatically computes the loss and adjusts A to minimize it:

In [25]:
for i in range(100):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i + 1) % 25 == 0:
        print('Step #' + str(i + 1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))
Step #25 A = [10.019832]
Loss = [0.2743014]
Step #50 A = [9.841655]
Loss = [0.00076162]
Step #75 A = [9.859462]
Loss = [0.15839773]
Step #100 A = [9.82074]
Loss = [0.08536417]
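What the optimizer does on each step can be replayed in plain NumPy (a sketch that mirrors the TensorFlow loop above, not a replacement for it): pick one random sample and apply the update A ← A - 0.02 · 2x(xA - y). Just as in the run above, A ends up near 10.

```python
import numpy as np

np.random.seed(1)
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)

A = np.random.normal()             # random start, like tf.random_normal
for i in range(100):
    idx = np.random.choice(100)
    x, y = x_vals[idx], y_vals[idx]
    grad = 2.0 * x * (x * A - y)   # gradient of the L2 loss (x*A - y)^2
    A -= 0.02 * grad               # the update GradientDescentOptimizer applies

print(A)  # close to 10, matching the TensorFlow run
```

The random seed here is arbitrary; different seeds give slightly different endpoints, all scattered around 10.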
