
TensorFlow class notes (3)

Loss functions

"""
Neuron model: f(∑xi*wi + b), where b is the bias term and f is the activation function.
Activation functions:
tf.nn.relu()
tf.nn.sigmoid()
tf.nn.tanh()
NN complexity:
number of layers = number of hidden layers + 1 output layer
total number of parameters = all w + all b
Loss function (loss): the gap between the predicted value y and the known answer y_.
The optimization goal is to make loss as small as possible.
Mean squared error (MSE): MSE(y_, y)
loss_mse = tf.reduce_mean(tf.square(y_ - y))
"""
# coding: utf-8
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
seed = 23455
COST = 1    # cost of one unit is 1
PROFIT = 9  # profit of one unit is 9

rdm = np.random.RandomState(seed) # seed the random number generator
X = rdm.rand(32, 2)               # 32-row, 2-column array of random numbers in [0, 1)
Y_ = [[x1+x2+(rdm.rand()/10.0-0.05)] for [x1, x2] in X]  # label y_ = x1 + x2 plus noise in [-0.05, 0.05)

#1 Define the network's inputs, parameters and outputs, and define forward propagation
x = tf.placeholder(tf.float32, shape=[None, 2])
y_ = tf.placeholder(tf.float32, shape=[None, 1])

w1 = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))

y = tf.matmul(x, w1)

#2 Define the loss function and the back-propagation method
#loss_mse = tf.reduce_mean(tf.square(y-y_))
loss_mse = tf.reduce_sum(tf.where(tf.greater(y,y_),(y-y_)*COST,(y_-y)*PROFIT))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)

#3 Train the neural network
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)

    # Train for STEPS rounds
    STEPS = 20000
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x:X[start:end], y_:Y_[start:end]})
        if i % 500 == 0:
            total_loss = sess.run(loss_mse, feed_dict={x: X, y_: Y_})
            print("After %d training step(s), loss on all data is %g" % (i, total_loss))
            print(sess.run(w1), "\n")

    # Print the final training result
    print("\n")
    print("Final w1 is:\n")
    print(sess.run(w1),"\n")
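    # Hypothetical sanity check (not part of the original notes): run one made-up
    # sample through the trained graph. Because PROFIT (9) outweighs COST (1),
    # the learned w1 tends to predict slightly above the noise-free target x1 + x2.
    print(sess.run(y, feed_dict={x: [[0.5, 0.5]]}), "\n")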

"""
Custom loss function
If y < y_: too few items were stocked, so profit was lost.
If y > y_: too many items were stocked, so cost was lost.
loss = tf.reduce_sum(tf.where(tf.greater(y, y_), COST*(y-y_), PROFIT*(y_-y)))
tf.where(condition, a, b) above works like the ternary operator ?:, picking a where the condition is true and b otherwise.
Run result:
Final w1 is:

[[1.0430964 ]
 [0.98464024]]

Cross entropy (ce): measures the distance between two probability distributions; the smaller the distance, the closer the prediction is to the answer.
H(y_, y) = -∑ y_ * log(y)
ce = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-12, 1.0)))
clip_by_value clips y up to 1e-12 when y < 1e-12 and down to 1.0 when y > 1.0.

For n-way classification, softmax() makes the outputs satisfy a probability distribution:
ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
cem = tf.reduce_mean(ce)
"""