TensorFlow in Practice 1 --- Simple MNIST Handwritten-Digit Recognition Based on Linear Regression
阿新 • Published: 2019-01-28
MNIST handwritten-digit recognition is the classic introductory TensorFlow tutorial. The model here reaches a recognition rate of about 91% on the test set. The optimization algorithm is gradient descent, the activation function is softmax (i.e. a softmax regression), and there is no hidden layer. The basic procedure can be divided into seven steps:
1. Define the variables
2. Define how the results are computed from the data (build the graph)
3. Set up the optimization algorithm (gradient descent, train_step)
4. Set up the activation function (softmax; see the short sketch after this list)
5. Initialize the data (variables and session)
6. Run the training
7. Evaluate the model
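Step 4's softmax turns the ten raw scores x*W + b into a probability distribution over the digit classes, and the cross-entropy used in step 3 measures how far that distribution is from the one-hot label. A minimal NumPy sketch of the two formulas, purely for illustration (the names softmax and cross_entropy below are not part of the TensorFlow program):

import numpy as np

def softmax(scores):
    # Shift by the max for numerical stability, then normalize the exponentials
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def cross_entropy(p, one_hot_label):
    # -sum(label * log(prediction)): the quantity the TensorFlow code minimizes
    return -np.sum(one_hot_label * np.log(p))

scores = np.array([2.0, 1.0, 0.1])   # raw scores for three hypothetical classes
label = np.array([1.0, 0.0, 0.0])    # one-hot ground truth
p = softmax(scores)
print(p)                              # roughly [0.659, 0.242, 0.099], sums to 1
print(cross_entropy(p, label))        # roughly 0.417; smaller when p matches the label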
The complete code is as follows:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
INPUT_NODE = 784   # 28*28 pixels per input image
OUTPUT_NODE = 10   # 10 output classes (digits 0-9)
# Learning rate for gradient descent
GDDOWN = 0.01
# Number of training iterations
TRAINING_TIMES = 1000
# Number of examples per batch
TRAINING_STEPS = 100
if __name__ == '__main__':
    mnist = input_data.read_data_sets("/path/to/MNIST_data/", one_hot=True)
    # First, print the sizes of the MNIST data splits
    print("input_data's train size: ", mnist.train.num_examples)
    print("input_data's validation size: ", mnist.validation.num_examples)
    print("input_data's test size: ", mnist.test.num_examples)
    # Placeholders for the input images and one-hot labels, and the model variables
    x = tf.placeholder("float", shape=[None, INPUT_NODE])
    y_ = tf.placeholder("float", shape=[None, OUTPUT_NODE])
    W = tf.Variable(tf.zeros([INPUT_NODE, OUTPUT_NODE]))
    b = tf.Variable(tf.zeros([OUTPUT_NODE]))
    # Initialize the variables and start an interactive session
    init = tf.global_variables_initializer()  # newer name for tf.initialize_all_variables()
    sess = tf.InteractiveSession()
    sess.run(init)
    # Build the graph: softmax regression y = softmax(x*W + b)
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    # Loss (cross-entropy) and optimizer (gradient descent)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    train_step = tf.train.GradientDescentOptimizer(GDDOWN).minimize(cross_entropy)
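    # Note: -tf.reduce_sum(y_ * tf.log(y)) is the cross-entropy summed over the
    # whole batch. tf.nn.softmax_cross_entropy_with_logits, applied to the
    # pre-softmax logits tf.matmul(x, W) + b, is numerically more stable than
    # computing log(softmax(...)) by hand; the explicit formula is kept here to
    # match the tutorial.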
    for i in range(TRAINING_TIMES):
        # Train on the next mini-batch
        batch_xs, batch_ys = mnist.train.next_batch(TRAINING_STEPS)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
        # Evaluate the model on the test set (defining these ops once, before the
        # loop, would avoid adding new nodes to the graph on every iteration)
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        j = i + 1
        print("Training round %d, %d examples trained so far" % (j, j * TRAINING_STEPS))
        # print("prediction correctness: " + correct_prediction + "\n")
        print("Current accuracy: ")
        print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
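With these settings (1000 iterations, batches of 100, learning rate 0.01) the printed accuracy should climb to roughly the 91% quoted at the start. Note that the code uses the TensorFlow 1.x API (placeholders, sessions, and the tensorflow.examples.tutorials.mnist module); on newer TensorFlow releases that tutorial module is no longer bundled, so the data would have to be loaded another way. A minimal sketch of an alternative loader using tf.keras.datasets (the names train_x, train_y, etc. are illustrative, and batching would then have to be done by slicing these arrays instead of calling mnist.train.next_batch):

import numpy as np
import tensorflow as tf

# Load MNIST, flatten each 28*28 image to a 784-dim vector scaled to [0, 1],
# and one-hot encode the labels to match the feed_dict format used above.
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
train_x = train_x.reshape(-1, 784).astype("float32") / 255.0
test_x = test_x.reshape(-1, 784).astype("float32") / 255.0
train_y = np.eye(10)[train_y]   # shape (60000, 10)
test_y = np.eye(10)[test_y]     # shape (10000, 10)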