Handwritten digit recognition code that runs end to end.
Source:
https://github.com/caicloud/tensorflow-tutorial/tree/master/Deep_Learning_with_TensorFlow/1.0.0/Chapter05/
The original files fail under Python 3; I was unable to resolve the problem.
# Example code from section 5.5, p. 126 of the book *TensorFlow: 實戰Google深度學習框架*.
Requirements:
Python 2.7
TF == 1.4
How to run:
Run file 2 (training) and then file 3 (evaluation); remember to change the model save path first.
Pretrained model download: https://pan.baidu.com/s/1aXLv3K1agYUUZbtGlXTgHg  password: x5jc
After extracting the model, the folder should look as shown; then the code can be run.
File 1: mnist_inference.py
# -*- coding:utf-8 -*-
# From p. 126 of the book; see
# https://github.com/caicloud/tensorflow-tutorial/tree/master/Deep_Learning_with_TensorFlow/1.0.0/Chapter05/5.%20MNIST%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5
# This file defines the forward-propagation pass of the network and its
# parameters; both training and evaluation call inference() directly.
import tensorflow as tf

# Network-structure parameters
INPUT_NODE = 784    # input-layer nodes = pixels per image (28 * 28)
OUTPUT_NODE = 10    # output-layer nodes = number of classes
LAYER1_NODE = 500   # hidden-layer nodes

'''
tf.get_variable() fetches a variable:
during training it creates the variables;
during evaluation the same variables are loaded from the saved model.
'''
def get_weight_variable(shape, regularizer):
    weights = tf.get_variable(
        "weights", shape,
        initializer=tf.truncated_normal_initializer(stddev=0.1))
    if regularizer is not None:
        tf.add_to_collection('losses', regularizer(weights))
    return weights

# Forward propagation
def inference(input_tensor, regularizer):
    # Declare the first layer's variables and compute its activations
    with tf.variable_scope('layer1'):
        weights = get_weight_variable([INPUT_NODE, LAYER1_NODE], regularizer)
        biases = tf.get_variable(
            "biases", [LAYER1_NODE], initializer=tf.constant_initializer(0.0))
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)
    # Declare the second (output) layer's variables and compute the logits
    with tf.variable_scope('layer2'):
        weights = get_weight_variable([LAYER1_NODE, OUTPUT_NODE], regularizer)
        biases = tf.get_variable(
            "biases", [OUTPUT_NODE], initializer=tf.constant_initializer(0.0))
        layer2 = tf.matmul(layer1, weights) + biases
    # Return the result of the forward pass
    return layer2
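The forward pass defined above is just two affine maps with a ReLU between them. As a minimal pure-Python sketch of the same computation (toy sizes and hand-picked weights for illustration, not the TF graph itself):

```python
def affine(x, W, b):
    # x: input vector; W[i][j]: weight from input i to output j; b: biases
    return [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
            for j in range(len(b))]

def relu(v):
    return [max(0.0, a) for a in v]

# Toy network: 3 inputs -> 2 hidden units -> 2 outputs
x = [1.0, -2.0, 0.5]
W1 = [[1.0, 0.0], [0.0, 1.0], [2.0, -1.0]]
b1 = [0.0, 0.0]
W2 = [[1.0, -1.0], [1.0, 1.0]]
b2 = [0.5, 0.5]

# Mirrors inference(): layer1 = relu(x W1 + b1), logits = layer1 W2 + b2
layer1 = relu(affine(x, W1, b1))
logits = affine(layer1, W2, b2)
```

Note that, as in the TF code, no softmax is applied here: the output layer returns raw logits, and the softmax happens inside the loss function during training.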
File 2: mnist_train.py
# -*- coding:utf-8 -*-
# Training procedure for the network.
import os

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# mnist_inference.py defines the forward pass and the network parameters
import mnist_inference

# Training hyperparameters
BATCH_SIZE = 100               # batch size: smaller -> closer to SGD, larger -> closer to full-batch gradient descent
LEARNING_RATE_BASE = 0.8       # base learning rate
LEARNING_RATE_DECAY = 0.99     # learning-rate decay factor
REGULARIZATION_RATE = 0.0001   # coefficient of the regularization term in the loss
TRAINING_STEPS = 30000         # number of training steps
MOVING_AVERAGE_DECAY = 0.99    # moving-average decay rate
MODEL_SAVE_PATH = "/Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_model/"  # model save path
MODEL_NAME = "mnist_model"     # model name

# The training procedure
def train(mnist):
    # Placeholders for the inputs and the correct labels
    x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE], name='y-input')

    # L2 regularization for the loss function
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    # Call the constants and forward pass defined in mnist_inference
    y = mnist_inference.inference(x, regularizer)  # predictions

    # Counter for the number of training steps (not trainable)
    global_step = tf.Variable(0, trainable=False)

    # Apply a moving average over all trainable variables
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())

    # Cross-entropy loss: measures the gap between predictions and labels.
    # The sparse variant is used when each example has exactly one correct
    # class; logits=y is the forward-pass result, labels=tf.argmax(y_, 1)
    # converts the one-hot labels to integer class indices.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)  # mean over the batch
    loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Minimize the loss
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')

    # Saver for persisting the model
    saver = tf.train.Saver()
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        # No validation here: validation and testing have their own script.
        for i in range(TRAINING_STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            _, loss_value, step = sess.run([train_op, loss, global_step],
                                           feed_dict={x: xs, y_: ys})
            # Save the model every 1000 steps
            if i % 1000 == 0:
                # Report the loss on the current batch
                print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
                # global_step appends the step count to each checkpoint name
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME),
                           global_step=global_step)

def main(argv=None):
    # Load the MNIST dataset (downloaded automatically if missing)
    # mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    mnist = input_data.read_data_sets("/Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    tf.app.run()

'''
Python 3.5 and 3.7 raise: UnboundLocalError: local variable 'self' referenced before assignment
Correct output under Python 2.7, tf == 1.4:
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/train-images-idx3-ubyte.gz
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/train-labels-idx1-ubyte.gz
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/t10k-images-idx3-ubyte.gz
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/t10k-labels-idx1-ubyte.gz
2018-11-15 17:39:17.343532: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
After 1 training step(s), loss on training batch is 3.50884.
/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py:954: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
  if coord_checkpoint_filename == ckpt.model_checkpoint_path:
After 1001 training step(s), loss on training batch is 0.253824.
After 2001 training step(s), loss on training batch is 0.188473.
After 3001 training step(s), loss on training batch is 0.146706.
After 4001 training step(s), loss on training batch is 0.139439.
After 5001 training step(s), loss on training batch is 0.10672.
After 6001 training step(s), loss on training batch is 0.100585.
After 7001 training step(s), loss on training batch is 0.0929439.
After 8001 training step(s), loss on training batch is 0.0827919.
After 9001 training step(s), loss on training batch is 0.0763581.
After 10001 training step(s), loss on training batch is 0.0697737.
After 11001 training step(s), loss on training batch is 0.0648445.
After 12001 training step(s), loss on training batch is 0.0604293.
After 13001 training step(s), loss on training batch is 0.05912.
After 14001 training step(s), loss on training batch is 0.0504698.
After 15001 training step(s), loss on training batch is 0.0480046.
After 16001 training step(s), loss on training batch is 0.0489372.
After 17001 training step(s), loss on training batch is 0.0454429.
After 18001 training step(s), loss on training batch is 0.0451788.
After 19001 training step(s), loss on training batch is 0.0475639.
After 20001 training step(s), loss on training batch is 0.0405349.
After 21001 training step(s), loss on training batch is 0.0395247.
After 22001 training step(s), loss on training batch is 0.0376075.
After 23001 training step(s), loss on training batch is 0.0420034.
After 24001 training step(s), loss on training batch is 0.040975.
After 25001 training step(s), loss on training batch is 0.0387627.
After 26001 training step(s), loss on training batch is 0.0434365.
After 27001 training step(s), loss on training batch is 0.0374968.
After 28001 training step(s), loss on training batch is 0.0359461.
After 29001 training step(s), loss on training batch is 0.0330341.
[Finished in 161.1s]
'''
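Three of the numerical pieces used in the training script can be sketched in plain Python to show what TF computes. This is a hedged illustration, not the TF implementation; 55000 is the size of the standard MNIST training split, so DECAY_STEPS corresponds to roughly one pass over the data.

```python
import math

BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
NUM_EXAMPLES = 55000                        # MNIST training-split size
DECAY_STEPS = NUM_EXAMPLES // BATCH_SIZE    # steps per "epoch"

def staircase_lr(global_step):
    # tf.train.exponential_decay with staircase=True raises the decay
    # factor to the *integer* number of completed decay periods, so the
    # learning rate drops in discrete steps rather than continuously.
    return LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step // DECAY_STEPS)

def ema_update(shadow, value, decay=0.99):
    # tf.train.ExponentialMovingAverage keeps one shadow value per
    # variable, updated as shadow = decay * shadow + (1 - decay) * value.
    # (When a step counter is supplied, as in mnist_train.py, TF also caps
    # decay at (1 + step) / (10 + step) so early averages adapt faster.)
    return decay * shadow + (1 - decay) * value

def sparse_softmax_xent(logits, label):
    # Per-example value of sparse_softmax_cross_entropy_with_logits:
    # softmax over the logits, then the negative log-probability of the
    # integer class label (TF uses a fused, numerically stabler kernel).
    m = max(logits)                          # shift for stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[label] / sum(exps))
```

For example, staircase_lr stays at 0.8 for the first 550 steps and only then drops to 0.8 * 0.99, which matches the staircase=True behaviour configured above.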
File 3: mnist_eval.py
# -*- coding:utf-8 -*-
# Evaluation procedure for the network.
'''
Runs once every 10 seconds, each time loading the newest saved checkpoint.
The training script does not necessarily write a new checkpoint every
10 seconds, so some checkpoints are evaluated more than once.
The accuracy_score computation could be changed to output the digit
predicted for each image, which would make the results submittable to Kaggle.
'''
import time

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

import mnist_inference  # forward pass and network parameters
import mnist_train      # training hyperparameters and the model save path

# 1. Load the newest model every 10 seconds.
# Interval between evaluations, in seconds.
EVAL_INTERVAL_SECS = 10

def evaluate(mnist):
    with tf.Graph().as_default() as g:
        # Input/output placeholders
        x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='x-input')
        y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE], name='y-input')
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}

        # Forward pass via the shared function; no regularizer at eval time
        y = mnist_inference.inference(x, None)

        # tf.argmax(y, 1) gives the predicted class for each example
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        # Load the model through variable renaming so the moving-average
        # shadow values are restored into the variables, reusing the
        # forward pass defined in mnist_inference.py
        variable_averages = tf.train.ExponentialMovingAverage(mnist_train.MOVING_AVERAGE_DECAY)
        variables_to_restore = variable_averages.variables_to_restore()
        saver = tf.train.Saver(variables_to_restore)

        # Recompute the accuracy every EVAL_INTERVAL_SECS seconds to track it
        while True:
            with tf.Session() as sess:
                # Find the newest checkpoint in the directory automatically
                ckpt = tf.train.get_checkpoint_state(mnist_train.MODEL_SAVE_PATH)
                if ckpt and ckpt.model_checkpoint_path:
                    # Load the model
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    # Recover the step count from the checkpoint file name
                    global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                    accuracy_score = sess.run(accuracy, feed_dict=validate_feed)
                    print("After %s training step(s), validation accuracy = %g" % (global_step, accuracy_score))
                else:
                    print('No checkpoint file found')
                    return
            time.sleep(EVAL_INTERVAL_SECS)

def main(argv=None):
    # Load the MNIST dataset (downloaded automatically if missing)
    mnist = input_data.read_data_sets("/Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data", one_hot=True)
    evaluate(mnist)

if __name__ == '__main__':
    main()
'''
Output:
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/train-images-idx3-ubyte.gz
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/train-labels-idx1-ubyte.gz
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/t10k-images-idx3-ubyte.gz
Extracting /Users/apple/Documents/ST/python/python專案/手寫數字教材/MNIST_data/t10k-labels-idx1-ubyte.gz
2018-11-15 17:45:47.696784: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
After 29001 training step(s), validation accuracy = 0.9864
(the line above repeats every 10 seconds: training has finished, so the newest checkpoint stays at step 29001)
'''
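The step count printed above is recovered purely from the checkpoint file name: the Saver appends "-&lt;global_step&gt;" to the name on every save, and mnist_eval.py splits it back out. A small sketch with a hypothetical path:

```python
# Saver writes checkpoints named "<MODEL_NAME>-<global_step>"; the step is
# recovered by splitting off the directory, then the trailing number.
# The path below is hypothetical, for illustration only.
ckpt_path = "/tmp/MNIST_model/mnist_model-29001"
global_step = ckpt_path.split('/')[-1].split('-')[-1]
```

Note that global_step is recovered as a string, which is why the print statement in mnist_eval.py formats it with %s rather than %d.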