
TensorFlow 02: Convolutional Neural Network - MNIST

Preface

TensorFlow is a library for large-scale numerical computation. Its backend relies on an efficient C++ implementation, and the bridge that connects the Python front end to that backend is called a session.
This post walks through MNIST handwritten digit recognition with a convolutional neural network.
Environment: TensorFlow 1.0, Ubuntu 14.04, Python 2.7
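
As a minimal sketch of the session idea (TensorFlow 1.x API; toy values, not part of the original post), a graph is first described in Python and only computed when run inside a session:

import tensorflow as tf

# Building the graph does not compute anything yet.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

# The session is the bridge to the C++ backend; run() executes the graph.
with tf.Session() as sess:
    print(sess.run(c))  # 6.0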

Data Loading

# coding=utf-8
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
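
If the download succeeds, a quick shape check (a sanity check added here, not in the original post) confirms what the network below expects: flattened 784-dimensional images and 10-dimensional one-hot labels.

print(mnist.train.images.shape)        # (55000, 784)
print(mnist.train.labels.shape)        # (55000, 10)
print(mnist.test.images.shape)         # (10000, 784)
batch = mnist.train.next_batch(50)     # tuple (images, labels) for one mini-batch
print(batch[0].shape, batch[1].shape)  # (50, 784) (50, 10)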

Parameter Initialization, Convolution, and Pooling

# Convolution kernel (weight) initialization
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

# Bias initialization
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Convolution
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Max pooling
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

[Note] Some tricks for parameter initialization:
Weight initialization: initialize the weights with a small amount of noise to break symmetry and avoid zero gradients.
"One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients."
Bias initialization: when using the ReLU activation, initialize the bias with a small positive value to avoid dead neurons. ReLU computes max(0, activation_val), so if activation_val stays below 0 the output after ReLU is always 0.
"Since we're using ReLU neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid 'dead neurons'."

Building the Computation Graph

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Convolution layer 1 --- pooling layer 1
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# Convolution layer 2 --- pooling layer 2
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# Fully connected layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Dropout layer
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Output layer (logits; the softmax itself is applied inside the loss op below)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

# Loss function: cross entropy
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Prediction accuracy of the model
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
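
# For a quick feel of how the accuracy op works, a toy check (hypothetical values,
# not part of the tutorial): argmax picks the predicted / true class per row,
# equal compares them, and the mean of the 0/1 casts is the accuracy.
pred_demo = tf.constant([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]])
true_demo = tf.constant([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
match_demo = tf.equal(tf.argmax(pred_demo, 1), tf.argmax(true_demo, 1))  # [True, False]
# tf.reduce_mean(tf.cast(match_demo, tf.float32)) evaluates to 0.5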

The network consists of two convolution layers, two pooling layers, one fully connected layer, one dropout layer, and one softmax output layer. The parameters are trained with the AdamOptimizer.
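
Tracing the tensor shapes through the graph (a check added here; it shows where the 7 * 7 * 64 in the fully connected layer comes from):

print(h_conv1.get_shape())  # (?, 28, 28, 32)  'SAME' padding keeps 28x28
print(h_pool1.get_shape())  # (?, 14, 14, 32)  2x2 pooling halves each spatial dim
print(h_conv2.get_shape())  # (?, 14, 14, 64)
print(h_pool2.get_shape())  # (?, 7, 7, 64)    flattened to 7 * 7 * 64 features
print(h_fc1.get_shape())    # (?, 1024)
print(y_conv.get_shape())   # (?, 10)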

Training the Network

sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
sess.run(init)

# Training
# Record the loss value every 100 iterations
loss = []
# Record the prediction accuracy on the current batch every 100 iterations
acc = []
for idx in range(20000):
    batch = mnist.train.next_batch(50)
    if idx % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print('step %d, training accuracy %g' % (idx, train_accuracy))
        loss_tmp = sess.run(cross_entropy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        acc.append(train_accuracy)
        loss.append(loss_tmp)
    sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))


# Plot the curves
plt.figure()
plt.plot(loss)
plt.xlabel('iteration')
plt.ylabel('loss value')

plt.figure()
plt.plot(acc)
plt.xlabel('iteration')
plt.ylabel('acc')
plt.show()

[Note] In the computation graph, feed_dict can be used to substitute any tensor, not only placeholders.
There are two ways to fetch the value of a tensor in TensorFlow:
(1) With eval: accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
(2) With sess.run: sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})

Dropout usage: it is normally enabled during training (keep_prob < 1) and disabled at test time (keep_prob = 1.0).

Results

Loss curve: the network converges very quickly.
[Figure: loss value vs. iteration]

Training-batch accuracy curve:
[Figure: accuracy vs. iteration]

TensorFlow APIs Used

(1)tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)
Computes the convolution between the input tensor `input` and the kernel `filter`.
Note the dimension order of the input and filter tensors:
 input: [batch, in_height, in_width, in_channels]
 filter: [filter_height, filter_width, in_channels, out_channels]
Output dimensions of the convolution:
With padding='SAME':
 out_height = ceil(float(in_height) / float(strides[1]))
 out_width = ceil(float(in_width) / float(strides[2]))
With padding='VALID':
 out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
 out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
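
As a worked check for the network above (a hypothetical helper, not part of the post), plugging the MNIST numbers into these formulas:

import math

def out_dim(in_dim, filter_dim, stride, padding):
    if padding == 'SAME':
        return int(math.ceil(float(in_dim) / stride))
    else:  # 'VALID'
        return int(math.ceil(float(in_dim - filter_dim + 1) / stride))

print(out_dim(28, 5, 1, 'SAME'))   # 28 -> the conv layers keep the spatial size
print(out_dim(28, 5, 1, 'VALID'))  # 24 -> what 'VALID' padding would give instead
print(out_dim(28, 2, 2, 'SAME'))   # 14 -> 2x2 max pooling with stride 2 halves it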

(2)tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)
Performs pooling on the input `value`. The principle of pooling is explained in the UFLDL tutorial:
http://ufldl.stanford.edu/wiki/index.php/%E6%B1%A0%E5%8C%96

(3)tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)
This function combines the softmax computation and the cross-entropy computation internally; it is equivalent to the following two separate steps:

y = tf.nn.softmax(tf.matmul(x, W) + b)
-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])
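
A minimal sketch verifying the equivalence on toy values (not part of the original post; the combined op is also more numerically stable than taking log(softmax) by hand):

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])

ce_combined = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
y = tf.nn.softmax(logits)
ce_manual = -tf.reduce_sum(labels * tf.log(y), reduction_indices=[1])

with tf.Session() as sess:
    print(sess.run([ce_combined, ce_manual]))  # both approximately [0.417]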
