Deep Learning: CNN Implementations
阿新 · Published 2019-01-10
CNN Implementation
Compared with a traditional fully connected neural network, the main difference in a CNN is the introduction of convolutional and pooling layers.
In TensorFlow, convolution is done with tf.nn.conv2d and max pooling with tf.nn.max_pool.
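A minimal sketch (my own illustration, assuming TensorFlow 1.x as used later in this post) of how the two ops combine, with the shape each step produces:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])                # [batch, height, width, channels]
w = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))  # 5x5 kernel, 1 input channel -> 32 output channels
conv = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')  # output: [batch, 28, 28, 32]
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1],
                      strides=[1, 2, 2, 1], padding='SAME')      # output: [batch, 14, 14, 32]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(pool, feed_dict={x: np.zeros((1, 28, 28, 1), np.float32)})
    print(out.shape)  # (1, 14, 14, 32)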
CNN in Keras
from __future__ import print_function  # must be the first statement in Python 2

import numpy as np
np.random.seed(2017)  # for reproducibility
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D, MaxPooling2D, Flatten, Dropout
from keras.optimizers import Adam
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# scale x to [0, 1]; one-hot encode y
X_train = X_train.reshape(-1, 28, 28, 1) / 255.
X_test = X_test.reshape(-1, 28, 28, 1) / 255.
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)
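An optional sanity check (not in the original code) to confirm the preprocessed shapes:

print(X_train.shape, y_train.shape)  # (60000, 28, 28, 1) (60000, 10)
print(X_test.shape, y_test.shape)    # (10000, 28, 28, 1) (10000, 10)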
For reference, the full Keras 2 Conv2D signature:
keras.layers.convolutional.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
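An illustrative check of the padding argument (my addition, assuming Keras 2 with the TensorFlow backend): 'same' preserves the 28x28 spatial size under a 5x5 kernel, while 'valid' shrinks it to 24x24.

from keras import backend as K
from keras.layers import Input
inp = Input(shape=(28, 28, 1))
print(K.int_shape(Conv2D(32, (5, 5), padding='same')(inp)))   # (None, 28, 28, 32)
print(K.int_shape(Conv2D(32, (5, 5), padding='valid')(inp)))  # (None, 24, 24, 32)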
# Build the model using convolutional layers
model = Sequential()
# 32 output feature maps; padding='same' keeps the border convolution results (output spatial size = input), 'valid' convolves only where the kernel fully fits; input_shape is height x width x channels
model.add(Conv2D(32, (5, 5),padding='same', activation='relu', input_shape=(28, 28, 1)))
# Pooling layer: pool_size is the downsampling factor, strides the step size
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# Fraction of units to drop; uncomment to add regularization (Dropout is imported above)
#model.add(Dropout(0.25))
model.add(Conv2D(64, (5, 5), activation='relu'))  # default padding='valid': 14x14x32 -> 10x10x64
model.add(MaxPooling2D(pool_size=(2, 2)))  # 10x10x64 -> 5x5x64
# Flatten turns the multi-dimensional input into a 1-D vector; commonly used to transition from convolutional to fully connected layers
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(10, activation='softmax'))
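To verify the shape flow (28x28x32 after the first convolution, 14x14x32 after pooling, 10x10x64 then 5x5x64 in the second block, flattened to 1600 features), print the architecture:

model.summary()  # lists each layer's output shape and parameter count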
# Define the optimizer
adam = Adam(lr=1e-4)
# Define the loss and the metrics reported during training/evaluation, e.g. 'accuracy'
model.compile(optimizer=adam,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Train the model: epochs is the number of passes over the training data; batch_size=32 samples per update
model.fit(X_train, y_train, epochs=1, batch_size=32)
# Evaluate on the test set
loss, accuracy = model.evaluate(X_test, y_test)
print('\n test loss: ', loss)
print('\n test accuracy: ', accuracy)
# Predict class probabilities
y_pre = model.predict(X_test)
# Convert probabilities to digit labels: the index of the max probability in each row
y_num = [np.argmax(x) for x in y_pre]
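An equivalent vectorized form of the conversion:

y_num = np.argmax(y_pre, axis=1)  # index of the max probability per row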
Epoch 1/1
60000/60000 [==============================] - 167s - loss: 0.0617 - acc: 0.9812
9952/10000 [============================>.] - ETA: 0s
test loss: 0.0361482483758
test accuracy: 0.9878
CNN in TensorFlow
When computing a convolution's output size, remember to account for the stride; see the sketch below.
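A small helper (my own sketch) encoding the usual output-size formulas, checked against the spatial sizes in this network: for input size n, kernel size k, stride s, SAME padding gives ceil(n / s) and VALID gives ceil((n - k + 1) / s).

import math

def out_size(n, k, s, padding):
    if padding == 'SAME':
        return int(math.ceil(float(n) / s))      # SAME: ceil(n / s)
    return int(math.ceil(float(n - k + 1) / s))  # VALID: ceil((n - k + 1) / s)

print(out_size(28, 5, 1, 'SAME'))  # 28: first convolution keeps 28x28
print(out_size(28, 2, 2, 'SAME'))  # 14: first pooling
print(out_size(14, 5, 1, 'SAME'))  # 14: second convolution
print(out_size(14, 2, 2, 'SAME'))  # 7:  second pooling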
from __future__ import print_function
# Import data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('data_dir', '/Users/yuyin/Downloads/筆記學習/深度學習/TensorFlow實戰Google深度學習框架/datasets/MNIST_data/', 'Directory for storing data')  # the MNIST data is downloaded into this directory on first run
print(FLAGS.data_dir)
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)  # initialize from a truncated normal distribution
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
def conv2d(x, W):
    """
    tf.nn.conv2d: given a 4-D input and filter, computes a 2-D convolution.
    Its leading arguments are input, filter, strides, padding, use_cudnn_on_gpu, ...
    input: a tensor [batch, in_height, in_width, in_channels] - batch size, image height, image width, channels
    filter: [filter_height, filter_width, in_channels, out_channels] - filter height, width, input channels, output channels
    strides: a list of length 4, how far the filter slides over the input at each step
    padding: 'SAME' or 'VALID' - whether to keep the partial convolutions at the border (SAME keeps them)
    use_cudnn_on_gpu: whether to use cuDNN acceleration, defaults to True
    """
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
    """
    tf.nn.max_pool performs max pooling; avg_pool performs average pooling.
    Its arguments are value, ksize, strides, padding:
    value: a 4-D tensor [batch, height, width, channels], same layout as conv2d's input
    ksize: a list of length 4, the size of the pooling window
    strides: the window's sliding step, as in conv2d
    padding: same usage as in conv2d
    """
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, [None, 784])
x_image = tf.reshape(x, [-1, 28, 28, 1])  # reshape the input into the format conv2d expects
"""
# 第一層
# 卷積核(filter)的尺寸是5*5, 通道數為1,輸出通道為32,即feature map 數目為32
# 又因為strides=[1,1,1,1] 所以單個通道的輸出尺寸應該跟輸入影象一樣。即總的卷積輸出應該為 28*28*32
# 也就是單個通道輸出為28*28,共有32個通道,共有?個批次
# 在池化階段,ksize=[1,2,2,1] 那麼卷積結果經過池化以後的結果,其尺寸應該是 14*14*32
"""
W_conv1 = weight_variable([5, 5, 1, 32])  # 32 features per 5x5 patch: [patch height, patch width, input channels, output channels]
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.elu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
"""
# 第二層
# 卷積核5*5,輸入通道為32,輸出通道為64。
# 卷積前影象的尺寸為 14*14*32, 卷積後為 14*14*64
# 池化後,輸出的影象尺寸為 7*7*64
"""
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.elu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# Third layer: fully connected; input dimension 7*7*64, output dimension 1024
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.elu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)  # dropout: randomly zeroes activations to prevent overfitting; keep_prob is the probability of keeping a unit
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Fourth layer: 1024 inputs, 10 outputs, one per digit class 0-9
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)  # softmax as the multi-class activation
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))  # loss: cross entropy
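Note that tf.log(y_conv) can hit log(0) for very confident predictions. A more numerically stable alternative (a sketch, not what the code above does) feeds the pre-softmax logits to TensorFlow's fused op:

logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))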
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)  # optimize with Adam
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))  # per-example correctness
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))  # accuracy
sess.run(tf.global_variables_initializer())  # initialize variables
for i in range(1000):
    batch = mnist.train.next_batch(100)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
/Users/yuyin/Downloads/筆記學習/深度學習/TensorFlow實戰Google深度學習框架/datasets/MNIST_data/
Extracting /Users/yuyin/Downloads/筆記學習/深度學習/TensorFlow實戰Google深度學習框架/datasets/MNIST_data/train-images-idx3-ubyte.gz
Extracting /Users/yuyin/Downloads/筆記學習/深度學習/TensorFlow實戰Google深度學習框架/datasets/MNIST_data/train-labels-idx1-ubyte.gz
Extracting /Users/yuyin/Downloads/筆記學習/深度學習/TensorFlow實戰Google深度學習框架/datasets/MNIST_data/t10k-images-idx3-ubyte.gz
Extracting /Users/yuyin/Downloads/筆記學習/深度學習/TensorFlow實戰Google深度學習框架/datasets/MNIST_data/t10k-labels-idx1-ubyte.gz
step 0, training accuracy 0.09
step 100, training accuracy 0.92
step 200, training accuracy 0.94
step 300, training accuracy 0.96
step 400, training accuracy 0.9
step 500, training accuracy 0.96
step 600, training accuracy 0.99
step 700, training accuracy 0.94
step 800, training accuracy 0.98
step 900, training accuracy 0.98
test accuracy 0.9674