
TensorFlow Neural Network Basics

Using ANNs (artificial neural networks) in TF

Introduction

  • Inspired by biological neural networks
  • Development history
    • Biological neuron units
    • Logical computation units: and, or, xor, etc.
    • The perceptron: h_w(x) = step(wᵀ·x)
    • Multilayer perceptrons and backpropagation

Perceptron

  • sklearn also provides a Perceptron class; its parameter learning rule is
    w_{i,j}^{(next step)} = w_{i,j} + η(y_j − ŷ_j)·x_i
    where y_j is the target output, ŷ_j is the perceptron's output, and η is the learning rate
  • The perceptron is very similar to SGD
  • Logistic regression outputs a probability for each class, whereas the perceptron only gives a hard classification decision based on a threshold, so for classification logistic regression is generally used a bit more often than the perceptron
  • The perceptron is linear and has difficulty with non-linear problems; combining multiple perceptrons, however, avoids this limitation (see the sketch after this list)
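To make the learning rule above concrete, here is a minimal NumPy sketch of a single perceptron; the helper name perceptron_fit, the default learning rate, the epoch count, and the toy AND data are illustrative assumptions, not part of the original post.

import numpy as np

def perceptron_fit(X, y, eta=1.0, n_epochs=50):
    # Hypothetical helper: train one perceptron with the rule w <- w + eta * (y - y_hat) * x
    X_b = np.c_[np.ones(len(X)), X]              # prepend a bias input of 1
    w = np.zeros(X_b.shape[1])
    for _ in range(n_epochs):
        for xi, target in zip(X_b, y):
            y_hat = 1 if xi.dot(w) >= 0 else 0   # step activation
            w += eta * (target - y_hat) * xi     # perceptron update rule
    return w

# Toy usage: learn the AND function (linearly separable, so the rule converges)
X_toy = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_toy = np.array([0, 0, 0, 1])
w = perceptron_fit(X_toy, y_toy)
print([1 if np.r_[1, x].dot(w) >= 0 else 0 for x in X_toy])  # expected: [0, 0, 0, 1]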

Multilayer Perceptron and Backpropagation

  • The perceptron's activation is the step function, whose output is either 0 or 1; it cannot be used with backpropagation (which requires differentiation). It is therefore replaced by the logistic function σ(z) = 1 / (1 + exp(−z)), which is also called an activation function
  • Commonly used activation functions include
    • the logistic function
    • the hyperbolic tangent: tanh(z) = 2σ(2z) − 1
    • the ReLU function: relu(z) = max(z, 0). ReLU is not differentiable at z = 0, but it is very cheap to compute, so it is widely used in practice; it also has no upper bound, which alleviates some problems during gradient descent
    • the softmax function
  • MLPs are often used for classification, with softmax as the output-layer activation; it guarantees that the outputs of all nodes sum to 1, so each node's output can be read as the probability of that class. The softmax function is (a small NumPy sketch follows this list)
    σ(z)_j = exp(z_j) / Σ_{k=1..K} exp(z_k)
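As a quick check of the softmax formula above, here is a minimal NumPy sketch; the function name softmax and the max-subtraction trick for numerical stability are illustrative additions, not part of the original post.

import numpy as np

def softmax(z):
    # exp(z_j) / sum_k exp(z_k), computed along the last axis
    z = np.asarray(z, dtype=np.float64)
    z = z - z.max(axis=-1, keepdims=True)   # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=-1, keepdims=True)

print(softmax([2.0, 1.0, 0.1]))        # class probabilities
print(softmax([2.0, 1.0, 0.1]).sum())  # sums to 1.0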
# Suppress Python warnings
import warnings
warnings.filterwarnings("ignore")

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import os

# These GPU options only need to be passed the first time a Session is created
gpu_options = tf.GPUOptions(allow_growth=True)

def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)
    return

with tf.Session( config=tf.ConfigProto(gpu_options=gpu_options) ) as sess:
    print( sess.run( tf.constant(1) ) )
1
# sklearn perceptron
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

iris = load_iris()
X = iris.data[:, (2,3)]
y = (iris.target ==0 ).astype( np.int )
per_clf = Perceptron( random_state=42 )
per_clf.fit(X, y)
y_pred = per_clf.predict( [[2, 0.5]] )
print( y_pred )
[1]
# Define some activation functions
def logit(z):
    return 1 / (1 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

def derivative(f, z, eps=0.000001):
    return (f(z + eps) - f(z - eps))/(2 * eps)

# Visualize the activation functions and their derivatives
z = np.linspace(-5, 5, 200)

plt.figure(figsize=(10,4))

plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=2, label="Step")
plt.plot(z, logit(z), "g--", linewidth=2, label="Logit")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])

plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=2, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(logit, z), "g--", linewidth=2, label="Logit")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])

plt.show()

(Figure: activation functions and their derivatives)

Training with an MLP

  • TF ships with an MLP implementation in tf.contrib.learn
  • Below is an example of training an MLP with it
  • infer_real_valued_columns_from_input infers the data type and the feature dimensionality from the input data; reference: http://www.cnblogs.com/wxshi/p/8053973.html
from tensorflow.examples.tutorials.mnist import input_data
from sklearn.metrics import accuracy_score
# Load the data
mnist = input_data.read_data_sets("dataset/mnist")
X_train = mnist.train.images
X_test = mnist.test.images
y_train = mnist.train.labels.astype("int")
y_test = mnist.test.labels.astype("int")

feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
dnn_clf = tf.contrib.learn.DNNClassifier( hidden_units=[300,100], n_classes=10, feature_columns=feature_cols, model_dir="./models/mnist/" )
dnn_clf.fit( x=X_train, y=y_train, batch_size=2000,steps=1000 )

y_pred = list( dnn_clf.predict(X_test) )
print( "accuracy : ", accuracy_score(y_test, y_pred) )
Extracting dataset/mnist/train-images-idx3-ubyte.gz
Extracting dataset/mnist/train-labels-idx1-ubyte.gz
Extracting dataset/mnist/t10k-images-idx3-ubyte.gz
Extracting dataset/mnist/t10k-labels-idx1-ubyte.gz
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f9f659ddd68>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1.0
}
, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 600, '_log_step_count_steps': 100, '_session_config': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': './models/mnist/'}
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from ./models/mnist/model.ckpt-1000
INFO:tensorflow:Saving checkpoints for 1001 into ./models/mnist/model.ckpt.
INFO:tensorflow:loss = 0.11683539, step = 1001
INFO:tensorflow:global_step/sec: 110.226
INFO:tensorflow:loss = 0.10939115, step = 1101 (0.908 sec)
INFO:tensorflow:global_step/sec: 111.916
INFO:tensorflow:loss = 0.082077585, step = 1201 (0.894 sec)
INFO:tensorflow:global_step/sec: 108.765
INFO:tensorflow:loss = 0.089471206, step = 1301 (0.920 sec)
INFO:tensorflow:global_step/sec: 121.815
INFO:tensorflow:loss = 0.073814414, step = 1401 (0.820 sec)
INFO:tensorflow:global_step/sec: 106.326
INFO:tensorflow:loss = 0.067025915, step = 1501 (0.940 sec)
INFO:tensorflow:global_step/sec: 125.559
INFO:tensorflow:loss = 0.07670402, step = 1601 (0.796 sec)
INFO:tensorflow:global_step/sec: 118.059
INFO:tensorflow:loss = 0.060902975, step = 1701 (0.848 sec)
INFO:tensorflow:global_step/sec: 107.56
INFO:tensorflow:loss = 0.057678875, step = 1801 (0.929 sec)
INFO:tensorflow:global_step/sec: 109.521
INFO:tensorflow:loss = 0.074146144, step = 1901 (0.913 sec)
INFO:tensorflow:Saving checkpoints for 2000 into ./models/mnist/model.ckpt.
INFO:tensorflow:Loss for final step: 0.057994846.
INFO:tensorflow:Restoring parameters from ./models/mnist/model.ckpt-2000
accuracy :  0.9747

Building a DNN with TF

  • When initializing the network's parameters, the initial weights can be drawn from a truncated normal distribution with standard deviation 2/√n_inputs, which speeds up convergence. In TF, tf.truncated_normal discards and re-draws any value that falls more than two standard deviations from the mean.
from tensorflow.contrib.layers import fully_connected

reset_graph()

# MNIST
n_inputs = 28*28
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10

X = tf.placeholder( tf.float32, shape=(None, n_inputs), name="X" )
y = tf.placeholder( tf.int64, shape=(None), name="y" )

# A hand-rolled layer, similar to TF's fully_connected
def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope( name ):
        n_inputs = int(X.shape[1])
        stddev = 2 / np.sqrt( n_inputs )
        init = tf.truncated_normal( (n_inputs, n_neurons), stddev=stddev ) # truncated normal avoids occasional extreme initial values
        W = tf.Variable( init, name="weights" )
        b = tf.Variable( tf.zeros([n_neurons]), name="bias" ) 
        z = tf.matmul( X, W ) + b
        if activation == "relu":
            return tf.nn.relu( z )
        else:
            return z

# with tf.name_scope("dnn"):
#     hidden1 = neuron_layer(X, n_hidden1, "hidden1", activation="relu")
#     hidden2 = neuron_layer(hidden1, n_hidden2, "hidden2", activation="relu")
#     logits = neuron_layer( hidden2, n_outputs, "outputs" )

with tf.name_scope("dnn"):
    hidden1 = fully_connected(X, n_hidden1, scope="hidden1")
    hidden2 = fully_connected(hidden1, n_hidden2, scope="hidden2")
    logits = fully_connected( hidden2, n_outputs, scope="outputs", activation_fn=None )

with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits( labels=y, logits=logits )
    loss = tf.reduce_mean( xentropy, name="loss" )

lr = 0.01
with tf.name_scope( "train" ):
    optimizer = tf.train.GradientDescentOptimizer( lr )
    training_op = optimizer.minimize( loss )

with tf.name_scope( "eval" ):
    correct = tf.nn.in_top_k( logits, y, 1 )
    accuracy = tf.reduce_mean( tf.cast( correct, tf.float32 ) )

init = tf.global_variables_initializer()
saver = tf.train.Saver()

n_epochs = 30
batch_size = 200

with tf.Session() as sess:
    init.run()
    for epoch in range( n_epochs ):
        for iteration in range( X_train.shape[0] // batch_size ):
            X_batch, y_batch = mnist.train.next_batch( batch_size )
            sess.run( training_op, feed_dict={X:X_batch, y:y_batch} )
        acc_train = accuracy.eval( feed_dict={X:X_batch, y:y_batch} )
        acc_test = accuracy.eval( feed_dict={X:X_test, y:y_test} )
        print( epoch, "train accuracy : ", acc_train, "; Test accuracy : ", acc_test )
    save_path = saver.save( sess, "./models/mnist/my_model_final.ckpt" )
0 train accuracy :  0.82 ; Test accuracy :  0.8298
1 train accuracy :  0.89 ; Test accuracy :  0.8783
2 train accuracy :  0.88 ; Test accuracy :  0.8977
3 train accuracy :  0.885 ; Test accuracy :  0.9043
4 train accuracy :  0.925 ; Test accuracy :  0.9104
5 train accuracy :  0.9 ; Test accuracy :  0.9143
6 train accuracy :  0.915 ; Test accuracy :  0.9204
7 train accuracy :  0.925 ; Test accuracy :  0.9224
8 train accuracy :  0.93 ; Test accuracy :  0.9246
9 train accuracy :  0.925 ; Test accuracy :  0.9283
10 train accuracy :  0.92 ; Test accuracy :  0.9297
11 train accuracy :  0.91 ; Test accuracy :  0.9316
12 train accuracy :  0.95 ; Test accuracy :  0.933
13 train accuracy :  0.93 ; Test accuracy :  0.9356
14 train accuracy :  0.94 ; Test accuracy :  0.9373
15 train accuracy :  0.915 ; Test accuracy :  0.9382
16 train accuracy :  0.94 ; Test accuracy :  0.9398
17 train accuracy :  0.965 ; Test accuracy :  0.9415
18 train accuracy :  0.935 ; Test accuracy :  0.9425
19 train accuracy :  0.95 ; Test accuracy :  0.9433
20 train accuracy :  0.925 ; Test accuracy :  0.9447
21 train accuracy :  0.925 ; Test accuracy :  0.9455
22 train accuracy :  0.93 ; Test accuracy :  0.9461
23 train accuracy :  0.91 ; Test accuracy :  0.9484
24 train accuracy :  0.935 ; Test accuracy :  0.9485
25 train accuracy :  0.95 ; Test accuracy :  0.95
26 train accuracy :  0.94 ; Test accuracy :  0.9511
27 train accuracy :  0.93 ; Test accuracy :  0.9531
28 train accuracy :  0.95 ; Test accuracy :  0.9527
29 train accuracy :  0.965 ; Test accuracy :  0.9541
  • To use the previously trained model for a classification task, simply restore the saved model file
with tf.Session() as sess:
    saver.restore( sess, "./models/mnist/my_model_final.ckpt" )
    X_new_scaled = X_test[:20, :]
    Z = logits.eval( feed_dict={X:X_new_scaled} )
    y_pred = np.argmax( Z, axis=1 )
    print( "real value : ", y_test[0:20] )
    print( "predict value :", y_pred[0:20] )
INFO:tensorflow:Restoring parameters from ./models/mnist/my_model_final.ckpt
real value :  [7 2 1 0 4 1 4 9 5 9 0 6 9 0 1 5 9 7 3 4]
predict value : [7 2 1 0 4 1 4 9 6 9 0 6 9 0 1 5 9 7 3 4]

Fine-Tuning NN Hyperparameters

  • NNs are very flexible, but there are many hyperparameters to tune, such as the number of hidden layers and the choice of activation functions
  • In general, start by training and testing with a single hidden layer to get a baseline result
  • Deep networks are more flexible in how their parameters are spent than shallow ones: they can model more complex functions with fewer neurons
  • During experimentation, the number of hidden layers can be increased step by step to obtain a more expressive network
  • The number of neurons in the input and output layers is fixed by the task: for MNIST, the input layer has 784 nodes (the number of features) and the output layer has 10 nodes (the number of classes)
  • In general, gradually increase the number of neurons in the hidden layers until the model starts to overfit; another common approach is to pick a large number of neurons and rely on early stopping during training to obtain the best model (see the sketch after this list)
  • In general, ReLU is a good default activation function: it is fast to compute and does not saturate for large inputs; for the output layer, softmax is usually appropriate as long as the classes are mutually exclusive
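To make the early-stopping idea concrete, here is a minimal sketch that reuses the graph, saver, and MNIST data from the DNN example above; the validation split, the patience counter, and the checkpoint path are illustrative assumptions, not part of the original post.

# Minimal early-stopping loop (sketch): stop when validation accuracy has not
# improved for `patience` consecutive epochs, keeping the best checkpoint.
X_valid = mnist.validation.images
y_valid = mnist.validation.labels.astype("int")

best_acc = 0.0
patience, epochs_without_improvement = 5, 0

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for iteration in range(X_train.shape[0] // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_valid = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
        if acc_valid > best_acc:
            best_acc = acc_valid
            epochs_without_improvement = 0
            saver.save(sess, "./models/mnist/my_model_early_stopping.ckpt")  # keep the best model so far
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print("Early stopping at epoch", epoch)
                break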