[Deep Learning Basics 1] Neural Network Fundamentals: Logistic Regression
阿新 · Published: 2018-12-10
This post is organized from Andrew Ng's Deep Learning course on Coursera, and serves as a foundation for understanding neural networks.
I. Key Concepts
Deep learning is, at its core, a way of fitting data: a family of nonlinear functions serves as the model, and its parameters are fit to the sample pairs so as to minimize the loss. To understand what a single neuron does and how it works, the simplest starting point is logistic regression.
1) First, some notation:
A sample pair is written $(x, y)$; there are $m$ training samples in total, where $x$ is the input and $y$ the classification label.
Here $x$ has $n_x$ features, i.e. $x \in \mathbb{R}^{n_x}$; since this is a binary classification problem, $y \in \{0, 1\}$.
The data set can therefore be written as $\{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})\}$.
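To make these shapes concrete, here is a minimal sketch (the sizes n_x = 4 and m = 3 are made up for illustration) of the column-wise layout used throughout this post:

import numpy as np

n_x, m = 4, 3                         # made-up feature count and sample count
X = np.random.rand(n_x, m)            # each column is one sample x^(i)
Y = np.random.randint(0, 2, (1, m))   # binary labels y^(i) in {0, 1}
print(X.shape, Y.shape)               # (4, 3) (1, 3)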
2) Characteristics of the sigmoid function:
The sigmoid function: $\sigma(z) = \dfrac{1}{1 + e^{-z}}$
Its derivative: $\sigma'(z) = \sigma(z)\,(1 - \sigma(z))$
Function shape: as $z \to +\infty$ the value approaches 1, and as $z \to -\infty$ it approaches 0; in both extremes the gradient approaches 0, i.e. the function saturates at the tails.
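A quick numeric check of this saturation behavior (the probe points are arbitrary):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [-10.0, 0.0, 10.0]:
    s = sigmoid(z)
    grad = s * (1 - s)       # derivative sigma'(z) = sigma(z)(1 - sigma(z))
    print(z, s, grad)        # gradient is largest at z = 0 and vanishes at the tails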
3) The logistic regression loss function for a single sample, with prediction $\hat{y} = \sigma(w^T x + b)$:
$$\mathcal{L}(\hat{y}, y) = -\big[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\big]$$
Accumulated over all $m$ samples, the cost function is:
$$J(w, b) = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}\big(\hat{y}^{(i)}, y^{(i)}\big)$$
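As a sanity check, a minimal sketch that evaluates this cost on made-up predictions and labels:

import numpy as np

A = np.array([[0.9, 0.2, 0.7]])     # made-up predictions y-hat
Y = np.array([[1, 0, 1]])           # made-up true labels
m = Y.shape[1]
cost = -(1.0/m) * np.sum(Y*np.log(A) + (1-Y)*np.log(1-A))
print(cost)                         # small, since the predictions match the labels well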
II. Training Process
Forward propagation (vectorized over all samples):
Z = np.dot(w.T, X) + b
A = sigmoid(Z)
Backward propagation, for a single sample: $dz = a - y$, $dw = x\,dz$, $db = dz$.
Vectorized over all $m$ samples: $dZ = A - Y$, $dw = \frac{1}{m} X\,dZ^T$, $db = \frac{1}{m}\sum_{i} dZ^{(i)}$.
Gradient update (alpha is the learning rate):
dZ = A - Y
dw = 1.0/m * np.dot(X, dZ.T)
db = 1.0/m * np.sum(dZ)
w = w - alpha * dw
b = b - alpha * db
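These analytic gradients can be verified with a finite-difference check; below is a minimal sketch on made-up data (the toy shapes, seed, and epsilon are assumptions for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, b, X, Y):
    A = sigmoid(np.dot(w.T, X) + b)
    m = X.shape[1]
    return -(1.0/m) * np.sum(Y*np.log(A) + (1-Y)*np.log(1-A))

np.random.seed(0)
X = np.random.randn(2, 5); Y = np.random.randint(0, 2, (1, 5))
w = np.random.randn(2, 1); b = 0.3; eps = 1e-6

A = sigmoid(np.dot(w.T, X) + b)
dw = (1.0/X.shape[1]) * np.dot(X, (A - Y).T)    # analytic gradient

w_plus = w.copy();  w_plus[0, 0] += eps         # numeric gradient of the first weight
w_minus = w.copy(); w_minus[0, 0] -= eps
numeric = (cost(w_plus, b, X, Y) - cost(w_minus, b, X, Y)) / (2 * eps)
print(dw[0, 0], numeric)                        # the two values should agree closely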
III. Example: Logistic Regression with a Neural Network Mindset (consolidated version)
1) Import the necessary packages and files
#coding=utf-8
import matplotlib.pyplot as plt
import h5py
import numpy as np
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
import pylab
2) Pre-processing
Common steps for pre-processing a new dataset are:
1. Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
2. Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
3. "Standardize" the data
The main goal here is to get familiar with the data (load and inspect it) and apply the corresponding processing, namely reshape and standardize.
# Load the training and test data
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
# m is the number of samples; num_px is the height/width of each input image (images are square)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
# Flatten each image into a vector: the result has shape (width*height*channels, m)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
# Standardize (pixel values lie in [0, 255])
train_set_x = train_set_x_flatten / 255.0
test_set_x = test_set_x_flatten / 255.0
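A quick shape check after this step can catch reshape mistakes early; a minimal sketch using the variables defined above:

print("train_set_x shape:", train_set_x.shape)   # (num_px*num_px*3, m_train)
print("test_set_x shape:", test_set_x.shape)     # (num_px*num_px*3, m_test)
assert train_set_x.shape == (num_px * num_px * 3, m_train)
assert test_set_x.shape == (num_px * num_px * 3, m_test)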
3) Training
3.1 Define the necessary functions
def sigmoid(x):
    """Sigmoid activation: 1 / (1 + exp(-x))."""
    # Written as 1/(1+exp(-x)) rather than exp(x)/(1+exp(x)) so that
    # np.exp does not overflow for large positive x.
    return 1.0 / (1.0 + np.exp(-x))
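A quick sanity check of the function (probe values arbitrary):

print(sigmoid(0))                       # 0.5
print(sigmoid(np.array([-1.0, 1.0])))   # symmetric around 0.5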
3.2 Initialization
def initialize_with_zeros(dim):
    """Initialize the weight vector to zeros and the bias to 0."""
    w, b = np.zeros((dim, 1)), 0
    assert (w.shape == (dim, 1))
    assert (isinstance(b, float) or isinstance(b, int))
    return w, b
3.3 Define the network
def propagate(w, b, X, Y):
    """One pass of forward and backward propagation."""
    m = X.shape[1]
    # FORWARD PROPAGATION (FROM X TO COST)
    A = sigmoid(np.dot(w.T, X) + b)
    cost = -(1.0/m) * np.sum(Y*np.log(A) + (1-Y)*np.log(1-A))  # compute cost
    # BACKWARD PROPAGATION (TO FIND GRAD)
    dw = (1.0/m) * np.dot(X, (A-Y).T)
    db = (1.0/m) * np.sum(A-Y)
    assert (dw.shape == w.shape)
    assert (db.dtype == float)
    cost = np.squeeze(cost)
    assert (cost.shape == ())
    grads = {"dw": dw,
             "db": db}
    return grads, cost
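Before wiring this into the training loop, a quick smoke test with made-up tensors (the numbers are arbitrary):

w = np.array([[1.0], [2.0]])
b = 2.0
X = np.array([[1.0, 2.0, -1.0], [3.0, 4.0, -3.2]])
Y = np.array([[1, 0, 1]])
grads, cost = propagate(w, b, X, Y)
print(grads["dw"], grads["db"], cost)   # dw has shape (2, 1); cost is a scalar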
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    """
    The overall optimization loop.
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps
    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of the costs recorded every 100 iterations, used to plot the learning curve
    Tips:
    You basically need to write down two steps and iterate through them:
    1) Calculate the cost and the gradient for the current parameters. Use propagate().
    2) Update the parameters using the gradient descent rule for w and b.
    """
    costs = []
    for i in range(num_iterations):
        # Cost and gradient calculation
        grads, cost = propagate(w, b, X, Y)
        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]
        # Gradient descent update rule
        w = w - learning_rate * dw
        b = b - learning_rate * db
        # Record the cost every 100 iterations
        if i % 100 == 0:
            costs.append(cost)
        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
    params = {"w": w,
              "b": b}
    grads = {"dw": dw,
             "db": db}
    return params, grads, costs
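Continuing with the toy tensors from the propagate check above, a short run of the loop (hyperparameters made up for illustration):

params, grads, costs = optimize(w, b, X, Y, num_iterations=100,
                                learning_rate=0.009, print_cost=False)
print(params["w"], params["b"])   # parameters after 100 gradient steps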
def predict(w, b, X):
    '''
    Predict labels using the trained weights.
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''
    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)
    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    A = sigmoid(np.dot(w.T, X) + b)
    for i in range(A.shape[1]):
        # Convert probabilities A[0,i] to actual predictions Y_prediction[0,i]
        if A[0, i] <= 0.5:
            Y_prediction[0, i] = 0
        else:
            Y_prediction[0, i] = 1
    assert (Y_prediction.shape == (1, m))
    return Y_prediction
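And a quick check of predict on the same toy data, using the parameters returned by the optimize sketch above:

print(predict(params["w"], params["b"], X))   # a (1, 3) array of 0/1 predictions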
3.4 Putting it all together
def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    """
    Builds the whole pipeline into a single model.
    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- set to True to print the cost every 100 iterations
    Returns:
    d -- dictionary containing information about the model.
    """
    # Initialize parameters with zeros
    w, b = initialize_with_zeros(X_train.shape[0])
    # Gradient descent
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]
    # Predict test/train set examples
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
    # Print train/test errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}
    return d
Training and results:
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
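To inspect the learning curve, the recorded costs can be plotted; a minimal sketch using the dictionary returned by model:

costs = np.squeeze(d["costs"])
plt.plot(costs)
plt.ylabel("cost")
plt.xlabel("iterations (per hundreds)")
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()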
IV. Summary
Working through the full derivation and implementation gives a basic understanding of forward propagation, gradient descent, and backpropagation in logistic regression, which is essential groundwork for understanding deeper networks later on.