[Study Notes] Summary of the Week 1 Programming Assignment from Andrew Ng's Course
Logistic Regression with a Neural Network mindset
Learning goals
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gathering the three functions above into one main model function, in the right order
1. Packages
- numpy — the fundamental package for scientific computing with Python, used here mainly for matrix computations
- h5py — a Python interface to the HDF5 binary data format; the training and test image sets for this assignment are stored as HDF5 files
- matplotlib — the well-known plotting library for Python
- PIL — the Python Imaging Library, which provides image-processing capabilities for Python
- scipy — a library built on NumPy for advanced mathematics, signal processing, optimization, statistics, and many other scientific tasks
To use the packages (libraries) above, import them with the following code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
Note: %matplotlib inline works in IPython/Jupyter, but this magic command raises an error in other environments, where you have to call plt.show() yourself to display figures; we will need this for the dataset visualization below.
2. Overview of the Problem set
Problem statement: you are given a dataset ("data.h5") containing:
- a training set of m labeled images, each labeled cat (y=1) or non-cat (y=0)
- a test set of m labeled images, likewise labeled cat or non-cat
- each image of shape (num_px, num_px, 3), where 3 is the number of channels (RGB); thus every image is square, i.e. its height equals its width
Load the data with the following code:
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
The _orig suffix on the training and test sets is there because these data will be reshaped later; for now they keep their original multi-dimensional array structure.
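For reference, here is a minimal sketch of what the lr_utils.load_dataset helper might look like; the dataset file names and HDF5 key names are assumptions based on the course materials, so check them against your own lr_utils.py:
def load_dataset():
    # Assumed file paths and keys -- verify against your own lr_utils.py
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # image data
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])
    classes = np.array(test_dataset["list_classes"][:])  # e.g. [b'non-cat' b'cat']
    # Reshape the labels into row vectors of shape (1, m)
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes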
You can visualize examples from the dataset with the following code:
# Example of a picture
index = 24
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
By changing the number after index =, you can view different pictures from the dataset.
Output: (the picture at the chosen index is displayed.)
Note: keeping the dimensions of every example in the dataset consistent will go a long way toward reducing bugs in your code.
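As an illustration (my own addition, not part of the assignment), here is a one-line sanity check that the training and test images share the same per-image dimensions:
assert train_set_x_orig.shape[1:] == test_set_x_orig.shape[1:]  # both should be (64, 64, 3)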
Exercise:
- Find m_train, the number of training examples
- Find m_test, the number of test examples
- Find num_px, i.e. the size of each image (how many pixels by how many pixels)
The instructor points out that the unprocessed datasets mentioned earlier, i.e. the ones with the _orig suffix, have shape (m_train, num_px, num_px, 3). My personal reading: (number of training examples, image height in pixels, image width in pixels, 3 channels).
Use the following code to complete the exercise:
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
As for the 0 in shape[0], my understanding is that index 0 accesses the first entry of the shape tuple, 1 the second, and so on.
So the output is:
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
In other words, the training set holds 209 images and the test set 50, and each image is 64*64 (as the visualization output above also shows).
Recall from the lectures that we want to unroll the three stacked channel matrices into a one-dimensional column: a single image goes from shape (64, 64, 3) to shape (64*64*3, 1). Since we have m (209) training examples, all the inputs together become a matrix of 12288 rows and m (209) columns. We use a reshape operation to achieve this.
Exercise: reshape the training and test datasets so that each image of shape (num_px, num_px, 3) is flattened into a vector of shape (num_px*num_px*3, 1).
Trick: to flatten a matrix X of shape (a, b, c, d) into a matrix of shape (b*c*d, a), you can use:
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
Use the following code to perform the reshape:
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
A word about the -1 argument in train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T: the -1 tells NumPy to infer that dimension by itself (the row count, once the transpose is applied). In effect you are saying: I need 209 columns; work out the number of rows on your own. NumPy obliges and computes it as 64*64*3 = 12288 (see the tiny demonstration after the output below).
Output:
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
The last line prints the first 5 values of train_set_x_flatten as a sanity check.
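To see the -1 trick in isolation, here is a tiny standalone demonstration (my own toy example, not part of the assignment):
X = np.arange(24).reshape(2, 3, 4)  # a toy array of shape (2, 3, 4)
F = X.reshape(X.shape[0], -1).T     # NumPy infers the -1 as 3*4 = 12
print(F.shape)                      # prints (12, 2)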
What the instructor says to remember here:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (number of training examples, number of test examples, image sizes, and so on)
- Reshape the datasets so that each example becomes a vector of size (num_px*num_px*3, 1), as in the example above
- Standardize the data
I wondered why we standardize at all; here I quote blogger index20001:
Normalization means scaling data proportionally so that it falls within a small, specific interval. It is commonly used when comparing and evaluating data. Typical methods include min-max normalization, as well as the extremum method and the standard-deviation method.
Normalization mainly takes two forms: one maps values to decimals in (0, 1); the other turns dimensional expressions into dimensionless ones. In digital signal processing it is an effective way to simplify computation.
In our case the dataset consists of images, and every value is a pixel intensity between 0 and 255, so standardizing is very simple; all we need is:
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
Since 0/255 = 0 and 255/255 = 1, this comfortably maps the data into [0, 1].
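For comparison, here is a sketch of the two normalization forms mentioned above applied to our data; the z-score variant is purely for illustration, and the assignment itself only uses the /255 form:
x_minmax = train_set_x_flatten / 255.  # min-max style: pixel values land in [0, 1]
mu, sigma = train_set_x_flatten.mean(), train_set_x_flatten.std()
x_zscore = (train_set_x_flatten - mu) / sigma  # z-score style: mean 0, standard deviation 1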
3. General architecture of the learning algorithm
After all that setup, it is finally time to build the algorithm that tells cat pictures apart.
We implement logistic regression with a neural-network mindset; as the figure above shows, it amounts to a very simple neural network.
Mathematically, for a single example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b$$
$$\hat{y}^{(i)} = a^{(i)} = \mathrm{sigmoid}(z^{(i)})$$
$$\mathcal{L}(a^{(i)}, y^{(i)}) = -y^{(i)}\log(a^{(i)}) - (1-y^{(i)})\log(1-a^{(i)})$$
Recalling earlier lessons, the cost function is the average of the losses over all m training examples:
$$J = \frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(a^{(i)}, y^{(i)})$$
Key steps: in this exercise you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
4. Building the parts of our algorithm
The main steps for building a neural network are:
1. Define the model structure (such as the number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate the current loss (forward propagation)
- Calculate the current gradient (backward propagation)
- Update the parameters (gradient descent)
You usually build steps 1-3 separately and then integrate them into one function called model().
4.1 Helper functions
Exercise: build the sigmoid() function, defined as $\mathrm{sigmoid}(z) = \frac{1}{1 + e^{-z}}$, using np.exp().
The function code is as follows:
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
A quick test:
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
Output:
sigmoid([0, 2]) = [0.5 0.88079708]
4.2 Initializing parameters
Exercise: initialize the parameters with np.zeros(), setting w to a vector of zeros.
The code is as follows:
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
The dim in w = np.zeros((dim, 1)) is deliberately left open: it can vary with whatever input vector X we feed in.
For a detailed explanation of np.zeros(), see the article by blogger 雲金杞.
For example, setting the dimension to dim = 2 and printing a sample:
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
Output:
w = [[0.]
 [0.]]
b = 0
For our problem, dim will be num_px*num_px*3 = 12288.
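In other words, the initialization call for this dataset (as used later inside model()) comes down to:
w, b = initialize_with_zeros(num_px * num_px * 3)  # w has shape (12288, 1), b = 0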
4.3 Forward and backward propagation
Now that the parameters are initialized, you can carry out the forward and backward propagation steps.
Exercise: implement a function propagate() that computes the cost function and its gradient.
Hints:
Forward propagation:
- You are given the input X
- Compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, \ldots, a^{(m)})$
- Compute the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)}) + (1-y^{(i)})\log(1-a^{(i)})\right]$
For backward propagation you will need these two formulas:
$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A-Y)^T$$
$$\frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\left(a^{(i)} - y^{(i)}\right)$$
The code is as follows:
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1 / m * np.dot(X, (A - Y).T)
db = 1 / m * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
Testing the output:
Define w, b, X, Y as follows:
w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
Compute dw, db and cost:
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
The output is as follows:
dw = [[0.99993216]
 [1.99980262]]
db = 0.49993523062470574
cost = 6.000064773192205
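As an optional sanity check (my own addition, not part of the assignment), you can compare one component of dw against a centered finite-difference estimate of the cost:
eps = 1e-7
w_plus, w_minus = w.astype(float), w.astype(float)  # cast to float so the tiny step is not truncated
w_plus[0, 0] += eps
w_minus[0, 0] -= eps
_, cost_plus = propagate(w_plus, b, X, Y)
_, cost_minus = propagate(w_minus, b, X, Y)
print((cost_plus - cost_minus) / (2 * eps))  # should be very close to grads["dw"][0, 0]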
Optimization
- You have initialized your parameters
- You can compute the cost function and its gradient
- Now you need to update the parameters using gradient descent
Exercise: write the optimization function. The goal is to learn w and b by minimizing the cost function J. For a parameter θ, the update rule is θ = θ - α dθ, where α is the learning rate.
The code is as follows:
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
Testing the output:
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print("costs="+str(costs))
The result is as follows:
w = [[0.1124579 ]
 [0.23106775]]
b = 1.5593049248448891
dw = [[0.90158428]
 [1.76250842]]
db = 0.4304620716786828
costs=[6.000064773192205]
Exercise: the previous function outputs the learned values of w and b, which we can now use to predict the labels of a dataset X. Implement a predict() function that computes predictions in two steps:
1. Compute $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert each entry of A into 0 (if the activation is <= 0.5) or 1 (if the activation is > 0.5), and store the predictions in the vector Y_prediction. If you like, you can use an if/else statement inside a for loop.
The code is as follows:
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0, i] <= 0.5:
Y_prediction[0, i] = 0
else:
Y_prediction[0, i] = 1
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
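As a quick illustration (my own check, not from the assignment), running predict() on the toy w, b, X defined earlier gives activations sigmoid(9) and sigmoid(12), both far above 0.5, so both examples are predicted as cats:
print("predictions = " + str(predict(np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]))))
# predictions = [[1. 1.]]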
What to remember: we have now implemented several functions:
- An initialization function, initialize(w, b)
- A routine that optimizes the parameters (w, b) by iteratively:
- computing the cost and its gradient
- updating the parameters with gradient descent
- A function that uses the learned (w, b) to predict the labels of a given set of examples
5. Merging all functions into a model
Now put every function defined so far into a model, in the right order.
Exercise: implement the model() function, using the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the training set
- w, costs, grads for the outputs of optimize()
The code is as follows:
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
Testing the output:
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
Output:
Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
Comment: the training accuracy is close to 100%, which shows that the model we defined works and fits the training set very well. The test accuracy is 70%, which is really not bad for such a simple model: a linear classifier built with logistic regression on a small dataset.
The model is clearly overfitting the training data; we will see later how to reduce overfitting, for example with regularization.
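As a preview, here is a hedged sketch of how an L2 penalty could be folded into propagate(); lambda_reg is a hypothetical hyperparameter, and none of this is part of the actual assignment:
def propagate_l2(w, b, X, Y, lambda_reg=0.1):
    # Same as propagate(), plus an L2 penalty on w
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) \
           + lambda_reg / (2 * m) * np.sum(w ** 2)  # penalty discourages large weights
    dw = 1 / m * np.dot(X, (A - Y).T) + lambda_reg / m * w  # extra gradient term from the penalty
    db = 1 / m * np.sum(A - Y)  # b is conventionally not regularized
    return {"dw": dw, "db": db}, np.squeeze(cost)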
Use the following code to test how the current model classifies a picture:
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") + "\" picture.")
Because the accuracy is only around 70%, the prediction for an individual test image can easily be wrong. Depending on the index, you will see either:
y = 1, you predicted that it is a "cat" picture.
y = 0, you predicted that it is a "non-cat" picture.
Plot the cost values (the learning curve) with the following code:
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
Comment: you can see the cost decreasing, which shows that the parameters are being learned.
A note on overfitting: if you increase the number of iterations, you will find that the accuracy on the training set goes up while the accuracy on the test set actually drops.
After 2000 iterations:
After accidentally letting it run for 70,000 iterations... I found:
Cost after iteration 69700: 0.004497
Cost after iteration 69800: 0.004490
Cost after iteration 69900: 0.004484
train accuracy: 100.0 %
test accuracy: 72.0 %
Well...
6. Further analysis (optional exercise)
Choice of learning rate
Reminder: for gradient descent to work well, the learning rate has to be chosen wisely. The learning rate α controls how rapidly we update the parameters. If it is too large, we may overshoot the optimal value; likewise, if it is too small, we will need many iterations to converge to it. That is why a well-tuned learning rate is crucial.
Let's compare the learning curves of our model for several choices of the learning rate.
The code is as follows:
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
The output is as follows:
learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %
-------------------------------------------------------
learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %
-------------------------------------------------------
learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %
-------------------------------------------------------
7. Test with your own image (optional exercise)
Put your own image in place, change the image file name in the code below, and run it to get a prediction:
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "cat_in_iran.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Output:
y = 1.0, your algorithm predicts a "cat" picture.
y = 0.0, your algorithm predicts a "non-cat" picture.
Not bad at all :-p
Summary
1. Preprocessing the dataset is important.
2. We implemented each required function separately: initialize(), propagate() and optimize(), and finally integrated them into model().
3. Tuning the learning rate can make a big difference to the algorithm.