Andrew Ng's Coursera Machine Learning Coding Hw 2
Author: Yu-Shih Chen
December 21, 2018 4:17AM
Intro:
I'm currently a sophomore at a university in California with a strong interest in AI and data science, so alongside my school courses I also enjoy taking online courses. My main goal in sharing my notes on this platform is to learn and review more effectively myself, so the notes may be a bit messy, and I won't repeat material I feel I don't need to review again. Of course, if you're also taking this course, happen to come across my notes, and happen to find them useful, that would make me happy too, haha! If you have any questions or suggestions, or simply want to chat or make friends, feel free to add me on WeChat: y802088
Week 3 Coding Assignment
Outline:
- Sigmoid
- Compute cost (without regularization) and Gradient (without reg)
- Predict Function
- Compute cost (with reg) and Gradients (with reg)
Sigmoid
This section is about writing a function that computes the sigmoid.
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
%   g = SIGMOID(z) computes the sigmoid of z.
% You need to return the following variables correctly
g = zeros(size(z));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
%               vector or scalar).
g = 1./(1 + exp(-z));
% =============================================================
end
There are many ways to write this; just be careful not to change the size of the input matrix (use the elementwise '.' operators where appropriate).
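For reference, the same elementwise idea in a Python/NumPy sketch (the function name and shapes here are my own, not part of the assignment):

```python
import numpy as np

def sigmoid(z):
    # Works elementwise, so z may be a scalar, vector, or matrix;
    # the output always has the same shape as the input.
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))
```

Using `np.exp` on the whole array is the NumPy analogue of Octave's elementwise `./` trick.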
Compute cost (without regularization) and Gradient (without reg)
function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
%   J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
%   parameter for logistic regression and the gradient of the cost
%   w.r.t. to the parameters.
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta
%
% Note: X: 100 x 3, y: 100 x 1
z = X * theta;    % 100 x 1
h_x = sigmoid(z); % 100 x 1
J = (-y' * log(h_x) - (1-y)' * log(1-h_x)) / m;
% calculate gradient
grad = X' * (h_x - y) / m;
% =============================================================
end
This function is really written for the fminunc call that comes later. We need to compute two things: J (the cost) and the gradient (the partial derivatives, i.e. the slopes). As before, pay attention to the matrix dimensions and make sure you understand what the original formula is asking for; the rest is just plugging into the formulas.
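For comparison, here is my own NumPy port of the vectorized cost and gradient above (variable names and shapes are mine):

```python
import numpy as np

def cost_function(theta, X, y):
    # Unregularized logistic-regression cost J and gradient.
    # X: (m, n) design matrix, y: (m,) labels in {0, 1}, theta: (n,).
    m = y.size
    h = 1.0 / (1.0 + np.exp(-X @ theta))   # hypothesis h_theta(x), shape (m,)
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
    grad = X.T @ (h - y) / m               # same shape as theta
    return J, grad
```

A quick sanity check: at theta = 0 every hypothesis is 0.5, so the cost is exactly log(2) regardless of the data.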
Predict Function
function p = predict(theta, X)
%PREDICT Predict whether the label is 0 or 1 using learned logistic
%regression parameters theta
% p = PREDICT(theta, X) computes the predictions for X using a
% threshold at 0.5 (i.e., if sigmoid(theta'*x) >= 0.5, predict 1)
m = size(X, 1); % Number of training examples
% You need to return the following variables correctly
p = zeros(m, 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
% your learned logistic regression parameters.
% You should set p to a vector of 0's and 1's
%
predictions = sigmoid(X * theta);
for i = 1 : m
    if predictions(i) >= 0.5
        p(i) = 1;
    else
        p(i) = 0;
    end
end
% =========================================================================
end
This section uses the learned theta (the instructor has already run fminunc for us using the function we wrote in the previous section) to predict new values. The threshold is 0.5, so just put the predicted values into a vector and set every entry greater than or equal to 0.5 to 1, and everything else to 0.
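As a side note, the loop can be collapsed into a single comparison: since the sigmoid is monotonic with sigmoid(0) = 0.5, sigmoid(X * theta) >= 0.5 holds exactly when X * theta >= 0, so the sigmoid call isn't even needed at prediction time. A NumPy sketch (names are mine):

```python
import numpy as np

def predict(theta, X):
    # sigmoid(X @ theta) >= 0.5 is equivalent to X @ theta >= 0,
    # because sigmoid is monotonic and sigmoid(0) = 0.5.
    return (X @ theta >= 0).astype(int)
```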
Compute cost (with reg) and Gradient (with reg)
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
% J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
% theta as the parameter for regularized logistic regression and the
% gradient of the cost w.r.t. to the parameters.
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
% You should set J to the cost.
% Compute the partial derivatives and set grad to the partial
% derivatives of the cost w.r.t. each parameter in theta
% X is 118 x 28, y is 118 x 1, theta 28 x 1
z = X * theta; % 118 x 1
h_x = sigmoid(z); % 118 x 1
J_unreg = (-y' * log(h_x) - (1-y)' * (log(1 - h_x)))/m;
J = J_unreg + (lambda * sum(theta(2:end,:).^2))/(2*m);
grad(1) = (X(:,1)' * (h_x - y))./m; % 1 x 1
grad(2:end) = (X(:,2:end)' * (h_x - y)./m) + (lambda .* theta(2:end)./m); % 27 x 1
% =============================================================
end
If you fully understood how the 'Compute cost (without regularization) and Gradient (without reg)' section was written, this section is very easy. For J, regularization just adds one extra term at the end. For the gradient, the first parameter theta(1) uses the same formula as before, and the remaining entries get the regularization term added. Alternatively, you can set the first entry of theta to 0 and write the whole gradient in a single expression instead of splitting it into two, like I did.
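The single-expression variant mentioned above (zeroing out the bias entry before adding the penalty) could look like this in NumPy (a sketch; names are mine):

```python
import numpy as np

def cost_function_reg(theta, X, y, lam):
    # Regularized cost and gradient; the bias theta[0] is not penalized.
    m = y.size
    h = 1.0 / (1.0 + np.exp(-X @ theta))
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m \
        + lam * np.sum(theta[1:] ** 2) / (2 * m)
    t = theta.copy()
    t[0] = 0.0                             # bias gets no regularization term
    grad = X.T @ (h - y) / m + lam * t / m # one expression for all entries
    return J, grad
```

Because `t[0]` is zeroed, the `lam * t / m` term contributes nothing to the bias gradient, which is exactly the two-case formula written as one line.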
Summary: this assignment is about learning how to train a logistic regression model. Understanding how fminunc works is important (even though we didn't have to write it here, since the instructor did it for us); otherwise you won't know what to do when you have to implement it yourself. Beyond that: understand the formulas! That's about it.
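Since fminunc itself is the part we didn't write, here is what "implementing it yourself" could look like in its simplest form: plain gradient descent driven by the same (cost, gradient) interface that fminunc consumes. A Python/NumPy sketch on a tiny made-up dataset (all names and data are mine, not from the assignment; fminunc's actual algorithm is a more sophisticated quasi-Newton method):

```python
import numpy as np

def cost_grad(theta, X, y):
    # Same (J, grad) interface as the costFunction we pass to fminunc.
    m = y.size
    h = 1.0 / (1.0 + np.exp(-X @ theta))
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
    grad = X.T @ (h - y) / m
    return J, grad

def gradient_descent(cost_fn, theta, X, y, alpha=0.5, iters=400):
    # Minimal fminunc stand-in: fixed-step gradient descent.
    for _ in range(iters):
        _, grad = cost_fn(theta, X, y)
        theta = theta - alpha * grad
    return theta

# Tiny made-up dataset: bias column plus one feature; label is 1
# when the feature is positive.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(cost_grad, np.zeros(2), X, y)
```

The point is the interface: any minimizer that accepts a function returning (cost, gradient) can be swapped in here, which is exactly how fminunc uses the costFunction we wrote.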