Andrew Ng's Machine Learning, Exercise 1: Multivariate Linear Regression
阿新 · Posted 2018-12-19
Contents
Multivariate linear regression
Mean normalization
Cost function
Gradient descent
Exercise 1
Dataset
x1: the size of the house (in square feet)
x2: the number of bedrooms
y: the price of the house
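A minimal loading sketch, assuming the exercise's ex1data2.txt with one comma-separated example per line (size, bedrooms, price):

data = load('ex1data2.txt');   % each row: size, bedrooms, price
X = data(:, 1:2);              % features x1, x2
y = data(:, 3);                % target: house price
m = length(y);                 % number of training examples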
Feature Scaling (Normalization)
When a problem has many features, we want all of them to be on a similar scale; this helps gradient descent converge much faster.
Approach: scale every feature so that it falls roughly within the range -1 to 1.
Mean Normalization
$$x_n := \frac{x_n - \mu_n}{s_n}$$
where $\mu_n$ is the training-set mean of feature $n$ and $s_n$ is the training-set standard deviation (or the range, max minus min).
function [X_norm, mu, sigma] = featureNormalize(X)
% Scale every feature to zero mean and unit standard deviation.
mu = mean(X);        % 1 x n row vector: mean of each column (feature)
sigma = std(X);      % 1 x n row vector: standard deviation of each column
m = size(X, 1);      % number of training examples
X_norm = X;
for i = 1:m
    X_norm(i,:) = (X_norm(i,:) - mu) ./ sigma;   % normalize example i
end
end
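A usage sketch (hypothetical driver code, not part of the exercise scripts): after normalizing, prepend the column of ones for the intercept term, and reuse the stored mu and sigma when predicting on new inputs:

[X_norm, mu, sigma] = featureNormalize(X);
X = [ones(m, 1), X_norm];                   % prepend the intercept column of ones
% A new example must be scaled with the SAME mu and sigma, e.g. a
% 1650-square-foot, 3-bedroom house (once theta has been learned below):
price = [1, ([1650 3] - mu) ./ sigma] * theta;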
mean(A) returns a row vector with the mean of each column; mean(A, 2) returns a column vector with the mean of each row.
std(X, flag, dim): flag chooses the normalization of the standard deviation: flag = 0 divides by n-1 (the default), flag = 1 divides by n.
dim chooses the dimension to operate along: dim = 1 works down the columns, dim = 2 works across the rows; for a three-dimensional array, dim = 3 splits the data along the third dimension.
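A quick illustration of those conventions, with values that are easy to verify by hand:

A = [1 2; 3 4];
mean(A)       % ans = [2 3]        column means
mean(A, 2)    % ans = [1.5; 3.5]   row means
std(A)        % ans = [1.4142 1.4142], divides by n-1
std(A, 1)     % ans = [1 1],           divides by n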
Cost Function
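For reference, the squared-error cost the code below computes is
$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2, \qquad h_\theta(x^{(i)}) = \theta^{T}x^{(i)},$$
which in vectorized form is $\frac{1}{2m}(X\theta - y)^{T}(X\theta - y)$.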
% Cost function
%computeCost.m
function J = computeCost(X, y, theta)
m = length(y);                   % number of training examples
h = X * theta;                   % m x 1 vector of predictions
J = sum((h - y).^2) / (2*m);     % average squared error, halved
end
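A quick sanity check, assuming X already carries the intercept column: with all-zero parameters, every prediction is 0, so the cost is just the mean of y.^2 divided by two.

theta_init = zeros(size(X, 2), 1);    % start from theta = 0
J0 = computeCost(X, y, theta_init);   % cost of predicting 0 for every house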
Gradient Descent
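Each iteration applies the simultaneous update
$$\theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)}$$
for every parameter $j$, where $\alpha$ is the learning rate.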
% Gradient descent
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y);                       % number of training examples
J_history = zeros(num_iters, 1);     % cost recorded at every iteration
for iter = 1:num_iters
    h = X * theta;                   % predictions with the current theta
    theta(1) = theta(1) - (alpha/m) * sum(h - y);              % X(:,1) is all ones
    theta(2) = theta(2) - (alpha/m) * sum((h - y) .* X(:,2));
    theta(3) = theta(3) - (alpha/m) * sum((h - y) .* X(:,3));
    J_history(iter) = computeCost(X, y, theta);
end
end
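The three hand-written updates above only work for exactly two features plus the intercept; they can be collapsed into one vectorized update that handles any number of features. A minimal sketch of the equivalent loop body:

h = X * theta;                              % predictions, m x 1
theta = theta - (alpha/m) * X' * (h - y);   % update all parameters at once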
Normal Equation
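The normal equation solves for theta in closed form, with no learning rate, no iterations, and no feature scaling required:
$$\theta = (X^{T}X)^{-1}X^{T}y$$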
function [theta] = normalEqn(X, y)
% Closed-form least-squares solution; pinv also copes with a singular X'*X.
theta = pinv(X' * X) * X' * y;
end
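A usage sketch on the raw, unscaled data, reusing the data matrix from the loading sketch above and again prepending the intercept column:

X_eq = [ones(m, 1), data(:, 1:2)];   % raw features plus intercept
theta = normalEqn(X_eq, y);
price = [1, 1650, 3] * theta;        % predict the 1650 sq ft, 3-bedroom house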