
Linear Regression Study Notes



Derivation

$$f(x_i)=wx_i+b$$
The objective function is
$$\min_{w,b} \sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^2=\min_{w,b} \sum_{i=1}^{m}(y_i-wx_i-b)^2$$
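As a quick sanity check, this objective can be evaluated directly for any candidate $(w, b)$. The following is a minimal sketch; the toy data and parameter values are made up for illustration:

```python
import numpy as np

def squared_error(w, b, x, y):
    # Sum of squared residuals: E(w, b) = sum_i (y_i - w*x_i - b)^2
    return np.sum((y - w * x - b) ** 2)

# Toy data lying exactly on y = 2x + 1, so the loss vanishes at (w, b) = (2, 1)
x = np.array([1.0, 2.0, 3.0])
y = 2 * x + 1
print(squared_error(2.0, 1.0, x, y))  # → 0.0
print(squared_error(1.0, 0.0, x, y))  # → 29.0
```

Any other choice of $(w, b)$ gives a strictly larger loss on this data, which is exactly what the derivation below exploits.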
Write $E_{(w,b)}=\sum_{i=1}^{m}(y_i-wx_i-b)^2$ and take the partial derivatives with respect to $w$ and $b$:

$$\frac{\partial E_{(w,b)}}{\partial w}=2\sum_{i=1}^{m}\bigl(wx_i^2-x_i(y_i-b)\bigr)$$
$$\frac{\partial E_{(w,b)}}{\partial b}=2\Bigl(\sum_{i=1}^{m}(wx_i-y_i)+mb\Bigr)$$

Setting both expressions to zero gives
$$\sum_{i=1}^{m}(x_iy_i-wx_i^2)=b\sum_{i=1}^{m}x_i$$
$$\sum_{i=1}^{m}(y_i-wx_i)=mb$$
Eliminating $b$ gives
$$m\,\frac{\sum_{i=1}^{m}(x_iy_i-wx_i^2)}{\sum_{i=1}^{m}x_i}=\sum_{i=1}^{m}(y_i-wx_i)$$

$$m\sum_{i=1}^{m}(x_iy_i-wx_i^2)=\sum_{i=1}^{m}x_i\sum_{i=1}^{m}y_i-w\Bigl(\sum_{i=1}^{m}x_i\Bigr)^2$$
$$w\Bigl(\Bigl(\sum_{i=1}^{m}x_i\Bigr)^2-m\sum_{i=1}^{m}x_i^2\Bigr)=\sum_{i=1}^{m}x_i\sum_{i=1}^{m}y_i-m\sum_{i=1}^{m}x_iy_i$$
$$w=\frac{\overline{x}\sum_{i=1}^{m}y_i-\sum_{i=1}^{m}x_iy_i}{\frac{1}{m}\bigl(\sum_{i=1}^{m}x_i\bigr)^2-\sum_{i=1}^{m}x_i^2}$$
which finally gives
$$w=\frac{\sum_{i=1}^{m}y_i(x_i-\overline{x})}{\sum_{i=1}^{m}x_i^2-\frac{1}{m}\bigl(\sum_{i=1}^{m}x_i\bigr)^2}$$
where $\overline{x}=\frac{1}{m}\sum_{i=1}^{m}x_i$.

Then $b$ is given by
$$b=\frac{\sum_{i=1}^{m}(y_i-wx_i)}{m}$$
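Plugging a small dataset into these closed-form expressions and comparing against NumPy's `np.polyfit` (which solves the same degree-1 least-squares problem) is a quick way to check the derivation. This is a sketch with made-up data:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.9, 5.1, 7.0])   # roughly y = 2x + 1
m = len(x)
x_bar = x.mean()

# Closed-form solution derived above
w = np.sum(y * (x - x_bar)) / (np.sum(x**2) - np.sum(x)**2 / m)
b = np.sum(y - w * x) / m

# Reference: degree-1 least-squares fit from NumPy
w_ref, b_ref = np.polyfit(x, y, 1)
print(np.allclose([w, b], [w_ref, b_ref]))  # → True
```

Both routes give the same $(w, b)$ up to floating-point error, since they minimize the same objective.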

Python Implementation

import numpy as np

def linear_regression(x, y, px):
    """Fit y = w*x + b by least squares and predict at px."""
    m = len(x)             # number of samples
    sx = np.sum(x)         # sum of x_i
    sy = np.sum(y)         # sum of y_i
    sxy = np.dot(x, y)     # sum of x_i * y_i
    sx2 = np.dot(x, x)     # sum of x_i^2
    w = (sxy - sx * sy / m) / (sx2 - sx * sx / m)
    b = (sy - w * sx) / m
    py = w * px + b
    return py
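As a usage check with made-up sample points lying exactly on the line $y = 3x + 2$, `np.polyfit` gives an independent reference for what the function should predict:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3 * x + 2                 # points exactly on the line y = 3x + 2
w, b = np.polyfit(x, y, 1)    # independent degree-1 least-squares fit
print(round(w * 5.0 + b, 6))  # prediction at px = 5.0 → 17.0
```

`linear_regression(x, y, 5.0)` should return the same value, 17.0, since both compute the same least-squares fit.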