
Deep Learning: Forward Propagation and Backpropagation (the BP Algorithm) and Their Derivation

1 Derivation of the BP Algorithm

  

        Figure 1: A simple three-layer neural network

  Figure 1 shows a simple three-layer neural network (two hidden layers and one output layer). Suppose we use this network to solve a binary classification problem: we feed the network an input sample and obtain an output through the forward computation. The output lies in the range $(0,1)$; the closer the output is to 0, the more likely the sample belongs to class "0", and conversely, the closer it is to 1, the more likely it belongs to class "1".

1.1 Computing the Forward Pass

  To make the later material easier to follow, we first need to understand the forward-propagation computation, taking the network in Figure 1 as an example.
  The input sample is:
    ${\Large \overrightarrow{\mathrm{x}}=\left(x_{1}, x_{2}\right)^{T}} $
  The parameters of the first layer are:


    ${\Large W^{(1)}=\left[\begin{array}{l} w_{\left(x_{1}, 1\right)}, w_{\left(x_{2}, 1\right)} \\ w_{\left(x_{1}, 2\right)}, w_{\left(x_{2}, 2\right)} \\ w_{\left(x_{1}, 3\right)}, w_{\left(x_{2}, 3\right)} \end{array}\right], \quad b^{(1)}=\left[b_{1}, b_{2}, b_{3}\right]} $
  The parameters of the second layer are:
    ${\Large W^{(2)}=\left[\begin{array}{l} w_{(1,4)}, w_{(2,4)}, w_{(3,4)} \\ w_{(1,5)}, w_{(2,5)}, w_{(3,5)} \end{array}\right], \quad b^{(2)}=\left[b_{4}, b_{5}\right]} $

  The parameters of the third layer are:
    ${\Large W^{(3)}=\left[w_{(4,6)}, w_{(5,6)}\right], \quad b^{(3)}=\left[b_{6}\right]} $

1.1.1 Computing the First Hidden Layer

    

         Figure 2: Computing the first hidden layer

  The first hidden layer has three neurons: neu1, neu2 and neu3. The (pre-activation) input of this layer is:
    ${\large z^{(1)}=W^{(1)} *(\vec{x})^{T}+\left(b^{(1)}\right)^{T}} $
  The input of neuron neu1:
    ${\Large z_{1}=w_{\left(x_{1}, 1\right)} * x_{1}+w_{\left(x_{2}, 1\right)} * x_{2}+b_{1}} $


  The input of neuron neu2:
    ${\Large z_{2}=w_{\left(x_{1}, 2\right)} * x_{1}+w_{\left(x_{2}, 2\right)} * x_{2}+b_{2}} $
  The input of neuron neu3:
    ${\Large z_{3}=w_{\left(x_{1}, 3\right)} * x_{1}+w_{\left(x_{2}, 3\right)} * x_{2}+b_{3}} $
  Suppose we choose a function f(x) as the activation function of this layer (each activation function in Figure 1 carries a subscript; in general, all neurons in one layer use the same activation function, while different layers may use different ones). The outputs of this layer are then $f_{1}\left(z_{1}\right)$, $f_{2}\left(z_{2}\right)$ and $f_{3}\left(z_{3}\right)$.

1.1.2 Computing the Second Hidden Layer

    

        Figure 3: Computing the second hidden layer

  The second hidden layer has two neurons: neu4 and neu5. The input of this layer is:
    ${\large { \mathbf{z}^{(2)}=\boldsymbol{W}^{(2)} *\left[f_{1}\left(z_{1}\right), f_{2}\left(z_{2}\right), f_{3}\left(z_{3}\right)\right]^{T}+\left(\boldsymbol{b}^{(2)}\right)^{T}} } $
  That is, the input of the second layer is the output of the first layer multiplied by the second layer's weights, plus the second layer's biases. The inputs of neu4 and neu5 are therefore:
    ${\Large z_{4}=w_{(1,4)} * f_{1}\left(z_{1}\right)+w_{(2,4)} * f_{2}\left(z_{2}\right)+w_{(3,4)} * f_{3}\left(z_{3}\right)+b_{4}} $
    ${\Large z_{5}=w_{(1,5)} * f_{1}\left(z_{1}\right)+w_{(2,5)} * f_{2}\left(z_{2}\right)+w_{(3,5)} * f_{3}\left(z_{3}\right)+b_{5}} $
  The outputs of this layer are $f_{4}\left(z_{4}\right)$ and $f_{5}\left(z_{5}\right)$.

1.1.3 Computing the Output Layer

    

        Figure 4: Computing the output layer

  The output layer has a single neuron, neu6. The input of this layer is:
    ${\large \mathbf{z}^{(3)}=\boldsymbol{W}^{(3)} *\left[f_{4}\left(z_{4}\right), f_{5}\left(z_{5}\right)\right]^{T}+\left(\boldsymbol{b}^{(3)}\right)^{T}} $
  That is:
    ${\Large z_{6}=w_{(4,6)} * f_{4}\left(z_{4}\right)+w_{(5,6)} * f_{5}\left(z_{5}\right)+b_{6}} $
  Because this network solves a binary classification problem, a Sigmoid-type function can also be used as the output layer's activation function; the final output of the network is $f_{6}\left(z_{6}\right)$.
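  As a minimal sketch of this forward pass, the following NumPy snippet mirrors the computation above for the network in Figure 1. The input values and weights are made-up placeholders (only the shapes follow the text), and a Sigmoid is assumed for every layer's activation.

```python
import numpy as np

def sigmoid(z):
    # Assumed activation for every layer; the text only requires some f(x).
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder input and parameters -- only the shapes follow the text.
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2])                        # sample (x1, x2)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)    # first hidden layer: 3 neurons
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)    # second hidden layer: 2 neurons
W3, b3 = rng.normal(size=(1, 2)), np.zeros(1)    # output layer: 1 neuron

z1 = W1 @ x + b1;  a1 = sigmoid(z1)   # z^{(1)}, then f_1(z_1), f_2(z_2), f_3(z_3)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)   # z^{(2)}, then f_4(z_4), f_5(z_5)
z3 = W3 @ a2 + b3; y  = sigmoid(z3)   # z^{(3)}, then the final output f_6(z_6)
print(y)                              # value in (0, 1): probability of class "1"
```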

1.2 Computing the Backward Pass

  

  Network structure:

  Taking the network structure in the figure above as an example, the input data is $X=\left[\begin{array}{cc}x_{1}^{(1)} & x_{1}^{(2)} \\ x_{2}^{(1)} & x_{2}^{(2)} \\ x_{3}^{(1)} & x_{3}^{(2)}\end{array}\right]_{3 \times 2} $, which contains 2 samples, each with 3 features; that is, the number of rows of $X$ equals the number of features and the number of columns equals the number of samples. Write $A^{[0]}=X$. A parenthesized superscript indexes the sample, a bracketed superscript indexes the layer, and the subscript indexes the feature (or, in later layers, the neuron).

  I. Input layer

  Weights and bias: $W^{[1]}=\left[\begin{array}{ccc}w_{11}^{[1]} & w_{12}^{[1]} & w_{13}^{[1]} \\ w_{21}^{[1]} & w_{22}^{[1]} & w_{23}^{[1]}\end{array}\right]_{2 \times 3}, \quad B^{[1]}=\left[\begin{array}{c}b_{1}^{[1]} \\ b_{2}^{[1]}\end{array}\right]_{2 \times 1} $
  The number of rows of W equals the number of neurons in the current layer; the number of columns equals the number of inputs (features) the layer receives.
  The number of rows of B equals the number of neurons in the current layer.

  The linear computation of this layer (this in fact relies on something like NumPy broadcasting in Python; otherwise the number of columns of $B$ would not match the preceding term, and ordinary matrix addition would be undefined):

    $Z^{[1]}=W^{[1]} A^{[0]}+B^{[1]}=\left[\begin{array}{cc}w_{11}^{[1]} x_{1}^{(1)}+w_{12}^{[1]} x_{2}^{(1)}+w_{13}^{[1]} x_{3}^{(1)}+b_{1}^{[1]} & w_{11}^{[1]} x_{1}^{(2)}+w_{12}^{[1]} x_{2}^{(2)}+w_{13}^{[1]} x_{3}^{(2)}+b_{1}^{[1]} \\w_{21}^{[1]} x_{1}^{(1)}+w_{22}^{[1]} x_{2}^{(1)}+w_{23}^{[1]} x_{3}^{(1)}+b_{2}^{[1]} & w_{21}^{[1]} x_{1}^{(2)}+w_{22}^{[1]} x_{2}^{(2)}+w_{23}^{[1]} x_{3}^{(2)}+b_{2}^{[1]}\end{array}\right]_{2 \times 2}$

  which we write as

    $Z^{[1]}=\left[\begin{array}{cc} z_{1}^{[1](1)} & z_{1}^{[1](2)} \\ z_{2}^{[1](1)} & z_{2}^{[1](2)} \end{array}\right]_{2 \times 2}$

  The activation output is

    $A^{[1]}=\sigma\left(Z^{[1]}\right)=\left[\begin{array}{cc} a_{1}^{[1](1)} & a_{1}^{[1](2)} \\ a_{2}^{[1](1)} & a_{2}^{[1](2)} \end{array}\right]_{2 \times 2}=\left[\begin{array}{ll} \sigma\left(z_{1}^{[1](1)}\right) & \sigma\left(z_{1}^{[1](2)}\right) \\ \sigma\left(z_{2}^{[1](1)}\right) & \sigma\left(z_{2}^{[1](2)}\right) \end{array}\right]_{2 \times 2}$
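  The broadcasting mentioned above can be illustrated with a small NumPy sketch (random placeholder values; only the shapes follow the text): the $(2 \times 1)$ bias column is replicated across both sample columns.

```python
import numpy as np

# Placeholder arrays -- only the shapes follow the text.
rng = np.random.default_rng(0)
A0 = rng.normal(size=(3, 2))   # 3 features x 2 samples
W1 = rng.normal(size=(2, 3))
B1 = rng.normal(size=(2, 1))

Z1 = W1 @ A0 + B1              # (2, 2) + (2, 1): B1's column is broadcast over both samples
print(Z1.shape)                # (2, 2)
```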

  II. Hidden layer

  Weights and bias:

    $W^{[2]}=\left[\begin{array}{cc} w_{11}^{[2]} & w_{12}^{[2]} \\ w_{21}^{[2]} & w_{22}^{[2]} \\w_{31}^{[2]} & w_{32}^{[2]} \end{array}\right]_{3 \times 2}, B^{[2]}=\left[\begin{array}{c} b_{1}^{[2]} \\ b_{2}^{[2]} \\ b_{3}^{[2]} \end{array}\right]_{3 \times 1}$

  The linear computation of this layer:

    $Z^{[2]}=W^{[2]} A^{[1]}+B^{[2]}=\left[\begin{array}{ll} w_{11}^{[2]} a_{1}^{[1](1)}+w_{12}^{[2]} a_{2}^{[1](1)}+b_{1}^{[2]} & w_{11}^{[2]} a_{1}^{[1](2)}+w_{12}^{[2]} a_{2}^{[1](2)}+b_{1}^{[2]} \\ w_{21}^{[2]} a_{1}^{[1](1)}+w_{22}^{[2]} a_{2}^{[1](1)}+b_{2}^{[2]} & w_{21}^{[2]} a_{1}^{[1](2)}+w_{22}^{[2]} a_{2}^{[1](2)}+b_{2}^{[2]} \\ w_{31}^{[2]} a_{1}^{[1](1)}+w_{32}^{[2]} a_{2}^{[1](1)}+b_{3}^{[2]} & w_{31}^{[2]} a_{1}^{[1](2)}+w_{32}^{[2]} a_{2}^{[1](2)}+b_{3}^{[2]} \end{array}\right]_{3 \times 2}$

  which we write as

    $Z^{[2]}=\left[\begin{array}{cc} z_{1}^{[2](1)} & z_{1}^{[2](2)} \\ z_{2}^{[2](1)} & z_{2}^{[2](2)} \\ z_{3}^{[2](1)} & z_{3}^{[2](2)} \end{array}\right]_{3 \times 2}$

  The activation output is

    $A^{[2]}=\sigma\left(Z^{[2]}\right)=\left[\begin{array}{cc} a_{1}^{[2](1)} & a_{1}^{[2](2)} \\ a_{2}^{[2](1)} & a_{2}^{[2](2)} \\ a_{3}^{[2](1)} & a_{3}^{[2](2)} \end{array}\right]_{3 \times 2}=\left[\begin{array}{cc} \sigma\left(z_{1}^{[2](1)}\right) & \sigma\left(z_{1}^{[2](2)}\right) \\ \sigma\left(z_{2}^{[2](1)}\right) & \sigma\left(z_{2}^{[2](2)}\right) \\ \sigma\left(z_{3}^{[2](1)}\right) & \sigma\left(z_{3}^{[2](2)}\right) \end{array}\right]_{3 \times 2}$

  III. Output layer

  Weights and bias:

    $W^{[3]}=\left[\begin{array}{ccc} w_{11}^{[3]} & w_{12}^{[3]} & w_{13}^{[3]} \\ w_{21}^{[3]} & w_{22}^{[3]} & w_{23}^{[3]} \end{array}\right]_{2 \times 3}, \quad B^{[3]}=\left[\begin{array}{c} b_{1}^{[3]} \\ b_{2}^{[3]} \end{array}\right]_{2 \times 1}$

  The linear computation of this layer:

    $Z^{[3]}=W^{[3]} A^{[2]}+B^{[3]}=\left[\begin{array}{ll} w_{11}^{[3]} a_{1}^{[2](1)}+w_{12}^{[3]} a_{2}^{[2](1)}+w_{13}^{[3]} a_{3}^{[2](1)}+b_{1}^{[3]} & w_{11}^{[3]} a_{1}^{[2](2)}+w_{12}^{[3]} a_{2}^{[2](2)}+w_{13}^{[3]} a_{3}^{[2](2)}+b_{1}^{[3]} \\ w_{21}^{[3]} a_{1}^{[2](1)}+w_{22}^{[3]} a_{2}^{[2](1)}+w_{23}^{[3]} a_{3}^{[2](1)}+b_{2}^{[3]} & w_{21}^{[3]} a_{1}^{[2](2)}+w_{22}^{[3]} a_{2}^{[2](2)}+w_{23}^{[3]} a_{3}^{[2](2)}+b_{2}^{[3]} \end{array}\right]_{2 \times 2}$

  which we write as

    $Z^{[3]}=\left[\begin{array}{cc} z_{1}^{[3](1)} & z_{1}^{[3](2)} \\ z_{2}^{[3](1)} & z_{2}^{[3](2)} \end{array}\right]_{2 \times 2}$

  The activation output is

    $A^{[3]}=\sigma\left(Z^{[3]}\right)=\left[\begin{array}{cc} a_{1}^{[3](1)} & a_{1}^{[3](2)} \\ a_{2}^{[3](1)} & a_{2}^{[3](2)} \end{array}\right]_{2 \times 2}=\left[\begin{array}{cc} \sigma\left(z_{1}^{[3](1)}\right) & \sigma\left(z_{1}^{[3](2)}\right) \\ \sigma\left(z_{2}^{[3](1)}\right) & \sigma\left(z_{2}^{[3](2)}\right) \end{array}\right]_{2 \times 2}$

  The output is

    $Y=\left[\begin{array}{cc} y_{1}^{(1)} & y_{1}^{(2)} \\ y_{2}^{(1)} & y_{2}^{(2)}\end{array}\right]_{2 \times 2}=A^{[3]}$

  The corresponding true labels are written as

    $\tilde{Y}=\left[\begin{array}{ll} \tilde{y_{1}}^{(1)} & \tilde{y_{1}}^{(2)} \\ \tilde{y_{2}}^{(1)} & \tilde{y_{2}}^{(2)} \end{array}\right]_{2 \times 2}$

  Let the activation function of every neuron be the most commonly used Sigmoid function:

    $\sigma(z)=\frac{1}{1+e^{-z}}$
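  For reference, a minimal NumPy sketch of the whole batched forward pass follows. The weights and inputs are random placeholders; only the shapes match the network described above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder parameters -- only the shapes follow the network above.
rng = np.random.default_rng(0)
A0 = rng.normal(size=(3, 2))                               # A^{[0]} = X: 3 features x 2 samples
W1, B1 = rng.normal(size=(2, 3)), rng.normal(size=(2, 1))
W2, B2 = rng.normal(size=(3, 2)), rng.normal(size=(3, 1))
W3, B3 = rng.normal(size=(2, 3)), rng.normal(size=(2, 1))

Z1 = W1 @ A0 + B1;  A1 = sigmoid(Z1)   # (2, 2)
Z2 = W2 @ A1 + B2;  A2 = sigmoid(Z2)   # (3, 2)
Z3 = W3 @ A2 + B3;  A3 = sigmoid(Z3)   # (2, 2) -- the network output Y
```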

  BP for this classification problem:
  Objective function: the cross-entropy loss

    $L=-(\tilde{Y} \log (Y)+(1-\tilde{Y}) \log (1-Y))$

    $ {\small =\left[\begin{array}{ll} -\left(\tilde{y}_{1}^{(1)} \log \left(y_{1}^{(1)}\right)+\left(1-\tilde{y}_{1}^{(1)}\right) \log \left(1-y_{1}^{(1)}\right)\right) & -\left(\tilde{y}_{1}^{(2)} \log \left(y_{1}^{(2)}\right)+\left(1-\tilde{y}_{1}^{(2)}\right) \log \left(1-y_{1}^{(2)}\right)\right) \\ -\left(\tilde{y}_{2}^{(1)} \log \left(y_{2}^{(1)}\right)+\left(1-\tilde{y}_{2}^{(1)}\right) \log \left(1-y_{2}^{(1)}\right)\right) & -\left(\tilde{y}_{2}^{(2)} \log \left(y_{2}^{(2)}\right)+\left(1-\tilde{y}_{2}^{(2)}\right) \log \left(1-y_{2}^{(2)}\right)\right) \end{array}\right]_{2 \times 2}} $

  This is abbreviated as:

    $L=\left[\begin{array}{ll} l_{1}^{(1)} & l_{1}^{(2)} \\ l_{2}^{(1)} & l_{2}^{(2)} \end{array}\right]$
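  A small NumPy helper corresponding to the element-wise loss matrix above might look like this (the `eps` clipping is an added numerical-stability detail, not part of the original derivation):

```python
import numpy as np

def cross_entropy(Y, Y_true, eps=1e-12):
    # Element-wise binary cross-entropy, matching the 2x2 matrix L above.
    # Clipping avoids log(0); eps is an added numerical safeguard.
    Y = np.clip(Y, eps, 1 - eps)
    return -(Y_true * np.log(Y) + (1 - Y_true) * np.log(1 - Y))
```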

  Then, by a simple application of the chain rule:

    $\frac{d L}{d Z^{[3]}}=\frac{d L}{d A^{[3]}} \frac{d A^{[3]}}{d Z^{[3]}}=\left[\begin{array}{cc} \frac{d l_{1}^{(1)}}{d a_{1}^{[3](1)}} \frac{d a_{1}^{[3](1)}}{d z_{1}^{[3](1)}} & \frac{d l_{1}^{(2)}}{d a_{1}^{[3](2)}} \frac{d a_{1}^{[3](2)}}{d z_{1}^{[3](2)}} \\ \frac{d l_{2}^{(1)}}{d a_{2}^{[3](1)}} \frac{d a_{2}^{[3](1)}}{d z_{2}^{[3](1)}} & \frac{d l_{2}^{(2)}}{d a_{2}^{[3](2)}} \frac{d a_{2}^{[3](2)}}{d z_{2}^{[3](2)}} \end{array}\right]_{2 \times 2}=\left[\begin{array}{cc} d z_{1}^{[3](1)} & d z_{1}^{[3](2)} \\ d z_{2}^{[3](1)} & d z_{2}^{[3](2)} \end{array}\right]_{2 \times 2}$

  Write $\frac{d L}{d Z^{[3]}}=d Z^{[3]}, \frac{d L}{d A^{[3]}}=d A^{[3]}, \frac{d A^{[3]}}{d Z^{[3]}}=\sigma^{\prime}\left(Z^{[3]}\right)$; then clearly:

  $d Z^{[3]}=d A^{[3]} * \sigma^{\prime}\left(Z^{[3]}\right)$,

  that is:
    $\left[\begin{array}{ll} d z_{1}^{[3](1)} & d z_{1}^{[3](2)} \\ d z_{2}^{[3](1)} & d z_{2}^{[3](2)} \end{array}\right]_{2 \times 2}=\left[\begin{array}{ll} d a_{1}^{[3](1)} & d a_{1}^{[3](2)} \\ d a_{2}^{[3](1)} & d a_{2}^{[3](2)} \end{array}\right]_{2 \times 2} *\left[\begin{array}{ll} \sigma^{\prime}\left(z_{1}^{[3](1)}\right) & \sigma^{\prime}\left(z_{1}^{[3](2)}\right) \\ \sigma^{\prime}\left(z_{2}^{[3](1)}\right) & \sigma^{\prime}\left(z_{2}^{[3](2)}\right) \end{array}\right]_{2 \times 2}$

  Here $\ast$ denotes element-wise multiplication. Working out the individual derivatives (the steps are spelled out right below the result) easily gives:
    $d Z^{[3]}=A^{[3]}-\tilde{Y}$
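  For completeness, the omitted steps can be spelled out element-wise. Writing $a$ for an entry of $A^{[3]}$, $z$ for the matching entry of $Z^{[3]}$, and $\tilde{y}$ for the matching label:

    $\sigma^{\prime}(z)=\sigma(z)(1-\sigma(z))=a(1-a), \quad \frac{d l}{d a}=-\left(\frac{\tilde{y}}{a}-\frac{1-\tilde{y}}{1-a}\right)=\frac{a-\tilde{y}}{a(1-a)}$
    $d z=\frac{d l}{d a} \cdot \sigma^{\prime}(z)=\frac{a-\tilde{y}}{a(1-a)} \cdot a(1-a)=a-\tilde{y}$

  Applying this to every entry gives $d Z^{[3]}=A^{[3]}-\tilde{Y}$.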

    $\frac{d L}{d W^{[3]}}=\frac{d L}{d Z^{[3]}} \frac{d Z^{[3]}}{d W^{[3]}}=\left[\begin{array}{lll} \frac{d l_{1}^{(1)}}{d w_{11}^{[3]}}+\frac{d l_{1}^{(2)}}{d w_{11}^{[3]}} & \frac{d l_{1}^{(1)}}{d w_{12}^{[3]}}+\frac{d l_{1}^{(2)}}{d w_{12}^{[3]}} & \frac{d l_{1}^{(1)}}{d w_{13}^{[3]}}+\frac{d l_{1}^ {(2)}}{d w_{13}^{[3]}} \\ \frac{d l_{2}^{(1)}}{d w_{21}^{[3]}}+\frac{d l_{2}^{(2)}}{d w_{21}^{[3]}} & \frac{d l_{2}^{(1)}}{d w_{22}^{[3]}}+\frac{d l_{2}^{(2)}}{d w_{22}^{[3]}} & \frac{d l_{2}^{(1)}}{d w_{23}^{[3]}}+\frac{d l_{2}^ {(2)}}{d w_{23}^{[3]}} \end{array}\right]_{2 \times 3}$

    $d W^{[3]}=\left[\begin{array}{ccc} \frac{d l_{1}^{(1)}}{d z_{1}^{[3](1)}} \frac{d z_{1}^{[3](1)}}{d w_{11}^{[3]}}+\frac{d l_{1}^{(2)}}{d z_{1}^{[3](2)}} \frac{d z_{1}^{[3](2)}}{d w_{11}^{[3]}} & \frac{d l_{1}^{(1)}}{d z_{1}^{[3](1)}} \frac{d z_{1}^{[3](1)}}{d w_{12}^{[3]}}+\frac{d l_{1}^{(2)}}{d z_{1}^{[3](2)}} \frac{d z_{1}^{[3](2)}}{d w_{12}^{[3]}} & \frac{d l_{1}^{(1)}}{d z_{1}^{[3](1)}} \frac{d z_{1}^{[3](1)}}{d w_{13}^{[3]}}+\frac{d l_{1}^{(2)}}{d z_{1}^{[3](2)}} \frac{d z_{1}^{[3](2)}}{d w_{13}^{[3]}} \\ \frac{d l_{2}^{(1)}}{d z_{2}^{[3](1)}} \frac{d z_{2}^{[3](1)}}{d w_{21}^{[3]}}+\frac{d l_{2}^{(2)}}{d z_{2}^{[3](2)}} \frac{d z_{2}^{[3](2)}}{d w_{21}^{[3]}} & \frac{d l_{2}^{(1)}}{d z_{2}^{[3](1)}} \frac{d z_{2}^{[3](1)}}{d w_{22}^{[3]}}+\frac{d l_{2}^{(2)}}{d z_{2}^{[3](2)}} \frac{d z_{2}^{[3](2)}}{d w_{22}^{[3]}} & \frac{d l_{2}^{(1)}}{d z_{2}^{[3](1)}} \frac{d z_{2}^{[3](1)}}{d w_{23}^{[3]}}+\frac{d l_{2}^{(2)}}{d z_{2}^{[3](2)}} \frac{d z_{2}^{[3](2)}}{d w_{23}^{[3]}} \end{array}\right]_{2 \times 3}$
  Evaluating the expression above gives:
    $d W^{[3]}=\left[\begin{array}{lll} d z_{1}^{[3](1)} a_{1}^{[2](1)}+d z_{1}^{[3](2)} a_{1}^{[2](2)} & d z_{1}^{[3](1)} a_{2}^{[2](1)}+d z_{1}^{[3](2)} a_{2}^{[2](2)} & d z_{1}^{[3](1)} a_{3}^{[2](1)}+d z_{1}^{[3](2)} a_{3}^{[2](2)} \\ d z_{2}^{[3](1)} a_{1}^{[2](1)}+d z_{2}^{[3](2)} a_{1}^{[2](2)} & d z_{2}^{[3](1)} a_{2}^{[2](1)}+d z_{2}^{[3](2)} a_{2}^{[2](2)} & d z_{2}^{[3](1)} a_{3}^{[2](1)}+d z_{2}^{[3](2)} a_{3}^{[2](2)} \end{array}\right]_{2 \times 3}$
  The expression above shows that the gradient of each weight is the sum of the gradients contributed by each sample, so we divide by the number of samples to obtain the average gradient. Rearranging, we get:
    $d W^{[3]}=\frac{1}{2}\left[\begin{array}{cc} d z_{1}^{[3](1)} & d z_{1}^{[3](2)} \\d z_{2}^{[3](1)} & d z_{2}^{[3](2)} \end{array}\right]_{2 \times 2}\left[\begin{array}{ccc} a_{1}^{[2](1)} & a_{2}^{[2](1)} & a_{3}^{[2](1)} \\ a_{1}^{[2](2)} & a_{2}^{[2](2)} & a_{3}^{[2](2)} \end{array}\right]_{2 \times 3}$

  that is, $d W^{[3]}=\frac{1}{2} d Z^{[3]} A^{[2]^{T}}$.

  Similarly, we can obtain:

    $d B^{[3]}=\frac{1}{2} d Z^{[3]}\left[\begin{array}{l} 1 \\1 \end{array}\right]$

  This is simply a row-wise sum of $d Z^{[3]}$ (the sum of the first row and the sum of the second row), so it can be written compactly using NumPy's sum function in Python:

    $d B^{[3]}=\frac{1}{2} \operatorname{sum}\left(d Z^{[3]}, a x i s=1\right)$
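  Continuing the NumPy sketch from the forward pass above (A0, A2 and A3 are reused; Y_true is a placeholder label matrix, and m = 2 is the number of samples), these two formulas become:

```python
# Placeholder labels of shape (2, 2); reuses rng, A0, A2, A3 from the earlier sketch.
Y_true = (rng.uniform(size=(2, 2)) > 0.5).astype(float)

m = A0.shape[1]                                  # number of samples (2)
dZ3 = A3 - Y_true                                # dZ^{[3]} = A^{[3]} - Y~
dW3 = dZ3 @ A2.T / m                             # (2, 3) average gradient
dB3 = np.sum(dZ3, axis=1, keepdims=True) / m     # (2, 1) row-wise sum, averaged
```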

  Now that all of the output-layer gradients have been obtained, we step back one layer and compute the hidden-layer gradients; the chain rule must therefore pass through $A^{[2]}$:

    $d A^{[2]}=\left[\begin{array}{ll} d z_{1}^{[3](1)} \frac{d z_{1}^{[3](1)}}{d a_{1}^{[2](1)}}+d z_{2}^{[3](1)} \frac{d z_{2}^{[3](1)}}{d a_{1}^{[2](1)}} & d z_{1}^{[3](2)} \frac{d z_{1}^{[3](2)}}{d a_{1}^{[2](2)}}+d z_{2}^{[3](2)} \frac{d z_{2}^{[3](2)}}{d a_{1}^{[2](2)}} \\ d z_{1}^{[3](1)} \frac{d z_{1}^{[3](1)}}{d a_{2}^{[2](1)}}+d z_{2}^{[3](1)} \frac{d z_{2}^{[3](1)}}{d a_{2}^{[2](1)}} & d z_{1}^{[3](2)} \frac{d z_{1}^{[3](2)}}{d a_{2}^{[2](2)}}+d z_{2}^{[3](2)} \frac{d z_{2}^{[3](2)}}{d a_{2}^{[2](2)}} \\ d z_{1}^{[3](1)} \frac{d z_{1}^{[3](1)}}{d a_{3}^{[2](1)}}+d z_{2}^{[3](1)} \frac{d z_{2}^{[3](1)}}{d a_{3}^{[2](1)}} & d z_{1}^{[3](2)} \frac{d z_{1}^{[3](2)}}{d a_{3}^{[2](2)}}+d z_{2}^{[3](2)} \frac{d z_{2}^{[3](2)}}{d a_{3}^{[2](2)}} \end{array}\right]_{3 \times 2}$

    $d A^{[2]}=\left[\begin{array}{ll} d z_{1}^{[3](1)} w_{11}^{[3]}+d z_{2}^{[3](1)} w_{21}^{[3]} & d z_{1}^{[3](2)} w_{11}^{[3]}+d z_{2}^{[3](2)} w_{21}^{[3]} \\ d z_{1}^{[3](1)} w_{12}^{[3]}+d z_{2}^{[3](1)} w_{22}^{[3]} & d z_{1}^{[3](2)} w_{12}^{[3]}+d z_{2}^{[3](2)} w_{22}^{[3]} \\ d z_{1}^{[3](1)} w_{13}^{[3]}+d z_{2}^{[3](1)} w_{23}^{[3]} & d z_{1}^{[3](2)} w_{13}^{[3]}+d z_{2}^{[3](2)} w_{23}^{[3]} \end{array}\right]_{3 \times 2}$
  that is:
    $d A^{[2]}=W^{[3]^{T}} d Z^{[3]}$

  We can then compute

    $d Z^{[2]}=d A^{[2]} * \sigma^{\prime}\left(Z^{[2]}\right)$

  The hidden-layer gradients can then be derived; the process is exactly the same as above, so instead of writing it all out again we give the results directly:

    $d W^{[2]}=\frac{1}{2} d Z^{[2]} A^{[1]^{T}}$
    $d B^{[2]}=\frac{1}{2} \operatorname{sum}\left(d Z^{[2]}, \text { axis }=1\right)$

  Similarly, compute $d A^{[1]}$ and $d Z^{[1]}$, which yield the gradients of the first (input-side) layer:

    $d W^{[1]}=\frac{1}{2} d Z^{[1]} A^{[0]^{T}}$
    $d B^{[1]}=\frac{1}{2} \operatorname{sum}\left(d Z^{[1]}, a x i s=1\right)$

  Finally, update the weights:
    $\begin{array}{c} W^{[i]}=W^{[i]}-\eta d W^{[i]} \\ B^{[i]}=B^{[i]}-\eta d B^{[i]} \\ i=1,2,3 \end{array}$
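  Putting everything together, a minimal NumPy sketch of the full backward pass and update follows. It reuses the variables from the earlier sketches (A0..A3, Z1, Z2, W1..W3, B1..B3, Y_true, m); the learning rate eta is an arbitrary placeholder.

```python
def sigmoid_grad(Z):
    # Derivative of the Sigmoid: sigma'(Z) = sigma(Z) * (1 - sigma(Z)).
    s = 1.0 / (1.0 + np.exp(-Z))
    return s * (1.0 - s)

dZ3 = A3 - Y_true
dW3 = dZ3 @ A2.T / m
dB3 = np.sum(dZ3, axis=1, keepdims=True) / m

dA2 = W3.T @ dZ3                       # dA^{[2]} = W^{[3]T} dZ^{[3]}
dZ2 = dA2 * sigmoid_grad(Z2)           # element-wise product
dW2 = dZ2 @ A1.T / m
dB2 = np.sum(dZ2, axis=1, keepdims=True) / m

dA1 = W2.T @ dZ2
dZ1 = dA1 * sigmoid_grad(Z1)
dW1 = dZ1 @ A0.T / m
dB1 = np.sum(dZ1, axis=1, keepdims=True) / m

eta = 0.1                              # placeholder learning rate
W3 -= eta * dW3; B3 -= eta * dB3
W2 -= eta * dW2; B2 -= eta * dB2
W1 -= eta * dW1; B1 -= eta * dB1
```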

  Reference blog: 《神經網路BP演算法推導》

1.3 A Worked Example of Forward and Backward Propagation

  Suppose we have the following network:

    

  The first layer is the input layer, containing two neurons $x_1$, $x_2$ and the bias term $b_1$;
  the second layer is the hidden layer, containing two neurons $h_1$, $h_2$ and the bias term $b_2$;
  the third layer is the output layer, with outputs $o_1$, $o_2$. Each $w_i$ labelled on a connection is the weight between the layers, and the activation function is the Sigmoid function by default.
  Input data: $x_1=0.05,\ x_2=0.10$;
  Target output: $o_1=0.01,\ o_2=0.99$;
  Initial weights: $w_1=0.15,\ w_2=0.20,\ w_3=0.25,\ w_4=0.30,\ w_5=0.40,\ w_6=0.45,\ w_7=0.50,\ w_8=0.55$, with biases $b_1=0.35,\ b_2=0.60$ (the values used in the computations below);
  Goal: given the input $x_1=0.05,\ x_2=0.10$, make the network's output as close as possible to the target output $o_1=0.01,\ o_2=0.99$.

1.3.1 Forward Propagation

  1) Input layer → hidden layer:
  Compute the weighted input sum of neuron $h_1$:

    ${\large \begin{array}{l} net_{h 1}=w_{1} * x_{1}+w_{2} * x_{2}+b_{1} \\ n e t_{h 1}=0.15 * 0.05+0.2 * 0.1+0.35=0.3775 \end{array}} $

  The output of neuron $h_1$, $out_{h_1}$:

    $ {\large out _{h1}=\frac{1}{ 1+e^{-net_{h1}} }=\frac{1}{1+e^{-0.3775}}=0.593269992} $

  Similarly, the output of neuron $h_2$, $out_{h_2}$, can be computed:

    ${\large \begin{array}{l} net_{h2}=w_{3} * x_{1}+w_{4} * x_{2}+b_{1} \\ net_{h2}=0.25 * 0.05+0.30 * 0.10+0.35=0.3925 \end{array}} $
    ${\large \text { out }_{h 2}=0.596884378} $

  2) Hidden layer → output layer:
  Compute the values of the output-layer neurons $o_1$ and $o_2$:

    ${\large \begin{array}{l} \text { net }_{o_1}=w_{5} * \text { out }_{h_1}+w_{6} * \text { out }_{h_2}+b_{2} * 1 \\ \text { net }_{o_1}=0.4 * 0.593269992+0.45 * 0.596884378+0.6 * 1=1.105905967 \\ \text { out }_{o_1}=\frac{1}{1+e^{- net _{o_1}}}=\frac{1}{1+e^{-1.105905967}}=0.75136507 \end{array}} $
    ${\large \begin{array}{l} \text { net }_{o_2}=w_{7} * \text { out }_{h_1}+w_{8} * \text { out }_{h_2}+b_{2} * 1 \\ \text { net }_{o_2}=0.5 * 0.593269992+0.55 * 0.596884378+0.6=1.224921404 \\ \text { out }_{o_2}=\frac{1}{1+e^{- net _{o_2}}}=\frac{1}{1+e^{-1.224921404}}=0.772928465 \end{array}} $

  This concludes the forward pass: the output is $[0.75136507,\ 0.772928465]$, which is still far from the target $[0.01,\ 0.99]$. We now backpropagate the error, update the weights, and recompute the output.
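  The forward pass of this worked example can be checked with a few lines of NumPy (with the bias values $b_1=0.35$, $b_2=0.60$ used above):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x1, x2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60                     # bias values used in the computations above

net_h1 = w1 * x1 + w2 * x2 + b1; out_h1 = sigmoid(net_h1)          # 0.3775 -> 0.593269992
net_h2 = w3 * x1 + w4 * x2 + b1; out_h2 = sigmoid(net_h2)          # 0.3925 -> 0.596884378
net_o1 = w5 * out_h1 + w6 * out_h2 + b2; out_o1 = sigmoid(net_o1)  # 1.105905967 -> 0.751365070
net_o2 = w7 * out_h1 + w8 * out_h2 + b2; out_o2 = sigmoid(net_o2)  # 1.224921404 -> 0.772928465
print(out_o1, out_o2)
```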

1.3.2 Backpropagation

  1) Compute the total error

  Total error:
    ${\large E_{\text {total }}=\sum \frac{1}{2}(\text { target }-\text { output })^{2}} $
  Since there are two outputs, we compute the errors of $o_1$ and $o_2$ separately; the total error is their sum:

    ${\large E_{o_1}=\frac{1}{2}\left(\text { target }_{o 1}-\text { out }_{o 1}\right)^{2}=\frac{1}{2}(0.01-0.75136507)^{2}=0.274811083} $
    ${\large E_{o_2}=\frac{1}{2}\left(\text { target }_{o_2}-\text { out }_{o_2}\right)^{2}=\frac{1}{2}(0.99-0.772928465)^{2}=0.023560026} $
    ${\large E_{\text {total }}=E_{o_1}+E_{o_2}=0.274811083+0.023560026=0.298371109 } $

  2) Hidden layer → output layer weight updates:

  Take the weight $w_5$ as an example. To know how much $w_5$ contributes to the total error, we take the partial derivative of the total error with respect to $w_5$:

    ${\large \frac{\partial E_{\text {total }}}{\partial w_{5}}=\frac{\partial E_{\text {total }}}{\partial \text { out }_{o 1}} * \frac{\partial \text { out }_{o 1}}{\partial \text { net }_{o 1}} * \frac{\partial \text { net }_{o 1}}{\partial w_{5}}} $

  Writing the chain in the reverse order makes it easier to see how the error propagates backwards:

    ${\large \frac{\partial n e t_{o_1}}{\partial w_{5}} * \frac{\partial out_{o_1}}{\partial n e t_{o_1}} * \frac{\partial E_{\text {total }}}{\partial o u t_{o_1}}=\frac{\partial E_{\text {total }}}{\partial w_{5}}} $

  

  Now compute the value of each factor:
  1) Compute ${\large \frac{\partial E_{\text {total }}}{\partial o u t_{o 1}}} $:

    ${\large \begin{array}{l} E_{\text {total }}=\frac{1}{2}\left(\text { target }_{o_1}-\text { out }_{o_1}\right)^{2}+\frac{1}{2}\left(\text { target }_{o_2}-o u t_{o_2}\right)^{2} \\ \frac{\partial E_{\text {total }}}{\partial \text { out}_{o_1}}=2 * \frac{1}{2}\left(\text { target }_{o_1}-\text { out }_{o_1}\right)^{2-1} *-1+0 \\ \frac{\partial E_{\text {total }}}{\partial \text { out }_{o_1}}=-\left(\text { target }_{o_1}-\text { out }_{o_1}\right)=-(0.01-0.75136507)=0.74136507 \end{array}} $

  2) Compute ${\large \frac{\partial out_{o_1}}{\partial net_{o_1}}} $:

    ${\large\begin{array}{l} \text { out }_{o_1}=\frac{1}{1+e^{-n e t_{o_1}}} \\ \frac{\partial \text { out }_{o_1}}{\partial\ \text { net} _{o_1}}=\text { out }_{o_1}\left(1-\text { out }_{o_1}\right)=0.75136507(1-0.75136507)=0.186815602 \end{array}}$

  3) Compute ${\large \frac{\partial n e t_{o_1}}{\partial w_{5}}} $:

    ${\large \begin{array}{l} \text { net }_{o_1}=w_{5} * \text { out }_{h_1}+w_{6} * \text { out }_{h_2}+b_{2} * 1 \\ \frac{\partial n e t_{o_1}}{\partial w_{5}}=1 * \text { out }_{h_1}+0+0=\text { out }_{h_1}=0.593269992 \end{array}} $

  Finally, multiply the three factors:

    $ {\large \begin{aligned} \frac{\partial E_{\text {total }}}{\partial w_{5}} &=\frac{\partial E_{\text {total }}}{\partial \text { out }_{o_1}} * \frac{\partial \text { out }_{o_1}}{\partial \text { net }_{o_1}} * \frac{\partial \text { net }_{o_1}}{\partial w_{5}} \\ \frac{\partial E_{\text {total }}}{\partial w_{5}} &=0.74136507 * 0.186815602 * 0.593269992=0.082167041 \end{aligned} } $

  This gives the partial derivative of the total error $E_{total}$ with respect to $w_5$.
  Looking back at the formula above, we find that:

   ${\large \frac{\partial E_{\text {total }}}{\partial w_{5}}=-\left(\operatorname{target}_{o_1}-\text { out }_{o_1}\right) * \text { out }_{o_1}\left(1-\text { out }_{o_1}\right) * \text { out }_{h_1}} $

  For notational convenience, let $\delta_{o_1}$ denote the output-layer error:

    $ {\large \begin{array}{l} \delta_{o_1}=\frac{\partial E_{\text {total }}}{\partial o u t_{o_1}} * \frac{\partial o u t_{o_1}}{\partial n e t_{o_1}}=\frac{\partial E_{\text {total }}}{\partial n e t_{o_1}} \\ \delta_{o_1}=-\left(\text { target }_{o_1}-\text { out }_{o_1}\right) * \text { out }_{o_1}\left(1-\text { out }_{o_1}\right) \end{array} } $

  Therefore, the partial derivative of the total error $E_{total}$ with respect to $w_5$ can be written as:

    $\frac{\partial E_{\text {total }}}{\partial w_{5}}=\delta_{o 1} \text { out }_{h 1}$

  If the output-layer error is defined with the opposite sign, this can also be written as:

    $\frac{\partial E_{\text {total }}}{\partial w_{5}}=-\delta_{o_1} \text { out }_{h_1}$

  Finally, update the value of $w_5$ (where $\eta$ is the learning rate, taken here to be $0.5$):

    $w_{5}^{+}=w_{5}-\eta * \frac{\partial E_{\text {total }}}{\partial w_{5}}=0.4-0.5 * 0.082167041=0.35891648$

  Similarly, $w_6$, $w_7$ and $w_8$ can be updated:
    $ \begin{aligned} w_{6}^{+} &=0.408666186 \\ w_{7}^{+} &=0.511301270 \\ w_{8}^{+} &=0.561370121 \end{aligned} $
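  A short continuation of the sketch above reproduces these four updates numerically (it reuses out_h1, out_h2, out_o1, out_o2 and the weights from the forward-pass snippet):

```python
# Continuation of the forward-pass sketch; reuses out_h1, out_h2, out_o1, out_o2, w5..w8.
t_o1, t_o2 = 0.01, 0.99     # target outputs
eta = 0.5                   # learning rate from the text

delta_o1 = -(t_o1 - out_o1) * out_o1 * (1 - out_o1)   # output-layer error for o1
delta_o2 = -(t_o2 - out_o2) * out_o2 * (1 - out_o2)   # output-layer error for o2

w5_new = w5 - eta * delta_o1 * out_h1   # -> 0.358916480
w6_new = w6 - eta * delta_o1 * out_h2   # -> 0.408666186
w7_new = w7 - eta * delta_o2 * out_h1   # -> 0.511301270
w8_new = w8 - eta * delta_o2 * out_h2   # -> 0.561370121
```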

  3) Hidden layer → input layer weight updates:

  The method is largely the same as above, with one difference. When computing the derivative of the total error with respect to $w_5$, the chain went $out_{o_1} \to net_{o_1} \to w_5$; for the hidden-layer weights it is $out_{h_1} \to net_{h_1} \to w_1$, and $out_{h_1}$ receives error from both $E_{o_1}$ and $E_{o_2}$, so both contributions must be taken into account.

    $\frac{\partial E_{\text {total }}}{\partial w_{1}}= \frac{\partial E_{\text {total }}}{\partial \text { out }_{h_1}} * \frac{\partial \text { out }_{h_1}}{\partial n e t_{h_1}} * \frac{\partial \text { net }_{h_1}}{\partial w_{1}} $
    $\frac{\partial E_{\text {total }}}{\partial o u t_{h_1}}=\frac{\partial E_{o_1}}{\partial o u t_{h_1}}+\frac{\partial E_{o_2}}{\partial \text { out }_{h_1}}$

  Compute $\frac{\partial E_{\text {total }}}{\partial o u t_{h_1}}$:
    ${\large \frac{\partial E_{\text {total }}}{\partial \text { out }_{h_1}}=\frac{\partial E_{o_1}}{\partial o u t_{h_1}}+\frac{\partial E_{o_2}}{\partial o u t_{h_1}}}$

  First compute $\frac{\partial E_{o_1}}{\partial o u t_{h_1}}$:
    ${\large \frac{\partial E_{o_1}}{\partial o u t_{h_1}}=\frac{\partial E_{o_1}}{\partial n e t_{o_1}} * \frac{\partial n e t_{o_1}}{\partial o u t_{h_1}}} $
    ${\large \frac{\partial E_{o_1}}{\partial n e t_{o_1}}=\frac{\partial E_{o_1}}{\partial o u t_{o_1}} * \frac{\partial o u t_{o_1}}{\partial n e t_{o_1}}=0.74136507 * 0.186815602=0.138498562} $
    ${\large n e t_{o_1}=w_{5} * \text { out }_{h 1}+w_{6} * \text { out }_{h_2}+b_{2} * 1} $
    ${\large \frac{\partial \text { net }_{o_1}}{\partial \text { out }_{h_1}}=w_{5}=0.40} $
    ${\large \frac{\partial E_{o_1}}{\partial o u t_{h_1}}=\frac{\partial E_{o_1}}{\partial n e t_{o_1}} * \frac{\partial n e t_{o_1}}{\partial o u t_{h_1}}=0.138498562 * 0.40=0.055399425} $
  Similarly, we obtain:
  ${\large \frac{\partial E_{o_2}}{\partial o u t_{h_1}}=-0.019049119} $
  Adding the two gives the total:
    ${\large \frac{\partial E_{\text {total }}}{\partial o u t_{h_1}}=\frac{\partial E_{o_1}}{\partial o u t_{h_1}}+\frac{\partial E_{o_2}}{\partial o u t_{h_1}}=0.055399425+-0.019049119=0.036350306} $
  Next, compute ${\large \frac{\partial o u t_{h_1}}{\partial n e t_{h_1}}} $:
    ${\large \text { out }_{h_1}=\frac{1}{1+e^{-net_{h_1}} }} $
    ${\large \frac{\partial o u t_{h_1}}{\partial n e t_{h_1}}=o u t_{h_1}\left(1-o u t_{h_1}\right)=0.59326999(1-0.59326999)=0.241300709} $
  Then compute ${\large \frac{\partial \text { net }_{h_1}}{\partial w_{1}}} $:
    ${\large \text { net }_{h 1}=w_{1} * x_{1}+w_{2} * x_{2}+b_{1} * 1} $
    ${\large \frac{\partial n e t_{h_1}}{\partial w_{1}}=x_{1}=0.05} $
  Finally, multiply the three factors:
    ${\large \frac{\partial E_{\text {total }}}{\partial w_{1}}=\frac{\partial E_{\text {total }}}{\partial o u t_{h_1}} * \frac{\partial o u t_{h_1}}{\partial n e t_{h_1}} * \frac{\partial n e t_{h_1}}{\partial w_{1}}} $
    ${\large \frac{\partial E_{\text {total }}}{\partial w_{1}}=0.036350306 * 0.241300709 * 0.05=0.000438568} $
  To simplify the formula, let $\delta_{h_1}$ denote the error of hidden unit $h_1$:
    ${\large \frac{\partial E_{\text {total }}}{\partial w_{1}}=\left(\sum\limits _{o} \frac{\partial E_{\text {total }}}{\partial o u t_{o}} * \frac{\partial out_o }{\partial net_o} * \frac{\partial net_o }{\partial \text { out }_{h_1}}\right) * \frac{\partial o u t_{h_1}}{\partial n e t_{h_1}} * \frac{\partial \text { net }_{h 1}}{\partial w_{1}}} $
    ${\large \frac{\partial E_{\text {total }}}{\partial w_{1}}=\left(\sum \limits _{o} \delta_{o} * w_{h_o}\right) * out _{h_1}\left(1-\text { out }_{h 1}\right) * x_{1}} $
    ${\large \frac{\partial E_{\text {total }}}{\partial w_{1}}=\delta_{h_1} x_{1}} $
  Finally, update the weight $w_1$:
    ${\large w_{1}^{+}=w_{1}-\eta * \frac{\partial E_{\text {total }}}{\partial w_{1}}=0.15-0.5 * 0.000438568=0.149780716} $
  Similarly, the weights $w_2$, $w_3$ and $w_4$ can be updated:
    ${\large \begin{array}{l} w_{2}^{+}=0.19956143 \\ w_{3}^{+}=0.24975114 \\ w_{4}^{+}=0.29950229 \end{array}} $
  This completes one round of error backpropagation. We then recompute the outputs with the updated weights and keep iterating. In this example, after the first iteration the total error $E_{total}$ drops from $0.298371109$ to $0.291027924$. After $10000$ iterations the total error is $0.000035085$ and the output is $[0.015912196,\ 0.984065734]$ (the target being $[0.01,\ 0.99]$), which shows the method works quite well.
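  A compact training loop for this example is sketched below. It follows the same update rule (biases held fixed, learning rate $0.5$); after many iterations the total error and outputs should approach the values quoted above, though the exact trailing digits may differ slightly.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.05], [0.10]])          # input column vector
t = np.array([[0.01], [0.99]])          # target outputs
W1 = np.array([[0.15, 0.20], [0.25, 0.30]]); b1 = 0.35   # rows: (w1, w2), (w3, w4)
W2 = np.array([[0.40, 0.45], [0.50, 0.55]]); b2 = 0.60   # rows: (w5, w6), (w7, w8)
eta = 0.5

for _ in range(10000):
    h = sigmoid(W1 @ x + b1)            # hidden-layer outputs
    o = sigmoid(W2 @ h + b2)            # network outputs
    delta_o = (o - t) * o * (1 - o)     # output-layer error deltas
    delta_h = (W2.T @ delta_o) * h * (1 - h)
    W2 -= eta * delta_o @ h.T           # hidden -> output weight update
    W1 -= eta * delta_h @ x.T           # input -> hidden weight update
    # Biases are held fixed, as in the worked example above.

o = sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)
print(0.5 * np.sum((t - o) ** 2))       # total error, on the order of 3.5e-5
print(o.ravel())                        # outputs approaching [0.01, 0.99]
```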

References

Articles:

1:一文徹底搞懂BP演算法:原理推導+資料演示+專案實戰(上篇)

2:BP演算法例項詳解

3:神經網路BP演算法推導

Videos:

1:26、神經網路之BP演算法舉例說明
