
Derivatives in Neural Networks

This post was originally meant to cover the backpropagation algorithm, but that alone felt incomplete, so the relevant derivative computations are included first. The core of taking derivatives in a neural network is differentiating the loss with respect to the linear output \(\mathbf{z} \;\; (\mathbf{z} = \mathbf{Wa} + \mathbf{b})\), i.e. computing \(\delta = \frac{\partial \mathcal{L}}{\partial \mathbf{z}}\) in backpropagation; once this quantity is known, the derivatives with respect to the parameters follow easily.



\(\text{Jacobian}\) Matrix

For a function \(\boldsymbol{f} : \mathbb{R}^n \rightarrow \mathbb{R}^m\), the \(\text{Jacobian}\) matrix is:
\[ \frac{\partial \boldsymbol{f}}{\partial \mathbf{x}} = \begin {bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots \\ \frac{\partial f_m}{\partial x_1} &\frac{\partial f_m}{\partial x_2} & \cdots & \frac{\partial f_m}{\partial x_n} \end {bmatrix} \in \mathbb{R}^{m \times n} \]

\((\frac{\partial \boldsymbol{f}}{\partial \mathbf{x}})_{ij} = \frac{\partial f_i}{\partial x_j}\)


Activation functions in neural networks mostly operate element-wise. Let the input be a K-dimensional vector \(\mathbf{x} = [x_1, x_2, ..., x_K ]^\text{T}\) and the output a K-dimensional vector \(\mathbf{z} = [z_1, z_2, ..., z_K]^\text{T}\), so the activation is \(\mathbf{z} = f(\mathbf{x})\) with \(z_i = [f(\mathbf{x})]_i = f(x_i)\). By the definition of the \(\text{Jacobian}\) matrix, its derivative is a diagonal matrix:

\[ \begin{align*} \frac{\partial f(\mathbf{x})}{\partial \mathbf{x}} = \begin {bmatrix} \frac{\partial f(x_1)}{\partial x_1} & \frac{\partial f(x_1)}{\partial x_2} & \cdots & \frac{\partial f(x_1)}{\partial x_k} \\ \frac{\partial f(x_2)}{\partial x_1} & \frac{\partial f(x_2)}{\partial x_2} & \cdots & \frac{\partial f(x_2)}{\partial x_k} \\ \vdots & \vdots & \ddots \\ \frac{\partial f(x_k)}{\partial x_1} &\frac{\partial f(x_k)}{\partial x_2} & \cdots & \frac{\partial f(x_k)}{\partial x_k} \end {bmatrix} & = \begin {bmatrix} f'(x_1) & 0 &\cdots & 0 \\ 0 & f'(x_2) & \cdots & 0 \\ \vdots & \vdots & \ddots \\ 0 & 0 & \cdots & f'(x_k) \end {bmatrix} \\[2ex] & = \text{diag}(f'(\mathbf{x})) \in \mathbb{R}^{k \times k} \end{align*} \]
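
As a quick sanity check of this diagonal structure, here is a small numpy sketch; the helper `numerical_jacobian` is my own illustrative function (not from any library), and \(\text{tanh}\) is used as the element-wise activation:

import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    """Approximate the Jacobian of f: R^n -> R^m by central differences."""
    n, m = x.size, f(x).size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x = np.random.randn(4)
J_num = numerical_jacobian(np.tanh, x)
J_diag = np.diag(1 - np.tanh(x) ** 2)          # diag(f'(x))
print(np.allclose(J_num, J_diag, atol=1e-6))   # True: off-diagonal entries vanish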




\(\text{Sigmoid}\) Activation Function

The \(\text{Sigmoid}\) function has the form:
\[ \sigma(z) = \frac{1}{1+e^{\,-z}} \;\;\in (0,1) \]
Its derivative is:
\[ \sigma'(z) = -\frac{(1+e^{-z})'}{(1 + e^{-z})^2} = -\frac{-e^{-z}}{(1+ e^{-z})^2} = \frac{e^{-z}}{1 + e^{-z}} \cdot \frac{1}{1 + e^{-z}} = \sigma(z) (1 - \sigma(z)) \]
If the input is a K-dimensional vector \(\mathbf{z} = [z_1, z_2, ..., z_K]^\text{T}\), then by the definition above its derivative is:
\[ \begin{align*} \sigma'(\mathbf{z}) &= \begin {bmatrix} \sigma(z_1) (1 - \sigma(z_1)) &0& \cdots & 0 \\ 0 & \sigma(z_2) (1 - \sigma(z_2)) & \cdots & 0 \\ \vdots & \vdots & \ddots \\ 0 & 0 & \cdots & \sigma(z_k) (1 - \sigma(z_k)) \end {bmatrix} \\[3ex] & = \text{diag} \left(\sigma(\mathbf{z}) \odot (1-\sigma(\mathbf{z}))\right) \end{align*} \]
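
A short numerical check of \(\sigma'(z) = \sigma(z)(1-\sigma(z))\); this is illustrative code only, with `sigmoid` defined locally:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 11)
analytic = sigmoid(z) * (1 - sigmoid(z))
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)   # central difference
print(np.allclose(analytic, numeric, atol=1e-6))        # True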




\(\text{Tanh}\) Activation Function

The \(\text{Tanh}\) function can be viewed as a scaled and shifted \(\text{Sigmoid}\) function; because it is zero-centered, it usually converges faster than \(\text{Sigmoid}\):
\[ \text{tanh}(z) = \frac{e^{z} - e^{-z}}{e^z + e^{-z}} = \frac{2}{1 + e^{-2z}} - 1 = 2\sigma(2z) - 1 \;\; \in(-1,1) \]

Its derivative is:
\[ \text{tanh}'(z) = \frac{(e^z + e^{-z})^2 - (e^z - e^{-z})^2}{(e^z + e^{-z})^2} = 1 - \text{tanh}^2(z) \]
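
Both identities above, \(\text{tanh}(z) = 2\sigma(2z) - 1\) and \(\text{tanh}'(z) = 1 - \text{tanh}^2(z)\), can be checked numerically; this snippet is an illustration, not part of the derivation:

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
z = np.linspace(-3, 3, 13)
print(np.allclose(np.tanh(z), 2 * sigmoid(2 * z) - 1))         # True
h = 1e-6
numeric = (np.tanh(z + h) - np.tanh(z - h)) / (2 * h)
print(np.allclose(numeric, 1 - np.tanh(z) ** 2, atol=1e-6))    # True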




\(\text{Softplus}\) Activation Function

The \(\text{Softplus}\) function can be viewed as a smooth version of the \(\text{ReLU}\) function, with the form:
\[ \text{softplus}(z) = \text{log}(1+ e^z) \]

Its derivative turns out to be exactly the \(\text{Sigmoid}\) function:
\[ \text{softplus}'(z) = \frac{e^z}{1 + e^z} = \frac{1}{1+ e^{-z}} \]
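
A small illustrative check that the derivative of \(\text{softplus}\) matches \(\sigma(z)\):

import numpy as np

softplus = lambda z: np.log1p(np.exp(z))        # log(1 + e^z)
sigmoid  = lambda z: 1.0 / (1.0 + np.exp(-z))
z = np.linspace(-4, 4, 17)
h = 1e-6
numeric = (softplus(z + h) - softplus(z - h)) / (2 * h)
print(np.allclose(numeric, sigmoid(z), atol=1e-6))   # True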




\(\text{Softmax}\) Activation Function

The \(\text{softmax}\) function maps a set of scalars to a probability distribution, with the form:
\[ y_i = \text{softmax}(z_i) = \frac{e^{z_i}}{\sum_{k=1}^C e^{z_k}} \]
where \(y_i\) is the \(i\)-th output, which can be interpreted as the probability of class \(i\), and \(\sum\limits_{i=1}^C y_i = 1\).

First consider the scalar form of the derivative, i.e. the partial derivative of the \(i\)-th output with respect to the \(j\)-th input:
\[ \frac{\partial y_i}{\partial z_j} = \frac{\partial\, \frac{e^{z_i}}{\sum_{k=1}^{C} e^{z_k}}}{\partial z_j} \]
Differentiating \(e^{z_i}\) with respect to \(z_j\) requires a case distinction:
\[ \frac{\partial e^{z_i}}{\partial z_j} = \begin{cases} e^{z_i}, & \text{if} \;\;\; i = j \\[1ex] 0, & \text{if} \;\;\; i \neq j \end{cases} \]
When \(i = j\):
\[ \frac{\partial y_i}{\partial z_j} = \frac{e^{z_i} \sum_{k=1}^Ce^{z_k} - e^{z_i}e^{z_j}}{\left(\sum_{k=1}^C e^{z_k}\right)^2} = \frac{e^{z_i}}{\sum_{k=1}^C e^{z_k}} - \frac{e^{z_i}}{\sum_{k=1}^C e^{z_k}} \frac{e^{z_j}}{\sum_{k=1}^C e^{z_k}} =y_i - y_i y_j \tag{1.1} \]
\(i \neq j\) 時:
\[ \frac{\partial y_i}{\partial z_j} = \frac{0 - e^{z_i}e^{z_j}}{\left(\sum_{k=1}^C e^{z_k}\right)^2} = -y_iy_j \tag{1.2} \]
Combining the two cases:
\[ \frac{\partial y_i}{\partial z_j} = \mathbf{\large1} \{i=j\}\, y_i - y_i\,y_j \tag{1.3} \]
where \(\mathbf{\large 1} \{i=j\} = \begin{cases}1, & \text{if} \;\;\; i = j \\0, & \text{if} \;\;\; i \neq j\end{cases}\)


\(\text{softmax}\) 函式的輸入為K 維向量 \(\mathbf{z} = [z_1, z_2, ..., z_K]^\text{T}\) 時,轉換形式為 \(\mathbb{R}^K \rightarrow \mathbb{R}^K\)
\[ \mathbf{y} = \text{softmax}(\mathbf{z}) = \frac{1}{\sum_{k=1}^K e^{z_k}} \begin{bmatrix} e^{z_1} \\ e^{z_2} \\ \vdots \\ e^{z_K} \end{bmatrix} \]
Its derivative is again a \(\text{Jacobian}\) matrix (using equations \((1.1)\) and \((1.2)\)):
\[ \begin{align*} \frac{\partial\, \mathbf{y}}{\partial\, \mathbf{z}} & = \begin {bmatrix} \frac{\partial y_1}{\partial z_1} & \frac{\partial y_1}{\partial z_2} & \cdots & \frac{\partial y_1}{\partial z_K} \\ \frac{\partial y_2}{\partial z_1} & \frac{\partial y_2}{\partial z_2} & \cdots & \frac{\partial y_2}{\partial z_K} \\ \vdots & \vdots & \ddots \\ \frac{\partial y_K}{\partial z_1} &\frac{\partial y_K}{\partial z_2} & \cdots & \frac{\partial y_K}{\partial z_K} \end {bmatrix} \\[2ex] & = \begin {bmatrix} \small{y_1 - y_1 y_1} & \small{-y_1y_2} & \cdots & \small{-y_1 y_K} \\ \small{-y_2y_1} & \small{y_2 - y_2 y_2} & \cdots & \small{-y_2 y_K} \\ \vdots & \vdots & \ddots \\ \small{-y_Ky_1} & \small{-y_K y_2} & \cdots & \small{y_K - y_K y_K} \end {bmatrix} \\[2.5ex] & = \text{diag}(\mathbf{y}) - \mathbf{y}\mathbf{y}^\text{T} \\[0.5ex] &= \text{diag}(\text{softmax}(\mathbf{z})) - \text{softmax}(\mathbf{z})\, \text{softmax}(\mathbf{z})^\text{T} \end{align*} \]
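
The identity \(\text{diag}(\mathbf{y}) - \mathbf{y}\mathbf{y}^\text{T}\) can be verified with a few lines of numpy; the helper names below (`softmax`, `softmax_jacobian`) are my own for illustration:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift by max(z) for numerical stability
    return e / e.sum()

def softmax_jacobian(z):
    y = softmax(z)
    return np.diag(y) - np.outer(y, y)

z = np.random.randn(5)
J = softmax_jacobian(z)

# compare against central differences
h = 1e-6
J_num = np.zeros((5, 5))
for j in range(5):
    e = np.zeros(5)
    e[j] = h
    J_num[:, j] = (softmax(z + e) - softmax(z - e)) / (2 * h)
print(np.allclose(J, J_num, atol=1e-6))   # True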




Cross-Entropy Loss Function

The cross-entropy loss has two equivalent forms. Let the true label be \(y\) and the prediction be \(a\):

(1) If \(y\) is a scalar class index, i.e. \(y \in \mathbb{R}\), the cross-entropy loss is:
\[ \mathcal{L}(y, a) = - \sum\limits_{j=1}^{k} \mathbf{\large 1}\{y = j\}\, \text{log}\, a_j \]
(2) If \(y\) is a one-hot vector, i.e. \(y = \left[0,0...1...0\right]^\text{T} \in \mathbb{R}^k\), the cross-entropy loss is:
\[ \mathcal{L}(y, a) = -\sum\limits_{j=1}^k y_j\, \text{log}\, a_j \]
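
For a one-hot label the two forms give the same value; a tiny illustrative snippet (the numbers are arbitrary):

import numpy as np

a = np.array([0.1, 0.7, 0.2])      # predicted probabilities
y_scalar = 1                       # true class index (form 1)
y_onehot = np.array([0., 1., 0.])  # one-hot label (form 2)

loss_scalar = -np.log(a[y_scalar])            # form (1): pick out the true class
loss_onehot = -np.sum(y_onehot * np.log(a))   # form (2): sum over all classes
print(np.isclose(loss_scalar, loss_onehot))   # True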




Cross-Entropy Loss + Sigmoid Activation

Given \(\mathcal{L}(y, a) = -\sum\limits_{j=1}^k y_j\, \text{log}\, a_j\) and \(a_j = \sigma(z_j) = \frac{1}{1+e^{\,-z_j}}\), find \(\frac{\partial \mathcal{L}}{\partial z_j}\):
\[ \frac{\partial \mathcal{L}}{\partial z_j} = \frac{\partial \mathcal{L}}{\partial a_j} \frac{\partial a_j}{\partial z_j} = -y_j \frac{1}{\sigma(z_j)} \sigma(z_j) (1 - \sigma(z_j)) = -y_j (1 - \sigma(z_j)) \]
For the true-class component \(y_j = 1\), so this reduces to \(\sigma(z_j) - 1 = a_j - y_j\). (With the binary cross-entropy form \(\mathcal{L} = -[\,y \log a + (1-y)\log(1-a)\,]\) the same result \(a - y\) holds for every component.)




Cross-Entropy Loss + Softmax Activation

Given \(\mathcal{L}(y, a) = -\sum\limits_{i=1}^k y_i\, \text{log}\, a_i\) and \(a_j = \text{softmax}(z_j) = \frac{e^{z_j}}{\sum_{c=1}^C e^{z_c}}\), find \(\frac{\partial \mathcal{L}}{\partial z_j}\):
\[ \begin{align*} \frac{\partial \mathcal{L}}{\partial z_j} = \sum\limits_{i=1}^k\frac{\partial \mathcal{L}}{\partial a_i} \frac{\partial a_i}{\partial z_j} & = \frac{\partial \mathcal{L}}{\partial a_j} \frac{\partial a_j}{\partial z_j} + \sum\limits_{i \neq j} \frac{\partial \mathcal{L}}{\partial a_i} \frac{\partial a_i}{\partial z_j} \\ & = -\frac{y_j}{a_j} \frac{\partial a_j}{\partial z_j} - \sum\limits_{i \neq j} \frac{y_i}{a_i}\frac{\partial a_i}{\partial z_j} \\ & = -\frac{y_j}{a_j} a_j(1 - a_j) + \sum\limits_{i \neq j} \frac{y_i}{a_i} a_i a_j \qquad\qquad \text{using equations (1.1) and (1.2)} \\ & = -y_j + y_ja_j + \sum\limits_{i \neq j} y_i a_j = -y_j + a_j \sum\limits_{i} y_i \\ & = a_j - y_j \qquad\qquad \text{since } \textstyle\sum_i y_i = 1 \end{align*} \]
If the input is a \(K\)-dimensional vector \(\mathbf{z} = [z_1, z_2, ..., z_K]^\text{T}\), the gradient is (writing \(j\) for the index of the true class):
\[ \frac{\partial \mathcal{L}}{\partial \mathbf{z}} = \mathbf{a} - \mathbf{y} = \begin{bmatrix} a_1 - 0 \\ \vdots \\ a_j - 1 \\ \vdots \\ a_k - 0 \end{bmatrix} \]


Alternatively, expanding the logarithm of the quotient (and using \(\sum_i y_i = 1\)) simplifies the derivation:
\[ \mathcal{L}(y, a) = -\sum\limits_{i=1}^k y_i\, \text{log}\, a_i = -\sum\limits_{i=1}^k y_i\, \text{log}\, \frac{e^{z_i}}{\sum_c e^{z_c}} = -\sum\limits_{i=1}^k y_i z_i + \text{log} \sum_c e^{z_c} \]

\[ \frac{\partial {\mathcal{L}}}{\partial z_i} = -y_i + \frac{e^{z_i}}{\sum_c e^{z_c}} = a_i - y_i \]
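
As a sketch of how this result is used in practice, the snippet below (my own helper names, assuming a one-hot \(\mathbf{y}\)) confirms that \(\frac{\partial \mathcal{L}}{\partial \mathbf{z}} = \mathbf{a} - \mathbf{y}\) matches a central-difference estimate:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(z, y):
    return -np.sum(y * np.log(softmax(z)))   # cross-entropy on top of softmax

z = np.random.randn(4)
y = np.array([0., 0., 1., 0.])

analytic = softmax(z) - y                    # a - y
h = 1e-6
numeric = np.zeros_like(z)
for j in range(4):
    e = np.zeros(4)
    e[j] = h
    numeric[j] = (loss(z + e, y) - loss(z - e, y)) / (2 * h)
print(np.allclose(analytic, numeric, atol=1e-5))   # True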








Backpropagation in Neural Networks

What is usually called "learning" is the process of finding the parameters by minimizing the loss function. Neural networks generally use gradient descent for this, i.e.:
\[ \theta = \theta - \alpha \cdot \frac{\partial}{\partial \theta}\mathcal{L}(\theta) \]
Substituting the usual neural-network parameter notation and writing it in matrix form:
\[ \begin{align*} \mathbf{W}^{(l)} &= \mathbf{W}^{(l)} - \alpha \frac{\partial \mathcal{L}}{\partial\, \mathbf{W}^{(l)}} \\[2ex] \mathbf{b}^{(l)} &=\,\, \mathbf{b}^{(l)} - \alpha \frac{\partial \mathcal{L}}{\partial\, \mathbf{b}^{(l)}} \end{align*} \]
where \((l)\) denotes the \(l\)-th layer.
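
As a minimal illustration of the update rule (a toy one-dimensional example, not part of the network itself), gradient descent on \(f(\theta) = \theta^2\) drives \(\theta\) toward the minimum at 0:

theta, alpha = 5.0, 0.1
for _ in range(50):
    theta = theta - alpha * 2 * theta   # theta <- theta - alpha * df/dtheta, with df/dtheta = 2*theta
print(theta)                            # close to 0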


Each component of the gradient is a derivative, which could in principle be approximated by numerical differentiation based on the definition:
\[ f'(x) = \lim\limits_{h \rightarrow 0}\frac{f(x + h) - f(x)}{h} \]
\(f'(x)\) is the slope of \(f(x)\) at \(x\). Since \(h\) cannot actually approach zero in floating-point arithmetic, the one-sided formula above is prone to numerical error, so in practice the central difference is used instead:
\[ f'(x) \approx \frac{f(x + h) - f(x - h)}{2h} \]

The full gradient can then be computed with the following code:

import numpy as np

def numerical_gradient(f, x):   # f: scalar-valued function, x: input array
    h = 1e-4
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        idx = it.multi_index
        temp = x[idx]
        x[idx] = temp + h
        fxh1 = f(x)                          # f(x + h)

        x[idx] = temp - h
        fxh2 = f(x)                          # f(x - h)
        grad[idx] = (fxh1 - fxh2) / (2*h)    # central difference

        x[idx] = temp                        # restore the original value
        it.iternext()
    return grad
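
For example (a usage sketch of the function above), the gradient of \(f(\mathbf{x}) = x_0^2 + x_1^2\) at \((3, 4)\) should come out close to \((6, 8)\):

def f(x):
    return np.sum(x ** 2)

x = np.array([3.0, 4.0])
print(numerical_gradient(f, x))   # approximately [6. 8.]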


Numerical differentiation evaluates \(f(x+h)\) and \(f(x-h)\) for every parameter, so a network with one million parameters would require two million evaluations of the loss function. With SGD that would be two million evaluations per sample, which is clearly unaffordable. This is why an efficient way of computing gradients, namely the backpropagation algorithm, is needed.



First define the forward-propagation equations of the network (\(l\) denotes a hidden layer, \(L\) the output layer):


\[ \begin{align*} &\mathbf{z}^{(l)} = \mathbf{W}^{(l)} \mathbf{a}^{(l-1)} + \mathbf{b}^{(l)} \tag{2.1} \\[0.5ex] &\mathbf{a}^{(l)} = f(\mathbf{z}^{(l)}) \tag{2.2} \\[0.5ex] &\mathbf{\hat{y}} = \mathbf{a}^{(L)} = f(\mathbf{z}^{(L)}) \tag{2.3} \\[0.5ex] &\mathcal{L} = \mathcal{L}(\mathbf{y}, \mathbf{\hat{y}}) \tag{2.4} \end{align*} \]



Our ultimate goal is to obtain \(\frac{\partial {\mathcal{L}}}{\partial \mathbf{W}^{(l)}}\) and \(\frac{\partial \mathcal{L}}{\partial \mathbf{b}^{(l)}}\). For convenience, first consider their individual components \(\frac{\partial {\mathcal{L}}}{\partial {W}_{jk}^{(l)}}\) and \(\frac{\partial \mathcal{L}}{\partial b_j^{(l)}}\).


Define \(\delta_j^{(l)} = \frac{\partial \mathcal{L}}{\partial z_j^{(l)}}\). Applying the chain rule to equation \((2.1)\):
\[ \begin{align*} & \frac{\partial {\mathcal{L}}}{\partial {W}_{jk}^{(l)}} = \frac{\partial \mathcal{L}}{\partial z_j^{(l)}} \frac{\partial z_j^{(l)}}{\partial W_{jk}^{(l)}} = \delta_j^{(l)} \frac{\partial}{\partial W_{jk}^{(l)}} \left(\sum_i W_{ji}^{(l)}a_i^{(l-1)}\right) = \delta_j^{(l)} a_k^{(l-1)} \tag{2.5} \\[1ex] & \frac{\partial {\mathcal{L}}}{\partial {b}_{j}^{(l)}} = \frac{\partial \mathcal{L}}{\partial z_j^{(l)}} \frac{\partial z_j^{(l)}}{\partial b_{j}^{(l)}} =\delta_j^{(l)} \tag{2.6} \end{align*} \]


So the remaining problem is to compute \(\delta_j^{(l)}\):

(1) For the output layer \(L\):
\[ \delta_j^{(L)} = \frac{\partial \mathcal{L}}{\partial z_j^{(L)}} = \frac{\partial \mathcal{L}}{\partial a_j^{(L)}} \frac{\partial a_j^{(L)}}{\partial z_j^{(L)}} = \frac{\partial \mathcal{L}}{\partial a_j^{(L)}} f'(z_j^{(L)}) \tag{2.7} \]

(2) For a hidden layer \(l\), from equation \((2.1)\) we have:
\[ \begin{cases} z_1^{(l+1)} &= \sum_j W_{1j}^{(l+1)} a_j^{(l)} + b_1^{(l+1)} \\ z_2^{(l+1)} &= \sum_j W_{2j}^{(l+1)} a_j^{(l)} + b_2^{(l+1)} \\[0.3ex] & \vdots \\[0.3ex] z_k^{(l+1)} &= \sum_j W_{kj}^{(l+1)} a_j^{(l)} + b_k^{(l+1)} \end{cases} \]
Each \(a_j^{(l)}\) affects every component of \(\mathbf{z}^{(l+1)}\), so applying the chain rule requires summing over all of those components.


Hence the sum over \(k\) below when computing \(\delta_j^{(l)}\):
\[ \begin{align*} \delta_j^{(l)} = \frac{\partial \mathcal{L}}{\partial z_j^{(l)}} &= \left(\sum_k \frac{\partial \mathcal{L}}{\partial z_k^{(l+1)}}\frac{\partial z_k^{(l+1)}}{\partial a_j^{(l)}} \right) \frac{\partial a_j^{(l)}}{\partial z_j^{(l)}} \\[2ex] &= \left(\sum_k \frac{\partial \mathcal{L}}{\partial z_k^{(l+1)}}\frac{\partial \left(\sum\limits_i W_{ki}^{(l+1)}a_i^{(l)} + b_k^{(l+1)}\right)}{\partial a_j^{(l)}} \right) \frac{\partial a_j^{(l)}}{\partial z_j^{(l)}} \\[1ex] &= \left(\sum\limits_k \delta_k^{(l+1)} W_{kj}^{(l+1)}\right) f'(z_j^{(l)}) \tag{2.8} \end{align*} \]



Writing equations \((2.5) \sim (2.8)\) in matrix form gives the famous four equations of backpropagation:
\[ \begin{align*} & \boldsymbol{\delta} ^{(L)} = \frac{\partial \mathcal{L}}{\partial \,\mathbf{z}^{(L)}}= \nabla_{\mathbf{a}^{(L)}} \mathcal{L} \,\odot f'(\mathbf{z}^{(L)}) \\ & \boldsymbol{\delta}^{(l)} = \frac{\partial \mathcal{L}}{\partial \,\mathbf{z}^{(l)}} = ((\mathbf{W}^{(l+1)})^{\text{T}} \boldsymbol{\delta}^{(l+1)}) \odot f'(\mathbf{z}^{(l)}) \\[1ex] & \frac{\partial \mathcal{L}}{\partial \mathbf{W}^{(l)}} = \boldsymbol{\delta}^{(l)} (\mathbf{a}^{(l-1)})^\text{T} = \begin {bmatrix} \delta_1^{(l)} a_1^{(l-1)} & \delta_1^{(l)} a_2^{(l-1)} & \cdots & \delta_1^{(l)} a_k^{(l-1)} \\ \delta_2^{(l)} a_1^{(l-1)} & \delta_2^{(l)} a_2^{(l-1)} & \cdots & \delta_2^{(l)} a_k^{(l-1)} \\ \vdots & \vdots & \ddots \\ \delta_j^{(l)} a_1^{(l-1)} &\delta_j^{(l)} a_2^{(l-1)} & \cdots & \delta_j^{(l)} a_k^{(l-1)} \end {bmatrix}\\ & \frac{\partial \mathcal{L}}{\partial \mathbf{b}^{(l)}} = \boldsymbol{\delta}^{(l)} \end{align*} \]


The output-layer error \(\boldsymbol{\delta}^{(L)}\) can be taken directly from the loss-function + activation-function results derived above.




Backpropagation + Gradient Descent Workflow

(1) Forward pass: use the following equations to compute \(\mathbf{z}^{(l)}\) and \(\mathbf{a}^{(l)}\) layer by layer, up to the last layer.
\[ \begin{align*} &\mathbf{z}^{(l)} = \mathbf{W}^{(l)} \mathbf{a}^{(l-1)} + \mathbf{b}^{(l)} \\[0.5ex] &\mathbf{a}^{(l)} = f(\mathbf{z}^{(l)}) \\[0.5ex] &\mathbf{\hat{y}} = \mathbf{a}^{(L)} = f(\mathbf{z}^{(L)}) \\[0.5ex] &\mathcal{L} = \mathcal{L}(\mathbf{y}, \mathbf{\hat{y}}) \end{align*} \]


(2) Backward pass

(2.1) Compute the output-layer error: \(\boldsymbol{\delta}^{(L)} = \nabla_{\mathbf{a}^{(L)}} \mathcal{L} \odot f'(\mathbf{z}^{(L)})\)

(2.2) Propagate the error backward to the previous layer: \(\boldsymbol{\delta}^{(l)} = ((\mathbf{W}^{(l+1)})^{\text{T}} \boldsymbol{\delta}^{(l+1)}) \odot f'(\mathbf{z}^{(l)})\)

(2.3) Compute the gradients: \(\frac{\partial \mathcal{L}}{\partial \mathbf{W}^{(l)}} = \boldsymbol{\delta}^{(l)} (\mathbf{a}^{(l-1)})^\text{T}\) and \(\frac{\partial \mathcal{L}}{\partial \mathbf{b}^{(l)}} = \boldsymbol{\delta}^{(l)}\)


(3) Parameter update
\[ \begin{align*} \mathbf{W}^{(l)} &= \mathbf{W}^{(l)} - \alpha \frac{\partial \mathcal{L}}{\partial\, \mathbf{W}^{(l)}} \\[2ex] \mathbf{b}^{(l)} &=\,\, \mathbf{b}^{(l)} - \alpha \frac{\partial \mathcal{L}}{\partial\, \mathbf{b}^{(l)}} \end{align*} \]
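
Putting the whole procedure together, here is a compact, illustrative numpy sketch of one training step for a two-layer network with a sigmoid hidden layer and a softmax + cross-entropy output layer; all names, shapes, and hyperparameters below are assumptions for the example, not a reference implementation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, y, params, alpha=0.1):
    W1, b1, W2, b2 = params

    # (1) forward pass
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = softmax(z2)                            # \hat{y}

    # (2) backward pass
    delta2 = a2 - y                             # output error: softmax + cross-entropy gives a - y
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)    # (W^(l+1)^T delta^(l+1)) * f'(z^(l)), elementwise

    dW2, db2 = np.outer(delta2, a1), delta2     # dL/dW = delta a^T, dL/db = delta
    dW1, db1 = np.outer(delta1, x),  delta1

    # (3) gradient descent update (in place)
    W1 -= alpha * dW1; b1 -= alpha * db1
    W2 -= alpha * dW2; b2 -= alpha * db2
    return -np.sum(y * np.log(a2))              # cross-entropy loss, for monitoring

# toy usage on a single sample
rng = np.random.default_rng(0)
params = [rng.normal(size=(5, 3)), np.zeros(5),
          rng.normal(size=(2, 5)), np.zeros(2)]
x, y = rng.normal(size=3), np.array([1.0, 0.0])
for _ in range(100):
    loss = train_step(x, y, params)
print(loss)   # the loss should have decreased substantially after these updates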











