
Sigmoid and ReLU derivative code

Tags: deep learning

Appendix: the sigmoid and ReLU backward-derivative code from Andrew Ng's Deep Learning course, kept here for easy reference in later work.

The initial gradient dA passed into back-propagation (dAL, the gradient at the output layer) is computed as follows:

dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
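This line is the element-wise derivative of the cross-entropy loss used in the course. A short derivation, assuming the per-example loss with a = AL and y the label:

L(a, y) = -\big(y \log a + (1 - y)\log(1 - a)\big)

\frac{\partial L}{\partial a} = -\left(\frac{y}{a} - \frac{1 - y}{1 - a}\right)

Applied element-wise to Y and AL, this is exactly the dAL line above; the 1/m averaging over the m examples appears later, when dW and db are computed in the linear backward step.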


cache: stores A_pre, W_l, b_l, where A_pre is the input to layer l. (For the two activation-backward functions below, the cache that is actually passed in is Z, the pre-activation of layer l, as their docstrings note.)
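As a rough sketch of that caching convention (illustrative only; the course notebook splits this across separate linear and activation helpers, and the _sketch name here is made up):

import numpy as np

def linear_activation_forward_sketch(A_prev, W, b, activation):
    """Illustrative only: shows what ends up in the two caches."""
    Z = W @ A_prev + b                 # pre-activation of layer l
    linear_cache = (A_prev, W, b)      # consumed later by the linear backward step
    if activation == "sigmoid":
        A = 1 / (1 + np.exp(-Z))
    else:                              # "relu"
        A = np.maximum(0, Z)
    activation_cache = Z               # consumed by sigmoid_backward / relu_backward
    return A, (linear_cache, activation_cache)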

  1. Sigmoid derivative

import numpy as np

def sigmoid_backward(dA, cache):
    """
    Implement the backward propagation for a single SIGMOID unit.
    
    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    s = 1 / (1 + np.exp(-Z))    # recompute sigmoid(Z) from the cached Z
    dZ = dA * s * (1 - s)       # chain rule: dZ = dA * g'(Z), with sigmoid'(Z) = s * (1 - s)
    # debug output used while checking the shapes:
    # print("sigmoid", dA.shape, s.shape, dZ.shape)
    # print(dA, s, dZ)
    assert (dZ.shape == Z.shape)

    return dZ
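The multiplication by dA (the step the original note was unsure about) is just the chain rule. Writing s = sigmoid(Z) = A for one element:

\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad \sigma'(z) = \sigma(z)\,\big(1 - \sigma(z)\big)

dZ = \frac{\partial L}{\partial Z} = \frac{\partial L}{\partial A} \cdot \frac{\partial A}{\partial Z} = dA \cdot \sigma'(Z) = dA \cdot s \cdot (1 - s)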
  2. ReLU derivative

def relu_backward(dA, cache):
    """
    Implement the backward propagation for a single RELU unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    dZ = np.array(dA, copy=True)   # copy dA so the incoming gradient is not modified in place
    # When z <= 0 the ReLU derivative is 0, so set dz to 0 there as well.
    dZ[Z <= 0] = 0
    assert (dZ.shape == Z.shape)

    return dZ
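A minimal smoke test of the two functions. The shapes, the seed, and the helper lambda sig are made up for illustration, not taken from the course notebook; it calls both functions and checks one entry of the sigmoid gradient against a numerical derivative:

import numpy as np

np.random.seed(1)
Z = np.random.randn(3, 4)    # stand-in for the cached pre-activation of a layer
dA = np.random.randn(3, 4)   # stand-in for the gradient arriving from the layer above

dZ_sig = sigmoid_backward(dA, Z)
dZ_relu = relu_backward(dA, Z)
print(dZ_sig.shape, dZ_relu.shape)        # (3, 4) (3, 4)

# numerical check of one entry of the sigmoid gradient
sig = lambda z: 1 / (1 + np.exp(-z))      # hypothetical helper, not from the notebook
eps = 1e-7
numeric = dA[0, 0] * (sig(Z[0, 0] + eps) - sig(Z[0, 0] - eps)) / (2 * eps)
print(np.isclose(dZ_sig[0, 0], numeric))  # True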