
Stanford Deep Learning Course CS231n Assignment 2 Notes, Part 6: Convolutional Networks

Without further ado, here is the code:

Convolution: Naive forward pass

def conv_forward_naive(x, w, b, conv_param):
    """
    A naive implementation of the forward pass for a convolutional layer.

    The input consists of N data points, each with C channels, height H and
    width W. We convolve each input with F different filters, where each filter
    spans all C channels and has height HH and width WW.

    Input:
    - x: Input data of shape (N, C, H, W)
    - w: Filter weights of shape (F, C, HH, WW)
    - b: Biases, of shape (F,)
    - conv_param: A dictionary with the following keys:
      - 'stride': The number of pixels between adjacent receptive fields in the
        horizontal and vertical directions.
      - 'pad': The number of pixels that will be used to zero-pad the input. 
        

    During padding, 'pad' zeros should be placed symmetrically (i.e. equally on both sides)
    along the height and width axes of the input. Be careful not to modify the original
    input x directly.

    Returns a tuple of:
    - out: Output data, of shape (N, F, H', W') where H' and W' are given by
      H' = 1 + (H + 2 * pad - HH) / stride
      W' = 1 + (W + 2 * pad - WW) / stride
    - cache: (x, w, b, conv_param)
    """
    out = None
    ###########################################################################
    # TODO: Implement the convolutional forward pass.                         #
    # Hint: you can use the function np.pad for padding.                      #
    ###########################################################################
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = int(1 + (H + 2 * pad - HH) / stride)
    W_out = int(1 + (W + 2 * pad - WW) / stride)
    out = np.zeros((N, F, H_out, W_out))  # pre-allocate the output
    x_pad = np.pad(x, ((0,), (0,), (pad,), (pad,)), mode='constant', constant_values=0)
    for i in range(H_out):
        for j in range(W_out):
            # compute each output position one at a time
            x_pad_mask = x_pad[:, :, stride * i:HH + stride * i, stride * j:stride * j + WW]
            for k in range(F):
                out[:, k, i, j] = np.sum(x_pad_mask * w[k, :, :, :], axis=(1, 2, 3))
    out += b[None, :, None, None]  # add the bias; indexing with None inserts the axes needed for broadcasting
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################
    cache = (x, w, b, conv_param)
    return out, cache
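
A quick sanity check for the forward pass is to run it on a small random input and confirm that the output shape matches H' = 1 + (H + 2 * pad - HH) / stride. This is just a minimal sketch, not part of the assignment; the shapes and conv_param values below are arbitrary:

import numpy as np

x = np.random.randn(2, 3, 7, 7)        # N=2 images, C=3 channels, 7x7 pixels
w = np.random.randn(4, 3, 3, 3)        # F=4 filters of size 3x3
b = np.random.randn(4)
conv_param = {'stride': 2, 'pad': 1}

out, _ = conv_forward_naive(x, w, b, conv_param)
# H' = 1 + (7 + 2*1 - 3) / 2 = 4, and likewise W' = 4
print(out.shape)                       # (2, 4, 4, 4)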

Convolution: Naive backward pass

def conv_backward_naive(dout, cache):
    """
    A naive implementation of the backward pass for a convolutional layer.

    Inputs:
    - dout: Upstream derivatives.
    - cache: A tuple of (x, w, b, conv_param) as in conv_forward_naive

    Returns a tuple of:
    - dx: Gradient with respect to x
    - dw: Gradient with respect to w
    - db: Gradient with respect to b
    """
    dx, dw, db = None, None, None
    ###########################################################################
    # TODO: Implement the convolutional backward pass.                        #
    ###########################################################################
    x, w, b, conv_param = cache
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = int(1 + (H + 2 * pad - HH) / stride)
    W_out = int(1 + (W + 2 * pad - WW) / stride)

    x_pad = np.pad(x, ((0,), (0,), (pad,), (pad,)), mode='constant', constant_values=0)
    dx = np.zeros_like(x)
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)

    db = np.sum(dout, axis=(0,2,3))
    for i in range(H_out):
      for j in range(W_out):
          x_pad_masked = x_pad[:, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
          # For each output position, work out which entries of x and w participate and accumulate their gradients
          for k in range(F):
              dw[k ,: ,: ,:] += np.sum(x_pad_masked * (dout[:, k, i, j])[:, None, None, None], axis=0)
          for n in range(N):
              dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += np.sum((w[:, :, :, :] * 
                                                 (dout[n, :, i, j])[:,None ,None, None]), axis=0)
    dx = dx_pad[:,:,pad:-pad,pad:-pad]
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################
    return dx, dw, db
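
The notebook checks these gradients numerically; a self-contained centered-difference check looks roughly like the sketch below (num_grad and the shapes here are illustrative, not the cs231n gradient_check helpers):

import numpy as np

def num_grad(f, x, df, h=1e-5):
    # Centered-difference estimate of d(sum(f(x) * df)) / dx.
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fp = f(x).copy()
        x[ix] = old - h
        fm = f(x).copy()
        x[ix] = old
        grad[ix] = np.sum((fp - fm) * df) / (2 * h)
        it.iternext()
    return grad

x = np.random.randn(2, 3, 7, 7)
w = np.random.randn(4, 3, 3, 3)
b = np.random.randn(4)
conv_param = {'stride': 1, 'pad': 1}
dout = np.random.randn(2, 4, 7, 7)

out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
dx_num = num_grad(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
print(np.max(np.abs(dx - dx_num)))     # should be tiny, roughly 1e-8 or smaller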

Test results

Testing conv_backward_naive function
dx error:  1.159803161159293e-08
dw error:  2.247109434939654e-10
db error:  3.37264006649648e-11

Max-pooling: Naive forward and backward passes

def max_pool_forward_naive(x, pool_param):
    """
    A naive implementation of the forward pass for a max-pooling layer.

    Inputs:
    - x: Input data, of shape (N, C, H, W)
    - pool_param: dictionary with the following keys:
      - 'pool_height': The height of each pooling region
      - 'pool_width': The width of each pooling region
      - 'stride': The distance between adjacent pooling regions

    No padding is necessary here; the output size is given in the Returns section below.

    Returns a tuple of:
    - out: Output data, of shape (N, C, H', W') where H' and W' are given by
      H' = 1 + (H - pool_height) / stride
      W' = 1 + (W - pool_width) / stride
    - cache: (x, pool_param)
    """
    out = None
    ###########################################################################
    # TODO: Implement the max-pooling forward pass                            #
    ###########################################################################
    HH, WW, stride = pool_param['pool_height'], pool_param['pool_width'], pool_param['stride']
    N, C, H, W = x.shape
    H_out = int(1 + (H - HH) / stride)
    W_out = int(1 + (W - WW) / stride)

    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            x_mask = x[:, :, stride * i:stride * i + HH, stride * j:stride * j + WW]
            out[:, :, i, j] = np.max(x_mask, axis=(2, 3))
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################
    cache = (x, pool_param)
    return out, cache
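
To make the pooling behaviour concrete, here is a tiny hand-checkable example (a sketch using the function above; the input values are arbitrary):

import numpy as np

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)   # one image, one channel, 4x4
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
print(out[0, 0])
# [[ 5.  7.]
#  [13. 15.]]  -- the maximum of each non-overlapping 2x2 window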


def max_pool_backward_naive(dout, cache):
    """
    A naive implementation of the backward pass for a max-pooling layer.

    Inputs:
    - dout: Upstream derivatives
    - cache: A tuple of (x, pool_param) as in the forward pass.

    Returns:
    - dx: Gradient with respect to x
    """
    dx = None
    ###########################################################################
    # TODO: Implement the max-pooling backward pass                           #
    ###########################################################################
    x, pool_param = cache
    N, C, H, W = x.shape
    HH, WW, stride = pool_param['pool_height'], pool_param['pool_width'], pool_param['stride']
    H_out = int((H-HH)/stride+1)
    W_out = int((W-WW)/stride+1)

    dx = np.zeros_like(x)
    for i in range(H_out):
        for j in range(W_out):
            x_masked = x[:,:,i*stride : i*stride+HH, j*stride : j*stride+WW]
            max_x_masked = np.max(x_masked,axis=(2,3))
            temp_binary_mask = (x_masked == (max_x_masked)[:,:,None,None])
            dx[:,:,i*stride : i*stride+HH, j*stride : j*stride+WW] += temp_binary_mask * (dout[:,:,i,j])[:,:,None,None]
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################
    return dx
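
The binary mask above routes each upstream gradient entirely to the position(s) that achieved the window maximum; every other position receives zero. A minimal illustration, again just a sketch built on the two functions above:

import numpy as np

x = np.array([[[[1., 3.],
                [2., 0.]]]])                          # a single 2x2 window
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
_, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(np.ones((1, 1, 1, 1)), cache)
print(dx[0, 0])
# [[0. 1.]
#  [0. 0.]]  -- all of the gradient flows to the max element (the 3)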

Three-layer ConvNet

class ThreeLayerConvNet(object):
    """
    A three-layer convolutional network with the following architecture:

    conv - relu - 2x2 max pool - affine - relu - affine - softmax

    The network operates on minibatches of data that have shape (N, C, H, W)
    consisting of N images, each with height H and width W and with C input
    channels.
    """

    def __init__(self, input_dim=(3, 32, 32), num_filters=32, filter_size=7,
                 hidden_dim=100, num_classes=10, weight_scale=1e-3, reg=0.0,
                 dtype=np.float32):
        """
        Initialize a new network.

        Inputs:
        - input_dim: Tuple (C, H, W) giving size of input data
        - num_filters: Number of filters to use in the convolutional layer
        - filter_size: Width/height of filters to use in the convolutional layer
        - hidden_dim: Number of units to use in the fully-connected hidden layer
        - num_classes: Number of scores to produce from the final affine layer.
        - weight_scale: Scalar giving standard deviation for random initialization
          of weights.
        - reg: Scalar giving L2 regularization strength
        - dtype: numpy datatype to use for computation.
        """
        self.params = {}
        self.reg = reg
        self.dtype = dtype

        ############################################################################
        # TODO: Initialize weights and biases for the three-layer convolutional    #
        # network. Weights should be initialized from a Gaussian centered at 0.0   #
        # with standard deviation equal to weight_scale; biases should be          #
        # initialized to zero. All weights and biases should be stored in the      #
        #  dictionary self.params. Store weights and biases for the convolutional  #
        # layer using the keys 'W1' and 'b1'; use keys 'W2' and 'b2' for the       #
        # weights and biases of the hidden affine layer, and keys 'W3' and 'b3'    #
        # for the weights and biases of the output affine layer.                   #
        #                                                                          #
        # IMPORTANT: For this assignment, you can assume that the padding          #
        # and stride of the first convolutional layer are chosen so that           #
        # **the width and height of the input are preserved**. Take a look at      #
        # the start of the loss() function to see how that happens.                #                           
        ############################################################################
        C, H, W = input_dim
        self.params['W1'] = np.random.normal(0, weight_scale, (num_filters, C, filter_size, filter_size))
        self.params['b1'] = np.zeros((num_filters))
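        # After the conv layer (spatial size preserved, see loss() below) and the 2x2
        # stride-2 max pool, each image becomes num_filters x (H/2) x (W/2) features,
        # which the first affine layer flattens; hence the shape of W2 below.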
        self.params['W2'] = np.random.normal(0, weight_scale, (int((H / 2) * (W / 2) * num_filters), hidden_dim))
        self.params['b2'] = np.zeros(hidden_dim)
        self.params['W3'] = np.random.normal(0, weight_scale, (hidden_dim, num_classes))
        self.params['b3'] = np.zeros(num_classes)
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        for k, v in self.params.items():
            self.params[k] = v.astype(dtype)


    def loss(self, X, y=None):
        """
        Evaluate loss and gradient for the three-layer convolutional network.

        Input / output: Same API as TwoLayerNet in fc_net.py.
        """
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        W3, b3 = self.params['W3'], self.params['b3']

        # pass conv_param to the forward pass for the convolutional layer
        # Padding and stride chosen to preserve the input spatial size
        filter_size = W1.shape[2]
        conv_param = {'stride': 1, 'pad': (filter_size - 1) // 2}

        # pass pool_param to the forward pass for the max-pooling layer
        pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

        scores = None
        ############################################################################
        # TODO: Implement the forward pass for the three-layer convolutional net,  #
        # computing the class scores for X and storing them in the scores          #
        # variable.                                                                #
        #                                                                          #
        # Remember you can use the functions defined in cs231n/fast_layers.py and  #
        # cs231n/layer_utils.py in your implementation (already imported).         #
        ############################################################################
        out_conv, cache_conv = conv_relu_pool_forward(X, W1, b1, conv_param, pool_param)
        out_fc1, cache_fc1 = affine_relu_forward(out_conv, W2, b2)
        scores, cache_fc2 = affine_forward(out_fc1, W3, b3)
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        if y is None:
            return scores

        loss, grads = 0, {}
        ############################################################################
        # TODO: Implement the backward pass for the three-layer convolutional net, #
        # storing the loss and gradients in the loss and grads variables. Compute  #
        # data loss using softmax, and make sure that grads[k] holds the gradients #
        # for self.params[k]. Don't forget to add L2 regularization!               #
        #                                                                          #
        # NOTE: To ensure that your implementation matches ours and you pass the   #
        # automated tests, make sure that your L2 regularization includes a factor #
        # of 0.5 to simplify the expression for the gradient.                      #
        ############################################################################
        loss, dout = softmax_loss(scores, y)
        loss += 0.5 * self.reg * (np.sum(W1**2) + np.sum(W2**2) + np.sum(W3**2))

        dx3, dw3, db3 = affine_backward(dout, cache_fc2)
        grads['W3'] = dw3 + self.reg * W3
        grads['b3'] = db3
        dx2, dw2, db2 = affine_relu_backward(dx3, cache_fc1)
        grads['W2'] = dw2 + self.reg * W2
        grads['b2'] = db2
        dx1, dw1, db1 = conv_relu_pool_backward(dx2, cache_conv)
        grads['W1'] = dw1 + self.reg * W1
        grads['b1'] = db1
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        return loss, grads
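
A common sanity check after writing the model is to confirm that the initial softmax loss of an untrained network with reg=0 is close to log(num_classes), i.e. about 2.303 for 10 classes. A minimal sketch, assuming the class above and the layer utilities it calls are importable:

import numpy as np

model = ThreeLayerConvNet(reg=0.0)
X = np.random.randn(10, 3, 32, 32)
y = np.random.randint(10, size=10)

loss, _ = model.loss(X, y)
print(loss)          # should be close to log(10) ≈ 2.303 with no regularization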
