Bottleneck Layers and Bottleneck Features in Deep Learning
My understanding: a bottleneck is the use of 1x1 convolutions to change the number of channels drastically; in residual networks this reduces the number of trainable parameters, which makes it feasible to stack more layers.
In deep learning one frequently encounters the terms Bottleneck Layer, Bottleneck Features, and Bottleneck Block. The idea itself is easy to grasp: the input and output dimensions differ greatly, like the neck of a bottle, narrow at the top and wide at the bottom (or the other way around). Yet the usual sources never really trace where the term comes from. This article follows the bottleneck back to its origins, to build a complete picture of what it means.
First, consider how a survey on the efficient processing of deep neural networks describes the bottleneck building block: "In order to reduce the number of weights, 1x1 filters are applied as a 'bottleneck' to reduce the number of channels for each filter." The 1x1 filter itself originates in "Network In Network"; its role is to change the number of output channels, i.e. dimensionality elevation or reduction, as the sketch below illustrates.
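As a concrete illustration (a minimal PyTorch sketch of my own, not code from any of the papers cited here), a 1x1 convolution leaves the spatial size untouched and only projects the channel dimension:

```python
import torch
import torch.nn as nn

# A 1x1 convolution mixes information across channels only; spatially it is
# a per-pixel linear projection, so H and W are unchanged.
x = torch.randn(1, 256, 56, 56)              # (batch, channels, height, width)

reduce = nn.Conv2d(256, 64, kernel_size=1)   # dimensionality reduction: 256 -> 64
expand = nn.Conv2d(64, 256, kernel_size=1)   # elevation: 64 -> 256

print(reduce(x).shape)           # torch.Size([1, 64, 56, 56])
print(expand(reduce(x)).shape)   # torch.Size([1, 256, 56, 56])
```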
Next, look at how the ResNet paper ("Deep Residual Learning for Image Recognition") describes its bottleneck building block: "The three layers are 1x1, 3x3, and 1x1 convolutions, where the 1x1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3x3 layer a bottleneck with smaller input/output dimensions." The corresponding figure (Fig. 5 of the paper) is shown below:
[Figure: left, a basic residual block with two 3x3 convolutions at 64 channels; right, a bottleneck block stacking 1x1 (256 to 64), 3x3 (64), and 1x1 (64 to 256) convolutions]
As the right-hand block shows, the final 1x1 filters raise the number of channels again (from 64 back to 256), so the dimensions inside the block and at its input/output differ greatly; that shape is exactly the bottleneck. A sketch of this block, with a parameter comparison, follows.
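Below is a minimal sketch of such a block (my own illustrative PyTorch code, not the reference implementation; the 256/64 sizes follow the example above):

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Sketch of a ResNet-style bottleneck block: 1x1 reduce -> 3x3 -> 1x1 expand."""
    def __init__(self, channels=256, bottleneck=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False),   # 256 -> 64
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3,
                      padding=1, bias=False),                             # 3x3 at 64 channels
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1, bias=False),   # 64 -> 256 (restore)
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut works because the last 1x1 restores the channel count.
        return self.relu(self.block(x) + x)

x = torch.randn(1, 256, 56, 56)
print(Bottleneck()(x).shape)  # torch.Size([1, 256, 56, 56])

# Weight count, convolutions only:
#   two 3x3 convs at 256 channels: 2 * 3*3*256*256 ~= 1.18M parameters
#   bottleneck stack: 256*64 + 3*3*64*64 + 64*256 ~= 0.07M parameters
```

Counting only the convolution weights, the bottleneck stack needs roughly 17x fewer parameters than two plain 3x3 layers at 256 channels, which is why this design makes very deep networks affordable.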
Finally, the paper "Improved Bottleneck Features Using Pretrained Deep Neural Networks" gives a concise description of bottleneck features: "Bottleneck features are generated from a multi-layer perceptron in which one of the internal layers has a small number of hidden units, relative to the size of the other layers."
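In that setting the bottleneck is simply a narrow hidden layer whose activations are extracted as features once training is done. A minimal sketch (the layer sizes here are hypothetical, chosen only to show the shape, and do not come from that paper):

```python
import torch.nn as nn

# Hypothetical MLP: the narrow 42-unit layer is the "bottleneck", and its
# activations (not the final outputs) are taken as bottleneck features.
mlp = nn.Sequential(
    nn.Linear(440, 1024), nn.Sigmoid(),
    nn.Linear(1024, 42),  nn.Sigmoid(),   # bottleneck layer
    nn.Linear(42, 1024),  nn.Sigmoid(),
    nn.Linear(1024, 3000),                # training targets, e.g. senone classes
)
```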
Translated from: https://blog.csdn.net/u011501388/article/details/80389164