Paper Reading (13): Revisiting Multiple Instance Neural Networks (2016, mi-Net & MI-Net)
Introduction
Paper: https://arxiv.org/pdf/1610.02501.pdf
Main contributions and advantages:
1) Previous multiple instance neural networks focus on estimating instance labels; this paper instead learns bag representations.
2) Both training and prediction are very fast.
1 Multiple Instance Neural Networks
The paper's notation is as follows:
| Symbol | Meaning |
|---|---|
| $X = \{ X_1, X_2, \cdots, X_N \}$ | set of bags |
| $X_i = \{ x_{i1}, x_{i2}, \cdots, x_{im_i} \}$ | a bag |
| $x_{ij} \in \mathbb{R}^{d \times 1}$ | an instance |
| $N$ | number of bags |
| $m_i$ | bag size |
| $Y_i \in \{ 0, 1 \}$ | bag label |
| $y_{ij} \in \{ 0, 1 \}$ | instance label |
A bag label of $1$ denotes a positive bag and $0$ a negative bag, and bag and instance labels satisfy the standard MI assumption:
$$Y_i = \begin{cases} 0, & \forall y_{ij} = 0;\\ 1, & \sum_{j = 1}^{m_i} y_{ij} \geq 1. \end{cases} \tag{1*}$$
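The standard MI assumption in Eq. (1*) is easy to state in code. A minimal sketch (the function name is mine, not the paper's):

```python
import numpy as np

def bag_label(instance_labels):
    """Standard MI assumption (Eq. 1*): a bag is positive iff
    at least one of its instances is positive."""
    return int(np.sum(instance_labels) >= 1)
```

For example, `bag_label([0, 0, 1])` gives `1`, while `bag_label([0, 0, 0])` gives `0`.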
As noted in the introduction, multiple instance neural networks (MINNs) follow one of two strategies:
1) learn instance labels, i.e., place the instance probabilities of being positive as a hidden layer in the network $^\text{[1, 2, 3]}$;
2) (proposed in this paper) learn a bag representation and classify bags directly.
Consider passing a single bag $X_i$ through a MINN with $L$ layers, each equipped with an activation function $H^{\ell}(\cdot)$, where $\ell$ indexes the layer. Let $x_{ij}^{\ell}$ denote the output of instance $x_{ij}$ at the $\ell^{\text{th}}$ layer.
1.1 mi-Net: Instance-Space MIL Algorithm
The traditional MINN $^\text{[1, 2, 3]}$, here called mi-Net, is sketched in Fig. 1. It uses four fully connected layers with ReLU activations. The instance features from layer $L - 2$ are denoted $x_{ij}^{L - 2}$; the corresponding probability outputs are $p_{ij}^{L - 1}$, normalized to $[0, 1]$; and the bag-level probability output is denoted $P^L(X_i)$.
Because instances in MIL carry no labels, during training the instance labels are treated as latent variables, and an aggregation (pooling) method combines the instance output probabilities into the bag output probability.
mi-Net can be formalized as:
$$\begin{cases} x_{ij}^{\ell} = H^{\ell} (x_{ij}^{\ell - 1});\\ P_i^L = M^L (p_{ij \mid j = 1 \ldots m_i}^{L - 1}). \end{cases} \tag{1}$$
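As a rough NumPy illustration of Eq. (1) — the weights, layer widths, and single hidden layer here are hypothetical, not the paper's four-layer architecture — an instance-space forward pass scores each instance and then pools the instance probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights for illustration (not the paper's exact widths).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 1))

def mi_net_forward(X_i):
    """Instance-space mi-Net (Eq. 1): score each instance, then
    pool instance probabilities into a bag probability."""
    h = relu(X_i @ W1)            # x_ij^l = H^l(x_ij^{l-1})
    p = sigmoid(h @ W2).ravel()   # p_ij^{L-1}, each in (0, 1)
    return p.max()                # P_i^L = M^L(p_ij), max pooling
```

With max pooling, the bag probability is simply the most confident instance probability.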
1.2 MI-Net: A new Embedded-Space MIL Algorithm
Instead of relying on instance output probabilities, MI-Net learns the bag representation directly and classifies it, as in Fig. 2:
$$\begin{cases} x_{ij}^{\ell} = H^{\ell} (x_{ij}^{\ell - 1});\\ X_i^{\ell} = M^{\ell} (x_{ij \mid j = 1 \ldots m_i}^{\ell - 1}). \end{cases} \tag{2}$$
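The contrast with mi-Net can be sketched the same way (again with hypothetical weights and widths): here pooling happens over instance *features*, producing a bag representation that is classified directly, per Eq. (2):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights for illustration.
W1 = rng.normal(size=(8, 4))
w_clf = rng.normal(size=(4, 1))

def MI_net_forward(X_i):
    """Embedded-space MI-Net (Eq. 2): pool instance *features* into
    a bag representation, then classify the bag directly."""
    h = relu(X_i @ W1)       # instance features x_ij^l
    bag = h.max(axis=0)      # X_i^l = M^l(x_ij), max pooling over instances
    return sigmoid(bag @ w_clf).item()
```

The key design difference: mi-Net pools probabilities at the very end, while MI-Net pools features and lets the classifier operate on a bag-level embedding.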
1.3 MI-Net with Deep Supervision
Inspired by Deeply-Supervised Nets (DSN) $^\text{[4]}$, deep supervision is added to MI-Net, as in Fig. 3. It is formalized as:
$$\begin{cases} x_{ij}^{\ell} = H^{\ell} (x_{ij}^{\ell - 1});\\ X_i^{\ell, k} = M^{\ell} (x_{ij \mid j = 1 \ldots m_i}^{k}), \quad k \in \{ 1, 2, 3 \}. \end{cases} \tag{3}$$
where $k$ indicates that bag features are learned from all the different levels of instance features.
1.4 MI-Net with Residual Connections
It is formalized as:
$$\left\{\begin{array}{l} x_{ij}^{\ell} = H^{\ell}\left(x_{ij}^{\ell - 1}\right); \\ X_i^{1} = M^{\ell}\left(x_{ij \mid j = 1 \ldots m_i}^{1}\right); \\ X_i^{\ell} = M^{\ell}\left(x_{ij \mid j = 1 \ldots m_i}^{\ell}\right) + X_i^{\ell - 1}, \quad \ell > 1. \end{array}\right. \tag{4}$$
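A minimal sketch of Eq. (4), assuming equal-width layers so the residual sum is well defined (the layer count, widths, and weights are mine, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical equal-width layers so pooled features can be summed.
W = [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)]

def mi_net_res_forward(X_i):
    """MI-Net with residual connections (Eq. 4): each layer's pooled
    bag feature is added to the previous layer's bag feature."""
    h, bag = X_i, None
    for W_ell in W:
        h = relu(h @ W_ell)       # x_ij^l = H^l(x_ij^{l-1})
        pooled = h.max(axis=0)    # M^l over the bag's instances
        bag = pooled if bag is None else pooled + bag  # + X_i^{l-1}
    return bag
```

As in standard residual networks, the shortcut lets later layers refine, rather than replace, earlier bag features.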
1.5 MIL Pooling Methods
The paper uses three pooling methods: max, mean, and log-sum-exp (LSE) $^\text{[5]}$. LSE is a smooth interpolation between max and mean pooling:
$$\left\{\begin{array}{ll} \max : & M^{\ell}\left(x_{ij \mid j = 1 \ldots m_i}^{\ell - 1}\right) = \max_j x_{ij}^{\ell - 1}; \\ \operatorname{mean} : & M^{\ell}\left(x_{ij \mid j = 1 \ldots m_i}^{\ell - 1}\right) = \frac{1}{m_i} \sum_{j = 1}^{m_i} x_{ij}^{\ell - 1}; \\ \mathrm{LSE} : & M^{\ell}\left(x_{ij \mid j = 1 \ldots m_i}^{\ell - 1}\right) = r^{-1} \log \left[\frac{1}{m_i} \sum_{j = 1}^{m_i} \exp\left(r \cdot x_{ij}^{\ell - 1}\right)\right]. \end{array}\right. \tag{5}$$
where $r$ is a hyperparameter: the larger $r$ is, the closer LSE is to max pooling; the smaller, the closer to mean pooling.
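The three pooling operators in Eq. (5) are a few lines of NumPy each; a sketch, assuming the bag's instance features are stacked as rows of `x` (the LSE uses the usual max-shift trick for numerical stability):

```python
import numpy as np

def max_pool(x):
    """Max pooling over instances; x has shape (m_i, d)."""
    return x.max(axis=0)

def mean_pool(x):
    """Mean pooling over instances."""
    return x.mean(axis=0)

def lse_pool(x, r=1.0):
    """Log-sum-exp pooling (Eq. 5), numerically stabilized by
    subtracting the per-column max before exponentiating."""
    z = r * x
    zmax = z.max(axis=0)
    return (zmax + np.log(np.exp(z - zmax).mean(axis=0))) / r
```

For any `r > 0`, `lse_pool` lies between `mean_pool` and `max_pool`, approaching max as `r` grows.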
1.6 Training Loss
The training loss is the sum over all bags of a per-bag loss, computed from each bag's score $S_i$ as:
$$\text{Loss} (S_i, Y_i) = - \{ (1 - Y_i) \log (1 - S_i) + Y_i \log S_i \}. \tag{6}$$
The network is trained end-to-end with standard back-propagation and stochastic gradient descent.
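Eq. (6) is the standard binary cross-entropy; a sketch, with a small epsilon clip added (my addition, not in the paper) to guard against `log(0)`:

```python
import numpy as np

def bag_loss(S, Y):
    """Binary cross-entropy for one bag (Eq. 6):
    S is the bag score in (0, 1), Y the bag label in {0, 1}."""
    eps = 1e-12                    # guard against log(0)
    S = np.clip(S, eps, 1 - eps)
    return -((1 - Y) * np.log(1 - S) + Y * np.log(S))
```

The total training loss is then `sum(bag_loss(S_i, Y_i) for each bag i)`.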
[1]: J. Ramon and L. De Raedt, “Multi instance neural networks,” in Proceedings of the ICML-2000 workshop on attribute-value and relational learning, 2000, pp. 53–60.
[2]: Z.-H. Zhou and M.-L. Zhang, “Neural networks for multi-instance learning,” in Proceedings of the International Conference on Intelligent Information Technology, Beijing, China, 2002, pp. 455–459.
[3]: J. Wu, Y. Yu, C. Huang, and K. Yu, “Deep multiple instance learning for image classification and auto-annotation,” in CVPR, 2015, pp. 3460–3469.
[4]: C. Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, “Deeply-Supervised Nets,” in AISTATS, 2015, pp. 562–570.
[5]: S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.