
A Summary of PyTorch Loss Functions

1 nn.L1Loss

  torch.nn.L1Loss(reduction='mean')

  This is MAE (mean absolute error). The formula is

    $\ell(x, y)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=\left|x_{n}-y_{n}\right|$

    $\ell(x, y)=\left\{\begin{array}{ll}\operatorname{mean}(L), & \text { if reduction }=\text { 'mean'; } \\\operatorname{sum}(L), & \text { if reduction }=\text { 'sum' }\end{array}\right.$

  Example: element-wise computation

import torch
from torch import nn

input = torch.arange(1, 7.).view(2, 3)
target = torch.arange(6.).view(2, 3)  # float target: L1Loss expects input and target of the same dtype
print(input)
print(target)
"""
tensor([[1., 2., 3.],
        [4., 5., 6.]])
tensor([[0., 1., 2.],
        [3., 4., 5.]])
"""
loss = nn.L1Loss(reduction='sum')
output = loss(input, target)
print(output)
"""
tensor(6.)
"""
loss = nn.L1Loss(reduction='mean')
output = loss(input, target)
print(output)
"""
tensor(1.)
"""

2 nn.MSELoss

    torch.nn.MSELoss(reduction='mean')

  As its name says, mean squared error (the squared L2 distance between input and target). The formula is

  $\ell(x, y)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=\left(x_{n}-y_{n}\right)^{2}$

  $\ell(x, y)=\left\{\begin{array}{ll}\operatorname{mean}(L), & \text { if reduction }=\text { 'mean'; } \\\operatorname{sum}(L), & \text { if reduction }=\text { 'sum' }\end{array}\right.$

  There are two modes, mean and sum, selected via the reduction argument.

  Example: element-wise computation

loss = nn.MSELoss(reduction="mean")
output = loss(input, target)
print(output)
"""
tensor(1.)
"""
loss = nn.MSELoss(reduction="sum")
output = loss(input, target)
print(output)
"""
tensor(6.)
"""

  From the above experiment we can see that

    $l_{n}=\left(x_{n}-y_{n}\right)^{2}$

  is computed element-wise.
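
  Besides 'mean' and 'sum', reduction='none' keeps the per-element losses, which makes this explicit. A minimal sketch reusing the input and target defined above (every element differs by exactly 1):

loss = nn.MSELoss(reduction='none')  # no reduction: one loss value per element
output = loss(input, target)
print(output)
"""
tensor([[1., 1., 1.],
        [1., 1., 1.]])
"""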

3 nn.SmoothL1Loss

    torch.nn.SmoothL1Loss(reduction='mean', beta=1.0)

  A lightly smoothed version of the L1 loss; compared with MSELoss, it is less sensitive to outliers.

    $\ell(x, y)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{T}$

    $l_{n}=\left\{\begin{array}{ll}0.5\left(x_{n}-y_{n}\right)^{2} / \text { beta }, & \text { if }\left|x_{n}-y_{n}\right|<\text { beta } \\\left|x_{n}-y_{n}\right|-0.5 * \text { beta }, & \text { otherwise }\end{array}\right.$

  It is used in Fast R-CNN to avoid exploding gradients.

  Example: element-wise computation

loss = nn.MSELoss(reduction="sum")
output = loss(input, target)
print(output)
"""
tensor(6.)
"""
loss = nn.SmoothL1Loss(reduction="mean")
output = loss(input, target)
print(output)
"""
tensor(0.5000)
"""
loss = nn.SmoothL1Loss(reduction="mean",beta = 3)
output = loss(input, target)
print(output)
"""
tensor(0.1667)
"""

4 nn.BCELoss and nn.BCEWithLogitsLoss

    torch.nn.BCELoss(weight=None, reduction='mean')

  Binary Cross Entropy. The formula is:

    $\ell(x, y)=\left\{\begin{array}{ll}\operatorname{mean}(L), & \text { if reduction }=\text { 'mean'; } \\\operatorname{sum}(L), & \text { if reduction }=\text { 'sum' }\end{array}\right.$

    $\ell(x, y)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=-w_{n}\left[y_{n} \cdot \log x_{n}+\left(1-y_{n}\right) \cdot \log \left(1-x_{n}\right)\right]$

  Binary cross entropy is the two-class simplification of the general cross-entropy formula; it can also be used for multi-label tasks where the classes are not mutually exclusive.

  BCELoss requires you to apply sigmoid to the input manually first; then, for each position, the loss is $-\log(x)$ if the label is 1 and $-\log(1-x)$ otherwise, and finally the mean is taken.

  BCEWithLogitsLoss applies the sigmoid internally, so no manual sigmoid is needed; everything else is exactly the same.

  Example: element-wise computation.

import numpy as np

target = torch.tensor([[1,0,1],[0,1,1]], dtype=torch.float32)
raw_output = torch.randn(2, 3, dtype=torch.float32)
output = torch.sigmoid(raw_output)
print(output)

result = np.zeros((2,3))
for ix in range(2):
    for iy in range(3):
        if(target[ix, iy]==1): 
            result[ix, iy] += -np.log(output[ix, iy])
        elif(target[ix, iy]==0): 
            result[ix, iy] += -np.log(1-output[ix, iy])

print(result)
print(np.mean(result))

loss_fn = torch.nn.BCELoss(reduction='none')
print(loss_fn(output, target))
loss_fn = torch.nn.BCELoss(reduction='mean')
print(loss_fn(output, target))
loss_fn = torch.nn.BCEWithLogitsLoss(reduction='sum')
print(loss_fn(raw_output, target))
"""
tensor([[0.5316, 0.6816, 0.4768],
        [0.6485, 0.3037, 0.5490]])

[[0.63186073 1.14431179 0.74067789]
 [1.04543173 1.19187558 0.59973639]]

0.892315685749054

tensor([[0.6319, 1.1443, 0.7407],
        [1.0454, 1.1919, 0.5997]])

tensor(0.8923)

tensor(5.3539)
"""
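
  The last value also confirms the relation between the two modules: 5.3539 is just 6 × 0.8923. Below is a minimal sketch checking BCEWithLogitsLoss on the raw logits against BCELoss on the sigmoid outputs directly (reusing raw_output and target from above):

# BCEWithLogitsLoss applies the sigmoid internally, in a numerically stable way
with_logits = torch.nn.BCEWithLogitsLoss(reduction='mean')
plain = torch.nn.BCELoss(reduction='mean')
print(torch.allclose(with_logits(raw_output, target),
                     plain(torch.sigmoid(raw_output), target)))
"""
True
"""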

5 nn.CrossEntropyLoss

     torch.nn.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)

  The classic loss. The formula is:

    $\text { weight }[\text { class }]\left(-\log \left(\frac{\exp (x[\text { class }])}{\sum\limits_{j} \exp (x[j])}\right)\right)=\text { weight }[\text { class }]\left(-x[\text { class }]+\log \left(\sum\limits_{j} \exp (x[j])\right)\right)$

  It is equivalent to first mapping the outputs through softmax so that every value lies in $[0,1]$ and they sum to $1$.

  We want the loss for the correct class to be as small as possible, so we take $-\log(\cdot)$ of $\left(\frac {\exp (x[\text {class}])}{\sum\limits _{j} \exp (x[j])}\right)$, which maps $[0,1]$ to $[0,+\infty)$; the larger the probability share of the correct class, the smaller the overall loss.

  In torch, CrossEntropyLoss(x) is equivalent to NLLLoss(LogSoftmax(x)).

  It expects raw, unnormalized scores as input; the input shapes are the same as for NLLLoss: $(N, C)$ and $(N)$.

  Example: computed per sample

target = torch.tensor([1,0,3])
output = torch.randn(3,5)
print(output)
"""
tensor([[-2.5728, -0.4581, -0.2017,  1.8813,  0.4544],
        [-0.7278,  0.6300,  0.6510, -1.7570,  1.1788],
        [-0.4660,  0.0410,  0.6876,  0.8966,  0.1446]])
"""
loss_fn = torch.nn.CrossEntropyLoss(reduction='mean')
loss = loss_fn(output, target)
print(loss)
"""
tensor(2.1940)
"""
loss_fn = torch.nn.CrossEntropyLoss(reduction='sum')
loss = loss_fn(output, target)
print(loss)
"""
tensor(6.5821)
"""

   Example: manual implementation

target = torch.tensor([1,0,3])
output = torch.randn(3,5)
print(output)
"""
tensor([[-0.1168,  1.5417,  1.1748, -1.1856, -0.1233],
        [ 0.2074, -0.7376, -0.8934,  0.0899,  0.5337],
        [-0.5323, -0.2945, -0.1710,  1.5925,  1.3654]])
"""
result = np.array([0.0, 0.0, 0.0])
for ix in range(3):
    log_sum = 0.0
    for iy in range(5):
        if(iy==target[ix]): 
            result[ix] += -output[ix, iy]
        log_sum += np.exp(output[ix, iy])
    result[ix] += np.log(log_sum)
print(result)
print(np.mean(result))

loss_fn = torch.nn.CrossEntropyLoss(reduction='mean')
loss = loss_fn(output, target)
print(loss.item())
"""
[0.75984335 1.3853296  0.80614853]
0.9837738275527954
0.9837737679481506
"""

6 nn.NLLLoss

     torch.nn.NLLLoss(weight=None, ignore_index=-100, reduction='mean')

  Negative log likelihood loss, used to train an n-class classifier; for imbalanced datasets a per-class weight can be supplied. The formula is

    $l_{n}=-w_{y_{n}} x_{n, y_{n}}$

    $w_{c}=\text { weight }[c] \cdot \mathbb{1}\{c \neq \text { ignore\_index }\}$

  The expected input shapes are $(N,C)$ and $(N)$, where $N$ is the batch size and $C$ the number of classes.

  For each case it takes the negative of the input value at the target class, then averages/sums; it is usually used together with a LogSoftmax layer so that the inputs are log-probabilities.

  Example: computed per sample

target = torch.tensor([1,0,3])
output = torch.randn(3,5)
print(output)

loss_fn = torch.nn.NLLLoss(reduction='mean')
loss = loss_fn(output, target)
print(loss)

loss_fn = torch.nn.NLLLoss(reduction='sum')
loss = loss_fn(output, target)
print(loss)
"""
tensor([[ 1.5083,  0.1846, -1.8400, -0.0068, -0.1943],
        [ 0.5303, -0.0350, -0.3924,  0.3026,  0.6159],
        [ 2.0047, -1.0653,  0.0718, -0.8632, -1.0695]])
tensor(0.0494)
tensor(0.1482)
"""

  Clearly this is not computed element-wise but per sample, as the recomputation below makes explicit.
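
  A hedged check using the printed values above (target [1,0,3] picks columns 1, 0 and 3 of the three rows):

# NLLLoss negates the input value at each sample's target class, then reduces
picked = [0.1846, 0.5303, -0.8632]  # output[0,1], output[1,0], output[2,3]
print(-sum(picked) / 3)             # ≈ 0.0494 -> matches reduction='mean'
print(-sum(picked))                 # ≈ 0.1483 -> matches reduction='sum' up to rounding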

  Example:

import torch
input=torch.randn(3,3)
soft_input = torch.nn.Softmax(dim=0)
soft_input(input)
"""
tensor([[0.2603, 0.6519, 0.5811],
        [0.5248, 0.3026, 0.1783],
        [0.2148, 0.0455, 0.2406]])
"""
# take the log of the softmax result
torch.log(soft_input(input))
"""
tensor([[-1.3458, -0.4279, -0.5428],
        [-0.6447, -1.1952, -1.7243],
        [-1.5379, -3.0898, -1.4248]])
"""

  Suppose the labels are [0,1,2]: take element 0 of the first row of the log-softmax output, element 1 of the second row and element 2 of the third, drop the minus signs, giving [1.3458, 1.1952, 1.4248], and average them to obtain the loss value. (For a real $(N,C)$ classification input the softmax would normally be taken over dim=1, the class dimension; dim=0 above is kept only so the printed values stay valid.)

(1.3458 + 1.1952 + 1.4248) / 3  # ≈ 1.3219
loss = torch.nn.NLLLoss()
target = torch.tensor([0,1,2])
loss(torch.log(soft_input(input)), target)
"""
tensor(1.3219)  (approximately; matches the manual average above)
"""

   So nn.NLLLoss, fed with log(softmax) values, simply picks the value at the target class for each sample, negates it, and averages.
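
  To tie this back to section 5, here is a minimal sketch checking that CrossEntropyLoss on raw scores matches NLLLoss applied to LogSoftmax outputs (the tensors are random, so only the equality matters):

scores = torch.randn(4, 5)             # raw, unnormalized scores (N=4, C=5)
labels = torch.randint(0, 5, (4,))     # random class indices

ce = torch.nn.CrossEntropyLoss(reduction='mean')
nll = torch.nn.NLLLoss(reduction='mean')
log_softmax = torch.nn.LogSoftmax(dim=1)  # log-softmax over the class dimension

print(torch.allclose(ce(scores, labels), nll(log_softmax(scores), labels)))
"""
True
"""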

 

Reference: https://segmentfault.com/a/1190000038584083