Usage of torch.nn.MSELoss
MSELoss
class torch.nn.MSELoss(size_average=None, reduce=None, reduction: str = 'mean')
Creates a criterion that measures the mean squared error (squared L2 norm) between each element of the input x and the target y.

The unreduced loss (i.e. with reduction set to 'none') can be described as:

    ℓ(x, y) = L = {l_1, ..., l_N}^T,    l_n = (x_n - y_n)^2

where N is the batch size. If reduction is not 'none' (the default is 'mean'), then:

    ℓ(x, y) = mean(L)   if reduction = 'mean'
    ℓ(x, y) = sum(L)    if reduction = 'sum'

x and y are tensors of arbitrary shape with a total of n elements each.

The mean operation still operates over all the elements, and divides by n.

The division by n can be avoided if one sets reduction = 'sum'.
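As a quick sanity check on these definitions, the sketch below (assuming a standard PyTorch installation; the tensor shapes are arbitrary) compares the three reduction modes and verifies that 'mean' is simply the element-wise losses averaged over all n elements:

import torch
import torch.nn as nn

x = torch.randn(3, 5)   # input, arbitrary shape
y = torch.randn(3, 5)   # target, same shape as the input

loss_none = nn.MSELoss(reduction='none')(x, y)   # element-wise losses l_n = (x_n - y_n)^2
loss_mean = nn.MSELoss(reduction='mean')(x, y)   # default: average over all n elements
loss_sum = nn.MSELoss(reduction='sum')(x, y)     # sum over all n elements

print(loss_none.shape)                               # torch.Size([3, 5]), same shape as the input
print(torch.allclose(loss_mean, loss_none.mean()))   # True: 'mean' divides the sum by n
print(torch.allclose(loss_sum, loss_none.sum()))     # True: 'sum' skips the division by n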
Parameters
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True.
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True.
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated; in the meantime, specifying either of those two args will override reduction. Default: 'mean'. An example of the override behavior is sketched below.
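The sketch below is an illustrative example, not part of the official docs: it assumes the deprecated size_average argument is still accepted (it currently only triggers a deprecation warning). It checks that the default 'mean' reduction matches the manual formula, and that passing size_average=False overrides reduction and behaves like 'sum':

import torch
import torch.nn as nn

x = torch.randn(4, 2)
y = torch.randn(4, 2)

# 'mean' (the default) matches the manual formula: sum of squared errors divided by n.
print(torch.allclose(nn.MSELoss()(x, y), ((x - y) ** 2).mean()))   # True

# Specifying a deprecated arg overrides reduction; size_average=False acts like 'sum'
# (expect a deprecation warning on recent PyTorch releases).
legacy_sum = nn.MSELoss(size_average=False)(x, y)
print(torch.allclose(legacy_sum, ((x - y) ** 2).sum()))            # True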
Shape:
- Input: (N, ∗) where ∗ means any number of additional dimensions
- Target: (N, ∗), same shape as the input
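To illustrate the shape requirement, the small example below (the 4-D shape is chosen arbitrarily) feeds a batch of 8 three-channel 32x32 tensors; any number of additional dimensions is fine as long as the input and target shapes match:

import torch
import torch.nn as nn

pred = torch.randn(8, 3, 32, 32, requires_grad=True)
target = torch.randn(8, 3, 32, 32)

loss = nn.MSELoss()(pred, target)   # scalar, averaged over all 8*3*32*32 elements
print(loss.shape)                   # torch.Size([])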
Examples:
>>> import torch
>>> import torch.nn as nn
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
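For context, here is a minimal sketch of MSELoss used as the criterion in a regression training loop; the nn.Linear model, SGD optimizer, learning rate, and random data are illustrative assumptions and not part of the official example:

import torch
import torch.nn as nn

model = nn.Linear(5, 1)                        # hypothetical regression model
criterion = nn.MSELoss()                       # default reduction='mean'
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(64, 5)
targets = torch.randn(64, 1)

for step in range(100):
    optimizer.zero_grad()
    preds = model(inputs)
    loss = criterion(preds, targets)           # scalar mean squared error
    loss.backward()                            # gradients of the MSE w.r.t. model parameters
    optimizer.step()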