
Deep Learning / Machine Learning: Distinguishing softmax and log_softmax

The softmax function

Also known as the normalized exponential function: it is a generalization of the logistic function that "squashes" a K-dimensional vector z of arbitrary real values into a K-dimensional vector σ(z) of real values in the range [0, 1] that add up to 1. The function is given by

\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \quad \text{for } j = 1, \dots, K.

Clearly, this expression maps a K-dimensional input to K numbers, each in the range [0, 1] and all summing to 1. The output can be understood as a probability distribution: in a multiclass classification task such as handwritten digit recognition (0-9), the j-th output corresponds to the probability that the input belongs to class j.
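As a concrete illustration, here is a minimal NumPy sketch of the formula above. The max-subtraction step is a standard numerical-stability trick, not part of the mathematical definition:

```python
import numpy as np

def softmax(z):
    """Map a K-dimensional real vector to a probability distribution."""
    # Subtracting the max does not change the result, because softmax
    # is invariant to adding a constant to every component; it only
    # prevents overflow in np.exp for large inputs.
    shifted = z - np.max(z)
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z)

z = np.array([1.0, 2.0, 3.0])
p = softmax(z)
print(p)        # [0.09003057 0.24472847 0.66524096]
print(p.sum())  # 1.0
```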

In probability theory, the output of the softmax function can be used to represent a categorical distribution, that is, a probability distribution over K different possible outcomes. In fact, it is the gradient-log-normalizer of the categorical probability distribution.

The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression)[1]:206–209, multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks.[2] Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the j-th class given a sample vector x and a weighting vector w is:

The formula below is the vector form of softmax. Unlike the earlier version, here x and w are vectors of a given dimension, and each of the K classes has its own weight vector:

P(y = j \mid \mathbf{x}) = \frac{e^{\mathbf{x}^{\mathsf{T}} \mathbf{w}_j}}{\sum_{k=1}^{K} e^{\mathbf{x}^{\mathsf{T}} \mathbf{w}_k}}
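A minimal sketch of this vector form, assuming the per-class weight vectors w_j are stacked as rows of a matrix W (the names `predict_proba`, `W`, and the shapes here are illustrative, not from the original text):

```python
import numpy as np

def predict_proba(x, W):
    """Predicted class probabilities under softmax regression.

    x: feature vector of shape (d,)
    W: weight matrix of shape (K, d); row j is the weight vector w_j
    """
    scores = W @ x              # K distinct linear functions x . w_j
    scores = scores - np.max(scores)  # stability shift, as above
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

x = np.array([0.5, -1.2, 3.0])   # d = 3 features
W = np.random.randn(4, 3)        # K = 4 classes
p = predict_proba(x, W)
print(p, p.sum())                # probabilities over 4 classes, sum to 1
```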