
【讀書1】【2017】MATLAB與深度學習——單層神經網路的訓練:增量規則(3)

例如,epoch = 10意味著神經網路對相同的資料集經過10次重複的訓練過程。

For instance, epoch = 10 means that the neural network goes through 10 repeated training processes with the same dataset.
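In case a concrete loop helps, here is a minimal MATLAB sketch of what epoch = 10 means in practice. It is not the book's listing: the training data X, the targets D, and the learning rate alpha are made-up placeholders, and the update inside the loop is simply the delta rule from the previous section.

```matlab
% Minimal sketch: train a single-layer network with the delta rule,
% sweeping the SAME dataset for 10 epochs.
X = [0 0 1; 0 1 1; 1 0 1; 1 1 1];   % one input pattern per row (assumed data)
D = [0; 0; 1; 1];                    % correct outputs (assumed data)
W = 2*rand(1, 3) - 1;                % random initial weights in [-1, 1]
alpha = 0.9;                         % learning rate (assumed value)

for epoch = 1:10                     % epoch = 10: reuse the whole dataset 10 times
    for k = 1:size(X, 1)             % one pass over every training point
        x = X(k, :)';                % current input, as a column vector
        y = W * x;                   % output of the (linear) output node
        e = D(k) - y;                % error = correct output - actual output
        W = W + alpha * e * x';      % delta rule weight update
    end
end
```

Each pass of the outer loop is one epoch; the inner loop applies the weight update once per training point.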

到目前為止,你能看懂這一部分內容嗎?

Are you able to follow this section so far?

現在,你已經學習了神經網路訓練的大部分關鍵概念。

You have now learned most of the key concepts of neural network training.

雖然方程表示式可以根據學習規則而變化,但基本概念是相對一致的。

Although the equations may vary depending on the learning rule, the essential concepts are relatively the same.

圖2-13說明了本節中描述的訓練過程。

Figure 2-13 illustrates the training process described in this section.

圖2-13 訓練過程描述（Figure 2-13. The training process）

廣義增量規則(Generalized Delta Rule)

本節涉及增量規則的一些理論知識。

This section touches on some theoretical aspects of the delta rule.

然而,你不必因此感到沮喪。

However, you don’t need to feel discouraged.

我們將學習最重要的部分,而不是涉及太多的細節。

We will go through the most essential subjects without elaborating too much on the specifics.

上一節的增量規則已經過時了。

The delta rule of the previous section is rather obsolete.

後來的研究發現,增量規則存在更廣義的形式。

Later studies have uncovered that there exists a more generalized form of the delta rule.

對於任意啟用函式,增量規則可表示為以下方程。

For an arbitrary activation function, the delta rule is expressed as the following equation.
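The equation itself appeared as an image in the original post and is not reproduced here. The block below is a hedged reconstruction based on the surrounding discussion; the exact typesetting is an assumption, while w_ij, x_j, e_i, v_i, φ, and α denote the weight, input, error, weighted sum, activation function, and learning rate that the delta rule discussion has been using.

```latex
% Reconstruction (not the original image) of the generalized delta rule.
% \varphi is the activation function of output node i, v_i its weighted sum,
% e_i = d_i - y_i its error, x_j the j-th input, and \alpha the learning rate.
w_{ij} \leftarrow w_{ij} + \alpha \, \delta_i \, x_j,
\qquad \delta_i = \varphi'(v_i) \, e_i
% For a linear activation \varphi(x) = x we get \varphi'(v_i) = 1 and
% \delta_i = e_i, so the update reduces to the delta rule of Equation 2.2.
```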

將上式代入方程2.3,得到與式2.2中增量法則相同的公式。

Plugging this equation into Equation 2.3 results in the same formula as the delta rule in Equation 2.2.

這一事實表明,方程2.2中的增量規則僅適用於線性啟用函式。

This fact indicates that the delta rule in Equation 2.2 is only valid for linear activation functions.

現在,我們可以用sigmoid函式推匯出增量法則,它被廣泛用作啟用函式。

Now, we can derive the delta rule with the sigmoid function, which is widely used as an activation function.

sigmoid函式定義為如圖2-14所示。

The sigmoid function is defined as shown in Figure 2-14.
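Figure 2-14 itself is not reproduced here. For reference, the sketch below gives the standard definition of the sigmoid together with the derivative needed by the generalized delta rule; take it as common background rather than a reproduction of the figure.

```latex
% Standard sigmoid activation (what Figure 2-14 defines) and its derivative.
\varphi(x) = \frac{1}{1 + e^{-x}},
\qquad \varphi'(x) = \varphi(x)\bigl(1 - \varphi(x)\bigr)
% Substituting into \delta_i = \varphi'(v_i)\, e_i gives the sigmoid form
% of the delta rule: \delta_i = \varphi(v_i)\bigl(1 - \varphi(v_i)\bigr)\, e_i.
```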

——本文譯自Phil Kim所著的《Matlab Deep Learning》

——Translated from Matlab Deep Learning by Phil Kim
