A comparison of different neural network training functions
1. traingd: batch gradient descent. Adjusts the network's weights and biases along the negative gradient of the network performance function.
2. traingdm: batch gradient descent with momentum, also a batch training method for feedforward networks. The added momentum term not only speeds up convergence but also helps training avoid getting stuck in local minima.
3. trainrp: resilient backpropagation (RPROP). Removes the influence of the gradient's magnitude on the update, which speeds up training. (The weight changes are governed mainly by the delt_inc and delt_dec parameters.)
4. trainlm: the Levenberg-Marquardt algorithm. Has the fastest convergence for medium-sized BP networks and is the toolbox default. It avoids computing the Hessian matrix directly, which reduces the amount of computation during training.
5. traincgb: conjugate gradient with Powell-Beale restarts. Checks the orthogonality between successive gradients to decide whether the search direction for the weight and bias updates should be reset to the negative gradient direction.
6. trainscg: the scaled conjugate gradient algorithm. Combines the model-trust-region approach with conjugate gradients, reducing the time spent searching along each update direction.
Generally speaking, traingd and traingdm are the basic training functions, while traingda, traingdx, trainrp, traincgf, traincgb, trainscg, trainbfg, and so on are the fast training functions. Overall, the differences among them are mostly in training time, with some difference in accuracy as well.
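To make the difference between plain gradient descent (traingd) and the momentum variant (traingdm) concrete, here is a minimal sketch of the two update rules applied to a toy quadratic. This is not toolbox code: the objective, the learning rate lr, and the momentum constant mc are made-up illustration values (mc merely echoes the name of the toolbox's momentum parameter).

```python
# Conceptual sketch of gradient descent with and without momentum.
# NOT the MATLAB toolbox implementation -- just the update rules that
# traingd / traingdm are based on, applied to a toy 1-D quadratic.

def gd(grad, w, lr=0.1, steps=100):
    """Plain gradient descent: w <- w - lr * grad(w)."""
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def gd_momentum(grad, w, lr=0.1, mc=0.9, steps=100):
    """Gradient descent with momentum: a fraction mc of the previous
    update is carried over, smoothing the trajectory and helping the
    iterate roll through shallow local minima."""
    v = 0.0
    for _ in range(steps):
        v = mc * v - lr * grad(w)
        w = w + v
    return w

# Toy objective f(w) = (w - 3)^2 with gradient 2*(w - 3); minimum at w = 3.
grad = lambda w: 2.0 * (w - 3.0)

print(gd(grad, 0.0))           # approaches 3
print(gd_momentum(grad, 0.0))  # approaches 3
```

On a single convex quadratic both converge; the momentum term pays off on the ill-conditioned, multi-minimum error surfaces of real networks, which is the behavior point 2 above describes.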
(The information above was found online; I have forgotten the original source.)
(The following is from the MATLAB help documentation.)
nntrain
Neural Network Toolbox Training Functions.
To change a neural network's training algorithm, set the net.trainFcn
property to the name of the corresponding function. For example, to use
the scaled conjugate gradient backprop training algorithm:
net.trainFcn = 'trainscg';
Backpropagation training functions that use Jacobian derivatives
These algorithms can be faster but require more memory than gradient
backpropagation. They are also not supported on GPU hardware.
trainlm - Levenberg-Marquardt backpropagation.
trainbr - Bayesian Regulation backpropagation.
Backpropagation training functions that use gradient derivatives
These algorithms may not be as fast as Jacobian backpropagation.
They are supported on GPU hardware with the Parallel Computing Toolbox.
trainbfg - BFGS quasi-Newton backpropagation.
traincgb - Conjugate gradient backpropagation with Powell-Beale restarts.
traincgf - Conjugate gradient backpropagation with Fletcher-Reeves updates.
traincgp - Conjugate gradient backpropagation with Polak-Ribiere updates.
traingd - Gradient descent backpropagation.
traingda - Gradient descent with adaptive lr backpropagation.
traingdm - Gradient descent with momentum.
traingdx - Gradient descent w/momentum & adaptive lr backpropagation.
trainoss - One step secant backpropagation.
trainrp - RPROP backpropagation.
trainscg - Scaled conjugate gradient backpropagation.
Supervised weight/bias training functions
trainb - Batch training with weight & bias learning rules.
trainc - Cyclical order weight/bias training.
trainr - Random order weight/bias training.
trains - Sequential order weight/bias training.
Unsupervised weight/bias training functions
trainbu - Unsupervised batch training with weight & bias learning rules.
trainru - Unsupervised random order weight/bias training.
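As a hedged illustration of how the Jacobian-based family (trainlm) differs from the gradient-based one, here is a minimal sketch of the Levenberg-Marquardt step on a made-up linear least-squares problem. It approximates the Hessian by J'J (J being the Jacobian of the residuals), damped by a factor mu, which is the sense in which the method "avoids computing the Hessian directly". The model, data, and mu value are illustration-only; none of this is toolbox code.

```python
import numpy as np

# Sketch of one Levenberg-Marquardt iteration for least squares:
# minimize ||r(w)||^2. Instead of the exact Hessian, LM uses J'J
# damped by mu * I. Large mu behaves like gradient descent; small mu
# behaves like Gauss-Newton.

def lm_step(residual, jacobian, w, mu):
    """One LM update: w <- w - (J'J + mu*I)^-1 J'r."""
    r = residual(w)
    J = jacobian(w)
    A = J.T @ J + mu * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ r)

# Toy linear model y = a*x + b fitted to noiseless data with a=2, b=1:
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

def residual(w):
    a, b = w
    return a * x + b - y

def jacobian(w):
    # d(residual_i)/da = x_i, d(residual_i)/db = 1
    return np.stack([x, np.ones_like(x)], axis=1)

w = np.zeros(2)
for _ in range(20):
    w = lm_step(residual, jacobian, w, mu=0.01)

print(w)  # approaches [2, 1]
```

The J'J + mu*I system is why this family needs more memory than plain gradient methods, as the help text above notes: the full Jacobian must be formed and an n-by-n linear system solved at each step.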