
Cerebral Cortex: Principles of Operation, Appendix 4 (Part 1)

Autoassociation or Attractor Networks

(This section is based on Appendix 4 of Cerebral Cortex: Principles of Operation [Rolls 2016a], "Simulation software for neuronal network models", which describes how to implement several neural networks in MATLAB. These are personal study notes written for revision; corrections are welcome.)

First, initial values are assigned to the main variables: the number of neurons N, the number of synapses per neuron nSyn, the synaptic weight matrix SynMat, the number of training/testing patterns nPatts, the learning rate Learnrate, the sparseness Sparseness, nFlipBits (the number of bits flipped in each training pattern to produce the distorted recall cues used at test), and the number of times the network is allowed to update during testing, nepochs.

clear all; 
close all hidden;
format compact;
% format bank;
fig = 1;
% rng(1); % comment this in only to set the random number generator for code development
N = 100; % number of neurons in the fully connected autoassociation net
nSyn = N; % number of synapses on each neuron
SynMat = zeros(nSyn, N); % (Synapses = Rows, neurons = Columns)
nPatts = 10; % the number of training and testing patterns. Suggest 10 for sparseness = 0.5, and 30 for sparseness = 0.1
Learnrate = 1 / nPatts; % Not of especial significance in an autoassociation network, but this keeps the weights within a range
Sparseness = 0.5; % the sparseness of the trained representation, i.e. the proportion of neurons that have high firing rates of 1
% Investigate values of 0.5 and 0.1
display = 1; % 0 no display; 1 display the network
nFlipBits = 14; % The number of bits that are flipped to produce distorted recall cues.
% It is suggested that this be set to close to 0.2 * N * Sparseness
nepochs = 9; % the number of times that the network is allowed to update during test

(1) Set up nPatts = 10 training patterns with a sparseness of 0.5 (this training set is also the target output of the neurons).

Implementation: initialize the matrix to all zeros, set the first N * Sparseness elements of each column (here the first 50, i.e. half) to 1, and then randomly permute the rows of each column so that the 1s are scattered randomly through that column.

*** The code that plots the TrainPatts matrix in different grey levels is omitted here; all subsequent plotting code is likewise omitted.

TrainPatts = zeros(N, nPatts); % This matrix stores the training patterns. Each pattern vector has N elements
for patt = 1 : nPatts
    TrainPatts(1 : N * Sparseness, patt) = 1; % the number of bits set to 1 in each pattern
    p = randperm(N); % rearrange the elements of this pattern vector in random order
    TrainPatts(:, patt) = TrainPatts(p, patt);
end
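As a quick sanity check (a small sketch, not part of the original code), every column of TrainPatts should contain exactly N * Sparseness ones, so each column mean should equal the Sparseness:

disp(mean(TrainPatts, 1)); % expected: a row of nPatts values, each 0.5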

(2) Transform TrainPatts to generate a distorted test set ("distorted" meaning the ideal output is perturbed slightly, so that recall accuracy can be tested). These distorted patterns are used later as the input to the network at epoch = 1 during testing.

(In each column, nFlipBits = 14 randomly chosen 1s are first flipped to 0, and then 14 randomly chosen 0s are flipped to 1, so the sparseness of the cue is unchanged.)

TrainPattsFlipped = TrainPatts; % this matrix stores the distorted recall cues
for patt = 1 : nPatts
    % flip nFlipBits of the 1s in this pattern to 0
    synarray = randperm(nSyn); % visit the elements of the pattern in random order
    el = 1;
    for bit = 1 : nFlipBits
        while TrainPatts(synarray(el),patt) ~= 1 % search for the next element that is a 1
            el = el + 1;
            if el > nSyn
                disp('Error: too many bits being flipped');
                el = 1;
            end
        end
        TrainPattsFlipped(synarray(el),patt) = 0;
        el = el + 1;
    end
    % flip nFlipBits of the 0s in this pattern to 1
    synarray = randperm(nSyn);
    el = 1;
    for bit = 1 : nFlipBits
        while TrainPatts(synarray(el),patt) ~= 0 % search for the next element that is a 0
            el = el + 1;
            if el > nSyn
                disp('Error: too many bits being flipped');
                el = 1;
            end
        end
        TrainPattsFlipped(synarray(el),patt) = 1;
        el = el + 1;
    end
end
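The same flipping can be written more compactly with find and randperm; the following is an equivalent sketch rather than the book's code (randperm(n, k) returns k values drawn without replacement from 1:n):

TrainPattsFlipped = TrainPatts;
for patt = 1 : nPatts
    onesIdx = find(TrainPatts(:, patt) == 1); % positions of the 1s in this pattern
    zerosIdx = find(TrainPatts(:, patt) == 0); % positions of the 0s
    TrainPattsFlipped(onesIdx(randperm(numel(onesIdx), nFlipBits)), patt) = 0; % flip 1s to 0
    TrainPattsFlipped(zerosIdx(randperm(numel(zerosIdx), nFlipBits)), patt) = 1; % flip 0s to 1
end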

(3) Train the weight matrix SynMat.

Each neuron has one postSynRate per pattern, while preSynRate differs across the synapses of each neuron. postSynRate is the postsynaptic firing rate, i.e. the neuron's output as set by the external input during training; preSynRate is the presynaptic firing rate arriving at each synapse. Because the recurrent collateral axons feed every neuron's output back as input to the others, the presynaptic rates are taken from the same training pattern as the postsynaptic rates.

The condition syn ~= neuron excludes the self-connection of a recurrent collateral axon back onto its own sending neuron, and the weight update uses a covariance rule, in which the mean firing rate (here equal to Sparseness) is subtracted from both the pre- and postsynaptic rates before they are multiplied, as written out below.
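In symbols (using y_i for postSynRate, x_j for preSynRate, a for the mean rate Sparseness, and k for Learnrate, notation introduced here), the covariance rule in the code is

$$ \delta w_{ij} = k \, (y_i - a)(x_j - a) $$

The commented-out alternatives in the code are a plain Hebb rule, $\delta w_{ij} = k \, y_i x_j$, and a Hebb rule with heterosynaptic LTD, $\delta w_{ij} = k \, y_i (x_j - a)$ (see Rolls (2008) B.3.3.6).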

for patt = 1 : nPatts
    for neuron = 1 : N
        postSynRate = TrainPatts(neuron, patt); % postsynaptic firing rate. The external input to the neurons.
        for syn = 1 : nSyn
            if syn ~= neuron % avoid self connections of a recurrent collateral axon onto its sending neuron
                preSynRate = TrainPatts(syn, patt); % the presynaptic rate is the same as the postsynaptic rate because of the recurrent collaterals 
                weight_change = Learnrate * (postSynRate - Sparseness) * (preSynRate - Sparseness); % use a covariance rule. 
                % The sparseness is the average firing rate
                % weight_change = Learnrate * (postSynRate) * (preSynRate); % OR use a Hebb rule
                % weight_change = Learnrate * (postSynRate) * (preSynRate - Sparseness); % OR use a Hebb rule but with also heterosynaptic LTD see Rolls (2008) B.3.3.6. 
                SynMat(syn, neuron) = SynMat(syn, neuron) + weight_change;
            end
        end
    end
end
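Because the presynaptic and postsynaptic rates are drawn from the same pattern vector, the triple loop above can also be written as an accumulated outer product. A minimal vectorized sketch assuming the covariance rule (SynMatVec is a name introduced here only for comparison with SynMat):

SynMatVec = zeros(nSyn, N);
for patt = 1 : nPatts
    v = TrainPatts(:, patt) - Sparseness; % mean-subtracted rates for this pattern
    SynMatVec = SynMatVec + Learnrate * (v * v'); % all presynaptic x postsynaptic pairs at once
end
SynMatVec(1 : N + 1 : end) = 0; % zero the diagonal to remove self-connections (the syn ~= neuron condition)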