
deep learning: Deep Feedforward Networks

Deep learning, in the setting discussed here, is applied to supervised-learning problems.
Put simply, it is a method we reach for when a linear model cannot solve the problem.
It combines several linear models (joined through nonlinear activations) to learn a mapping from the input space x to a feature space h, where h is a space in which the problem can be solved by a linear model.
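As a minimal sketch of this composition (the layer sizes, the random weights, and the sigmoid nonlinearity are illustrative assumptions, not values from the text; bias terms are omitted to mirror the network built later in this post):

import numpy as np

# hypothetical sizes: 2 inputs -> 3 hidden units -> 1 output
W1 = np.random.randn(3, 2)
W2 = np.random.randn(1, 3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.0])   # a point in x-space
h = sigmoid(W1 @ x)        # nonlinear map from x-space into h-space
y = W2 @ h                 # an ordinary linear model, applied in h-space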

The deep feedforward network, also called the multilayer perceptron (MLP), is the most typical deep learning model.

A motivating example:
the XOR (exclusive-or) problem

[Figure: the XOR truth table]
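For reference, the XOR truth table that the discussion below walks through:

x1 | x2 | x1 XOR x2
 0 |  0 |     0
 0 |  1 |     1
 1 |  0 |     1
 1 |  1 |     0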

Hold x1 fixed and let x2 increase: for x1 = 0 the output goes up (0 → 1), while for x1 = 1 it goes down (1 → 0). The output both increases and decreases, so it is not a linear function of the inputs, and no linear model can classify it.
[Figure: the four XOR input points in the x1–x2 plane, colored by class]
The figure above shows the two output classes, black and green; no single straight line can separate them.
This leads to the core idea of neural networks: several linear models working together, composed through nonlinearities. A concrete hand-built solution is sketched after the figure below.
[Figure: a small feedforward network with input, hidden, and output layers]
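As a concrete illustration of several linear units working together: the classic hand-built solution to XOR uses two hidden ReLU units followed by one linear output unit. The specific weights below come from Goodfellow et al.'s Deep Learning (ch. 6), not from this article:

import numpy as np

# hidden-layer weights W and offsets c; output weights w (Goodfellow et al., ch. 6)
W = np.array([[1.0, 1.0],
              [1.0, 1.0]])
c = np.array([0.0, -1.0])
w = np.array([1.0, -2.0])

def xor_net(x):
    h = np.maximum(0.0, W @ x + c)  # two linear units + ReLU: map x-space to h-space
    return w @ h                    # one linear unit separates the classes in h-space

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(np.array(x, dtype=float)))  # prints 0.0, 1.0, 1.0, 0.0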

Next, let's build our own neural network in Python:

Architecture:
1: An initialisation function: set the number of input-layer, hidden-layer, and output-layer nodes (as shown in the figure above)
2: Train: after learning from the given training examples, refine the weights
3: Query: given an input, return the result from the output nodes

import numpy
# scipy.special for the sigmoid function expit()
import scipy.special

# neural network class definition
class neuralNetwork:
    
    
    # initialise the neural network
    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
        # set number of nodes in each input, hidden, output layer
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes

        # link weight matrices, wih and who
        # weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
        # w11 w21
        # w12 w22 etc
        self.wih = numpy.random.normal(0.0, pow(self.inodes, -0.5), (self.hnodes, self.inodes))
        self.who = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.onodes, self.hnodes))

        # learning rate
        self.lr = learningrate

        # activation function is the sigmoid function
        self.activation_function = lambda x: scipy.special.expit(x)

    # train the neural network
    def train(self, inputs_list, targets_list):
        # convert inputs list to 2d array
        inputs = numpy.array(inputs_list, ndmin=2).T
        targets = numpy.array(targets_list, ndmin=2).T

        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)

        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)

        # output layer error is the (target - actual)
        output_errors = targets - final_outputs
        # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
        hidden_errors = numpy.dot(self.who.T, output_errors)

        # update the weights for the links between the hidden and output layers
        self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))

        # update the weights for the links between the input and hidden layers
        self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))

    # query the neural network
    def query(self, inputs_list):
        # convert inputs list to 2d array
        inputs = numpy.array(inputs_list, ndmin=2).T

        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)

        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)

        return final_outputs


if __name__ == "__main__":
    # number of input, hidden and output nodes
    input_nodes = 2
    hidden_nodes = 5
    output_nodes = 1

    # learning rate is 0.3
    learning_rate = 0.3

    n = neuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)
    print(n.query([1, 1]))
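The __main__ block above queries an untrained, randomly initialised network, so it prints an arbitrary value near 0.5. Here is a minimal sketch of actually fitting the XOR data with this class; the epoch count and the 0.01/0.99 targets (chosen to keep the sigmoid away from its saturated ends) are illustrative assumptions, not from the original, and a given random initialisation can occasionally stall, in which case rerun:

# train on the four XOR cases, then query each one
xor_data = [([0, 0], [0.01]),
            ([0, 1], [0.99]),
            ([1, 0], [0.99]),
            ([1, 1], [0.01])]

net = neuralNetwork(2, 5, 1, 0.3)
for epoch in range(10000):
    for inputs, targets in xor_data:
        net.train(inputs, targets)

for inputs, _ in xor_data:
    print(inputs, net.query(inputs))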