
Udacity Machine Learning Advanced — Supervised Learning: Neural Networks Mini-Project

1. Build a Perceptron

def activate(self, inputs):
        """
        Takes in @param inputs, a list of numbers equal to length of weights.
        @return the output of a threshold perceptron with given inputs based on
        perceptron weights and threshold.
        """

        # The strength with which the perceptron fires.
        strength = np.dot(self.weights, inputs)

        # TODO: return 0 or 1 based on the threshold
        if strength <= self.threshold:
            self.result = 0
        else:
            self.result = 1
        return self.result
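For context, here is a minimal self-contained version of the threshold unit this method belongs to. The constructor signature Perceptron(weights, threshold) is an assumption based on the course template, not the exact course code:

import numpy as np

class Perceptron:
    """A sketch of the course's threshold unit, assuming it stores
    the weights and threshold passed to the constructor."""

    def __init__(self, weights=np.array([1]), threshold=0):
        self.weights = weights
        self.threshold = threshold

    def activate(self, inputs):
        # Fire (output 1) only when input strength exceeds the threshold.
        strength = np.dot(self.weights, inputs)
        self.result = 0 if strength <= self.threshold else 1
        return self.result

# Example: weights [1, 1] with threshold 1.5 implement an AND gate.
and_unit = Perceptron(np.array([1, 1]), threshold=1.5)
print(and_unit.activate(np.array([1, 1])))  # 1
print(and_unit.activate(np.array([1, 0])))  # 0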

3. Where Does a Perceptron Train

  • We want to build a perceptron. Which of the following values do we need to modify while building the model?

Threshold
Weights

4. Perceptron Inputs

  • Artificial neural networks are built from perceptron units. What format should the input to an artificial neural network take?

A numerical matrix with a label for each row

5. Neural Network Output

  • What information can we obtain from the output of a neural network?

A directed graph (the neural network itself)
A scalar
Classification information represented as a vector
An output vector for each input vector

6. Perceptron Update Rule

 def update(self, values, train, eta=.1):
        """
        Takes in a 2D array @param values consisting of a LIST of inputs and a
        1D array @param train, consisting of a corresponding list of expected
        outputs. Updates internal weights according to the perceptron training
        rule using these values and an optional learning rate, @param eta.
        """
        # For each data point:
        for data_point in range(len(values)):
            # TODO: Obtain the neuron's prediction for the data_point --> values[data_point]
            prediction = self.activate(values[data_point])

            # Get the prediction accuracy calculated as (expected value - predicted value)
            # expected value = train[data_point], predicted value = prediction
            error = train[data_point] - prediction

            # TODO: update self.weights based on the multiplication of:
            # - prediction accuracy (error)
            # - learning rate (eta)
            # - input value (values[data_point])
            weight_update = eta * error * values[data_point]
            self.weights += weight_update
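As a quick sanity check, assuming the completed update() above is attached to the sketch Perceptron class from section 1 (an assumption, not the course's full code), the rule converges on a linearly separable problem such as OR. With the threshold fixed at 0, OR needs no bias input:

p = Perceptron(np.array([0.0, 0.0]), threshold=0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])   # OR truth table
for _ in range(10):          # a few passes over the data suffice here
    p.update(X, y)
print([p.activate(x) for x in X])  # [0, 1, 1, 1]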

7. Multilayer Network Example

(figure: example of a multilayer perceptron network)

8. Linear Representational Power

(figure: linear representational power)

9. Build an XOR Network

# Part 1: Set up the perceptron network
Network = [
    # input layer, declare input layer perceptrons here
    [input1, input2],
    # output node, declare output layer perceptron here
    [output]
]

# Part 2: Define a procedure to compute the output of the network, given inputs
def EvalNetwork(inputValues, Network):
    """
    Takes in @param inputValues, a list of input values, and @param Network
    that specifies a perceptron network. @return the output of the Network for
    the given set of inputs.
    """

    # YOUR CODE HERE
    # Feed the raw inputs through the hidden layer, then feed the hidden
    # activations into the output unit (Network[1][0]).
    hidden_outputs = []
    for node in Network[0]:
        hidden_outputs.append(node.activate(inputValues))
    OutputValue = Network[1][0].activate(hidden_outputs)
    # Be sure your output value is a single number
    return OutputValue
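One concrete way to fill in Part 1, assuming the sketch Perceptron(weights, threshold) class from section 1: since XOR(a, b) = AND(OR(a, b), NAND(a, b)), the two hidden units can compute OR and NAND, and the output unit then computes AND of their results:

input1 = Perceptron(np.array([1, 1]), threshold=0.5)     # OR gate
input2 = Perceptron(np.array([-1, -1]), threshold=-1.5)  # NAND gate
output = Perceptron(np.array([1, 1]), threshold=1.5)     # AND gate

Network = [[input1, input2], [output]]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, EvalNetwork(np.array([a, b]), Network))  # prints a XOR b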

10. Discreteness Quiz

  • One problem with artificial neural networks built from threshold units is that they can only output discrete values. This makes them ineffective for regression problems, and it means they need more units to handle complex problems.
    For example: given a neural network with structure [2,2,1] (two input units, two hidden units, one output unit), at most how many distinct house prices can it predict?

(figure: the [2,2,1] network from the quiz)

2 × 2 = 4 (each of the two hidden threshold units outputs 0 or 1, so the output unit sees at most four distinct activation patterns, hence at most four distinct prices)
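A quick enumeration confirms the count: the output unit only ever sees one of the 2 × 2 possible hidden activation patterns, so it can produce at most four distinct values:

from itertools import product

hidden_patterns = list(product([0, 1], repeat=2))  # all (h1, h2) combinations
print(hidden_patterns)       # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(len(hidden_patterns))  # 4 -> at most 4 distinct predicted prices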

13. Activation Function Quiz

  • We have decided to use a function that is continuous (avoiding the discreteness problem) and nonlinear (allowing us to represent nonlinearities). Which of the following functions meets our needs?

Logistic function
(this is the sigmoid function, σ(x) = 1 / (1 + e^(−x)): a smooth, continuous version of the step function)

14. Perceptron vs. Sigmoid

  • What is the difference between a single perceptron and a single Sigmoid unit on a binary classification problem?

The latter gives more information (a graded output rather than just 0 or 1), but the two produce the same classification result
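To illustrate with made-up weights and inputs: thresholding the sigmoid output at 0.5 reproduces the perceptron's decision, but the sigmoid value also conveys how confident that decision is:

import numpy as np

w = np.array([3.0, -2.0])
x = np.array([1.0, 1.0])
strength = np.dot(w, x)                        # 1.0

perceptron_out = 1 if strength > 0 else 0      # 1
sigmoid_out = 1.0 / (1.0 + np.exp(-strength))  # 0.731..., same class, plus confidence
print(perceptron_out, sigmoid_out)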

15. Sigmoid Learning

  • We want to train a Sigmoid unit the same way we trained the perceptron. How should we define the update rule?

Use calculus
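Concretely, differentiating the squared error of a single sigmoid unit with respect to each weight yields the rule implemented in section 17 below: Δw = η · (y − ŷ) · σ(s) · (1 − σ(s)) · x, where s = w · x is the input strength, σ is the logistic function, and σ(s)(1 − σ(s)) is its derivative.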

16. Gradient Descent Issues

  • Using calculus, gradient descent gives us a way to find extrema, but it also raises problems of its own. Which of the following problems do you think can occur?

Local extrema
Takes too long to run
Can produce an infinite loop
May fail to converge

17. Sigmoid Unit Programming Exercise

# ----------
# 
# As with the previous perceptron exercises, you will complete some of the core
# methods of a sigmoid unit class.
#
# There are two functions for you to finish:
# First, in activate(), write the sigmoid activation function.
# Second, in update(), write the gradient descent update rule. Updates should be
#   performed online, revising the weights after each data point.
# 
# ----------

import numpy as np


class Sigmoid:
    """
    This class models an artificial neuron with sigmoid activation function.
    """

    def __init__(self, weights = np.array([1])):
        """
        Initialize weights based on input arguments. Note that no type-checking
        is being performed here for simplicity of code.
        """
        # Store the weights as a float array so that the in-place update (+=)
        # in update() works even when a plain Python list is passed in, as in
        # test() below.
        self.weights = np.array(weights, dtype=float)

        # NOTE: You do not need to worry about these two attributes for this
        # programming quiz, but these will be useful if you want to create
        # a network out of these sigmoid units!


        self.last_input = 0 # strength of last input
        self.delta      = 0 # error signal

    def activate(self, values):
        """
        Takes in @param values, a list of numbers equal to length of weights.
        @return the output of a sigmoid unit with given inputs based on unit
        weights.
        """

        # YOUR CODE HERE

        # First calculate the strength of the input signal.
        strength = np.dot(values, self.weights)
        self.last_input = strength

        # TODO: Modify strength using the sigmoid activation function and
        # return as output signal.
        # HINT: You may want to create a helper function to compute the
        #   logistic function since you will need it for the update function.



        result = self.logistic(strength)

        return result

    def logistic(self, x):
        """Helper: the logistic (sigmoid) activation function."""
        return 1.0 / (1.0 + np.exp(-x))

    def update(self, values, train, eta=.1):
        """
        Takes in a 2D array @param values consisting of a LIST of inputs and a
        1D array @param train, consisting of a corresponding list of expected
        outputs. Updates internal weights according to gradient descent using
        these values and an optional learning rate, @param eta.
        """

        # TODO: for each data point...
        for X, y_true in zip(values, train):
            # obtain the output signal for that point
            y_pred = self.activate(X)

            # YOUR CODE HERE
            error = y_true - y_pred

            # TODO: compute derivative of logistic function at input strength
            # Recall: d/dx logistic(x) = logistic(x) * (1 - logistic(x))
            de_logistic = self.logistic(self.last_input) * (1 - self.logistic(self.last_input))

            # TODO: update self.weights based on learning rate, signal accuracy,
            # function slope (derivative) and input value
            weight_update = eta * error * de_logistic * X
            self.weights += weight_update

def test():
    """
    A few tests to make sure that the sigmoid unit class performs as expected.
    Nothing should show up in the output if all the assertions pass.
    """
    def sum_almost_equal(array1, array2, tol = 1e-5):
        return sum(abs(array1 - array2)) < tol

    u1 = Sigmoid(weights=[3,-2,1])
    assert abs(u1.activate(np.array([1,2,3])) - 0.880797) < 1e-5

    u1.update(np.array([[1,2,3]]),np.array([0]))
    assert sum_almost_equal(u1.weights, np.array([2.990752, -2.018496, 0.972257]))

    u2 = Sigmoid(weights=[0,3,-1])
    u2.update(np.array([[-3,-1,2],[2,1,2]]),np.array([1,0]))
    assert sum_almost_equal(u2.weights, np.array([-0.030739, 2.984961, -1.027437]))

if __name__ == "__main__":
    test()