
Introduction to SVM Parameters and Optimizing Them with a GA in Python

 

I have recently started playing with machine learning. I used to do this kind of work in MATLAB, but for real engineering applications Python is the more practical choice, so today I am starting to learn how to use an SVM for regression (and classification) prediction.

The general workflow for using an SVM is:
1) Prepare the dataset and convert it to the data format the SVM library expects:
[label] [index1]:[value1] [index2]:[value2] ...
i.e. [class label] [feature 1]:[feature value] [feature 2]:[feature value] ...
2) Apply simple scaling to the data (why scaling is needed is not explained here);
3) Choose a kernel function (the RBF kernel is the usual choice and the program default);
4) Use an optimization algorithm to select the best parameters C and g;
5) Train on the whole training set with the best C and g to obtain the final SVM model;
6) Test with the resulting SVM model (a minimal end-to-end sketch is shown below).
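
As a point of reference, here is a minimal end-to-end sketch of steps 2), 5) and 6) with scikit-learn; the toy data and the fixed C value are only illustrative assumptions:

from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assumed toy training data (step 1 would normally load a real dataset).
X_train = [[0, 0], [2, 2], [4, 4]]
y_train = [0.5, 2.5, 4.5]

# Step 2): scale the features; step 5): train with the chosen parameters.
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=1.0))
model.fit(X_train, y_train)

# Step 6): test the trained model on a new sample.
print(model.predict([[1, 1]]))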

In this basic workflow, the optimization algorithm plays the key role. Common choices include grid search, genetic algorithms, particle swarm optimization, and ant colony optimization; they are all used to tune the SVM parameters. scikit-learn provides an SVM implementation, and the arguments of the regression constructor are the model parameters, including the kernel function, the penalty factor C, the insensitivity coefficient epsilon, and the kernel width gamma of the RBF kernel.
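
As a baseline for comparison with the GA below, the grid search option is already built into scikit-learn as GridSearchCV; a minimal sketch, where the toy data and the grid values are assumptions chosen only for illustration:

from sklearn import svm
from sklearn.model_selection import GridSearchCV

X = [[0, 0], [1, 1], [2, 2], [3, 3]]  # assumed toy data
y = [0.5, 1.5, 2.5, 3.5]

# Candidate values for C, gamma and epsilon; every combination is
# evaluated with 2-fold cross-validation.
param_grid = {'C': [0.1, 1, 10, 100],
              'gamma': [0.01, 0.1, 1],
              'epsilon': [0.01, 0.1, 0.5]}
search = GridSearchCV(svm.SVR(kernel='rbf'), param_grid, cv=2)
search.fit(X, y)
print(search.best_params_)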

Parameters:

- C: the penalty parameter C of C-SVC. Default is 1.0.

A larger C puts a heavier penalty on the slack variables, pushing them toward 0: misclassification is penalized more strongly and the model tends toward classifying the whole training set correctly, so training accuracy is high but generalization is weak. A smaller C reduces the penalty on misclassification and tolerates the errors as noise, which generalizes better.

- kernel: the kernel function, 'rbf' by default; can be 'linear', 'poly', 'rbf', 'sigmoid' or 'precomputed':

  0 – linear: u'v

  1 – polynomial: (gamma*u'*v + coef0)^degree

  2 – RBF: exp(-gamma*|u-v|^2)

  3 – sigmoid: tanh(gamma*u'*v + coef0)

- degree: degree of the polynomial ('poly') kernel. Default is 3; ignored by all other kernels.

- gamma: kernel coefficient for 'rbf', 'poly' and 'sigmoid'. Default is 'auto', which uses 1/n_features.

- coef0: the independent term of the kernel function; only significant for 'poly' and 'sigmoid'.

- probability: whether to enable probability estimates. Default is False.

- shrinking: whether to use the shrinking heuristic. Default is True.

- tol: tolerance for the stopping criterion. Default is 1e-3.

- cache_size: size of the kernel cache (in MB). Default is 200.

- class_weight: class weights, passed as a dict. Sets the parameter C of class i to weight*C (the C of C-SVC).

- verbose: enable verbose output.

- max_iter: maximum number of iterations; -1 means no limit.

- decision_function_shape: 'ovo', 'ovr' or None. Default is None.

- random_state: the seed used when shuffling the data, an int.

The parameters you will mainly tune are C, kernel, degree, gamma, and coef0.
 

So when we use the SVM in sklearn, we need to choose these parameters well. For example:

from sklearn import svm
X = [[0, 0], [2, 2]]
y = [0.5, 2.5]
clf = svm.SVR()
clf.fit(X, y)
# fit() returns the fitted estimator:
# SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1,
#     gamma='auto_deprecated', kernel='rbf', max_iter=-1, shrinking=True,
#     tol=0.001, verbose=False)
clf.predict([[1, 1]])
# array([1.5])

So how do we actually use an optimization algorithm here? OK, let's get started!

Taking GA (the genetic algorithm) as an example, the code is as follows.

ObjFunction.py — this is where you define your fitness function, which serves as the objective function.

from sklearn import svm


def msefunc(predictval, realval):
    '''Mean squared error between predicted and real values.'''
    squaredError = []
    for i in range(len(predictval)):
        val = predictval[i] - realval[i]
        squaredError.append(val * val)  # squared difference between target and prediction

    print("Square Error: ", squaredError)
    print("MSE = ", sum(squaredError) / len(squaredError))  # mean squared error (MSE)
    return sum(squaredError) / len(squaredError)


def SVMResult(vardim, x, bound):
    '''Train an SVR with the parameters encoded in chromosome x and return its fitness.'''
    X = [[0, 0], [2, 2]]
    y = [0.5, 2.5]
    c = x[0]  # penalty parameter C
    e = x[1]  # epsilon
    g = x[2]  # RBF kernel width gamma
    clf = svm.SVR(C=c, epsilon=e, gamma=g, kernel='rbf')
    clf.fit(X, y)
    predictval = clf.predict(X)  # predict on the training data and compare against y
    print(predictval)
    # The GA below maximizes fitness while the MSE should be minimized,
    # so the inverted MSE is returned as the fitness value.
    return 1.0 / (1.0 + msefunc(predictval, y))
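
As a quick sanity check, SVMResult can be called directly with a hand-picked parameter vector before wiring it into the GA; the values below are arbitrary assumptions, used only to show the call signature:

import numpy as np
import ObjFunction

x = np.array([1.0, 0.1, 0.5])  # a candidate chromosome: [C, epsilon, gamma]
bound = np.array([[0.01, 0.001, 0.001], [100.0, 1.0, 10.0]])  # assumed search ranges
print("fitness =", ObjFunction.SVMResult(3, x, bound))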
    

GAIndividual.py

import numpy as np
import ObjFunction


class GAIndividual:

    '''
    individual of genetic algorithm
    '''

    def __init__(self,  vardim, bound):
        '''
        vardim: dimension of variables
        bound: boundaries of variables
        '''
        self.vardim = vardim
        self.bound = bound
        self.fitness = 0.

    def generate(self):
        '''
        generate a random chromosome for genetic algorithm
        '''
        dim = self.vardim
        rnd = np.random.random(size=dim)
        self.chrom = np.zeros(dim)
        for i in range(0, dim):
            self.chrom[i] = self.bound[0, i] + \
                (self.bound[1, i] - self.bound[0, i]) * rnd[i]

    def calculateFitness(self):
        '''
        calculate the fitness of the chromosome
        '''
        self.fitness = ObjFunction.SVMResult(
            self.vardim, self.chrom, self.bound)
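
The bound array is expected to have shape (2, vardim): the first row holds the lower bounds and the second row the upper bounds of each variable. A small sketch of creating and evaluating one individual, using assumed ranges for [C, epsilon, gamma]:

import numpy as np
from GAIndividual import GAIndividual

bound = np.array([[0.01, 0.001, 0.001], [100.0, 1.0, 10.0]])  # assumed parameter ranges
ind = GAIndividual(3, bound)
ind.generate()            # random chromosome within the bounds
ind.calculateFitness()    # trains an SVR and stores the inverted MSE as fitness
print(ind.chrom, ind.fitness)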

GeneticAlgorithm.py

import numpy as np
from GAIndividual import GAIndividual
import random
import copy
import matplotlib.pyplot as plt


class GeneticAlgorithm:

    '''
    The class for genetic algorithm
    '''

    def __init__(self, sizepop, vardim, bound, MAXGEN, params):
        '''
        sizepop: population size
        vardim: dimension of variables
        bound: boundaries of variables
        MAXGEN: termination condition
        params: required algorithm parameters, a list consisting of [crossover rate, mutation rate, alpha]
        '''
        self.sizepop = sizepop
        self.MAXGEN = MAXGEN
        self.vardim = vardim
        self.bound = bound
        self.population = []
        self.fitness = np.zeros((self.sizepop, 1))
        self.trace = np.zeros((self.MAXGEN, 2))
        self.params = params

    def initialize(self):
        '''
        initialize the population
        '''
        for i in range(0, self.sizepop):
            ind = GAIndividual(self.vardim, self.bound)
            ind.generate()
            self.population.append(ind)

    def evaluate(self):
        '''
        evaluation of the population fitnesses
        '''
        for i in range(0, self.sizepop):
            self.population[i].calculateFitness()
            self.fitness[i] = self.population[i].fitness

    def solve(self):
        '''
        evolution process of genetic algorithm
        '''
        self.t = 0
        self.initialize()
        self.evaluate()
        best = np.max(self.fitness)
        bestIndex = np.argmax(self.fitness)
        self.best = copy.deepcopy(self.population[bestIndex])
        self.avefitness = np.mean(self.fitness)
        self.trace[self.t, 0] = (1 - self.best.fitness) / self.best.fitness
        self.trace[self.t, 1] = (1 - self.avefitness) / self.avefitness
        print("Generation %d: optimal function value is: %f; average function value is %f" % (
            self.t, self.trace[self.t, 0], self.trace[self.t, 1]))
        while (self.t < self.MAXGEN - 1):
            self.t += 1
            self.selectionOperation()
            self.crossoverOperation()
            self.mutationOperation()
            self.evaluate()
            best = np.max(self.fitness)
            bestIndex = np.argmax(self.fitness)
            if best > self.best.fitness:
                self.best = copy.deepcopy(self.population[bestIndex])
            self.avefitness = np.mean(self.fitness)
            self.trace[self.t, 0] = (1 - self.best.fitness) / self.best.fitness
            self.trace[self.t, 1] = (1 - self.avefitness) / self.avefitness
            print("Generation %d: optimal function value is: %f; average function value is %f" % (
                self.t, self.trace[self.t, 0], self.trace[self.t, 1]))

        print("Optimal function value is: %f; " %
              self.trace[self.t, 0])
        print "Optimal solution is:"
        print self.best.chrom
        self.printResult()

    def selectionOperation(self):
        '''
        selection operation for Genetic Algorithm
        '''
        newpop = []
        totalFitness = np.sum(self.fitness)
        accuFitness = np.zeros((self.sizepop, 1))

        sum1 = 0.
        for i in range(0, self.sizepop):
            accuFitness[i] = sum1 + self.fitness[i] / totalFitness
            sum1 = accuFitness[i]

        for i in range(0, self.sizepop):
            r = random.random()
            idx = 0
            for j in range(0, self.sizepop - 1):
                if j == 0 and r < accuFitness[j]:
                    idx = 0
                    break
                elif r >= accuFitness[j] and r < accuFitness[j + 1]:
                    idx = j + 1
                    break
            newpop.append(self.population[idx])
        self.population = newpop

    def crossoverOperation(self):
        '''
        crossover operation for genetic algorithm
        '''
        newpop = []
        for i in range(0, self.sizepop, 2):
            idx1 = random.randint(0, self.sizepop - 1)
            idx2 = random.randint(0, self.sizepop - 1)
            while idx2 == idx1:
                idx2 = random.randint(0, self.sizepop - 1)
            newpop.append(copy.deepcopy(self.population[idx1]))
            newpop.append(copy.deepcopy(self.population[idx2]))
            r = random.random()
            if r < self.params[0]:
                crossPos = random.randint(1, self.vardim - 1)
                for j in range(crossPos, self.vardim):
                    # keep the original gene values so that both children
                    # are mixed from the two parents' genes
                    gene1 = newpop[i].chrom[j]
                    gene2 = newpop[i + 1].chrom[j]
                    newpop[i].chrom[j] = gene1 * self.params[2] + \
                        (1 - self.params[2]) * gene2
                    newpop[i + 1].chrom[j] = gene2 * self.params[2] + \
                        (1 - self.params[2]) * gene1
        self.population = newpop

    def mutationOperation(self):
        '''
        mutation operation for genetic algorithm
        '''
        newpop = []
        for i in range(0, self.sizepop):
            newpop.append(copy.deepcopy(self.population[i]))
            r = random.random()
            if r < self.params[1]:
                mutatePos = random.randint(0, self.vardim - 1)
                theta = random.random()
                if theta > 0.5:
                    newpop[i].chrom[mutatePos] = newpop[i].chrom[
                        mutatePos] - (newpop[i].chrom[mutatePos] - self.bound[0, mutatePos]) * (1 - random.random() ** (1 - self.t / self.MAXGEN))
                else:
                    newpop[i].chrom[mutatePos] = newpop[i].chrom[
                        mutatePos] + (self.bound[1, mutatePos] - newpop[i].chrom[mutatePos]) * (1 - random.random() ** (1 - self.t / self.MAXGEN))
        self.population = newpop

    def printResult(self):
        '''
        plot the result of the genetic algorithm
        '''
        x = np.arange(0, self.MAXGEN)
        y1 = self.trace[:, 0]
        y2 = self.trace[:, 1]
        plt.plot(x, y1, 'r', label='optimal value')
        plt.plot(x, y2, 'g', label='average value')
        plt.xlabel("Iteration")
        plt.ylabel("function value")
        plt.title("Genetic algorithm for function optimization")
        plt.legend()
        plt.show()

if __name__ == "__main__":

    # The three variables being optimized are the SVR parameters [C, epsilon, gamma].
    # The original bounds of [-600, 600] are invalid for an SVR (C, epsilon and gamma
    # must be positive), so positive search ranges are assumed here:
    # row 0 = lower bounds, row 1 = upper bounds.
    bound = np.array([[0.01, 0.001, 0.001], [100.0, 1.0, 10.0]])
    ga = GeneticAlgorithm(60, 3, bound, 1000, [0.9, 0.1, 0.5])
    ga.solve()
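
Once ga.solve() has finished, the best chromosome holds the optimized parameter values, which can then be used to train the final model. A minimal sketch (assuming ga from the main block above has already been solved, and reusing the toy data from ObjFunction.py):

from sklearn import svm

# ga.best is the best individual found by solve(); its chromosome is [C, epsilon, gamma].
best_c, best_e, best_g = ga.best.chrom
final_model = svm.SVR(C=best_c, epsilon=best_e, gamma=best_g, kernel='rbf')
final_model.fit([[0, 0], [2, 2]], [0.5, 2.5])
print(final_model.predict([[1, 1]]))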

Put simply, when the GA runs, each fitness evaluation treats the gene positions as the values of the three parameters. The algorithm evaluates the relation between parameter values and fitness, searches the solution space, and returns the parameters with the best fitness, which are the optimal parameters of your model. The quality of the optimization therefore directly affects the predictive performance of the model. Each run may of course give a slightly different result, but it will lie near the optimum (as long as the search does not get trapped in a local optimum).
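
Since both the chromosome initialization and the GA operators rely on random numbers, results will differ between runs. If you want a run to be reproducible, one simple option is to fix the seeds of both random number generators before calling ga.solve() (this is not part of the original code):

import random
import numpy as np

# Fix the seeds of the two generators used by the GA code so that
# repeated runs produce the same evolution trace.
random.seed(42)
np.random.seed(42)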

 

 

References: http://www.cnblogs.com/biaoyu/p/4857881.html

https://scikit-learn.org/stable/modules/svm.html#regression