
Clearing Up Tricky Points When Using sklearn's SVM for Multiclass Classification

1. Parameter: decision_function_shape
Two schemes: one-vs-one (ovo) or one-vs-rest (ovr). The sklearn docstring reads:

decision_function_shape : 'ovo', 'ovr' or None, default=None. Whether to return a one-vs-rest ('ovr') decision function of shape (n_samples, n_classes) as all other classifiers, or the original one-vs-one ('ovo') decision function of libsvm which has shape (n_samples, n_classes * (n_classes - 1) / 2). The default of None will currently behave as 'ovo' for backward compatibility and raise a deprecation warning, but will change to 'ovr' in 0.19. New in version 0.17: decision_function_shape='ovr' is recommended. Changed in version 0.17: Deprecated decision_function_shape='ovo' and None.
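To see the shape difference concretely, here is a minimal sketch (my own toy example, not from the docs). It uses a synthetic 4-class problem, because with 3 classes the two shapes coincide (3 * 2 / 2 == 3):

# Sketch: compare the two decision_function shapes on a 4-class toy problem
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=4, random_state=0)

clf_ovr = SVC(decision_function_shape='ovr').fit(X, y)
print(clf_ovr.decision_function(X).shape)  # (200, 4): n_classes columns

clf_ovo = SVC(decision_function_shape='ovo').fit(X, y)
print(clf_ovo.decision_function(X).shape)  # (200, 6): n_classes*(n_classes-1)/2 columns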

2. Attributes

support_: returns the indices of the support vectors within the training data (of little use on its own, because the indices are simply sorted in ascending order and do not reveal which class each support vector belongs to).

support_vectors_: returns the support vectors themselves, grouped class by class.

n_support_: the number of support vectors per class; use it together with support_vectors_. For example, if it returns [3, 4, 5], then the first 3 rows of support_vectors_ are the SVs of the first class, rows 4 through 7 are the SVs of the second class, and the last 5 are the SVs of the third class.

dual_coef_: the coefficients in front of the kernel terms, i.e. the products a_n * t_n in the formula below.

intercept_: the b in the formula below. (A runnable demo of all these attributes follows the formula.)

y(x) = Σ_n a_n · t_n · k(x, x_n) + b        (the SVM dual-form decision function; see PRML §7.1)
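To make the attribute layout concrete, here is a minimal sketch on a synthetic 3-class problem (the exact counts in the comments are illustrative, not guaranteed):

# Sketch: inspect the fitted attributes on a 3-class toy problem
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=150, centers=3, random_state=0)
clf = SVC(kernel='rbf', gamma=0.5, decision_function_shape='ovo').fit(X, y)

print(clf.n_support_)              # SVs per class, e.g. [3 4 5]
print(clf.support_)                # indices of the SVs in X, ascending
print(clf.support_vectors_.shape)  # (sum(n_support_), n_features), grouped by class
print(clf.dual_coef_.shape)        # (n_classes - 1, sum(n_support_)): the a_n * t_n values
print(clf.intercept_.shape)        # (n_classes * (n_classes - 1) / 2,): one b per pairwise classifier

# class boundaries inside support_vectors_, recovered from n_support_
print(np.cumsum(clf.n_support_))   # e.g. [3 7 12] for n_support_ == [3 4 5]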

3. Methods
decision_function: returns each sample's distance to each (pairwise) classifier's decision boundary. Note that when the classes overlap, a support vector's distance to its corresponding classifier is not necessarily 1.


For details, see Bishop, PRML, pp. 333-334.

Another note: in this model, support vectors are associated with classes, not with pairwise classifiers. That is, if a sample is a support vector of a class, it necessarily lies on that class's margin boundary or is an outlier; but for a given pairwise classifier, you cannot tell which of the classifier's two classes the support vector came from. The code below replicates clf.decision_function and clf.predict by hand, which makes the layout of these attributes explicit:

# Hand-rolled replication of sklearn's ovo decision function and predict.
# Only the linear and rbf kernels are implemented.
# sv: support vectors (support_vectors_ above)   nv: n_support_ above
# a:  dual_coef_ above                           b:  intercept_ above
import math
import numpy as np

def kernel(params, sv, X):
    if params.kernel == 'linear':
        return [np.dot(vi, X) for vi in sv]
    elif params.kernel == 'rbf':
        return [math.exp(-params.gamma * np.dot(vi - X, vi - X)) for vi in sv]

# This replicates clf.decision_function(X)
def decision_function(params, sv, nv, a, b, X):
    # calculate the kernels
    k = kernel(params, sv, X)

    # define the start and end index for support vectors for each class
    start = [sum(nv[:i]) for i in range(len(nv))]
    end = [start[i] + nv[i] for i in range(len(nv))]

    # for each pair of classes (i, j), sum a_p * k(x_p, x) over both
    # classes' support vectors: in dual_coef_, row i holds the class-j
    # SVs' coefficients vs. class i, and row j-1 holds the class-i
    # SVs' coefficients vs. class j
    c = [ sum(a[ i ][p] * k[p] for p in range(start[j], end[j])) +
          sum(a[j-1][p] * k[p] for p in range(start[i], end[i]))
                for i in range(len(nv)) for j in range(i+1,len(nv))]

    # add the intercept
    return [sum(x) for x in zip(c, b)]

# This replicates clf.predict(X)
def predict(params, sv, nv, a, b, cs, X):
    ''' params = model parameters
        sv = support vectors
        nv = # of support vectors per class
        a  = dual coefficients
        b  = intercepts
        cs = list of class labels
        X  = feature vector to classify
    '''
    decision = decision_function(params, sv, nv, a, b, X)
    # one vote per pairwise classifier (i, j), in the same ovo column order
    votes = [(i if decision[p] > 0 else j) for p,(i,j) in enumerate((i,j)
                                           for i in range(len(cs))
                                           for j in range(i+1,len(cs)))]

    # majority vote across all pairwise classifiers
    return cs[max(set(votes), key=votes.count)]
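A usage sketch (my own check, under the same toy setup as above): gamma is fixed to a float so the fitted clf can be passed directly as params, since kernel() only reads params.kernel and params.gamma:

# Sketch: verify the hand-rolled functions against sklearn itself
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=150, centers=3, random_state=0)
clf = SVC(kernel='rbf', gamma=0.5, decision_function_shape='ovo').fit(X, y)

x = X[0]
mine = decision_function(clf, clf.support_vectors_, clf.n_support_,
                         clf.dual_coef_, clf.intercept_, x)
print(np.allclose(mine, clf.decision_function([x])[0]))  # expect: True

label = predict(clf, clf.support_vectors_, clf.n_support_,
                clf.dual_coef_, clf.intercept_, clf.classes_, x)
print(label == clf.predict([x])[0])  # expect: True (up to tie-breaking)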

I spent the better part of the last two weeks puzzling over this. The main reason was that I could not understand why decision_function so rarely returns a distance of exactly 1 for a support vector. The root cause was an incomplete understanding of the model: I had overlooked that when the classes overlap, outlier samples are support vectors too.

I hope this helps a little.
That's all.