"Machine Learning" (Zhou Zhihua), Exercises 3.1–3.3: Personal Notes
阿新 • Published: 2019-01-25
3.1 Analyze under what circumstances the bias term b in Eq. (3.2) need not be considered.
Honestly, many exercises from Chapter 1 onward already stumped me, and in Chapter 2 I could only do the first two; now in Chapter 3 even the first question isn't entirely clear to me. My take: in f(x) = w'x + b, x is a d-dimensional vector and w is the corresponding weight vector, while b = b·x0 can be viewed as a weight b on an extra attribute x0 = 1. Clearly the constant 1 is linearly independent of the components xi of x, so f(x) actually operates in a (d+1)-dimensional space. When an attribute takes the same value on every example, it carries no information for classification, so setting its weight b to 0 amounts to dropping an attribute whose value is identical across all examples. (I have a feeling this isn't quite right, T_T)
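To make the idea above concrete (my own small sketch, not from the book): appending a constant attribute x0 = 1 lets the bias b be absorbed into an extended weight vector, so w'x + b and ŵ'x̂ give identical predictions. The variable names here (w_hat, x_hat) are illustrative only.

```python
import numpy as np

# Sketch: fold the bias b into the weight vector by appending
# a constant attribute x0 = 1 to every example.
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([0.3, 0.7])

w_hat = np.append(w, b)    # extended weights (w; b), now d+1 of them
x_hat = np.append(x, 1.0)  # extended example (x; 1)

# both forms compute the same value: 2*0.3 - 1*0.7 + 0.5 = 0.4
assert np.isclose(w @ x + b, w_hat @ x_hat)
print(w_hat @ x_hat)  # -> 0.4
```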
3.2 Prove that, with respect to the parameters w, the objective function (3.18) of logistic regression is non-convex, but its log-likelihood function (3.27) is convex.
Taking the second derivative of the objective function (3.18), the factor exp(w'x+b) − 1 can take either sign, so the second derivative is not guaranteed non-negative and (3.18) is non-convex. For the log-likelihood (3.27), the derivation via Eq. (3.31) shows its second derivative (the Hessian) is always positive semi-definite, hence (3.27) is convex.
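Spelling out the convexity step in the book's notation (β = (w; b), x̂ = (x; 1), and p₁(x̂; β) = p(y=1 | x̂; β)), the Hessian from Eq. (3.31) is

```latex
\frac{\partial^2 \ell(\beta)}{\partial \beta \,\partial \beta^{\mathrm{T}}}
  = \sum_{i=1}^{m} \hat{x}_i \hat{x}_i^{\mathrm{T}}\,
    p_1(\hat{x}_i;\beta)\bigl(1 - p_1(\hat{x}_i;\beta)\bigr)
```

Each outer product x̂ᵢx̂ᵢ' is positive semi-definite (for any v, v'x̂ᵢx̂ᵢ'v = (x̂ᵢ'v)² ≥ 0), and the scalar p₁(1 − p₁) lies in (0, 1/4], so the sum is positive semi-definite and ℓ(β) is convex.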
3.3 Implement logistic regression in code and report its results on watermelon dataset 3.0a.
# -*- coding: utf-8 -*-
import numpy as np
# the exercise is from p.69, problem 3.3
# the training dataset is from p.89, assigned to data (type=matrix).
# the set has 3 columns: density, sugar content, label;
# their indices are data[:,0], data[:,1], data[:,2] respectively.
# in this example, the number of attributes equals 2.
data = [[0.697,0.460,1],
[0.774,0.376,1],
[0.634,0.264,1],
[0.608,0.318,1],
[0.556,0.215,1],
[0.403,0.237,1],
[0.481,0.149,1],
[0.437,0.211,1],
[0.666,0.091,0],
[0.243,0.267,0],
[0.245,0.057,0],
[0.343,0.099,0],
[0.639,0.161,0],
[0.657,0.198,0],
[0.360,0.370,0],
[0.593,0.042,0],
[0.719,0.103,0]]
beta=np.array([1,1,1]).reshape((-1,1)) # initial beta is a column vector of [w1,w2,b]'
data = np.matrix(data)
beta = np.matrix(beta)
density, sugar, label = data[:,0], data[:,1], data[:,2]
# in the label column, 'good' is coded as 1 and 'bad' as 0
x = np.c_[density,sugar,np.ones(len(sugar))].T # initial x is a column vector of [x1,x2,1]'
def cal_l(beta,x,label):
    # compute l'(beta) and l''(beta) over the data, returned as l1 and l2
    l1, l2 = 0, np.mat(np.zeros((3,3)))
    for i in range(x.shape[1]):
        p1 = np.exp(beta.T*x[:,i])/(1+np.exp(beta.T*x[:,i]))  # p(y=1|x;beta)
        l1 += x[:,i] * (p1 - label[i])                   # eq. (3.30)
        l2 += x[:,i]*x[:,i].T * (p1*(1-p1))[0,0]         # eq. (3.31)
    return [l1,l2]
dist = 1  # distance between new_beta and beta
while dist >= 0.01:
    l1, l2 = cal_l(beta,x,label)  # compute the derivatives once per step
    new_beta = beta - l2.I * l1   # Newton's update, eq. (3.29)
    dist = np.linalg.norm(new_beta-beta)
    beta = new_beta
c = []  # save the logistic regression outputs p(y=1|x;beta)
for i in range(x.shape[1]):
    c.append((1/(1+np.exp(-beta.T*x[:,i])))[0,0])
print(beta)
print(c)
Result:
[[ 3.15832966]
[ 12.52119579]
[ -4.42886451]]
[0.97159134201182584, 0.93840796737854693, 0.7066382101828117, 0.81353420973519985,
0.50480582132703811, 0.45300555631425837, 0.26036934432276743, 0.39970315015130975,
0.23397722179395924, 0.42110689644219934, 0.050146188402258575, 0.10851898058397864,
0.40256730484729258, 0.53129773794877577, 0.79265049892320416, 0.11608022112650698,
0.29559934850614572]
Inspecting the result: of the 17 examples, 3 positive and 2 negative examples are misclassified (taking 0.5 as the threshold), giving an error rate of 5/17.
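As a cross-check of the run above, here is a sketch re-implementing the same Newton iteration with plain NumPy arrays instead of np.matrix (which NumPy now discourages); the variable names (raw, X, grad, hess, errors) are my own. With the same starting point and the same 0.01 stopping rule it reproduces the β printed above and counts the misclassifications at a 0.5 threshold:

```python
import numpy as np

# Watermelon dataset 3.0a (p.89): density, sugar content, label (1=good, 0=bad)
raw = np.array([
    [0.697,0.460,1],[0.774,0.376,1],[0.634,0.264,1],[0.608,0.318,1],
    [0.556,0.215,1],[0.403,0.237,1],[0.481,0.149,1],[0.437,0.211,1],
    [0.666,0.091,0],[0.243,0.267,0],[0.245,0.057,0],[0.343,0.099,0],
    [0.639,0.161,0],[0.657,0.198,0],[0.360,0.370,0],[0.593,0.042,0],
    [0.719,0.103,0]])
X = np.c_[raw[:, :2], np.ones(len(raw))]  # extended examples x_hat = (x1, x2, 1)
y = raw[:, 2]

beta = np.ones(3)                         # start from (1, 1, 1) as above
dist = 1.0
while dist >= 0.01:                       # same stopping rule as the listing
    p1 = 1 / (1 + np.exp(-X @ beta))      # p(y=1 | x; beta)
    grad = X.T @ (p1 - y)                 # l'(beta),  eq. (3.30)
    hess = (X * (p1 * (1 - p1))[:, None]).T @ X  # l''(beta), eq. (3.31)
    new_beta = beta - np.linalg.solve(hess, grad)
    dist = np.linalg.norm(new_beta - beta)
    beta = new_beta

pred = 1 / (1 + np.exp(-X @ beta)) > 0.5  # classify at threshold 0.5
errors = int((pred != y.astype(bool)).sum())
print(beta)    # roughly [ 3.158  12.521  -4.429 ]
print(errors)  # 5 misclassified examples, i.e. error rate 5/17
```

Using np.linalg.solve rather than explicitly inverting the Hessian is the usual choice for Newton steps; it avoids forming the inverse and is numerically more stable.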