
Kaggle beginner competition: a solution write-up for Bag of Words Meets Bags of Popcorn using a random forest

  This Kaggle competition uses word2vec to turn sentences into word vectors for sentiment analysis, deciding whether a review is positive or negative.

  This write-up is adapted from revanth's Kaggle kernel: https://www.kaggle.com/revanthrex/sentiment-analysis-with-word2vec, with the code modified to tune the word2vec parameters.

  First, look at the tsv file to see what it contains:

id	review
"9999_0"	"Watching Time Chasers, it obvious that it was made by a bunch of friends. Maybe they were sitting around one day in film school and said, \"Hey, let's pool our money together and make a really bad movie!\" Or something like that. What ever they said, they still ended up making a really bad movie--dull story, bad script, lame acting, poor cinematography, bottom of the barrel stock music, etc. All corners were cut, except the one that would have prevented this film's release. Life's like that."
"45057_0"	"I saw this film about 20 years ago and remember it as being particularly nasty. I believe it is based on a true incident: a young man breaks into a nurses' home and rapes, tortures and kills various women.<br /><br />It is in black and white but saves the colour for one shocking shot.<br /><br />At the end the film seems to be trying to make some political statement but it just comes across as confused and obscene.<br /><br />Avoid."

  The review text contains HTML markup; the tags will be stripped later.

  Load the files (the tsv files are unzipped beforehand):

import pandas as pd
import sys

DIR=''
# Read data from files 
train = pd.read_csv( DIR+"labeledTrainData.tsv", header=0, 
 delimiter="\t", quoting=3 )

test = pd.read_csv( DIR+"testData.tsv", header=0, delimiter="\t", quoting=3 )
unlabeled_train = pd.read_csv( DIR+"unlabeledTrainData.tsv", header=0,
 delimiter="\t", quoting=3 )

# Verify the number of reviews that were read (100,000 in total)
print( "Read %d labeled train reviews, %d labeled test reviews, " \
 "and %d unlabeled reviews\n" % (train["review"].size,
 test["review"].size, unlabeled_train["review"].size ))
Loading the data files

  A function that splits a piece of text into a list of words: review_to_wordlist

# Import various modules for string cleaning
from bs4 import BeautifulSoup
import re
from nltk.corpus import stopwords

def review_to_wordlist( review, remove_stopwords=False ):
    # Function to convert a document to a sequence of words,
    # optionally removing stop words.  Returns a list of words.
    
    # 1. Remove HTML
    review_text = BeautifulSoup(review, "html.parser").get_text()
    
    # 2. Remove non-letters
    review_text = re.sub("[^a-zA-Z]"," ", review_text)
    
    # 3. Convert words to lower case and split them
    words = review_text.lower().split()
    
    # 4. Optionally remove stop words (false by default)
    if remove_stopwords:
        stops = set(stopwords.words("english"))
        words = [w for w in words if not w in stops]
    
    # 5. Return a list of words
    return(words)
Splitting text into words

  A function that uses a tokenizer to split a paragraph into sentences: review_to_sentences

# Download the punkt tokenizer for sentence splitting
import nltk.data
import nltk
nltk.download('punkt')  # fetch the punkt model if it is not already installed

# Load the punkt tokenizer
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

# Define a function to split a review into parsed sentences
def review_to_sentences( review, tokenizer, remove_stopwords=False ):
    # Function to split a review into parsed sentences. Returns a list of sentences, where each sentence is a list of words
    
    # 1. Use the NLTK tokenizer to split the paragraph into sentences
    raw_sentences = tokenizer.tokenize(review.strip())
    
    # 2. Loop over each sentence
    sentences = []
    for raw_sentence in raw_sentences:
        # If a sentence is empty, skip it
        if len(raw_sentence) > 0:
            # Otherwise, call review_to_wordlist to get a list of words
            sentences.append( review_to_wordlist( raw_sentence,remove_stopwords ))
    
    # Return the list of sentences (each sentence is a list of words, so this returns a list of lists)
    return sentences
#

sentences = []  # Initialize an empty list of sentences

print ("Parsing sentences from training set")
for review in train["review"]:
    sentences += review_to_sentences(review, tokenizer)

print ("Parsing sentences from unlabeled set")
for review in unlabeled_train["review"]:
    sentences += review_to_sentences(review, tokenizer)
#
Splitting paragraphs into sentences

  Logging and word2vec parameters. The parameters of the word2vec.Word2Vec constructor mean the following:

    size: the dimensionality of the word vectors. Larger values capture word meaning more precisely, but are also more prone to overfitting;

    window: the maximum span between words in a sentence that still counts as context. For example, in "the quick brown fox jumps over a lazy dog.", with window set to 5, quick and dog (a span of 7) will not be paired;

    min_count: the minimum word frequency. Words that occur fewer times than this are left out of the vocabulary;

    sample: the downsampling threshold for high-frequency words; the lower it is set, the more aggressively frequent words are downsampled

(high-frequency articles such as a and the carry little meaning, and downsampling keeps them from dominating the training; but some frequent words, such as back and up, are meaningful, so tune this according to how word frequency matches the context)

    workers: the number of worker threads to run in parallel.

# Import the built-in logging module and configure it so that Word2Vec 
# creates nice output messages
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',\
    level=logging.INFO)
#


# Set values for various parameters
num_features = int(sys.argv[1])    # Word vector dimensionality                      
min_word_count = int(sys.argv[2])   # Minimum word count                        
num_workers = 4       # Number of threads to run in parallel
context = 10          # Context window size                                                                                    
downsampling = float(sys.argv[3])   # Downsample setting for frequent words

# Initialize and train the model (this will take some time)
from gensim.models import word2vec
print ("Training model...")

model = word2vec.Word2Vec(sentences, workers=num_workers, \
            size=num_features, min_count = min_word_count, \
            window = context, sample = downsampling)
Logging and word2vec initialization

  model.doesnt_match and model.most_similar are both methods for checking what the trained model has learned.

  doesnt_match takes a list of words and picks out the one that does not belong with the others. For example, given [man, woman, child, kitchen], only kitchen is far removed from the other three, so it is judged to be the odd one out.

  most_similar finds the words in the model's vocabulary that are closest in usage and meaning to the given word, together with a similarity score. For example, passing in queen returns the following:

[('princess', 0.6779699325561523),
 ('bride', 0.6370287537574768),
 ('belle', 0.5911383628845215),
 ('eva', 0.5903465747833252),
 ('mistress', 0.5865148305892944),
 ('latifah', 0.5846465229988098),
 ('victoria', 0.577500581741333),
 ('showgirl', 0.5712460279464722),
 ('maid', 0.5661402344703674),
 ('madame', 0.559766411781311)]

The returned words (princess, bride, belle, mistress, and so on) are all terms close to queen in usage; the smaller the similarity score on the right, the less the word matches queen in meaning and context.
Words similar to queen
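  A quick sketch of how these checks can be invoked (the exact calls are not shown in the post; newer gensim versions expose the same methods on model.wv):

# Sanity checks on the trained word2vec model
print( model.doesnt_match("man woman child kitchen".split()) )  # expected: 'kitchen'
print( model.most_similar("queen", topn=10) )  # (word, similarity) pairs like the list above
Sanity-checking the model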

  Word-vector models are usually finalized with init_sims to make them more memory-efficient:

# If you don't plan to train the model any further, calling 
# init_sims will make the model much more memory-efficient.
model.init_sims(replace=True)

# It can be helpful to create a meaningful model name and 
# save the model for later use. You can load it later using Word2Vec.load()
model_name = "300features_40minwords_10context"
model.save(model_name)

  The following code averages the word vectors within each training and test review:

import numpy as np  # Make sure that numpy is imported

def makeFeatureVec(words, model, num_features):
    # Function to average all of the word vectors in a given
    # paragraph
    #
    # Pre-initialize an empty numpy array (for speed)
    featureVec = np.zeros((num_features,),dtype="float32")
    #
    nwords = 0.
    # 
    # Index2word is a list that contains the names of the words in 
    # the model's vocabulary. Convert it to a set, for speed 
    index2word_set = set(model.wv.index2word)
    #
    # Loop over each word in the review and, if it is in the model's
    # vocaublary, add its feature vector to the total
    for word in words:
        if word in index2word_set: 
            nwords = nwords + 1.
            featureVec = np.add(featureVec,model[word])
    # 
    # Divide the result by the number of words to get the average
    featureVec = np.divide(featureVec,nwords)
    return featureVec


def getAvgFeatureVecs(reviews, model, num_features):
    # Given a set of reviews (each one a list of words), calculate 
    # the average feature vector for each one and return a 2D numpy array 
    # 
    # Initialize a counter
    counter = 0
    # 
    # Preallocate a 2D numpy array, for speed
    reviewFeatureVecs = np.zeros((len(reviews),num_features),dtype="float32")
    # 
    # Loop through the reviews
    for review in reviews:
       #
       # Print a status message every 1000th review
        if counter%1000 == 0:
            print ("Review %d of %d" % (counter, len(reviews)))
        
       # Call the function (defined above) that makes average feature vectors
        reviewFeatureVecs[counter] = makeFeatureVec(review, model,num_features)
       #
       # Increment the counter
        counter = counter + 1
    return reviewFeatureVecs
#
# ****************************************************************
# Calculate average feature vectors for training and testing sets,
# using the functions we defined above. Notice that we now use stop word
# removal.

clean_train_reviews = []
for review in train["review"]:
    clean_train_reviews.append( review_to_wordlist( review,remove_stopwords=True ))

trainDataVecs = getAvgFeatureVecs( clean_train_reviews, model, num_features )

print("Creating average feature vecs for test reviews")

clean_test_reviews = []
for review in test["review"]:
    clean_test_reviews.append( review_to_wordlist( review, \
        remove_stopwords=True ))

testDataVecs = getAvgFeatureVecs( clean_test_reviews, model, num_features )

  trainDataVecs holds the averaged feature vectors for the training reviews, and testDataVecs holds those for the test reviews.

  Here a random forest with 100 trees is fitted and the predictions are written to Word2Vec_AverageVectors.csv. The result differs from a 99%-accuracy reference submission on 4243 reviews, giving an accuracy of about 83%.

# Fit a random forest to the training data, using 100 trees
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier( n_estimators = 100 )

print ("Fitting a random forest to labeled training data...")
forest = forest.fit( trainDataVecs, train["sentiment"] )

# Test & extract results 
result = forest.predict( testDataVecs )

# Write the test results 
output = pd.DataFrame( data={"id":test["id"], "sentiment":result} )
output.to_csv( "Word2Vec_AverageVectors.csv", index=False, quoting=3 )

  The second approach uses unsupervised learning to merge word vectors that are too close to each other, shrinking the vocabulary to one fifth of its original size.

  The method is to run K-means clustering with the number of clusters set to 1/5 of the number of word vectors:

from sklearn.cluster import KMeans
import time

start = time.time() # Start time

# Set "k" (num_clusters) to be 1/5th of the vocabulary size, or an
# average of 5 words per cluster
word_vectors = model.wv.syn0
num_clusters = int(word_vectors.shape[0] / 5)

if num_clusters <= 0:
    num_clusters = 1
#
print(word_vectors)
print(num_clusters)
# Initalize a k-means object and use it to extract centroids
kmeans_clustering = KMeans( n_clusters = num_clusters )
idx = kmeans_clustering.fit_predict( word_vectors )

# Get the end time and print how long the process took
end = time.time()
elapsed = end - start
print ("Time taken for K Means clustering: ", elapsed, "seconds.")
Pre-clustering the word vectors

  Next, map each vocabulary word to its cluster according to the clustering result, and print the words in the first 10 clusters:

# Create a Word / Index dictionary, mapping each vocabulary word to
# a cluster number                                                                                            
word_centroid_map = dict(zip( model.wv.index2word, idx ))

# For the first 10 clusters
for cluster in range(0,10):
    #
    # Print the cluster number  
    print( "\nCluster %d" % cluster)
    #
    # Find all of the words for that cluster number, and print them out
    words = []
    keys = list(word_centroid_map.keys())
    vals = list(word_centroid_map.values())
    for i in range(0, len(vals)):
        if vals[i] == cluster:
            words.append(keys[i])
    print(words)
#
def create_bag_of_centroids( wordlist, word_centroid_map ):
    #
    # The number of clusters is equal to the highest cluster index
    # in the word / centroid map
    num_centroids = max( word_centroid_map.values() ) + 1
    #
    # Pre-allocate the bag of centroids vector (for speed)
    bag_of_centroids = np.zeros( num_centroids, dtype="float32" )
    #
    # Loop over the words in the review. If the word is in the vocabulary,
    # find which cluster it belongs to, and increment that cluster count 
    # by one
    for word in wordlist:
        if word in word_centroid_map:
            index = word_centroid_map[word]
            bag_of_centroids[index] += 1
    #
    # Return the "bag of centroids"
    return bag_of_centroids
Mapping words to clusters

  The bag-of-centroids features obtained this way are then fed to a random forest, which improves accuracy slightly (84%).
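  A minimal sketch of this step (the original post does not show the code): build a bag-of-centroids matrix for the train and test reviews with create_bag_of_centroids, then fit the same 100-tree random forest as before. The output filename BagOfCentroids.csv and the variable names train_centroids/test_centroids are illustrative assumptions.

# Build bag-of-centroids features for every review (sketch, see note above)
train_centroids = np.zeros( (train["review"].size, num_clusters), dtype="float32" )
for i, review in enumerate(clean_train_reviews):
    train_centroids[i] = create_bag_of_centroids( review, word_centroid_map )

test_centroids = np.zeros( (test["review"].size, num_clusters), dtype="float32" )
for i, review in enumerate(clean_test_reviews):
    test_centroids[i] = create_bag_of_centroids( review, word_centroid_map )

# Fit a random forest on the cluster counts and write out the predictions
forest = RandomForestClassifier( n_estimators = 100 )
forest = forest.fit( train_centroids, train["sentiment"] )
result = forest.predict( test_centroids )

output = pd.DataFrame( data={"id":test["id"], "sentiment":result} )
output.to_csv( "BagOfCentroids.csv", index=False, quoting=3 )
Random forest on bag-of-centroids features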

  Varying size (vector dimensionality), min_count (minimum word frequency) and sample (downsampling of frequent words) in the logging-and-word2vec parameters and running cross validation, the accuracy did not change noticeably, staying between 83.512% and 84.732%.
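  The cross-validation procedure itself is not shown in the post; a minimal sketch of how such a check can be run on the averaged training vectors (the 5-fold split and the cv_forest name are assumptions) is:

from sklearn.model_selection import cross_val_score

# Score a 100-tree forest on the averaged word vectors; repeat this for each
# (size, min_count, sample) combination trained above to compare settings.
cv_forest = RandomForestClassifier( n_estimators = 100 )
scores = cross_val_score( cv_forest, trainDataVecs, train["sentiment"], cv=5 )
print( "CV accuracy: %.3f%% (+/- %.3f%%)" % (scores.mean()*100, scores.std()*100) )
Cross-validating a parameter setting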

  The reason is that the word2vec model itself carries some error, and a classical learner such as a random forest over averaged vectors also loses some information. A model trained on the IMDB reviews can judge movie reviews effectively, and using an RNN instead of these classical algorithms can also improve accuracy noticeably.