Hadoop Final Assignment
阿新 • Published: 2018-05-25
1. Use Hive to run a word-frequency count on the text file produced by the crawler assignment (or on the English novel downloaded for the English word-frequency exercise).
1. Start Hadoop
2. Create a directory on HDFS and check that it exists
3. Upload the English text for the word count to HDFS
1. Put every word of the English article into a list and record the list's length;
2. Traverse the list, count the occurrences of each word, and store the results in a dictionary;
3. Using the list length obtained in step 1, compute each word's frequency and store the results in a frequency dictionary;
4. Sort the dictionary by its values and output the result (slicing can also be used to output just the few words with the highest or lowest frequency, because after processing with sorted(), each word and its frequency are stored in a tuple, and the tuples together form a list).
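The sorting step above relies on sorted() turning the dictionary into a list of (word, frequency) tuples; a minimal sketch of that step, using made-up sample counts rather than the novel's real data:

```python
# Sample word-count dictionary (made-up data for illustration)
counts = {'the': 120, 'skin': 33, 'magic': 41, 'of': 98, 'a': 75}
total = sum(counts.values())

# Convert raw counts to frequencies
freqs = {word: n / total for word, n in counts.items()}

# sorted() over dict.items() returns a list of (word, frequency) tuples
# ordered by the value (e[1]); slicing [-2:] keeps the two most frequent
top_two = sorted(freqs.items(), key=lambda e: e[1])[-2:]
print(top_two)  # the two highest-frequency words, lowest first
```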
```python
fin = open('The_Magic_Skin _Honore_de_Balzac.txt')  # the txt is up to you
lines = fin.readlines()
fin.close()

'''transform the article into word list'''
def words_list():
    chardigit = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 '
    all_lines = ''
    for line in lines:
        one_line = ''
        for ch in line:
            if ch in chardigit:
                one_line = one_line + ch
        all_lines = all_lines + one_line
    return all_lines.split()

'''calculate the total number of article list
   s is the article list'''
def total_num(s):
    return len(s)

'''calculate the occurrence times of every word
   t is the article list'''
def word_dic(t):
    fre_dic = dict()
    for i in range(len(t)):
        fre_dic[t[i]] = fre_dic.get(t[i], 0) + 1
    return fre_dic

'''calculate the frequency of every word
   w is dictionary of the occurrence times of every word'''
def word_fre(w):
    for key in w:
        w[key] = w[key] / total
    return w

'''sort the dictionary
   v is the frequency of words'''
def word_sort(v):
    sort_dic = sorted(v.items(), key=lambda e: e[1])
    return sort_dic

'''This is the entrance of functions
   output is the ten words with the largest frequency'''
total = total_num(words_list())
print(word_sort(word_fre(word_dic(words_list())))[-10:])
```
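The character filter inside words_list() can be exercised on an inline sample string, without the novel's TXT file; a small self-contained sketch of the same filtering logic:

```python
CHARDIGIT = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 '

def clean_words(text):
    # Keep only letters, digits and spaces, then split into words,
    # mirroring the loop inside words_list() above
    kept = ''.join(ch for ch in text if ch in CHARDIGIT)
    return kept.split()

sample = "The Magic Skin, by Honore de Balzac -- chapter 1."
print(clean_words(sample))
```

Note that punctuation is simply deleted rather than replaced with a space, so a hyphenated word like "well-known" would collapse into "wellknown"; that is a limitation of the original approach as well.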
The English novel's TXT file is shown below
A screenshot of the word-frequency result is shown below
2. Use Hive to analyze the csv file produced by the crawler assignment, and write a blog post describing your analysis process and results.
The csv file, opened in Numbers:
"Header1","Header2","Header3" "Data1","Data2","Data3" "Data1","Data2","Data3"
All of the data is read into the datas array:
```python
def main(input_file_path):
    input_file = open(input_file_path)
    # skip header
    input_file.readline()
    datas = []
    key_index_table = {"Header1": 0, "Header2": 1, "Header3": 2}
    count = 0
    max_count = 10000
    while count < max_count:
        count += 1
        line = input_file.readline()
        if not line:
            break
        values = line.strip().split(",")
        data = {}
        for key in key_index_table:
            value = values[key_index_table[key]]
            value = value.strip('"')  # remove quotation marks
            data[key] = int(value)
        datas.append(data)
    drawTable(datas)  # drawTable is defined elsewhere in the original post
```
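The hand-rolled quote stripping above breaks as soon as a field itself contains a comma; Python's standard csv module handles the quoting for you. A sketch of an equivalent reader under that assumption (the real file path and the drawTable plotting step from the original post are omitted):

```python
import csv
import io

def read_rows(fileobj, max_count=10000):
    """Read at most max_count data rows into a list of dicts,
    keyed by the header row (same shape as `datas` above)."""
    reader = csv.DictReader(fileobj)
    rows = []
    for i, row in enumerate(reader):
        if i >= max_count:
            break
        rows.append({k: int(v) for k, v in row.items()})
    return rows

# Quick check on an in-memory sample instead of the real csv file
sample = io.StringIO('"A","B","C"\n"1","2","3"\n"4","5","6"\n')
print(read_rows(sample))  # [{'A': 1, 'B': 2, 'C': 3}, {'A': 4, 'B': 5, 'C': 6}]
```

DictReader also strips the newline and the surrounding quotes, so none of the manual slicing is needed.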
A screenshot of the analysis is shown below
From here, you can run the analysis in whatever direction interests you.