Metadata and Data Governance | Submitting Remote Hadoop MapReduce Jobs from IntelliJ IDEA (Part 8)
1. Create an empty Maven project in IntelliJ
Just click Next through the wizard.
2. Configure dependencies
Edit the pom.xml file to add the Apache repository and the Hadoop dependencies:
The basic dependencies are hadoop-core and hadoop-common;
to read and write HDFS, you also need hadoop-hdfs and hadoop-client;
if you need to read and write HBase, add hbase-client as well (not included in the pom snippet below; see the optional sketch after it).
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<name>hadoop</name>
<url>http://maven.apache.org</url>
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.8.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.8.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.8.1</version>
    </dependency>
</dependencies>
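If you do need HBase access, a sketch of the extra dependency would look like the following; the version here is illustrative, so pick one compatible with your cluster:

<!-- Optional: only needed for reading/writing HBase; version is illustrative -->
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.6</version>
</dependency>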
3. Add core-site.xml to the resources folder
Copy the core-site.xml file from /etc/hadoop under the Hadoop installation on your virtual machine into the project's resources folder.
Note that master is a hostname mapped to my VM's IP address; if you have not set up that mapping in your hosts file, put your VM's IP address here instead.
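For reference, a minimal core-site.xml sketch, assuming the NameNode listens on master:9000 (the file you copy from the VM will contain your cluster's actual values):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>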
4. Write a WordCount class
WordCount.java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pitfall: this local cache directory must exist on the submitting machine.
        conf.set("mapreduce.cluster.local.dir", "/Users/CHOUKIN/hadoop/var");
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Note the line conf.set("mapreduce.cluster.local.dir", "/Users/CHOUKIN/hadoop/var"); — there is a pitfall here: a local cache directory needs to be created on the machine submitting the job.
If this local cache directory does not exist, the job fails with an error.
Looking up the description of this parameter in mapred-default.xml in the official Hadoop docs:
mapreduce.cluster.local.dir:
The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored.
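For example, to spread intermediate files across two disks, you could set the property like this (the paths are purely illustrative):

// Hypothetical local paths; each directory must already exist on the submitting machine.
conf.set("mapreduce.cluster.local.dir", "/data1/mr-local,/data2/mr-local");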
5. Configure run parameters
In the IntelliJ menu bar, choose Run -> Edit Configurations, click + in the dialog that appears, and create a new Application configuration. Set Main class to WordCount (you can click the ... button on the right to pick it),
and add the input and output paths to Program arguments; remember to replace the IP address with your own VM's IP address.
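For instance, assuming the core-site.xml from step 3 points at hdfs://master:9000, the Program arguments field might look like this (the paths are illustrative):

hdfs://master:9000/input/test.txt hdfs://master:9000/output

These two values become args[0] and args[1] in WordCount.main: the input path and the output path, respectively.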
6. Run the program
I copied a model English essay into test.txt and ran the job; the output lists each word with its count.
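The output format is one word per line, followed by a tab and its count, sorted by key. A hypothetical fragment (the actual words and counts depend on your test.txt):

English	3
a	7
essay	2
the	12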
Before each run, check whether the output folder already exists on HDFS; if it does, delete it, because MapReduce refuses to write to an existing output directory.
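One way to remove it from the command line on the VM, assuming the output path used above:

hdfs dfs -rm -r /output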
Author: Chowkin
Link: https://www.jianshu.com/p/41569d558fde
Source: Jianshu