
[Repost] MapReduce Programming (1): Configuring a MapReduce Development Environment in IntelliJ IDEA



Contents

  1. Software Environment
  2. Creating a Maven Project
  3. Adding Maven Dependencies
  4. Configuring log4j
  5. Starting Hadoop
  6. Running WordCount (Reading from a Local File)
  7. Running WordCount (Reading from HDFS)
  8. Code Download

This article describes how to set up a MapReduce development environment in IntelliJ IDEA by creating a Maven project.

1. Software Environment

The software versions I used:

  1. IntelliJ IDEA 2017.1
  2. Maven 3.3.9
  3. Hadoop in pseudo-distributed mode (see here for an installation tutorial)

2. Creating a Maven Project

Open IDEA and go to File -> New -> Project, then select Maven in the left panel. (If you only need to run MapReduce, a plain Java project is enough and "Create from archetype" can stay unchecked; check it if you want to create a web project or build from an archetype.)

Set the GroupId and ArtifactId, then click Next.
Set the project location, then click Next.
After clicking Finish, you get an empty project with the standard Maven layout (src/main/java, src/main/resources, src/test/java, and pom.xml).

3. Adding Maven Dependencies

Add the dependencies to pom.xml. For Hadoop 2.7.3, the required artifacts are:

  • hadoop-common
  • hadoop-hdfs
  • hadoop-mapreduce-client-core
  • hadoop-mapreduce-client-jobclient
  • log4j (for logging)

The dependencies in pom.xml:

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.3</version>
        </dependency>


        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.3</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.7.3</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
    </dependencies>
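Since four of the artifacts share the same version string, a Maven property can cut the repetition. A small sketch (the property name `hadoop.version` is just a convention, not required by Maven):

```xml
<properties>
    <hadoop.version>2.7.3</hadoop.version>
</properties>

<!-- then each Hadoop dependency references the property -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
</dependency>
```

This way, upgrading Hadoop later means changing one line instead of four.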

4. Configuring log4j

Create a log4j configuration file named log4j.properties under src/main/resources with the following content:

log4j.rootLogger = debug,stdout

### print log output to the console ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n

5. Starting Hadoop

Start Hadoop with:

cd hadoop-2.7.3/
./sbin/start-all.sh

(start-all.sh is deprecated in Hadoop 2.x; running ./sbin/start-dfs.sh and ./sbin/start-yarn.sh separately is the equivalent.)

Visit http://localhost:50070/ to check whether Hadoop started correctly.

6. Running WordCount (Reading from a Local File)

Create an input folder under the project root, add a file named dream.txt inside it, and write in a few words:

I have a  dream
a dream

Under src/main/java, create a package and add FileUtil.java, a helper that deletes the output directory so it never has to be removed by hand before a re-run:

package com.mrtest.hadoop;

import java.io.File;

/**
 * Created by bee on 3/25/17.
 */
public class FileUtil {

    /**
     * Recursively delete a directory and everything in it.
     *
     * @return true if the directory existed and was deleted, false otherwise
     */
    public static boolean deleteDir(String path) {
        File dir = new File(path);
        if (dir.exists()) {
            for (File f : dir.listFiles()) {
                if (f.isDirectory()) {
                    // Recurse with the full path; passing only getName() would
                    // resolve against the working directory and miss the entry.
                    deleteDir(f.getPath());
                } else {
                    f.delete();
                }
            }
            dir.delete();
            return true;
        } else {
            System.out.println("File or directory does not exist!");
            return false;
        }
    }

}
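If you prefer the standard library, the same recursive deletion can be sketched with NIO's `Files.walk`, deleting the deepest entries first. This is only an alternative sketch, not part of the original project code (the class name `DeleteDirDemo` is made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class DeleteDirDemo {

    // Recursively delete a directory tree; files and subdirectories are
    // sorted in reverse order so children are deleted before their parents.
    static boolean deleteDir(Path root) {
        if (!Files.exists(root)) {
            return false;
        }
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        } catch (IOException e) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway directory with one file, then delete it.
        Path dir = Files.createTempDirectory("output-demo");
        Files.createFile(dir.resolve("part-r-00000"));
        System.out.println(deleteDir(dir) && !Files.exists(dir)); // true
    }
}
```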

Then write the WordCount MapReduce program, WordCount.java:

package com.mrtest.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

/**
 * Created by bee on 3/25/17.
 */
public class WordCount {


    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {


        public static final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                this.word.set(itr.nextToken());
                context.write(this.word, one);
            }
        }

    }

    public static class IntSumReduce extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            this.result.set(sum);
            context.write(key, this.result);
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {

        FileUtil.deleteDir("output");
        Configuration conf = new Configuration();

        // Input and output paths are hardcoded here; they could also be
        // taken from the command line via main's args array.
        String[] otherArgs = new String[]{"input/dream.txt", "output"};
        if (otherArgs.length != 2) {
            System.err.println("Usage: WordCount <in> <out>");
            System.exit(2);
        }

        Job job = Job.getInstance(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setReducerClass(WordCount.IntSumReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
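The core of the job (tokenize the text, then sum the counts per word) can be sanity-checked outside Hadoop with plain Java. The demo class below is not part of the project; a TreeMap keeps the keys sorted, matching the order MapReduce produces:

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class WordCountDemo {

    // Mirrors what TokenizerMapper + IntSumReduce compute, but in memory:
    // split on whitespace and count occurrences of each token.
    static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("I have a  dream\na dream"));
        // {I=1, a=2, dream=2, have=1}
    }
}
```

The result matches the part-r-00000 output shown below ("I" sorts before "a" because uppercase letters compare lower in ASCII).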

After the job finishes, an output folder appears under the project root; output/part-r-00000 contains:

I   1
a   2
dream   2
have    1

Here a String array was hardcoded in main. If you would rather receive the input and output paths through main's args array at run time, that also works: before running WordCount, edit the Run Configuration and set Program arguments.
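One common pattern for supporting both styles (a sketch of my own, not the original author's code; the class and method names are made up) is to fall back to the hardcoded defaults when no arguments are given:

```java
public class ArgsDemo {

    // Use the command-line paths when exactly two are provided,
    // otherwise fall back to the defaults used in the article.
    static String[] resolvePaths(String[] args) {
        return args.length == 2
                ? args
                : new String[]{"input/dream.txt", "output"};
    }

    public static void main(String[] args) {
        String[] paths = resolvePaths(args);
        System.out.println(paths[0] + " -> " + paths[1]);
    }
}
```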


7. Running WordCount (Reading from HDFS)

Create a directory on HDFS:

hadoop fs -mkdir /worddir

If the NameNode is in safe mode, creating the directory fails with a message like:

mkdir: Cannot create directory /worddir. Name node is in safe mode.

Run the following command to leave safe mode (in Hadoop 2.x, `hdfs dfsadmin -safemode leave` is the preferred form of the same command):

hadoop dfsadmin -safemode leave

Upload the local file:

hadoop fs -put dream.txt /worddir

Then modify otherArgs so the input points at the file's path on HDFS:

String[] otherArgs = new String[]{"hdfs://localhost:9000/worddir/dream.txt","output"};

8. Code Download

The code is available at: http://download.csdn.net/detail/napoay/9799523
