
Atguigu MapReduce WordCount Example

1.7 MapReduce Programming Conventions

A user-written MapReduce program is divided into three parts: the Mapper, the Reducer, and the Driver. The Mapper subclasses Hadoop's Mapper and overrides map(), which is called once for each input key-value pair; the Reducer subclasses Reducer and overrides reduce(), which is called once per key with all the values for that key; the Driver assembles the Job (jar, Mapper/Reducer classes, output types, input/output paths) and submits it to the framework.

1.8 WordCount Hands-On Example

1.8.1 Local Testing

1) Requirements

Count the total number of occurrences of each word in a given text file.

(1) Input data
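The source shows the input file only as a screenshot. A sample input, hypothetical but consistent with the expected output below, would be:

atguigu atguigu
ss ss
cls cls
jiao
banzhang
xue
hadoop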

 

(2) Expected output

atguigu 2
banzhang 1
cls 2
hadoop 1
jiao 1
ss 2
xue 1

2) Requirement Analysis

Following the MapReduce programming conventions, write the Mapper, Reducer, and Driver separately: the Mapper splits each input line into words and emits <word, 1>; the Reducer sums the counts received for each word and emits <word, total>; the Driver configures the job and submits it.

3) Environment Setup

(1) Create a Maven project named MapReduceDemo.

(2) Add the following dependencies to the pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.3</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.30</version>
    </dependency>
</dependencies>
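A note on versions: the hadoop-client version should match the Hadoop version installed on the cluster (3.1.3 in this setup); a mismatch between client and cluster versions is a common source of runtime errors.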

(3) In the project's src/main/resources directory, create a new file named "log4j.properties" and fill it with the following:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
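Note that as written, only the stdout appender is attached to log4j.rootLogger; the logfile appender pointing at target/spring.log is defined but inactive. To also write the log file, it would need to be listed on the root logger, e.g. log4j.rootLogger=INFO, stdout, logfile.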

(4) Create the package com.atguigu.mapreduce.wordcount.

4) Write the Program

(1) Write the Mapper class:

package com.atguigu.mapreduce.wordcount;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // Output key and value, created once and reused for every record
    private Text k = new Text();
    private IntWritable v = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

        // 1. Get one line of input
        String line = value.toString();

        // 2. Split the line into words
        String[] words = line.split(" ");

        // 3. Emit <word, 1> for each word
        for (String word : words) {
            k.set(word);
            context.write(k, v);
        }
    }
}
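Note the design choice of creating k and v once as fields instead of allocating new objects inside map(): context.write() serializes the key and value immediately, so the same objects can safely be reused, which avoids creating millions of short-lived objects on large inputs.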

(2) Write the Reducer class:

package com.atguigu.mapreduce.wordcount;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private int sum;
    private IntWritable v = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

        // 1. Sum all the counts for this word
        sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }

        // 2. Emit <word, total>
        v.set(sum);
        context.write(key, v);
    }
}
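One Hadoop-specific caveat: the framework reuses the IntWritable instance behind the values iterator, overwriting it on each iteration. Extracting the primitive with get() inside the loop, as above, is therefore required; holding references to the IntWritable objects themselves would leave them all pointing at the last value.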

(3) Write the Driver class:

package com.atguigu.mapreduce.wordcount;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // 1. Get the configuration and the Job object
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2. Associate this Driver's jar
        job.setJarByClass(WordCountDriver.class);

        // 3. Associate the Mapper and Reducer classes
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // 4. Set the Mapper output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 5. Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7. Submit the job and wait for completion
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

5) Local Testing

(1) First configure the HADOOP_HOME environment variable and the Windows runtime dependencies (winutils).

(2) Run the program in IDEA/Eclipse.
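When running locally, pass the input and output paths as program arguments in the run configuration, for example D:\input D:\output (hypothetical paths). FileOutputFormat requires that the output directory not exist yet, so delete it before re-running; otherwise the job fails with a FileAlreadyExistsException.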

1.8.2 Submitting to the Cluster for Testing


1) Building the jar with Maven requires the following packaging plugins; add them to pom.xml:

<build>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.6.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Note: if the project shows a red cross, right-click the project -> Maven -> Reimport to refresh it.

2) Package the program into a jar.
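With the plugins above this is a standard Maven build, run from the project root:

mvn clean package

This produces two jars under target/: one containing only the project classes, and one with a jar-with-dependencies suffix that bundles all dependencies. Since the cluster already provides the Hadoop classes, the jar without dependencies is the one used in the next step.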

 

3) Rename the jar without dependencies to wc.jar, and copy it to the /opt/module/hadoop-3.1.3 directory on the Hadoop cluster.

4) Start the Hadoop cluster:

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh

[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh

5) Run the WordCount program:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar wc.jar com.atguigu.mapreduce.wordcount.WordCountDriver /user/atguigu/input /user/atguigu/output
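The input directory /user/atguigu/input must already exist on HDFS and contain the input file, and /user/atguigu/output must not exist before submission. After the job completes, the result can be inspected with the standard HDFS shell (part-r-00000 is the default name of the first reducer's output file):

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /user/atguigu/output/part-r-00000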