07. MapReduce Example: Secondary Sort

Experiment Principle

In the Map phase, the InputFormat defined with job.setInputFormatClass splits the input data set into small blocks (splits), and the same InputFormat supplies a RecordReader implementation. This experiment uses TextInputFormat, whose RecordReader produces the byte offset of each line as the key and the text of that line as the value; this is why the custom Mapper's input type is <LongWritable, Text>. Each <LongWritable, Text> pair is then fed to the custom Mapper's map method, whose output must match the types declared on the Mapper, <IntPair, IntWritable>; the Map phase thus produces a list of <IntPair, IntWritable> pairs. At the end of the Map phase, the Partitioner set with job.setPartitionerClass splits this list into partitions, each of which is mapped to one reducer, and within each partition the records are sorted by the key comparator set with job.setSortComparatorClass. This is already a secondary sort in itself. If no key comparator is set with job.setSortComparatorClass, the sort falls back to the compareTo method implemented by the key class; this experiment relies on the compareTo method of IntPair.
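To make the sort hook concrete, here is a sketch of what an explicit key comparator registered with job.setSortComparatorClass could look like. This class is not part of the experiment (which relies on IntPair.compareTo instead); it is shown only to illustrate the hook, and it reproduces the same ordering as IntPair.compareTo:

public static class KeyComparator extends WritableComparator
{
protected KeyComparator()
{
super(IntPair.class, true); // create key instances so compare() receives deserialized IntPairs
}
@Override
public int compare(WritableComparable w1, WritableComparable w2)
{
IntPair ip1 = (IntPair) w1;
IntPair ip2 = (IntPair) w2;
// first field descending, second field ascending, as in IntPair.compareTo
if (ip1.getFirst() != ip2.getFirst())
return ip1.getFirst() < ip2.getFirst() ? 1 : -1;
if (ip1.getSecond() != ip2.getSecond())
return ip1.getSecond() < ip2.getSecond() ? -1 : 1;
return 0;
}
}
// Registration (not used in this experiment):
// job.setSortComparatorClass(KeyComparator.class);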

In the Reduce phase, once a reducer has received all the map outputs assigned to it, it again sorts all the key/value pairs with the key comparator set with job.setSortComparatorClass (here, IntPair.compareTo). It then builds a value iterator for each key, and this is where grouping comes in, via the grouping comparator set with job.setGroupingComparatorClass: whenever this comparator considers two keys equal, they belong to the same group, their values are placed in a single value iterator, and the key of that iterator is the first of the group's keys. Finally, the Reducer's reduce method is invoked once for each (key, value iterator) pair. As on the map side, the input and output types must match those declared on the custom Reducer.
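As a concrete illustration with this experiment's data: the Mapper emits the composite keys (104, 1010280) and (104, 1010510). Under IntPair.compareTo these are distinct keys, but the GroupingComparator compares only the first field, so both have first = 104 and fall into the same group. The framework therefore makes a single reduce call whose key is (104, 1010280), the first key of the group, and whose value iterator yields 1010280 and 1010510 in that order.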

Experiment Steps

1. Create a text file whose fields are separated by a single space (see the PS at the end: unlike earlier experiments, the spaces are not replaced with commas). The data is as follows:

goods_visit2 table

goods_id click_num

1010037 100

1010102 100

1010152 97

1010178 96

1010280 104

1010320 103

1010510 104

1010603 96

1010637 97

Start Hadoop in the virtual machine.
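A typical start sequence, assuming Hadoop is installed under /apps/hadoop as in earlier experiments in this series (the path is an assumption; adjust it to your installation):

cd /apps/hadoop/sbin
./start-all.sh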

2. Create the local directory /data/mapreduce8.

mkdir -p /data/mapreduce8

3. Upload the table to the virtual machine.
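One way to do this, assuming the virtual machine is reachable at 192.168.149.10 (the address used in the code below) and that you log in as root; both are assumptions to adapt to your environment:

scp goods_visit2 root@192.168.149.10:/data/mapreduce8/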

4. Upload and extract the hadoop2lib archive.
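Assuming the archive is a zip file (an assumption; use the tool matching your archive format):

unzip hadoop2lib.zip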

5. Create the /mymapreduce8/in directory on HDFS, then import the goods_visit2 file from the local /data/mapreduce8 directory into HDFS under /mymapreduce8/in.

hadoop fs -mkdir -p /mymapreduce8/in

hadoop fs -put /data/mapreduce8/goods_visit2 /mymapreduce8/in
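You can confirm the import succeeded:

hadoop fs -ls /mymapreduce8/in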

6. Write the Java code in IDEA.

package mapreduce7;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class SecondarySort
{

public static class IntPair implements WritableComparable<IntPair>
{
int first;
int second;

public void set(int left, int right)
{
first = left;
second = right;
}
public int getFirst()
{
return first;
}
public int getSecond()
{
return second;
}
@Override
public void readFields(DataInput in) throws IOException
{
// deserialize the two fields in the order they were written
first = in.readInt();
second = in.readInt();
}
@Override
public void write(DataOutput out) throws IOException
{
// serialize first, then second
out.writeInt(first);
out.writeInt(second);
}
@Override
public int compareTo(IntPair o)
{
// first field (click count) descending, then second field (goods id) ascending
if (first != o.first)
{
return first < o.first ? 1 : -1;
}
else if (second != o.second)
{
return second < o.second ? -1 : 1;
}
else
{
return 0;
}
}
@Override
public int hashCode()
{
return first * 157 + second;
}
@Override
public boolean equals(Object right)
{
if (right == null)
return false;
if (this == right)
return true;
if (right instanceof IntPair)
{
IntPair r = (IntPair) right;
return r.first == first && r.second == second;
}
else
{
return false;
}
}
}

public static class FirstPartitioner extends Partitioner<IntPair, IntWritable>
{
@Override
public int getPartition(IntPair key, IntWritable value,int numPartitions)
{
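// Partition by the first field (click count) only, so that all keys the
// grouping comparator will treat as equal end up at the same reducer.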
return Math.abs(key.getFirst() * 127) % numPartitions;
}
}
public static class GroupingComparator extends WritableComparator
{
protected GroupingComparator()
{
super(IntPair.class, true);
}
@Override
//Compare two WritableComparables.
public int compare(WritableComparable w1, WritableComparable w2)
{
IntPair ip1 = (IntPair) w1;
IntPair ip2 = (IntPair) w2;
int l = ip1.getFirst();
int r = ip2.getFirst();
return l == r ? 0 : (l < r ? -1 : 1);
}
}
public static class Map extends Mapper<LongWritable, Text, IntPair, IntWritable>
{
private final IntPair intkey = new IntPair();
private final IntWritable intvalue = new IntWritable();
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
{
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
int left = 0;
int right = 0;
if (tokenizer.hasMoreTokens())
{
left = Integer.parseInt(tokenizer.nextToken());
if (tokenizer.hasMoreTokens())
right = Integer.parseInt(tokenizer.nextToken());
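// Build the composite key (click_num, goods_id): putting click_num first
// makes records sort by click count; the value carries goods_id.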
intkey.set(right, left);
intvalue.set(left);
context.write(intkey, intvalue);
}
}
}

public static class Reduce extends Reducer<IntPair, IntWritable, Text, IntWritable>
{
private final Text left = new Text();
private static final Text SEPARATOR = new Text("------------------------------------------------");

public void reduce(IntPair key, Iterable<IntWritable> values,Context context) throws IOException, InterruptedException
{
context.write(SEPARATOR, null);
left.set(Integer.toString(key.getFirst()));
System.out.println(left); // debug: print each group key to the console
for (IntWritable val : values)
{
context.write(left, val);
//System.out.println(val);
}
}
}
public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException
{

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "secondarysort");
job.setJarByClass(SecondarySort.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setPartitionerClass(FirstPartitioner.class);

job.setGroupingComparatorClass(GroupingComparator.class);
job.setMapOutputKeyClass(IntPair.class);

job.setMapOutputValueClass(IntWritable.class);

job.setOutputKeyClass(Text.class);

job.setOutputValueClass(IntWritable.class);

job.setInputFormatClass(TextInputFormat.class);

job.setOutputFormatClass(TextOutputFormat.class);
String[] otherArgs=new String[2];
otherArgs[0]="hdfs://192.168.149.10:9000/mymapreduce8/in/goods_visit2";
otherArgs[1]="hdfs://192.168.149.10:9000/mymapreduce8/out";

FileInputFormat.setInputPaths(job, new Path(otherArgs[0]));

FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}

7. Copy the jar packages from the hadoop2lib directory into a hadoop2lib directory under the project, and add them to the project's build path as a library.

8. Copy the log4j.properties file into the project's source (resources) directory so that Hadoop's client logs are visible in IDEA.
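If the file is not at hand, a minimal console-only configuration looks like this (an illustrative sketch, not the file shipped with the course materials):

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n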

9. Run the program and check the result.
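Because the job does not call job.setNumReduceTasks, it runs with the single default reducer, so the expected content of /mymapreduce8/out/part-r-00000 can be derived from the input data and the comparators (click counts descending, goods ids ascending within a group, each group preceded by the separator line):

------------------------------------------------
104	1010280
104	1010510
------------------------------------------------
103	1010320
------------------------------------------------
100	1010037
100	1010102
------------------------------------------------
97	1010152
97	1010637
------------------------------------------------
96	1010178
96	1010603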

PS: For this experiment, the spaces in the table do not need to be replaced with commas; fields are separated by a single space.