
18. Custom InputFormat

Requirement:

Read the small files in a folder, merge them, and output each one as: file path + file contents

Code:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class Fcinputformat extends FileInputFormat<NullWritable, BytesWritable> {
    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Do not split the original files; each file becomes a single split.
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit inputSplit,
            TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        return new FcRecordReader();
    }
}

-----------------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class FcRecordReader extends RecordReader<NullWritable, BytesWritable> {
    private boolean isProcessed = false;
    private FileSplit split;
    private Configuration conf;
    private final BytesWritable value = new BytesWritable();

    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext context) {
        this.split = (FileSplit) inputSplit;
        this.conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        // Emit exactly one record per split: the entire file as a byte array.
        if (!isProcessed) {
            // 1. Allocate a buffer the size of the split (the whole file, since files are unsplittable)
            byte[] buf = new byte[(int) split.getLength()];
            // 2. Get the file path
            Path path = split.getPath();
            // 3. Get the file system from the path
            FileSystem fs = path.getFileSystem(conf);
            // 4. Open an input stream through the file system
            FSDataInputStream fis = fs.open(path);
            // 5. Read the whole file into the buffer
            IOUtils.readFully(fis, buf, 0, buf.length);
            value.set(buf, 0, buf.length);
            // 6. Close the stream (do not close fs: FileSystem instances are cached and shared)
            IOUtils.closeStream(fis);

            isProcessed = true;
            return true;
        }
        return false;
    }

    @Override
    public NullWritable getCurrentKey() {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() {
        return value;
    }

    @Override
    public float getProgress() {
        return isProcessed ? 1f : 0f;
    }

    @Override
    public void close() {
    }
}
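The core step of this record reader, reading an entire file into one byte buffer in a single shot, can be sketched outside Hadoop with plain `java.nio` (the helper name `readWholeFile` is hypothetical and not part of the job above):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class WholeFileRead {
    // Hypothetical helper: read a whole file into one byte array,
    // like nextKeyValue() fills its BytesWritable buffer.
    static byte[] readWholeFile(Path path) throws IOException {
        return Files.readAllBytes(path);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("small", ".txt");
        Files.write(tmp, "hello".getBytes(StandardCharsets.UTF_8));
        byte[] buf = readWholeFile(tmp);
        System.out.println(buf.length);                             // 5
        System.out.println(new String(buf, StandardCharsets.UTF_8)); // hello
        Files.deleteIfExists(tmp);
    }
}
```

The one-shot pattern (`isProcessed` flag) works only because `isSplitable` returns false, so the split length equals the full file length.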
----------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SquenceDrive {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(SquenceDrive.class);

        job.setMapperClass(SquenceMapper.class);
        job.setReducerClass(SquenceReducer.class);

        job.setInputFormatClass(Fcinputformat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
//        job.setOutputFormatClass(TextOutputFormat.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);

        FileInputFormat.setInputPaths(job, new Path("B:/測試資料/"));
        FileOutputFormat.setOutputPath(job, new Path("B:/測試資料/out"));

        boolean b = job.waitForCompletion(true);
        System.out.println(b);
    }
}
-----------------------------------------------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class SquenceMapper extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {
    private final Text k = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Use the path of the file backing the current split as the output key.
        FileSplit split = (FileSplit) context.getInputSplit();
        k.set(split.getPath().toString());
    }

    @Override
    protected void map(NullWritable key, BytesWritable value, Context context) throws IOException, InterruptedException {
        context.write(k, value);
    }
}
--------------------------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SquenceReducer extends Reducer<Text, BytesWritable, Text, BytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<BytesWritable> values, Context context) throws IOException, InterruptedException {
        for (BytesWritable v : values) {
            context.write(key, v);
        }
    }
}
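Outside Hadoop, the end-to-end effect of this job, turning a folder of small files into (file path, file contents) pairs, can be sketched in plain Java (a hypothetical stand-alone demo, not the MapReduce code above):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MergeSmallFiles {
    // Hypothetical demo: map each regular file under dir to (path -> bytes),
    // mirroring the (file path, file contents) records the job emits.
    static Map<String, byte[]> merge(Path dir) throws IOException {
        Map<String, byte[]> out = new LinkedHashMap<>();
        List<Path> files;
        try (Stream<Path> s = Files.list(dir)) {
            files = s.sorted().collect(Collectors.toList());
        }
        for (Path p : files) {
            if (Files.isRegularFile(p)) {
                out.put(p.toString(), Files.readAllBytes(p));
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("small-files");
        Files.write(dir.resolve("a.txt"), "aaa".getBytes(StandardCharsets.UTF_8));
        Files.write(dir.resolve("b.txt"), "bb".getBytes(StandardCharsets.UTF_8));
        Map<String, byte[]> merged = merge(dir);
        System.out.println(merged.size()); // 2
    }
}
```

What the Hadoop version adds on top of this sketch is distribution and the SequenceFile container, which packs many small files into one large, splittable file.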

Output:


The package names above must not be removed. This format is not used much in practice; the main point is to get familiar with the InputFormat workflow and how to write one!

The default format is TextInputFormat.

setInputFormat:
TextInputFormat: reads plain text files. The file is split into lines ending with LF or CR; the key is each line's byte offset (LongWritable) and the value is the line's content (Text).
KeyValueTextInputFormat: reads files where each line is divided by a separator into two parts: the first part is the key and the rest is the value. If a line has no separator, the whole line becomes the key and the value is empty.
SequenceFileInputFormat: reads SequenceFiles. The read format must match the setOutputKeyClass and setOutputValueClass configured when the file was written with SequenceFileOutputFormat (the key + value format).
SequenceFileInputFilter: reads only the records from a SequenceFile that satisfy a filter, specified via setFilterClass. Three filters are built in: RegexFilter keeps records whose key matches a given regular expression; PercentFilter takes a parameter f and keeps records whose row number mod f equals 0; MD5Filter takes a parameter f and keeps records where MD5(key) mod f equals 0.
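The KeyValueTextInputFormat splitting rule above can be sketched in plain Java (a hypothetical helper; the real class defaults to a tab separator, configurable through the job configuration):

```java
public class KeyValueSplit {
    // Hypothetical helper mirroring KeyValueTextInputFormat's rule:
    // split on the first separator; with no separator, the whole line
    // is the key and the value is empty.
    static String[] split(String line, char sep) {
        int i = line.indexOf(sep);
        if (i < 0) {
            return new String[] { line, "" };
        }
        return new String[] { line.substring(0, i), line.substring(i + 1) };
    }

    public static void main(String[] args) {
        String[] kv = split("name\tAlice", '\t');
        System.out.println(kv[0]);              // name
        System.out.println(kv[1]);              // Alice
        String[] noSep = split("justakey", '\t');
        System.out.println(noSep[0]);           // justakey
        System.out.println(noSep[1].isEmpty()); // true
    }
}
```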

setOutputFormat:
TextOutputFormat: writes plain text, one record per line in the form key + "\t" + value (the separator defaults to a tab and is configurable).
NullOutputFormat: Hadoop's /dev/null; sends the output into a black hole.
SequenceFileOutputFormat: writes a SequenceFile whose format is determined by setOutputKeyClass and setOutputValueClass; accordingly, the SequenceFileInputFormat reading it back must match that output format (the key + value format).
MultipleSequenceFileOutputFormat, MultipleTextOutputFormat: write records to different files based on the key; can be overridden.
DBInputFormat and DBOutputFormat: read from and write to a database.