Flume Custom Source, Sink, and Interceptor (Simple Implementations)
1. Event
An event is the smallest unit of data that Flume transfers. Data read from a source is first wrapped into an event, the event is sent to a channel, and the sink consumes events from the channel.
An event has two parts: headers and a body. The headers are a map; the body can be a String, a byte[], and so on. The body is where the actual data lives, while the headers are what the interceptor described later in this section works with.
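For reference, a minimal sketch (not part of the original example) of building an event with EventBuilder; the header key and values here are made up purely to illustrate the headers/body split:

import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class EventDemo {
    public static void main(String[] args) {
        // headers: a String-to-String map of metadata about the event
        Map<String, String> headers = new HashMap<String, String>();
        headers.put("type", "demo");   // illustrative key/value, not a Flume convention
        // body: the actual payload, stored internally as a byte[]
        Event event = EventBuilder.withBody("hello flume", Charset.forName("UTF-8"), headers);
        System.out.println(event.getHeaders());          // {type=demo}
        System.out.println(new String(event.getBody())); // hello flume
    }
}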
2. Source
To write a custom Source, the class needs to implement either PollableSource (poll-based) or EventDrivenSource (event-driven), and it must also implement the Configurable interface.
The difference between PollableSource and EventDrivenSource: with a PollableSource, a runner thread keeps calling process() to actively pull messages; an EventDrivenSource instead waits passively until something triggers it (see the sketch below). The Configurable interface is there so the component can read its configuration at initialization time.
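For contrast with the PollableSource example in 2.1, here is a rough skeleton of the event-driven style. The worker thread stands in for whatever callback or listener a real source would hook into, and the class name is an assumption:

import org.apache.flume.Context;
import org.apache.flume.EventDrivenSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AbstractSource;

// Event-driven style: no process() method; the source pushes events itself
// (for example from a listener callback) instead of being polled by a runner thread.
public class MyEventDrivenSource extends AbstractSource implements EventDrivenSource, Configurable {

    private Thread worker; // stands in for a real callback/listener

    @Override
    public void configure(Context context) {
        // read source-specific properties from the agent configuration if needed
    }

    @Override
    public synchronized void start() {
        worker = new Thread(new Runnable() {
            public void run() {
                // whenever "data arrives", wrap it into an event and push it to the channel
                getChannelProcessor().processEvent(EventBuilder.withBody("data".getBytes()));
            }
        });
        worker.start();
        super.start();
    }

    @Override
    public synchronized void stop() {
        worker.interrupt();
        super.stop();
    }
}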
2.1 CustomSource.java
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Random;

import org.apache.flume.Context;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AbstractSource;

public class CustomSource extends AbstractSource implements Configurable, PollableSource {

    @Override
    public long getBackOffSleepIncrement() {
        // Back-off sleep increment used by the source runner; left at 0 in this simple example
        return 0;
    }

    @Override
    public long getMaxBackOffSleepInterval() {
        // Upper bound for the back-off sleep; left at 0 in this simple example
        return 0;
    }

    @Override
    public Status process() throws EventDeliveryException {
        // Generate a random message and put its id into the event headers
        Random random = new Random();
        int randomNum = random.nextInt(100);
        String text = "Hello world" + random.nextInt(100);
        HashMap<String, String> header = new HashMap<String, String>();
        header.put("id", Integer.toString(randomNum));
        // Wrap the text into an event and hand it to the channel processor
        this.getChannelProcessor()
            .processEvent(EventBuilder.withBody(text, Charset.forName("UTF-8"), header));
        return Status.READY;
    }

    @Override
    public void configure(Context context) {
        // Read source-specific properties from the agent configuration here if needed
    }
}
2.2 Flume configuration file (custom_source.conf):
# Name the components of this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Custom Flume source: the fully qualified class name of the source class from 2.1
a1.sources.r1.type = com.harderxin.flume.test.CustomSource
# Flume sink
a1.sinks.k1.type = file_roll
# Output directory for the file_roll sink; adjust to your environment
a1.sinks.k1.sink.directory = /home/hadoop/sinkFolder
# Flume channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c1.byteCapacityBufferPercentage = 20
a1.channels.c1.byteCapacity = 800000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
2.3 Build the jar and run:
Package the project as a jar (packaging just the classes involved is enough) and put it into Flume's lib directory (some posts online say the bin directory, but that did not work for me).
Then, from the bin directory, run:
flume-ng agent --conf conf --conf-file ../conf/custom_source.conf --name a1
3. Sink
3.1 CustomSink.java
import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;

public class CustomSink extends AbstractSink implements Configurable {

    @Override
    public Status process() throws EventDeliveryException {
        Status status = Status.READY;
        Transaction trans = null;
        try {
            Channel channel = getChannel();
            trans = channel.getTransaction();
            trans.begin();
            // Take up to 100 events from the channel within one transaction
            for (int i = 0; i < 100; i++) {
                Event event = channel.take();
                if (event == null) {
                    // Channel is empty: tell the sink runner to back off for a while
                    status = Status.BACKOFF;
                    break;
                } else {
                    // "Consume" the event by printing its body
                    String body = new String(event.getBody());
                    System.out.println(body);
                }
            }
            trans.commit();
        } catch (Exception e) {
            // Roll back on failure so the events stay in the channel
            if (trans != null) {
                trans.rollback();
            }
            e.printStackTrace();
        } finally {
            if (trans != null) {
                trans.close();
            }
        }
        return status;
    }

    @Override
    public void configure(Context context) {
        // Read sink-specific properties from the agent configuration here if needed
    }
}
3.2 custom_sink.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = com.caoxufeng.MyCustom.CustomSink
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
3.3 Build the jar and run:
As in 2.3, package the project as a jar, put it into Flume's lib directory, and run from the bin directory:
flume-ng agent --conf conf --conf-file ../conf/custom_sink.conf --name a1
4. Interceptor
When a Source reads events and sends them on toward the Sink, an interceptor can add useful information to the event headers, or filter the event contents, doing a first round of data cleaning.
Flume already handles automatic collection from multiple log sources and automatic delivery to multiple targets. Data cleaning has traditionally been pushed downstream into Hadoop MapReduce jobs, but a custom Interceptor lets Flume do that matching and cleaning itself, filtering out malformed dirty data.
The job of an interceptor in Flume is to put attributes into the event headers as needed. You can also modify the event body, but the body is the content that downstream stages actually process; having Flume rewrite the body creates tight coupling, which defeats the original purpose of using Flume to decouple systems.
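As an illustration of the filtering use case, below is a minimal sketch of an interceptor that drops events with an empty body. The filtering rule and class name are made up for this example, and a Builder like the one in 4.1 would still be needed to register it; returning null from intercept(Event) is how an interceptor tells Flume to discard an event:

import java.util.ArrayList;
import java.util.List;

import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

// Example filtering interceptor: drops events whose body is empty
public class FilteringInterceptorSketch implements Interceptor {

    @Override
    public void initialize() {
    }

    @Override
    public Event intercept(Event event) {
        // Returning null discards this event
        if (event.getBody() == null || event.getBody().length == 0) {
            return null;
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        // Keep only the events that survive the single-event check
        List<Event> out = new ArrayList<Event>();
        for (Event e : events) {
            Event intercepted = intercept(e);
            if (intercepted != null) {
                out.add(intercepted);
            }
        }
        return out;
    }

    @Override
    public void close() {
    }
}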
4.1 CustomInterceptor.java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class CustomInterceptor implements Interceptor {

    private final String headerKey;
    private static final String CONF_HEADER_KEY = "header";
    private static final String DEFAULT_HEADER = "count";
    private final AtomicLong currentCount;

    public CustomInterceptor(Context ctx) {
        // Header key is configurable via the "header" property; defaults to "count"
        headerKey = ctx.getString(CONF_HEADER_KEY, DEFAULT_HEADER);
        currentCount = new AtomicLong();
    }

    // Initialization before running; usually nothing to do here
    @Override
    public void initialize() {
    }

    // Process a single event
    @Override
    public Event intercept(Event event) {
        long count = currentCount.incrementAndGet();
        event.getHeaders().put(headerKey, String.valueOf(count));
        return event;
    }

    // Process events in batches by looping over intercept(Event) above
    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event e : events) {
            intercept(e);
        }
        return events;
    }

    @Override
    public void close() {
    }

    // Builder used by Flume to instantiate the interceptor; referenced in the agent config
    public static class CounterInterceptorBuilder implements Builder {
        private Context ctx;

        @Override
        public Interceptor build() {
            return new CustomInterceptor(ctx);
        }

        @Override
        public void configure(Context context) {
            this.ctx = context;
        }
    }
}
The intercept(Event event) method does the actual work: it increments count by 1 and writes the value into that event's headers.
4.2 custom_interceptor.conf
a1.sources = r1
a1.sinks = s1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.caoxufeng.MyCustom.CustomInterceptor$CounterInterceptorBuilder
a1.sources.r1.interceptors.i1.preserveExisting = true
a1.sinks.s1.type = logger
a1.channels.c1.type = memory
a1.channels.c1.capacity = 2
a1.channels.c1.transactionCapacity = 2
a1.sources.r1.channels = c1
a1.sinks.s1.channel = c1
4.3 Build the jar and run:
As in 2.3, package the project as a jar, put it into Flume's lib directory, and run from the bin directory:
flume-ng agent -c conf -f ../conf/custom_interceptor.conf -n a1