Collecting logs from log4j into Kafka with Flume
阿新 • Published: 2019-02-01
1. Flume configuration
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000
agent1.channels.ch1.transactionCapacity = 100

# Avro source: receives events from the log4j Flume appender
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = localhost
agent1.sources.avro-source1.port = 44445

# Kafka sink: publishes events to the "test" topic
agent1.sinks.kafka-sink1.channel = ch1
agent1.sinks.kafka-sink1.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink1.kafka.bootstrap.servers = localhost:9092
agent1.sinks.kafka-sink1.topic = test
agent1.sinks.kafka-sink1.flumeBatchSize = 10
agent1.sinks.kafka-sink1.kafka.producer.acks = 1

# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = kafka-sink1
The source is of type avro (it accepts events from the log4j appender over Avro RPC), and the sink is of type Kafka.
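Before starting the agent, make sure the topic the sink writes to exists. A sketch of the topic-creation command, assuming a local broker at localhost:9092 as in the sink configuration (this uses the Kafka 2.2+ `--bootstrap-server` flag; older Kafka releases use `--zookeeper` instead, and this command requires a running broker):

```shell
# Create the "test" topic that the Kafka sink publishes to
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic test
```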
2. Start Flume
flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/log4g.conf --name agent1 -Dflume.root.logger=INFO,console
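Once the agent is running and the LoggerGenerator from step 3 is producing log lines, delivery can be verified end to end with Kafka's console consumer. A sketch, assuming a broker at localhost:9092 as configured in the sink (requires a running Kafka broker):

```shell
# Read the events the Flume Kafka sink has published to the "test" topic
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic test \
  --from-beginning
```

Each consumed message should contain a log body such as `value:0`, `value:1`, and so on.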
3. Generate test logs with log4j
import org.apache.log4j.Logger;

public class LoggerGenerator {

    private static Logger logger = Logger.getLogger(LoggerGenerator.class.getName());

    public static void main(String[] args) throws Exception {
        int index = 0;
        while (true) {
            Thread.sleep(1000);
            logger.info("value:" + index++);
        }
    }
}
4. log4j configuration (under resources)
log4j.rootLogger=INFO,stdout,flume

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.target = System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c] [%p] - %m%n

# Flume appender: forwards log events to the Avro source configured above.
# UnsafeMode=true lets the application keep running even if the Flume agent is unreachable.
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 44445
log4j.appender.flume.UnsafeMode = true
5. Maven dependency
<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.8.0</version>
</dependency>