Collecting Nginx Logs into HDFS with Flume
阿新 · Published: 2017-06-11
Download apache-flume-1.7.0-bin.tar.gz and unpack it with
tar -zxvf apache-flume-1.7.0-bin.tar.gz -C /opt
so that it lands at the FLUME_HOME used below, then add the following to /etc/profile:
export FLUME_HOME=/opt/apache-flume-1.7.0-bin
export PATH=$PATH:$FLUME_HOME/bin
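Before touching any config files, it is worth confirming the environment is picked up; a quick sanity check, assuming the paths above and a JDK already on the PATH:

source /etc/profile
flume-ng version    # should report Flume 1.7.0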
Next, modify two files under $FLUME_HOME/conf/. In flume-env.sh, add JAVA_HOME:
JAVA_HOME=/opt/jdk1.8.0_121
Most importantly, edit the flume-conf.properties file:
# Configure the agent: one source, one sink, one channel
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source: tail the Nginx access log
a1.sources.r1.type = exec
a1.sources.r1.deserializer.outputCharset = UTF-8
# The log file to monitor
a1.sources.r1.command = tail -F /usr/local/nginx/log/access.log

# Configure the sink: write to HDFS, bucketed by month
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.path = hdfs://master:9000/flume/events/%Y-%m
a1.sinks.k1.hdfs.filePrefix = %Y-%m-%d-%H
a1.sinks.k1.hdfs.fileSuffix = .log
a1.sinks.k1.hdfs.minBlockReplicas = 1
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.rollInterval = 86400
a1.sinks.k1.hdfs.rollSize = 1000000
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.idleTimeout = 0

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Wire the three together (note: the key is the plural "channels" for sources)
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
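With useLocalTimeStamp = true, the sink fills in the %Y-%m and %Y-%m-%d-%H escapes from its own clock, so events are bucketed into one HDFS directory per month, in files prefixed with the hour they were opened. A file is rolled (closed and a new one started) as soon as any threshold fires: rollInterval (86400 s here, i.e. daily), rollSize (about 1 MB), or rollCount (10000 events); idleTimeout = 0 means files are never closed just for being idle, and minBlockReplicas = 1 avoids spurious rolls triggered by HDFS replication activity. The escapes follow the usual strftime patterns, so you can preview what they expand to with date:

date +%Y-%m          # e.g. 2017-06, the monthly directory under /flume/events
date +%Y-%m-%d-%H    # e.g. 2017-06-11-10, the file name prefix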
The settings above define the Source, the Channel, and the Sink that carry Nginx log records into HDFS. To start the agent, run:
flume-ng agent -n a1 -c $FLUME_HOME/conf -f $FLUME_HOME/conf/flume-conf.properties
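If the command starts cleanly, the agent stays attached to the terminal. Two common variants, sketched with standard flume-ng options:

# Debug run: print the agent's log to the console
flume-ng agent -n a1 -c $FLUME_HOME/conf -f $FLUME_HOME/conf/flume-conf.properties -Dflume.root.logger=INFO,console

# Long-running collection: detach from the terminal
nohup flume-ng agent -n a1 -c $FLUME_HOME/conf -f $FLUME_HOME/conf/flume-conf.properties >/dev/null 2>&1 &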
If no errors are reported, the installation and configuration succeeded: every record appended to the Nginx log from now on is picked up by Flume and stored in HDFS.
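To verify the pipeline end to end, generate a request against Nginx and look for the record in HDFS. A minimal sketch, assuming Nginx serves on localhost and the paths from the configuration above; note that until one of the roll thresholds fires, the file currently being written still carries Flume's default .tmp in-use suffix:

curl -s http://localhost/ > /dev/null                  # append one record to access.log
hdfs dfs -ls /flume/events/$(date +%Y-%m)              # the monthly bucket from hdfs.path
hdfs dfs -cat "/flume/events/$(date +%Y-%m)/*" | tail  # inspect collected records (may include a .tmp file)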