
Big Data in Practice (15): E-commerce Data Warehouse (8), User Behavior Data Collection (8), Component Installation (4): Log-Collection Flume

0 Overview

This section covers the Flume agent that collects user-behavior logs and delivers them to Kafka.

1 Installing the Log-Collection Flume

Cluster plan:

hadoop102: Flume (log collection)
hadoop103: Flume
hadoop104: Flume

2 Project Experience with Flume Components

1) Source

(1) Advantages of Taildir Source over Exec Source and Spooling Directory Source

Taildir Source supports resuming from a saved read position (breakpoint resume) and can monitor multiple directories. Before Flume 1.6, you had to write a custom Source that recorded the read position of each file to get this behavior.

Exec Source can collect data in real time, but if the Flume agent is not running or the shell command fails, data is lost.

Spooling Directory Source monitors a directory but does not support resuming from a saved position.
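Taildir persists its read positions in the JSON file named by positionFile (see the configuration below), which is what makes the resume possible. A typical entry looks like this (an illustrative sample; the inode, offset, and path are made up):

[{"inode":2496272,"pos":12,"file":"/tmp/logs/app-2020-06-18.log"}]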

(2) How should batchSize be set?

Answer: when an Event is about 1 KB, a batchSize of 500-1000 is appropriate (the default is 100).
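batchSize is set directly on the source. A minimal sketch (the agent and source names are illustrative):

a1.sources.r1.type = TAILDIR
# with ~1 KB events, raise batchSize from the default 100
a1.sources.r1.batchSize = 500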

2) Channel

Using a Kafka Channel lets events flow from the Source straight into Kafka, eliminating the Sink stage and improving efficiency.
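A related setting that appears in the configuration below: with parseAsFlumeEvent = false, the channel writes only the raw event body to Kafka rather than an Avro-serialized Flume event, so ordinary Kafka consumers can read the messages directly.

a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
# write the raw log line, not an Avro-wrapped Flume event
a1.channels.c1.parseAsFlumeEvent = false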

3 Log-Collection Flume Configuration

1) Flume configuration analysis

Flume reads the log data directly from the log files, which are named in the form app-yyyy-mm-dd.log.

2) The concrete Flume configuration is as follows:

1) Create the file file-flume-kafka.conf in the /opt/module/flume/conf directory

[atguigu@hadoop102 conf]$ vim file-flume-kafka.conf

Write the following content into the file:

a1.sources=r1
a1.channels=c1 c2

# configure source
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile = /opt/module/flume/test/log_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /tmp/logs/app.+
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1 c2

# interceptor
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = com.atguigu.flume.interceptor.LogETLInterceptor$Builder
a1.sources.r1.interceptors.i2.type = com.atguigu.flume.interceptor.LogTypeInterceptor$Builder

# selector
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = topic
a1.sources.r1.selector.mapping.topic_start = c1
a1.sources.r1.selector.mapping.topic_event = c2

# configure channel
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.channels.c1.kafka.topic = topic_start
a1.channels.c1.parseAsFlumeEvent = false
a1.channels.c1.kafka.consumer.group.id = flume-consumer

a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c2.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.channels.c2.kafka.topic = topic_event
a1.channels.c2.parseAsFlumeEvent = false
a1.channels.c2.kafka.consumer.group.id = flume-consumer

Note: com.atguigu.flume.interceptor.LogETLInterceptor and com.atguigu.flume.interceptor.LogTypeInterceptor are the fully qualified class names of the custom interceptors; change them to match your own interceptor classes. The $Builder suffix refers to the nested Builder class through which Flume instantiates each interceptor.

(Figure: Flume data-collection topology)

4 Flume ETL and Log-Type Interceptors

This project defines two custom interceptors: an ETL interceptor and a log-type interceptor.

The ETL interceptor filters out logs whose timestamp is malformed or whose JSON data is incomplete.

The log-type interceptor separates start logs from event logs so that they can be sent to different Kafka topics.

1) Create a Maven project named flume-interceptor

2) Create the package com.atguigu.flume.interceptor

3) Add the following configuration to pom.xml

<dependencies>
    <dependency>
        <groupId>org.apache.flume</groupId>
        <artifactId>flume-ng-core</artifactId>
        <version>1.7.0</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.3.2</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

4) Create the class LogETLInterceptor in the com.atguigu.flume.interceptor package

Flume ETL interceptor LogETLInterceptor:

package com.atguigu.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;

public class LogETLInterceptor implements Interceptor {

    @Override
    public void initialize() {

    }

    @Override
    public Event intercept(Event event) {

        // 1 Get the event body as a UTF-8 string
        byte[] body = event.getBody();
        String log = new String(body, Charset.forName("UTF-8"));

        // 2 Validate the log according to its type (start log vs. event log)
        if (log.contains("start")) {
            if (LogUtils.validateStart(log)){
                return event;
            }
        }else {
            if (LogUtils.validateEvent(log)){
                return event;
            }
        }

        // 3 Validation failed: drop the event by returning null
        return null;
    }

    @Override
    public List<Event> intercept(List<Event> events) {

        ArrayList<Event> interceptors = new ArrayList<>();

        for (Event event : events) {
            Event intercept1 = intercept(event);

            if (intercept1 != null){
                interceptors.add(intercept1);
            }
        }

        return interceptors;
    }

    @Override
    public void close() {

    }

    public static class Builder implements Interceptor.Builder{

        @Override
        public Interceptor build() {
            return new LogETLInterceptor();
        }

        @Override
        public void configure(Context context) {

        }
    }
}

5) Flume log-validation utility class LogUtils

package com.atguigu.flume.interceptor;
import org.apache.commons.lang.math.NumberUtils;

public class LogUtils {

    public static boolean validateEvent(String log) {
        // server timestamp | json
        // 1549696569054 | {"cm":{"ln":"-89.2","sv":"V2.0.4","os":"8.2.0","g":"[email protected]","nw":"4G","l":"en","vc":"18","hw":"1080*1920","ar":"MX","uid":"u8678","t":"1549679122062","la":"-27.4","md":"sumsung-12","vn":"1.1.3","ba":"Sumsung","sr":"Y"},"ap":"weather","et":[]}

        // 1 Split on the '|' separator
        String[] logContents = log.split("\\|");

        // 2 Validate that there are exactly two fields
        if(logContents.length != 2){
            return false;
        }

        // 3 Validate the server timestamp (13-digit epoch milliseconds)
        if (logContents[0].length()!=13 || !NumberUtils.isDigits(logContents[0])){
            return false;
        }

        // 4 Validate that the payload looks like JSON
        if (!logContents[1].trim().startsWith("{") || !logContents[1].trim().endsWith("}")){
            return false;
        }

        return true;
    }

    public static boolean validateStart(String log) {
 // {"action":"1","ar":"MX","ba":"HTC","detail":"542","en":"start","entry":"2","extend1":"","g":"[email protected]","hw":"640*960","l":"en","la":"-43.4","ln":"-98.3","loading_time":"10","md":"HTC-5","mid":"993","nw":"WIFI","open_ad_type":"1","os":"8.2.1","sr":"D","sv":"V2.9.0","t":"1559551922019","uid":"993","vc":"0","vn":"1.1.5"}

        if (log == null){
            return false;
        }

        // Validate that the log looks like JSON
        if (!log.trim().startsWith("{") || !log.trim().endsWith("}")){
            return false;
        }

        return true;
    }
}

6) Flume log-type interceptor LogTypeInterceptor:

package com.atguigu.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LogTypeInterceptor implements Interceptor {
    @Override
    public void initialize() {

    }

    @Override
    public Event intercept(Event event) {

        // Classify the log type: inspect the body, then tag the header
        // 1 Get the body as a UTF-8 string
        byte[] body = event.getBody();
        String log = new String(body, Charset.forName("UTF-8"));

        // 2 Get the event headers
        Map<String, String> headers = event.getHeaders();

        // 3 Determine the log type and set the "topic" header accordingly
        if (log.contains("start")) {
            headers.put("topic","topic_start");
        }else {
            headers.put("topic","topic_event");
        }

        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {

        ArrayList<Event> interceptors = new ArrayList<>();

        for (Event event : events) {
            Event intercept1 = intercept(event);

            interceptors.add(intercept1);
        }

        return interceptors;
    }

    @Override
    public void close() {

    }

    public static class Builder implements  Interceptor.Builder{

        @Override
        public Interceptor build() {
            return new LogTypeInterceptor();
        }

        @Override
        public void configure(Context context) {

        }
    }
}

7) Package the project

After packaging, only the standalone jar needs to be uploaded, not the jar with dependencies. Put the jar into Flume's lib directory, as shown below.

Note: why aren't the dependencies needed? Because they already exist in Flume's lib directory.
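For example, built from the project root (the assembly plugin configured above also produces a jar-with-dependencies, which can be ignored):

[atguigu@hadoop102 flume-interceptor]$ mvn clean package
[atguigu@hadoop102 flume-interceptor]$ ls target | grep jar
flume-interceptor-1.0-SNAPSHOT.jar
flume-interceptor-1.0-SNAPSHOT-jar-with-dependencies.jar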

8) First place the built jar into the /opt/module/flume/lib directory on hadoop102:

[atguigu@hadoop102 lib]$ ls | grep interceptor
flume-interceptor-1.0-SNAPSHOT.jar

9) Distribute Flume to hadoop103 and hadoop104:

[atguigu@hadoop102 module]$ xsync flume/

To start an agent manually for a quick test:

[atguigu@hadoop102 flume]$ bin/flume-ng agent --name a1 --conf-file conf/file-flume-kafka.conf &

5 Log-Collection Flume Start/Stop Script

1) Create the script f1.sh in the /home/atguigu/bin directory

[atguigu@hadoop102 bin]$ vim f1.sh

Write the following content into the script:

#! /bin/bash

case $1 in
"start"){
        for i in hadoop102 hadoop103
        do
                echo " -------- starting log-collection flume on $i --------"
                ssh $i "nohup /opt/module/flume/bin/flume-ng agent --conf-file /opt/module/flume/conf/file-flume-kafka.conf --name a1 -Dflume.root.logger=INFO,LOGFILE > /dev/null 2>&1 &"
        done
};;
"stop"){
        for i in hadoop102 hadoop103
        do
                echo " -------- stopping log-collection flume on $i --------"
                ssh $i "ps -ef | grep file-flume-kafka | grep -v grep | awk '{print \$2}' | xargs kill"
        done
};;
esac

Note 1: nohup keeps a process running after you log out or close the terminal. The name means "no hang-up": run the command immune to hangups.

Note 2: /dev/null is the Linux null device file; everything written to it is discarded, which earns it the nickname "black hole".

Standard input (0): read from the keyboard, /proc/self/fd/0

Standard output (1): written to the screen (the console), /proc/self/fd/1

Standard error (2): written to the screen (the console), /proc/self/fd/2
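Putting the pieces together, the redirection in the start command reads like this (a minimal illustration; some_command is a placeholder):

# run in the background, discarding all output
some_command > /dev/null 2>&1 &
# > /dev/null : redirect stdout (fd 1) to the null device
# 2>&1        : redirect stderr (fd 2) to wherever fd 1 points (also /dev/null)
# &           : put the command in the background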

2) Give the script execute permission:

[atguigu@hadoop102 bin]$ chmod 777 f1.sh

3) Start the cluster with f1.sh:

[atguigu@hadoop102 module]$ f1.sh start

4) Stop the cluster with f1.sh:

[atguigu@hadoop102 module]$ f1.sh stop