
Flume 1.7.0 Installation and Examples

Flume Installation

System requirements:
JDK 1.7 or later must be installed.
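You can quickly verify the installed JDK version before proceeding (a small sanity check, not part of the original steps):

$ java -version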

1. Download

Download apache-flume-1.7.0-bin.tar.gz (the binary tarball from the Apache Flume download page) into ~/Downloads.

2. Extract

$ cp ~/Downloads/apache-flume-1.7.0-bin.tar.gz ~
$ cd 
$ tar -zxvf apache-flume-1.7.0-bin.tar.gz
$ cd apache-flume-1.7.0-bin

3. Create the flume-env.sh file


$ cp conf/flume-env.sh.template conf/flume-env.sh
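The template works as-is, but you will typically set JAVA_HOME here. A minimal example, assuming a JDK under /usr/lib/jvm (adjust the path to your own installation):

# conf/flume-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0     # assumption: point at your JDK
export JAVA_OPTS="-Xms100m -Xmx512m"          # optional heap settings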

Simple Example: Sending a Specified File

Scenario: two machines, one acting as the client and one as the agent; the client sends a specified file to the agent machine.

1. Create the configuration file

Create flume.conf from the template that ships with Flume.


$ cp conf/flume-conf.properties.template conf/flume.conf

Edit flume.conf:

$ vi conf/flume.conf

Append the following configuration at the end of the file:

# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

# Define a logger sink that simply logs all events it receives
# and connect it to the other end of the same channel.
agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink1.type = logger

# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = log-sink1

Save and exit.

2. Start the Flume server
On the machine acting as the agent, run the following (note that the -n value must match the agent name, agent1, used in flume.conf):


bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent1

3. Open the client in a new window
On the machine acting as the client, run the following command; -H and -p point at the agent's host and port, and -F names the file whose lines are sent as events.
(Since this walkthrough simulates two machines on a single host, simply enter the command in a new terminal.)


$ bin/flume-ng avro-client --conf conf -H localhost -p 41414 -F /etc/passwd -Dflume.root.logger=DEBUG,console

4. Results
At this point you should see messages like the following on the client:


2012-03-16 16:39:17,124 (main) [DEBUG - org.apache.flume.client.avro.AvroCLIClient.run(AvroCLIClient.java:175)] Finished
2012-03-16 16:39:17,127 (main) [DEBUG - org.apache.flume.client.avro.AvroCLIClient.run(AvroCLIClient.java:178)] Closing reader
2012-03-16 16:39:17,127 (main) [DEBUG - org.apache.flume.client.avro.AvroCLIClient.run(AvroCLIClient.java:183)] Closing transceiver
2012-03-16 16:39:17,129 (main) [DEBUG - org.apache.flume.client.avro.AvroCLIClient.main(AvroCLIClient.java:73)] Exiting

In the window where the Flume server was started, you should see messages like the following:

2012-03-16 16:39:16,738 (New I/O server boss #1 ([id: 0x49e808ca, /0:0:0:0:0:0:0:0:41414])) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:123)] [id: 0x0b92a848, /127.0.0.1:39577 => /127.0.0.1:41414] OPEN
2012-03-16 16:39:16,742 (New I/O server worker #1-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:123)] [id: 0x0b92a848, /127.0.0.1:39577 => /127.0.0.1:41414] BOUND: /127.0.0.1:41414
2012-03-16 16:39:16,742 (New I/O server worker #1-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:123)] [id: 0x0b92a848, /127.0.0.1:39577 => /127.0.0.1:41414] CONNECTED: /127.0.0.1:39577
2012-03-16 16:39:17,129 (New I/O server worker #1-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:123)] [id: 0x0b92a848, /127.0.0.1:39577 :> /127.0.0.1:41414] DISCONNECTED
2012-03-16 16:39:17,129 (New I/O server worker #1-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:123)] [id: 0x0b92a848, /127.0.0.1:39577 :> /127.0.0.1:41414] UNBOUND
2012-03-16 16:39:17,129 (New I/O server worker #1-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:123)] [id: 0x0b92a848, /127.0.0.1:39577 :> /127.0.0.1:41414] CLOSED
2012-03-16 16:39:17,302 (Thread-1) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:68)] Event: { headers:{} body:[B@5c1ae90c }
2012-03-16 16:39:17,302 (Thread-1) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:68)] Event: { headers:{} body:[B@6aba4211 }
2012-03-16 16:39:17,302 (Thread-1) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:68)] Event: { headers:{} body:[B@6a47a0d4 }
2012-03-16 16:39:17,302 (Thread-1) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:68)] Event: { headers:{} body:[B@48ff4cf }
...

Simple Example: Uploading Files from a Directory to HDFS

Scenario: upload the files in a given directory on a machine to HDFS.

1. Configure conf/flume.conf

# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Define a spooling-directory source called spooldir-source1 on agent1,
# point it at the directory to watch, and connect it to channel ch1.
agent1.sources.spooldir-source1.channels = ch1
agent1.sources.spooldir-source1.type = spooldir
agent1.sources.spooldir-source1.spoolDir = /home/hadoop/flume-1.7.0/tmpData

# Define an HDFS sink that writes the events it receives to HDFS,
# and connect it to the other end of the same channel.
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://master:9000/test
agent1.sinks.hdfs-sink1.hdfs.filePrefix = events-
agent1.sinks.hdfs-sink1.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfs-sink1.hdfs.round = true
agent1.sinks.hdfs-sink1.hdfs.roundValue = 10

# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = spooldir-source1
agent1.sinks = hdfs-sink1

Here, /home/hadoop/flume-1.7.0/tmpData is the directory containing the files I want to upload; that is, every file in this directory will be uploaded to the hdfs://master:9000/test directory on HDFS.
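Before starting the agent, make sure the spool directory and the HDFS target directory both exist (a small prep step, assuming the paths from the configuration above):

$ mkdir -p /home/hadoop/flume-1.7.0/tmpData
$ hdfs dfs -mkdir -p /test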

Notes

  • This configuration produces many small files, because by default each file stores only 10 events. This is controlled by rollCount, which defaults to 10. There is also a rollSize parameter that controls file size: once a file exceeds that size, a new file is started.
  • The resulting file names all start with "events-". If you want to keep the original file names, use the configuration below (basenameHeader is a source-side setting and filePrefix a sink-side one; with both set, files uploaded to HDFS are named "original-filename.timestamp"):
agent1.sources.spooldir-source1.basenameHeader = true
agent1.sinks.hdfs-sink1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1.hdfs.rollSize = 0  
agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
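Setting hdfs.rollSize and hdfs.rollCount to 0 disables size-based and event-count-based file rolling, which is what keeps each source file from being split into many small HDFS files.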

2. Start the agent
Start the agent with the following command:

bin/flume-ng agent --conf ./conf/ -f ./conf/flume.conf --name agent1 -Dflume.root.logger=DEBUG,console

3. Check the results
In the web GUI provided by Hadoop you can see whether the files were uploaded successfully.
The GUI address is http://master:50070/explorer.html#/test,
where master is the host name of the machine running the Hadoop NameNode.
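You can also verify from the command line (assuming the Hadoop client is configured on this machine):

$ hdfs dfs -ls /test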

4. Summary
In this scenario, uploading files to HDFS requires several Hadoop jars:

${HADOOP_HOME}/share/hadoop/common/hadoop-common-2.4.0.jar
${HADOOP_HOME}/share/hadoop/common/lib/commons-configuration-1.6.jar
${HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-2.4.0.jar
${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar

When I used HDP's Hadoop 2.7, the following jars were also needed:

commons-io-2.4.jar
htrace-core-3.1.0-incubating.jar

All of the above jars can be found under the relevant Hadoop lib directories.
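A one-shot copy of all of them into Flume's lib directory might look like this (a sketch assuming the Hadoop 2.4.0 paths listed above; adjust versions and paths to your distribution):

$ cp ${HADOOP_HOME}/share/hadoop/common/hadoop-common-2.4.0.jar \
     ${HADOOP_HOME}/share/hadoop/common/lib/commons-configuration-1.6.jar \
     ${HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-2.4.0.jar \
     ${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar \
     ${FLUME_HOME}/lib/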

Exceptions

java.lang.NoClassDefFoundError: org/apache/hadoop/io/SequenceFile$CompressionType

2016-11-03 14:49:35,278 (conf-file-poller-0) [ERROR - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:146)] Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/apache/hadoop/io/SequenceFile$CompressionType

Cause: a dependency jar is missing, namely:

${HADOOP_HOME}/share/hadoop/common/hadoop-common-2.4.0.jar

Solution: locate this jar file and copy it into the lib directory under the Flume installation directory.
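For example, following the same pattern as the fixes below:

cp ${HADOOP_HOME}/share/hadoop/common/hadoop-common-2.4.0.jar ${FLUME_HOME}/lib/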

java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null

2016-11-03 16:32:06,741 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:447)] process failed
java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.flume.formatter.output.BucketPath.replaceShorthand(BucketPath.java:256)
    at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:465)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:745)

Solution:
Edit conf/flume.conf, replacing agent1 and sink1 with your own agent and sink names:

agent1.sinks.sink1.hdfs.useLocalTimeStamp = true

java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration

2016-11-03 16:32:55,594 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:447)] process failed
java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:38)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:36)
    at org.apache.hadoop.security.UserGroupInformation$UgiMetrics.create(UserGroupInformation.java:106)
    at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:208)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2554)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2546)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2412)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:240)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:232)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:668)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:665)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.Configuration
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 18 more

Solution:
The missing class is in commons-configuration-1.6.jar, which is located under ${HADOOP_HOME}/share/hadoop/common/lib/. Copy it into Flume's lib directory:

cp ${HADOOP_HOME}/share/hadoop/common/lib/commons-configuration-1.6.jar ${FLUME_HOME}/lib/

java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName

2016-11-03 16:41:54,629 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:447)] process failed
java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName

Solution:
The hadoop-auth-2.4.0.jar dependency is missing; copy it into Flume's lib directory as well:

cp ${HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-2.4.0.jar ${FLUME_HOME}/lib/

HDFS IO error java.io.IOException: No FileSystem for scheme: hdfs

2016-11-03 16:49:26,638 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:443)] HDFS IO error
java.io.IOException: No FileSystem for scheme: hdfs

Missing dependency: hadoop-hdfs-2.4.0.jar

cp ${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar ${FLUME_HOME}/lib/

java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder

2016-12-26 09:49:07,854 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:447)] process failed
java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:645)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:240)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:232)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:668)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:665)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 18 more
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:645)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:240)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:232)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:668)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:665)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 18 more

Solution: copy htrace-core-3.1.0-incubating.jar (this jar can also be found under the Hadoop installation directory; in my case it was in a lib directory) into ${FLUME_HOME}/lib/.
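For example (the exact location varies by distribution; under stock Hadoop 2.7 the jar is typically under share/hadoop/common/lib/):

cp ${HADOOP_HOME}/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar ${FLUME_HOME}/lib/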

java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets

2016-12-26 10:15:36,190 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:447)] process failed
java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets
    at org.apache.hadoop.ipc.Server.<clinit>(Server.java:221)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.<clinit>(ProtobufRpcEngine.java:71)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2147)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2112)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2206)
    at org.apache.hadoop.ipc.RPC.getProtocolEngine(RPC.java:205)
    at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:579)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:419)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:688)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:240)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:232)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:668)
    at org.apache.flume

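Solution: the missing class is in commons-io. Consistent with the jar list in the summary above, copy commons-io-2.4.jar (found under the Hadoop lib directories; the exact path varies by distribution) into Flume's lib directory:

cp ${HADOOP_HOME}/share/hadoop/common/lib/commons-io-2.4.jar ${FLUME_HOME}/lib/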