
Flink on Zeppelin: Running Flink in Local Mode

Tags: Flink, FlinkOnZeppelin, Zeppelin

Contents

1. Download the required jar packages

2. Start a Flink cluster in local mode

3. Configure the Flink Interpreter


1. Download the required jar packages

  • flink-hadoop-compatibility for Flink 1.12, download from:

https://repo1.maven.org/maven2/org/apache/flink/flink-hadoop-compatibility_2.11/1.12.0/

  • flink-shaded-hadoop-2-uber, download from:

https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-9.0/

Copy both jar packages into Zeppelin's lib directory, for example as in the sketch below.
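A minimal shell sketch of this step, assuming ZEPPELIN_HOME points at your Zeppelin installation and using the standard Maven jar names found in the directories linked above:

wget https://repo1.maven.org/maven2/org/apache/flink/flink-hadoop-compatibility_2.11/1.12.0/flink-hadoop-compatibility_2.11-1.12.0.jar
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-9.0/flink-shaded-hadoop-2-uber-2.8.3-9.0.jar
# copy both jars into Zeppelin's lib directory
cp flink-hadoop-compatibility_2.11-1.12.0.jar flink-shaded-hadoop-2-uber-2.8.3-9.0.jar $ZEPPELIN_HOME/lib/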


2. Start a Flink cluster in local mode

Download the latest Flink 1.12 release from https://www.apache.org/dyn/closer.lua/flink/flink-1.12.0/flink-1.12.0-bin-scala_2.11.tgz

Note: JDK 8 or 11 is required ("To be able to run Flink, the only requirement is to have a working Java 8 or 11 installation").
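A quick check of the JVM before going further:

# should report version 1.8.x or 11.x
java -version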

Unpack the archive and change the following parameter in conf/flink-conf.yaml:

# Port range for the REST and web server to bind to.
#
rest.bind-port: 50801-50901
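A minimal sketch of applying this from the shell, assuming the archive was unpacked into ./flink-1.12.0 (the port range itself is only an example; any free range on your host works):

tar -xzf flink-1.12.0-bin-scala_2.11.tgz
echo "rest.bind-port: 50801-50901" >> flink-1.12.0/conf/flink-conf.yaml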

Otherwise, running a Flink job from Zeppelin fails with an exception like the following:

Caused by: java.net.BindException: Could not start rest endpoint on any port in port range 8081
	at org.apache.flink.runtime.rest.RestServerEndpoint.start(RestServerEndpoint.java:228)
	at org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:165)
	... 18 more

Then start the local-mode cluster:

# start Flink
./bin/start-cluster.sh

Open the Flink JobManager web UI in the browser, e.g. http://vm01:8081/#/overview (if you changed rest.bind-port as above, use the port that was actually bound instead of 8081).
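To check from the shell which port was bound and that the REST API answers (the log file pattern and the exact log wording are assumptions based on a standard standalone setup; adjust host and port to your environment):

# the JobManager log prints the port chosen from the configured range
grep "Rest endpoint listening at" flink-1.12.0/log/flink-*-standalonesession-*.log
# an idle cluster returns a JSON document with an empty job list
curl http://vm01:8081/jobs/overview   # replace 8081 with the bound port if you changed rest.bind-port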

Note that the download must be the flink-1.12.0-bin-scala_2.11 build (i.e. built for Scala 2.11); otherwise Zeppelin reports the following exception:

org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: Unsupported scala version: version 2.12.7, Only scala 2.11 is supported
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:760)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
	at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
	at org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:39)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: org.apache.zeppelin.interpreter.InterpreterException: Unsupported scala version: version 2.12.7, Only scala 2.11 is supported
	at org.apache.zeppelin.flink.FlinkInterpreter.checkScalaVersion(FlinkInterpreter.java:57)
	at org.apache.zeppelin.flink.FlinkInterpreter.open(FlinkInterpreter.java:64)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
	... 8 more
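One way to verify which Scala version a downloaded distribution was built against is the suffix of the flink-dist jar in its lib directory (a sketch, assuming the ./flink-1.12.0 layout used above):

ls flink-1.12.0/lib/flink-dist_*.jar
# flink-dist_2.11-1.12.0.jar  -> built for Scala 2.11, which is what Zeppelin expects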


3. Configure the Flink Interpreter

Flink's local mode creates a MiniCluster inside the local process, which is suitable for POCs or experiments on small data sets. FLINK_HOME and flink.execution.mode must be configured.

When Zeppelin only needs to connect to a local Flink cluster, it is enough to set these two properties on the Flink Interpreter: FLINK_HOME and flink.execution.mode. For example:
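A sketch of the two properties as they might appear in Zeppelin's interpreter settings page (the FLINK_HOME path is only an example; point it at wherever Flink 1.12 was unpacked):

FLINK_HOME            /opt/flink-1.12.0
flink.execution.mode  local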

