A Presto on YARN Solution
Deploying Presto on a YARN-Based Cluster
Unlike Spark, Presto has no built-in YARN support. Spark integrates well with YARN: a few changes to the launch scripts and cluster environment are enough to run Spark jobs on YARN. Presto, by contrast, needs Apache Slider; Presto on YARN is implemented through Slider. YARN is a general-purpose resource management system that gives applications running on top of it unified resource management and scheduling, and adopting it brings large gains in cluster utilization, centralized resource management, and data sharing. The idea of this solution, then, is to submit Presto to YARN as an application.

Presto on YARN cannot be installed directly from the official binary package: Presto must be rebuilt to produce presto-yarn-package-1.6-SNAPSHOT-0.184.zip, which is then deployed through Slider. There is very little material about Presto on YARN online; you mostly have to rely on the rather sparse official documentation and troubleshoot errors as they come up. If your Hadoop is a CDH release, the CDH version must be specified when building Slider (the default is Apache Hadoop). We hit many pitfalls while building, installing, and deploying, so this post records the build, installation, and debugging process for reference.
I. Basic Environment Requirements
• Linux
• Java 8, 64-bit
• Presto-0.184
• Zookeeper
• Hadoop
• YARN
II. Basic Presto Configuration and Architecture
• Presto architecture diagram
• Presto query execution flow
• Connectors
• Presto can read Hive data from the following Hadoop distributions; supported file formats are Text, SequenceFile, RCFile, and ORC:
1 Apache Hadoop 1.x (hive-hadoop1)
2 Apache Hadoop 2.x (hive-hadoop2)
3 Cloudera CDH4 (hive-cdh4)
4 Cloudera CDH5 (hive-cdh5)
• In addition, a remote Hive metastore is required; local and embedded modes are not supported. Presto does not use MapReduce and needs only HDFS.
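For reference, on a standalone Presto install the Hive connector is configured through a catalog file; under presto-on-yarn the same keys are passed via the `site.global.catalog` entry in appConfig.json instead. A minimal sketch of such a catalog file, with the metastore host as a placeholder:

```properties
# etc/catalog/hive.properties — minimal Hive catalog for a standalone
# Presto install; the metastore host below is a placeholder.
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host:9083
# Point Presto at the cluster's Hadoop configs so it can reach HDFS.
hive.config.resources=/usr/local/hadoop/etc/hadoop/core-site.xml,/usr/local/hadoop/etc/hadoop/hdfs-site.xml
```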
III. Installation and Configuration Steps
1. Build presto-on-yarn
(1) Download the presto-yarn source.
(2) Build: mvn clean package -Dpresto.version=0.184
Once the build finishes, presto-yarn-package-1.6-SNAPSHOT-0.184.zip can be found in the build output directory.
Inspect the directory structure of presto-yarn-package-1.6-SNAPSHOT-0.184.zip.
2. Build Slider
Build Slider against the CDH release, and upgrade the JDK if necessary.
(1) Download the source: apache-slider-0.91.0-incubating-source-release.tar.gz
(2) Adjust the Maven configuration
To build against the CDH releases of Hadoop (2.6.0-cdh5.7.0) and HBase (1.2.0-cdh5.7.0), edit the pom file in that directory:
<hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
<hbase.version>1.2.0-cdh5.7.0</hbase.version>
<accumulo.version>1.7.0</accumulo.version>
Comment out the hadoop-minicluster dependency in slider-core and slider-funtest:
<!--
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-minicluster</artifactId>
<scope>test</scope>
</dependency>
-->
(3) Build: mvn clean package -Dmaven.test.skip=true -DskipTests
The build produces slider-0.91.0-incubating-all.tar.gz in the output directory.
(4) Configure Slider; see the official documentation for details.
slider-client.xml:
<configuration>
  <property>
    <name>slider.client.resource.origin</name>
    <value>conf/slider-client.xml</value>
    <description>This is just for diagnostics</description>
  </property>
  <property>
    <name>slider.security.protocol.acl</name>
    <value>*</value>
  </property>
  <property>
    <name>slider.yarn.queue</name>
    <value>root.presto</value>
    <description>the name of the YARN queue to use.</description>
  </property>
  <property>
    <name>slider.yarn.queue.priority</name>
    <value>1</value>
    <description>the priority of the application.</description>
  </property>
  <property>
    <name>slider.am.login.keytab.required</name>
    <value>false</value>
    <description>Declare that a keytab must be provided.</description>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master2:8030</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://common/user/hadoop/.slider/</value>
  </property>
  <property>
    <name>slider.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
</configuration>
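Hand-edited Hadoop-style XML fails in unhelpful ways when a tag is left unclosed, so it is worth a well-formedness check before deploying. A sketch using a small sample file (the path and contents are illustrative):

```shell
# Write a small sample config and verify it is well-formed XML.
cat > /tmp/slider-client-check.xml <<'EOF'
<configuration>
  <property>
    <name>slider.yarn.queue</name>
    <value>root.presto</value>
  </property>
</configuration>
EOF
# ElementTree raises ParseError (non-zero exit) on malformed XML.
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/slider-client-check.xml'); print('XML OK')"
```

The same one-liner works on the real slider-client.xml by substituting its path.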
slider-env.sh configuration:
export JAVA_HOME=${JAVA_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR}
export SLIDER_JVM_OPTS="-server -Xmx40g -Xms4g -Xmn8g"
Unzip presto-yarn-package-1.6-SNAPSHOT-0.184.zip and take appConfig-default.json and resources-default.json from it. Edit the settings in them to allocate resources to Presto; see the official configuration reference (https://prestodb.io/presto-yarn/installation-yarn-configuration-options.html). appConfig.json configuration:
{
  "schema": "http://example.org/specification/v2.0.0",
  "metadata": {
  },
  "global": {
    "java_home": "/usr/local/java",
    "site.global.app_user": "hadoop",
    "site.global.user_group": "hadoop",
    "site.global.app_name": "presto-server-0.184",
    "site.global.data_dir": "/data01/presto/data",
    "site.global.config_dir": "/data01/presto/etc",
    "zookeeper.quorum": "node1:2181,node2:2181,node3:2181",
    "application.def": "hdfs://common/user/hadoop/.slider/package/presto/presto-yarn-package-1.6-SNAPSHOT-0.184.zip",
    "site.global.singlenode": "false",
    "site.global.coordinator_host": "${COORDINATOR_HOST}",
    "site.global.app_pkg_plugin": "${AGENT_WORK_ROOT}/app/definition/package/plugins/",
    "site.global.presto_query_max_memory": "512GB",
    "site.global.presto_query_max_memory_per_node": "20GB",
    "site.global.presto_server_port": "18088",
    "site.global.catalog": "{'hive': ['connector.name=hive-hadoop2','hive.config.resources=/usr/local/hadoop/etc/hadoop/core-site.xml,/usr/local/hadoop/etc/hadoop/hdfs-site.xml','hive.metastore.uri=thrift://10.134.81.70:9083'], 'tpch': ['connector.name=tpch']}",
    "site.global.jvm_args": "['-server', '-Xmx40960M', '-XX:+UseG1GC', '-XX:G1HeapRegionSize=32M', '-XX:+UseGCOverheadLimit', '-XX:+ExplicitGCInvokesConcurrent', '-XX:+HeapDumpOnOutOfMemoryError', '-XX:OnOutOfMemoryError=kill -9 %p']",
    "site.global.log_properties": "['com.facebook.presto.hive=WARN','com.facebook.presto.server=INFO']",
    "site.global.additional_config_properties": "['task.max-worker-threads=50', 'distributed-joins-enabled=true']"
  },
  "components": {
    "slider-appmaster": {
      "jvm.heapsize": "1024M"
    },
    "MYAPP_COMPONENT": {
    }
  }
}
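appConfig.json is easy to break while hand-editing (trailing commas, unbalanced quotes), and Slider's error messages rarely point at the JSON itself, so a syntax check before `slider create` saves a round trip. A sketch using a sample file:

```shell
# Validate JSON syntax before handing the file to Slider.
cat > /tmp/appConfig-check.json <<'EOF'
{
  "schema": "http://example.org/specification/v2.0.0",
  "metadata": {},
  "global": { "site.global.presto_server_port": "18088" }
}
EOF
# json.tool exits non-zero (with a line/column message) on invalid JSON.
python3 -m json.tool /tmp/appConfig-check.json > /dev/null && echo "JSON OK"
```

Substitute the path to the real appConfig.json and resources.json to check both.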
resources.json configuration:
{
  "schema" : "http://example.org/specification/v2.0.0",
  "metadata" : {
  },
  "global" : {
    "yarn.vcores": "1"
  },
  "components": {
    "slider-appmaster": {
    },
    "COORDINATOR": {
      "yarn.role.priority": "1",
      "yarn.component.instances": "1",
      "yarn.component.placement.policy": "1",
      "yarn.memory": "20000"
    },
    "WORKER": {
      "yarn.role.priority": "2",
      "yarn.component.instances": "20",
      "yarn.component.placement.policy": "1",
      "yarn.memory": "20000"
    }
  }
}
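The memory knobs in these two files must stay mutually consistent: the `-Xmx` in `site.global.jvm_args` must exceed `site.global.presto_query_max_memory_per_node` (see the troubleshooting notes at the end of this post), and YARN's `yarn.memory` should accommodate the JVM heap. A sketch of a pre-flight check, with the values from the configs above hard-coded for illustration:

```shell
# Pre-flight check: the JVM heap must exceed the per-node query memory limit.
XMX_MB=40960                          # from site.global.jvm_args: -Xmx40960M
QUERY_MAX_PER_NODE_MB=$((20 * 1024))  # site.global.presto_query_max_memory_per_node = 20GB
if [ "$XMX_MB" -gt "$QUERY_MAX_PER_NODE_MB" ]; then
  echo "heap ok: ${XMX_MB}M > ${QUERY_MAX_PER_NODE_MB}M"
else
  echo "OOM risk: raise -Xmx above ${QUERY_MAX_PER_NODE_MB}M"
fi
```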
3. Install Presto on YARN with Slider
Create the Slider directories on HDFS and grant access permissions, making sure the current user has access; create Presto's local directories on every NodeManager.
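A sketch of this preparation step, assuming the `hadoop` user and the `/data01/presto` paths from appConfig.json. The HDFS commands are shown as comments since they need a live cluster; the local directory creation here uses a temp path as a stand-in:

```shell
# On HDFS (needs a live cluster; shown for reference):
#   hdfs dfs -mkdir -p /user/hadoop/.slider
#   hdfs dfs -chown -R hadoop:hadoop /user/hadoop/.slider
# On every NodeManager, the Presto data and config directories must exist.
PRESTO_BASE="${TMPDIR:-/tmp}/presto-demo"   # stand-in for /data01/presto in this sketch
mkdir -p "$PRESTO_BASE/data" "$PRESTO_BASE/etc"
ls "$PRESTO_BASE"
```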
[hadoop@master1 slider]$ slider package --install --name presto --package presto-yarn-package-1.6-SNAPSHOT-0.184.zip
2017-09-25 14:23:00,327 [main] INFO client.SliderClient - Installing package file:/usr/local/apache/slider-0.91.0-incubating/presto-yarn-package-1.6-SNAPSHOT-0.184.zip to hdfs://common/user/hadoop/.slider/package/presto/presto-yarn-package-1.6-SNAPSHOT-0.184.zip (overwrite set to false)
2017-09-25 14:23:03,114 [main] INFO tools.SliderUtils - Reading metainfo.xml of size 2425
2017-09-25 14:23:03,115 [main] INFO client.SliderClient - Found XML metainfo file in package
2017-09-25 14:23:03,135 [main] INFO client.SliderClient - Creating summary metainfo file
2017-09-25 14:23:03,153 [main] INFO client.SliderClient - Set application.def in your app config JSON to .slider/package/presto/presto-yarn-package-1.6-SNAPSHOT-0.184.zip
2017-09-25 14:23:03,154 [main] INFO util.ExitUtil - Exiting with status 0
4. Start Presto on YARN
[hadoop@master1 slider]$ slider create presto-query --template appConfig.json --resources resources.json
2017-09-25 15:06:35,776 [main] INFO agent.AgentClientProvider - Validating app definition hdfs://common/user/hadoop/.slider/package/presto/presto-yarn-package-1.6-SNAPSHOT-0.184.zip
2017-09-25 15:06:35,778 [main] INFO agent.AgentUtils - Reading metainfo at hdfs://common/user/hadoop/.slider/package/presto/presto-yarn-package-1.6-SNAPSHOT-0.184.zip
2017-09-25 15:06:35,945 [main] INFO agent.AgentUtils - Got metainfo from summary file
2017-09-25 15:06:35,985 [main] INFO client.SliderClient - No credentials requested
2017-09-25 15:06:36,087 [main] INFO agent.AgentUtils - Reading metainfo at hdfs://common/user/hadoop/.slider/package/presto/presto-yarn-package-1.6-SNAPSHOT-0.184.zip
2017-09-25 15:06:36,094 [main] INFO agent.AgentUtils - Got metainfo from summary file
2017-09-25 15:06:36,127 [main] INFO launch.AbstractLauncher - Setting yarn.resourcemanager.am.retry-count-window-ms to 300000
2017-09-25 15:06:36,127 [main] INFO launch.AbstractLauncher - Log include patterns:
2017-09-25 15:06:36,127 [main] INFO launch.AbstractLauncher - Log exclude patterns:
2017-09-25 15:06:36,461 [main] INFO slideram.SliderAMClientProvider - Loading all dependencies for AM.
2017-09-25 15:06:36,462 [main] INFO tools.SliderUtils - Loading all dependencies from /usr/local/apache/slider-0.91.0-incubating/lib
2017-09-25 15:06:40,709 [main] INFO agent.AgentClientProvider - Automatically uploading the agent tarball at hdfs://common/user/hadoop/.slider/cluster/presto-query/tmp/application_1504914229457_101524/agent
2017-09-25 15:06:40,791 [main] INFO agent.AgentClientProvider - Validating app definition hdfs://common/user/hadoop/.slider/package/presto/presto-yarn-package-1.6-SNAPSHOT-0.184.zip
2017-09-25 15:06:40,796 [main] INFO tools.SliderUtils - For faster submission of apps, upload dependencies using cmd dependency --upload
2017-09-25 15:06:40,804 [main] INFO client.SliderClient - Submitting application application_1504914229457_101524
2017-09-25 15:06:40,808 [main] INFO launch.AppMasterLauncher - Submitting application to Resource Manager
2017-09-25 15:06:41,036 [main] INFO impl.YarnClientImpl - Submitted application application_1504914229457_101524
2017-09-25 15:06:41,039 [main] INFO util.ExitUtil - Exiting with status 0
The newly launched application is visible in the YARN web UI.
The corresponding Presto application can also be seen in ZooKeeper.
5. Basic usage
(1) Find the coordinator_address in the YARN web UI.
(2) Use Presto from the command line.
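Once the coordinator host is known, the standard Presto CLI can be pointed at it using the HTTP port from appConfig.json. A sketch that just assembles the invocation (the host name is a placeholder read from the YARN application page):

```shell
COORDINATOR_HOST="node5"   # placeholder: read the real host from the YARN UI
PRESTO_PORT=18088          # site.global.presto_server_port in appConfig.json
# Print the CLI invocation; run it where the presto CLI binary is installed.
echo "presto --server ${COORDINATOR_HOST}:${PRESTO_PORT} --catalog hive --schema default"
```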
Troubleshooting
Problem 1:
NetUtil.py:62 - SSLError: Failed to connect. Please check openssl library versions
Fix: upgrade openssl (openssl >= 1.0.1e-16).
(1) Check the current version with rpm -qa | grep openssl; if it is too old, upgrade with yum -y upgrade openssl.
(2) After upgrading openssl, restart the NodeManager.
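Version strings like these compare cleanly with a version sort. A sketch that checks an installed version against the 1.0.1e minimum; the installed value is hard-coded here for illustration (on a real host it would come from `openssl version | awk '{print $2}'`):

```shell
# Compare an OpenSSL version string against the required minimum.
required="1.0.1e"
installed="1.0.2k"   # hard-coded sample; read it from `openssl version` in practice
# sort -V orders version strings; if the required version sorts first (or
# equal), the installed version meets the minimum.
oldest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$oldest" = "$required" ]; then
  echo "openssl $installed meets the minimum"
else
  echo "openssl $installed is too old: upgrade to >= $required"
fi
```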
Problem 2: Presto is demanding about the JDK version; upgrade the JDK if necessary.
Problem 3: when building Slider against a CDH version, Maven dependency errors or failures to reach the central repository may occur.
Fix: read the Maven error message and resolve each case; the errors differ between build environments.
Problem 4: if the JVM parameters set in the resource configuration JSON are too small, larger SQL queries will fail.
Problem 5: some of Slider's default parameters may not match the cluster environment and cause errors. For example, yarn.label.expression can only be used to place the coordinator and workers when the CapacityScheduler is in use; our YARN cluster runs the Fair Scheduler, so it is not supported and fails.
Problem 6: the site.global.catalog entry in appConfig.json ships with very few defaults, so you need to know Presto well to pinpoint errors quickly. For example: connector.name=hive-hadoop2, hive.config.resources=/usr/local/hadoop/etc/hadoop/core-site.xml,/usr/local/hadoop/etc/hadoop/hdfs-site.xml. Setting connector.name to hive-cdh5 fails, and hive.config.resources has no default, so without it Presto cannot find the HDFS environment.
Problem 7: when launching multiple Presto applications, pay attention to the relevant configuration entries, or a new application will conflict with previously launched ones and affect other services: every application launch first deletes the corresponding data and etc directories and then regenerates the configuration and data directories.
Problem 8: the heap size set in site.global.jvm_args must be larger than site.global.presto_query_max_memory_per_node, or Presto fails with OOM.
Problem 9: memory sizes written with the unit "g" are rejected; they must be written as "GB".
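A tiny sketch of a check for this unit pitfall, since the JVM-style lowercase suffix is an easy habit to carry over from -Xmx flags:

```shell
# Presto config memory values must use "GB"; JVM-style "g"/"G" suffixes fail.
for v in "512GB" "20GB" "20g"; do
  case "$v" in
    *GB)   echo "$v: ok" ;;
    *[gG]) echo "$v: invalid unit, use GB" ;;
    *)     echo "$v: unrecognized" ;;
  esac
done
```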
問題10:....