Spark Deployment (Cluster)
阿新 · Published 2018-11-10
● Download the Spark tarball (requires at least JDK 1.7)
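A minimal sketch of this step, assuming the official Apache archive URL and the install path used in the rest of this post:

#Download and unpack Spark 2.1.3 (prebuilt for Hadoop 2.7)
wget https://archive.apache.org/dist/spark/spark-2.1.3/spark-2.1.3-bin-hadoop2.7.tgz
mkdir -p /usr/local/apps
tar -zxf spark-2.1.3-bin-hadoop2.7.tgz -C /usr/local/apps
java -version   #confirm JDK 1.7 or newer before going on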
● Modify the configuration
[root@localhost conf]# pwd
/usr/local/apps/spark-2.1.3-bin-hadoop2.7/conf
[root@localhost conf]# cp slaves.template ./slaves
[root@localhost conf]# vi slaves
#Add the worker nodes (hostnames). The file defaults to a single localhost; I only have one VM,
#so I make do with that and add nothing. With multiple nodes, list each one's IP or hostname.
localhost
[root@localhost conf]# cp spark-env.sh.template spark-env.sh
[root@localhost conf]# vi spark-env.sh
#Add the following ("export" must be lowercase; see the startup warning below)
export JAVA_HOME=/usr/java/jdk1.7.0_79
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=localhost:2181 -Dspark.deploy.zookeeper.dir=/spark"
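For reference, the SPARK_DAEMON_JAVA_OPTS line turns on ZooKeeper-based master recovery: the master's state is stored under /spark in ZooKeeper so a standby master can take over if the active one dies. Here is a sketch of the same two files for a hypothetical three-node cluster (the hostnames node1..node3 are made up for illustration):

#conf/slaves: one worker host per line (node1..node3 are hypothetical names)
node1
node2
node3
#conf/spark-env.sh: list every member of the ZooKeeper ensemble
export JAVA_HOME=/usr/java/jdk1.7.0_79
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181 -Dspark.deploy.zookeeper.dir=/spark"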
● Start the master
[root@localhost spark-2.1.3-bin-hadoop2.7]# ./sbin/start-master.sh
/usr/local/apps/spark-2.1.3-bin-hadoop2.7/conf/spark-env.sh: line 67: EXPORT: command not found
starting org.apache.spark.deploy.master.Master, logging to /usr/local/apps/spark-2.1.3-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
#The "EXPORT: command not found" warning comes from writing export in uppercase in spark-env.sh; shell commands are case-sensitive, so use the lowercase export shown earlier. The same warning repeats below until the file is fixed.
[root@localhost spark-2.1.3-bin-hadoop2.7]# jps
14263 Launcher
20167 ZooKeeperMain
20661 Master
19231 QuorumPeerMain
20745 Jps
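The start script only prints where it logs, so to confirm the master really came up, read that log (the path is taken from the output above):

tail -n 20 /usr/local/apps/spark-2.1.3-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
#Look for lines showing the master bound to port 7077 and the web UI started on port 8080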
● Start the master and workers
#You will be asked for a password; it's best to set up passwordless SSH login (see the key-setup sketch after the jps output below)
[root@localhost sbin]# ./start-all.sh
/usr/local/apps/spark-2.1.3-bin-hadoop2.7/conf/spark-env.sh: line 67: EXPORT: command not found
org.apache.spark.deploy.master.Master running as process 21210. Stop it first.
#start-all.sh tries to launch a master as well; since one is already running it is skipped, and only the workers are started
/usr/local/apps/spark-2.1.3-bin-hadoop2.7/conf/spark-env.sh: line 67: EXPORT: command not found
root@localhost's password:
localhost: /usr/local/apps/spark-2.1.3-bin-hadoop2.7/conf/spark-env.sh: line 67: EXPORT: command not found
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/apps/spark-2.1.3-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
[root@localhost sbin]# jps
14263 Launcher
20167 ZooKeeperMain
21210 Master
21374 Worker
19231 QuorumPeerMain
21441 Jps
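As mentioned above, start-all.sh logs into every worker over SSH and asks for a password each time. A one-time key setup removes the prompts; this sketch assumes the single-node root@localhost setup used in this post:

#Generate a key pair with no passphrase, then authorize it on the worker host
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id root@localhost
#On a real cluster, run ssh-copy-id once per worker hostname from the master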
● View in the browser
http://192.168.x.xx:8080/ (the Spark master web UI; 8080 is the default port, replace 192.168.x.xx with your master's IP)
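Besides eyeballing the web UI, you can verify the cluster end to end by submitting the bundled SparkPi example. A minimal sketch, assuming the default master port 7077 and the examples jar name that the 2.1.3 distribution ships with (adjust the jar name if yours differs):

cd /usr/local/apps/spark-2.1.3-bin-hadoop2.7
./bin/spark-submit --master spark://localhost:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.1.3.jar 100
#A line like "Pi is roughly 3.14..." in the output means the cluster scheduled and ran the job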