Setting up a Spark cluster on Mesos
By 阿新 • Published: 2019-01-27
1. Preparation

1) Install Scala (omitted)
2. Install and configure Spark

# mkdir /usr/local/spark
# cd /usr/local/spark
# wget http://d3kbcqa49mib13.cloudfront.net/spark-2.0.1-bin-hadoop2.7.tgz
# tar -zxvf spark-2.0.1-bin-hadoop2.7.tgz
# cd /usr/local/spark/spark-2.0.1-bin-hadoop2.7/conf
# cat spark-defaults.conf.template > spark-defaults.conf
# vim spark-defaults.conf
spark.io.compression.codec lzf
# cat spark-env.sh.template > spark-env.sh
# vim spark-env.sh
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/mesos/mesos/lib/libmesos.so
export SPARK_EXECUTOR_URI=/usr/local/spark/spark.tar.gz
#export SPARK_EXECUTOR_URI=hdfs://spark.tar.gz

SPARK_EXECUTOR_URI is best placed on a distributed file system such as HDFS, so every Mesos agent can fetch the executor tarball.
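The spark.tar.gz that SPARK_EXECUTOR_URI points at is not produced by the steps above; it has to be packaged from the unpacked distribution first. A minimal sketch, run here in a scratch directory for illustration (on the real host the parent directory would be /usr/local/spark, and the HDFS upload path is an assumption, not taken from this article):

```shell
# Package the Spark distribution into the tarball SPARK_EXECUTOR_URI references.
parent=$(mktemp -d)                                # stand-in for /usr/local/spark
mkdir -p "$parent/spark-2.0.1-bin-hadoop2.7/bin"   # stand-in for the unpacked dist
cd "$parent"
tar -czf spark.tar.gz spark-2.0.1-bin-hadoop2.7
tar -tzf spark.tar.gz | head -n 1                  # verify the archive lists the dist dir
# For the recommended HDFS variant, upload it afterwards, e.g. (path hypothetical):
# hdfs dfs -put spark.tar.gz /spark.tar.gz
```

Every Mesos agent fetches this archive to launch executors, which is why a distributed file system beats a local path that would have to exist on every node.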
3. Start

# cd /usr/local/spark/spark-2.0.1-bin-hadoop2.7/sbin
# ./start-mesos-dispatcher.sh --master mesos://192.168.30.97:5050
4. Smoke test

# cd /usr/local/spark/spark-2.0.1-bin-hadoop2.7/bin
# ./spark-shell --master mesos://192.168.30.97:5050
scala> val a = sc.parallelize(2 to 1000)
scala> a.collect
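In this smoke test, sc.parallelize(2 to 1000) distributes the range across Mesos executors and a.collect pulls every element back to the driver. As a local sanity reference for what collect should return, the same range can be checked with standard tools:

```shell
# Local reference for the spark-shell smoke test: the range 2..1000
# has 999 elements and sums to 500499; collect should return exactly these numbers.
seq 2 1000 | wc -l                            # element count: 999
seq 2 1000 | awk '{s += $1} END {print s}'    # sum: 500499
```

If collect returns fewer elements or hangs, check the Mesos master UI to confirm the framework registered and agents accepted the executor tarball.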
#./spark-submit \
--class org.apache.spark.examples.SparkPi \
--master mesos://192.168.30.97:7077 \
--deploy-mode cluster \
/usr/local/spark/spark-2.0.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.0.1.jar \
100
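This submission goes to the MesosClusterDispatcher started above (port 7077 is its default submission port), and the trailing 100 is the number of slices SparkPi splits the work into. SparkPi estimates π by Monte Carlo sampling: throw random points into the unit square and count how many land inside the quarter circle. A single-machine sketch of the same idea (not the actual SparkPi code; sample count chosen arbitrarily):

```shell
# Monte Carlo estimate of pi, mirroring the computation SparkPi
# distributes across Mesos executors. Prints an estimate near 3.14.
awk 'BEGIN {
  srand(42); n = 100000; hits = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x*x + y*y <= 1) hits++     # point fell inside the quarter circle
  }
  printf "pi ~= %.3f\n", 4 * hits / n
}'
```

In cluster deploy mode the driver itself runs on a Mesos agent, so the job's output appears in the dispatcher/driver logs rather than in the submitting terminal.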