Building a Spark 2.1.0 Docker Image

Preface

Big data and containers are two of the hottest technologies of recent years, so containerized deployment of big data platforms naturally draws a lot of attention. Among containerization options for Apache Spark, the most widely used is sequenceiq/spark, with about 330K pulls on Docker Hub.

The sequenceiq/spark image bundles Hadoop/YARN. Although it deploys as a "pseudo-cluster", it is still very convenient for development and testing. Unfortunately, sequenceiq has not kept pace with releases: the latest image is sequenceiq/spark:1.6.0, so to try Spark 2.1.0 you have to build it yourself.

Environment

OS             Kernel      Docker version
Ubuntu 15.10   4.2.0-42    1.12.5
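
You can confirm your own versions before starting with the standard commands:

$ uname -r
$ docker version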

Preparation

1) Download the sequenceiq/spark image build sources

$ git clone https://github.com/sequenceiq/docker-spark
Cloning into 'docker-spark'...
remote: Counting objects: 211, done.
remote: Total 211 (delta 0), reused 0 (delta 0), pack-reused 211
Receiving objects: 100% (211/211), 55.20 KiB | 71.00 KiB/s, done.
Resolving deltas: 100% (108/108), done.
Checking connectivity... done.

2) Download the Spark 2.1.0 package from the Spark website

(screenshot: the Spark download page)

Since we build the Spark 2.1.0 image from sequenceiq's Dockerfile, which uses sequenceiq/hadoop-docker:2.6.0 as its base image, make sure to select "Pre-built for Hadoop 2.6" under download option "2. Choose a package type".

Note: the downloaded archive must end up in the docker-spark directory.
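
If you prefer the command line, something like the following should work (the Apache archive URL is an assumption on my part; verify it against the download page):

$ cd docker-spark
$ wget https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.6.tgz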

3) Cache the sequenceiq/hadoop-docker:2.6.0 image

  • Run the following command to pull the hadoop-docker 2.6.0 image:
$ docker pull sequenceiq/hadoop-docker:2.6.0
2.6.0: Pulling from sequenceiq/hadoop-docker
b253335dcf03: Already exists 
a3ed95caeb02: Pull complete 
3452351686f4: Pull complete 
dfb6df69b64d: Pull complete 
...
bae586fb2d97: Pull complete 
Digest: sha256:2b95f51b7f0ddf0d7bb2c2cfa793bae3298fcda5523783155a2db9430cba494a
Status: Downloaded newer image for sequenceiq/hadoop-docker:2.6.0
  • List the local Docker image cache to confirm that sequenceiq/hadoop-docker:2.6.0 was pulled successfully:
$ docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
hello-world                latest              c54a2cc56cbb        6 months ago        1.848 kB
sequenceiq/hadoop-docker   2.6.0               140b265bd62a        24 months ago       1.624 GB

Building the Image

1) Enter the docker-spark directory and confirm that all files needed for the build are present.

$ cd docker-spark/
$ ls -al
total 188856
drwxrwxr-x 4 farawayzheng farawayzheng      4096 Jan 10 23:10 ./
drwxrwxr-x 9 farawayzheng farawayzheng      4096 Jan 10 23:06 ../
-rwxrwxr-x 1 farawayzheng farawayzheng       901 Jan 10 23:06 bootstrap.sh*
-rw-rw-r-- 1 farawayzheng farawayzheng       970 Jan 10 23:16 Dockerfile
drwxrwxr-x 8 farawayzheng farawayzheng      4096 Jan 10 23:06 .git/
-rw-rw-r-- 1 farawayzheng farawayzheng        18 Jan 10 23:06 .gitignore
-rw-rw-r-- 1 farawayzheng farawayzheng     71624 Jan 10 23:06 LICENSE
-rw-rw-r-- 1 farawayzheng farawayzheng      3454 Jan 10 23:06 README.md
-rwxrwx--- 1 farawayzheng farawayzheng 193281941 Jan 10 23:04 spark-2.1.0-bin-hadoop2.6.tgz*
drwxrwxr-x 2 farawayzheng farawayzheng      4096 Jan 10 23:06 yarn-remote-client/

2) Modify the Dockerfile and the bootstrap.sh startup script (a note on how the two fit together follows the two listings below)

  • Change the Dockerfile to the following:
FROM sequenceiq/hadoop-docker:2.6.0
#MAINTAINER SequenceIQ
MAINTAINER farawayzheng

#support for Hadoop 2.6.0
#RUN curl -s http://d3kbcqa49mib13.cloudfront.net/spark-1.6.1-bin-hadoop2.6.tgz | tar -xz -C /usr/local/
ADD spark-2.1.0-bin-hadoop2.6.tgz /usr/local/
RUN cd /usr/local && ln -s spark-2.1.0-bin-hadoop2.6 spark
ENV SPARK_HOME /usr/local/spark
RUN mkdir $SPARK_HOME/yarn-remote-client
ADD yarn-remote-client $SPARK_HOME/yarn-remote-client

# Note: since the symlink above points /usr/local/spark at /usr/local/spark-2.1.0-bin-hadoop2.6,
# $SPARK_HOME-2.1.0-bin-hadoop2.6 expands to the real (non-symlink) install directory. This step
# boots the pseudo-cluster during the build and uploads Spark's jars (and example jars) to HDFS at /spark.
RUN $BOOTSTRAP && $HADOOP_PREFIX/bin/hadoop dfsadmin -safemode leave && $HADOOP_PREFIX/bin/hdfs dfs -put $SPARK_HOME-2.1.0-bin-hadoop2.6/jars /spark && $HADOOP_PREFIX/bin/hdfs dfs -put $SPARK_HOME-2.1.0-bin-hadoop2.6/examples/jars /spark

ENV YARN_CONF_DIR $HADOOP_PREFIX/etc/hadoop
ENV PATH $PATH:$SPARK_HOME/bin:$HADOOP_PREFIX/bin
# update boot script
COPY bootstrap.sh /etc/bootstrap.sh
RUN chown root.root /etc/bootstrap.sh
RUN chmod 700 /etc/bootstrap.sh

#install R
RUN rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum -y install R

ENTRYPOINT ["/etc/bootstrap.sh"]
  • Change bootstrap.sh to the following:
#!/bin/bash

: ${HADOOP_PREFIX:=/usr/local/hadoop}

$HADOOP_PREFIX/etc/hadoop/hadoop-env.sh

rm /tmp/*.pid

# installing libraries if any - (resource urls added comma separated to the ACP system variable)
cd $HADOOP_PREFIX/share/hadoop/common ; for cp in ${ACP//,/ }; do  echo == $cp; curl -LO $cp ; done; cd -

# altering the core-site configuration
sed s/HOSTNAME/$HOSTNAME/ /usr/local/hadoop/etc/hadoop/core-site.xml.template > /usr/local/hadoop/etc/hadoop/core-site.xml

# setting spark defaults
echo spark.yarn.jars hdfs:///spark/* > $SPARK_HOME/conf/spark-defaults.conf
cp $SPARK_HOME/conf/metrics.properties.template $SPARK_HOME/conf/metrics.properties

service sshd start
$HADOOP_PREFIX/sbin/start-dfs.sh
$HADOOP_PREFIX/sbin/start-yarn.sh

CMD=${1:-"exit 0"}
if [[ "$CMD" == "-d" ]];
then
    service sshd stop
    /usr/sbin/sshd -D -d
else
    /bin/bash -c "$*"
fi
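
How the two pieces fit together: the RUN step in the Dockerfile boots the pseudo-cluster once at build time and copies Spark's jars (plus the example jars) into HDFS under /spark, and bootstrap.sh then writes spark.yarn.jars hdfs:///spark/* into spark-defaults.conf, so YARN applications reference the jars already sitting in HDFS instead of re-uploading them on every submit. Inside a running container you can sanity-check this (a sketch, using the paths configured above):

bash-4.1# hdfs dfs -ls /spark | head
bash-4.1# cat /usr/local/spark/conf/spark-defaults.conf
spark.yarn.jars hdfs:///spark/*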

3) Build the Spark 2.1.0 image

$ docker build --rm -t farawayzheng/spark:2.1.0 .
Sending build context to Docker daemon 193.5 MB
Step 1 : FROM sequenceiq/hadoop-docker:2.6.0
 ---> 140b265bd62a
Step 2 : MAINTAINER farawayzheng
 ---> Running in 12a47858c223
 ---> 04b98762d0b7
Removing intermediate container 12a47858c223
Step 3 : ADD spark-2.1.0-bin-hadoop2.6.tgz /usr/local/
 ---> 07eab98fe3f9
Removing intermediate container 3533899c0e8e
Step 4 : RUN cd /usr/local && ln -s spark-2.1.0-bin-hadoop2.6 spark
 ---> Running in 8dbcac623198
 ---> f7d68c7d52f4
Removing intermediate container 8dbcac623198
Step 5 : ENV SPARK_HOME /usr/local/spark
 ---> Running in 55a56f466fcb
 ---> 7f891e362f29
Removing intermediate container 55a56f466fcb
Step 6 : RUN mkdir $SPARK_HOME/yarn-remote-client
 ---> Running in e989ef3d7d67
 ---> 85485e987afd
Removing intermediate container e989ef3d7d67
Step 7 : ADD yarn-remote-client $SPARK_HOME/yarn-remote-client
 ---> f14d86c9f5c0
Removing intermediate container bae32c1ae32a
Step 8 : RUN $BOOTSTRAP && $HADOOP_PREFIX/bin/hadoop dfsadmin -safemode leave && $HADOOP_PREFIX/bin/hdfs dfs -put $SPARK_HOME-2.1.0-bin-hadoop2.6/jars /spark && $HADOOP_PREFIX/bin/hdfs dfs -put $SPARK_HOME-2.1.0-bin-hadoop2.6/examples/jars /spark
 ---> Running in ed073536dd11
/
Starting sshd: [  OK  ]
Starting namenodes on [70b4a57bb473]
70b4a57bb473: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-70b4a57bb473.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-70b4a57bb473.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-70b4a57bb473.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-70b4a57bb473.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-70b4a57bb473.out
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Safe mode is OFF
 ---> e4bdf8a1628d
Removing intermediate container ed073536dd11
Step 9 : ENV YARN_CONF_DIR $HADOOP_PREFIX/etc/hadoop
 ---> Running in ea435f5b6141
 ---> 07083ac117c2
Removing intermediate container ea435f5b6141
Step 10 : ENV PATH $PATH:$SPARK_HOME/bin:$HADOOP_PREFIX/bin
 ---> Running in 0dabc7dd5211
 ---> c55b79a1e670
Removing intermediate container 0dabc7dd5211
Step 11 : COPY bootstrap.sh /etc/bootstrap.sh
 ---> e942148ae10f
Removing intermediate container 996819338fda
Step 12 : RUN chown root.root /etc/bootstrap.sh
 ---> Running in 7a8c32d8ddaa
 ---> 12cbdc408ed4
Removing intermediate container 7a8c32d8ddaa
Step 13 : RUN chmod 700 /etc/bootstrap.sh
 ---> Running in f833cda3afb5
 ---> d8d17a1babbf
Removing intermediate container f833cda3afb5
Step 14 : RUN rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
 ---> Running in 5c35660f3e9b
warning: /var/tmp/rpm-tmp.ByIPjn: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Retrieving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Preparing...                ##################################################
epel-release                ##################################################
 ---> 666a2cbda116
Removing intermediate container 5c35660f3e9b
Step 15 : RUN yum -y install R
 ---> Running in 7192828863d7
Loaded plugins: fastestmirror, keys, protect-packages, protectbase
Determining fastest mirrors
 * base: mirrors.btte.net
 * epel: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.btte.net
 * updates: mirrors.yun-idc.com
0 packages excluded due to repository protections
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package R.x86_64 0:3.3.2-3.el6 will be installed
......
......
......
---> Package xz-lzma-compat.x86_64 0:4.999.9-0.3.beta.20091007git.el6 will be updated
---> Package xz-lzma-compat.x86_64 0:4.999.9-0.5.beta.20091007git.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch    Version                           Repository
                                                                           Size
================================================================================
Installing:
 R                     x86_64  3.3.2-3.el6                       epel      26 k
Installing for dependencies:
 R-core                x86_64  3.3.2-3.el6                       epel      53 M
......
Updating for dependencies:
 cpp                   x86_64  4.4.7-17.el6                      base     3.7 M
......

Transaction Summary
================================================================================
Install      69 Package(s)
Upgrade      19 Package(s)

Total download size: 145 M
Downloading Packages:
--------------------------------------------------------------------------------
Total                                           494 kB/s | 145 MB     05:01     
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Importing GPG key 0x0608B895:
 Userid : EPEL (6) <[email protected]>
 Package: epel-release-6-8.noarch (installed)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Updating   : libgcc-4.4.7-17.el6.x86_64                                 1/107 
  ......  
  Cleanup    : glib2-2.26.1-3.el6.x86_64                                107/107 
  Verifying  : acl-2.2.49-6.el6.x86_64                                    1/107 
  ......  
Installed:
  R.x86_64 0:3.3.2-3.el6                                                        

Dependency Installed:
  R-core.x86_64 0:3.3.2-3.el6                                                   
  ......                                                 
  zlib-devel.x86_64 0:1.2.3-29.el6                                              

Dependency Updated:
  cpp.x86_64 0:4.4.7-17.el6                                                     
  ......                          
  xz-lzma-compat.x86_64 0:4.999.9-0.5.beta.20091007git.el6                      

Complete!
 ---> ed7e19858dc9
Removing intermediate container 7192828863d7
Step 16 : ENTRYPOINT /etc/bootstrap.sh
 ---> Running in 31d75ee50b7d
 ---> 4eb30ebd34a2
Removing intermediate container 31d75ee50b7d
Successfully built 4eb30ebd34a2

4) Inspect the newly built Spark 2.1.0 image

$ docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
farawayzheng/spark         2.1.0               4eb30ebd34a2        4 hours ago         2.649 GB
hello-world                latest              c54a2cc56cbb        6 months ago        1.848 kB
sequenceiq/hadoop-docker   2.6.0               140b265bd62a        24 months ago       1.624 GB
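
If you want to run the image on other machines, you can push it to Docker Hub (a sketch; it assumes you are logged in to the farawayzheng account):

$ docker login
$ docker push farawayzheng/spark:2.1.0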

Testing the Image

1) Start a Spark 2.1.0 container

$ docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox farawayzheng/spark:2.1.0 bash
/
Starting sshd:                                             [  OK  ]
Starting namenodes on [sandbox]
sandbox: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-sandbox.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-sandbox.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-sandbox.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-sandbox.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-sandbox.out
bash-4.1# 

The "bash-4.1#" prompt means the Spark 2.1.0 container started successfully!
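
As bootstrap.sh above shows, the image also supports a detached mode in the sequenceiq style: passing -d as the container command keeps sshd in the foreground so the container stays up as a daemon (a sketch based on the script):

$ docker run -d -h sandbox -p 8088:8088 -p 8042:8042 -p 4040:4040 farawayzheng/spark:2.1.0 -d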

2) Verify that the Spark cluster works in YARN client mode

bash-4.1# spark-shell --master yarn --deploy-mode client --driver-memory 1g --executor-memory 1g --executor-cores 1
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/01/11 05:00:45 WARN spark.SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/01/11 05:00:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/11 05:01:10 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/spark-2.1.0-bin-hadoop2.6/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/spark/jars/datanucleus-api-jdo-3.2.6.jar."
17/01/11 05:01:10 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/spark-2.1.0-bin-hadoop2.6/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/spark/jars/datanucleus-rdbms-3.2.9.jar."
17/01/11 05:01:10 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/spark-2.1.0-bin-hadoop2.6/jars/datanucleus-core-3.2.10.jar."
17/01/11 05:01:18 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/01/11 05:01:18 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
17/01/11 05:01:19 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://172.17.0.3:4040
Spark context available as 'sc' (master = yarn, app id = application_1484126893491_0005).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
Type in expressions to have them evaluated.
Type :help for more information.
scala> 

The "scala>" prompt shows that spark-shell is running. It does print several WARN-level log messages; these don't prevent simple tests, but the configuration deserves tuning and correction later.
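As the startup banner itself suggests, the log noise can be reduced from inside the shell, or persisted via the log4j template that ships with Spark (a sketch):

scala> sc.setLogLevel("ERROR")

bash-4.1# cp $SPARK_HOME/conf/log4j.properties.template $SPARK_HOME/conf/log4j.properties

Then raise log4j.rootCategory in the copied file.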

For an explanation of "WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException", see:
https://issues.apache.org/jira/browse/SPARK-14067

This is expected behavior. We need the default database for a number of operations. We try to look it up in the metastore, if it does not exist the metastore gives a warning and we will create a default database.

Type a Scala expression to check that Spark actually works:

scala> sc.parallelize(1 to 1000).count()
res0: Long = 1000                                                               

scala> 
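
To exercise YARN cluster mode as well, you can submit the bundled SparkPi example (a sketch; check the exact jar name under $SPARK_HOME/examples/jars):

bash-4.1# spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 1g --executor-memory 1g --executor-cores 1 $SPARK_HOME/examples/jars/spark-examples*.jar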

The verification passed. All done!