
Detailed Ceph Installation and Deployment Tutorial

I. Preparation: install the ceph-deploy tool

   All servers are logged in as the root user.

1. Installation environment

   OS: CentOS 6.5

   Machines: one admin-node (running ceph-deploy), one monitor node, and two OSD nodes

2. Disable the firewall and SELinux on all nodes, then reboot the machines.

 service iptables stop

 sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

 chkconfig iptables off
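After the reboot it is worth confirming that both are actually off. A quick check using standard CentOS 6 commands (getenforce prints the current SELinux mode, which should now be Disabled):

 service iptables status

 getenforce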

3. Create the Ceph yum repository on the admin-node

vi /etc/yum.repos.d/ceph.repo 

[ceph-noarch]

name=Ceph noarch packages

baseurl=http://ceph.com/rpm/el6/noarch/

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
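Once the file is saved, a quick sanity check that yum can see the new repository (the repo id ceph-noarch comes from the file above):

 yum repolist | grep -i ceph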

4. Install the EPEL release package from the Sohu mirror

   rpm -ivh http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
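If the rpm installs cleanly, the release package should now be queryable:

 rpm -q epel-release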

5. Refresh the yum metadata on the admin-node and update the system

    yum clean all

    yum update -y

6. Create a Ceph cluster directory on the admin-node

   mkdir /ceph

   cd  /ceph

7. Install the Ceph deployment tool on the admin-node

    yum install ceph-deploy -y
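To confirm the tool is on the PATH and working, it can be asked for its version:

 ceph-deploy --version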

8. Configure the hosts file on the admin-node (see the note after the list for pushing it to the other nodes)

  vi /etc/hosts

10.240.240.210 admin-node

10.240.240.211 node1

10.240.240.212 node2

10.240.240.213 node3
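Every node needs the same name resolution, so these entries must exist on node1, node2, and node3 as well. One way to push the file out from the admin-node, as a sketch (password prompts are fine at this point, since passwordless SSH is only set up in Part II):

 for n in node1 node2 node3; do scp /etc/hosts root@$n:/etc/hosts; done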

II. Configure passwordless SSH from the ceph-deploy admin node to every Ceph node

1. Install an SSH server on every Ceph node

   [root@node1 ~]# yum install openssh-server -y

2. Set up passwordless SSH access from the admin-node to each Ceph node.

[root@admin-node ceph]# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa): 

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

3. Copy the admin-node's public key to each Ceph node

 ssh-copy-id root@admin-node

 ssh-copy-id root@node1

 ssh-copy-id root@node2

 ssh-copy-id root@node3

4. Verify that each Ceph node can be logged into without a password

 ssh root@node1

 ssh root@node2

 ssh root@node3
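The same test can be run in one pass; each iteration should print the remote hostname without asking for a password:

 for n in node1 node2 node3; do ssh root@$n hostname; done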

5. Edit the ~/.ssh/config file on the admin-node so that it logs in to each Ceph node as the intended user

Host admin-node

  Hostname admin-node

  User root   

Host node1

  Hostname node1

  User root

Host node2

  Hostname node2

  User root

Host node3

  Hostname node3

  User root
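ssh rejects a config file that is writable by other users, so it is safest to tighten its permissions after editing:

 chmod 600 ~/.ssh/config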

III. Deploy the Ceph cluster with the ceph-deploy tool

1. Create a new Ceph cluster on the admin-node

[root@admin-node ceph]#  ceph-deploy new node1 node2 node3      (after this command node1, node2, and node3 all become monitor nodes; multiple mon nodes back each other up)

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy new node1 node2 node3

[ceph_deploy.new][DEBUG ] Creating new cluster named ceph

[ceph_deploy.new][DEBUG ] Resolving host node1

[ceph_deploy.new][DEBUG ] Monitor node1 at 10.240.240.211

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node1][DEBUG ] connected to host: admin-node 

[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1

[ceph_deploy.new][DEBUG ] Resolving host node2

[ceph_deploy.new][DEBUG ] Monitor node2 at 10.240.240.212

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node2][DEBUG ] connected to host: admin-node 

[node2][INFO  ] Running command: ssh -CT -o BatchMode=yes node2

[ceph_deploy.new][DEBUG ] Resolving host node3

[ceph_deploy.new][DEBUG ] Monitor node3 at 10.240.240.213

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node3][DEBUG ] connected to host: admin-node 

[node3][INFO  ] Running command: ssh -CT -o BatchMode=yes node3

[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1', 'node2', 'node3']

[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.240.240.211', '10.240.240.212', '10.240.240.213']

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

Check the generated files:

[root@admin-node ceph]# ls

ceph.conf  ceph.log  ceph.mon.keyring

Inspect the Ceph configuration file; all three nodes are now monitor nodes:

[root@admin-node ceph]# cat ceph.conf

[global]

auth_service_required = cephx

filestore_xattr_use_omap = true

auth_client_required = cephx

auth_cluster_required = cephx

mon_host = 10.240.240.211,10.240.240.212,10.240.240.213

mon_initial_members = node1, node2, node3

fsid = 4dc38af6-f628-4c1f-b708-9178cf4e032b

[root@admin-node ceph]#

2. Before deploying, make sure none of the nodes has leftover Ceph packages or data (wipe everything from any previous Ceph install first; a fresh install can skip this step, but run the commands below when redeploying):

[root@admin-node ceph]# ceph-deploy purgedata admin-node node1 node2 node3

[root@admin-node ceph]# ceph-deploy forgetkeys

[root@admin-node ceph]# ceph-deploy purge admin-node node1 node2 node3

  A fresh install has no data to remove.

3. Edit the Ceph configuration file on the admin-node and add the following setting to ceph.conf:

   osd pool default size = 2
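The setting goes in the [global] section of the ceph.conf generated in step 1, alongside the lines already there. With only two OSD nodes it lets the cluster reach a clean state using two replicas instead of the default of three. A minimal sketch of the relevant part of the file:

[global]

osd pool default size = 2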

4. From the admin-node, install Ceph on each node with the ceph-deploy tool

[root@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy install admin-node node1 node2 node3

[ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts admin-node node1 node2 node3

[ceph_deploy.install][DEBUG ] Detecting platform for host admin-node ...

[admin-node][DEBUG ] connected to host: admin-node 

[admin-node][DEBUG ] detect platform information from remote host

[admin-node][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.5 Final

[admin-node][INFO  ] installing ceph on admin-node

[admin-node][INFO  ] Running command: yum clean all

[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[admin-node][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates

[admin-node][DEBUG ] Cleaning up Everything

[admin-node][DEBUG ] Cleaning up list of fastest mirrors

[admin-node][INFO  ] Running command: yum -y install wget

[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[admin-node][DEBUG ] Determining fastest mirrors

[admin-node][DEBUG ]  * base: mirrors.btte.net

[admin-node][DEBUG ]  * epel: mirrors.neusoft.edu.cn

[admin-node][DEBUG ]  * extras: mirrors.btte.net

[admin-node][DEBUG ]  * updates: mirrors.btte.net

[admin-node][DEBUG ] Setting up Install Process

[admin-node][DEBUG ] Package wget-1.12-1.11.el6_5.x86_64 already installed and latest version

[admin-node][DEBUG ] Nothing to do

[admin-node][INFO  ] adding EPEL repository

[admin-node][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[admin-node][WARNIN] --2014-06-07 22:05:34--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[admin-node][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.24, 209.132.181.25, 209.132.181.26, ...

[admin-node][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.24|:80... connected.

[admin-node][WARNIN] HTTP request sent, awaiting response... 200 OK

[admin-node][WARNIN] Length: 14540 (14K) [application/x-rpm]

[admin-node][WARNIN] Saving to: `epel-release-6-8.noarch.rpm.1'

[admin-node][WARNIN] 

[admin-node][WARNIN]      0K .......... ....                                       100% 73.8K=0.2s

[admin-node][WARNIN] 

[admin-node][WARNIN] 2014-06-07 22:05:35 (73.8 KB/s) - `epel-release-6-8.noarch.rpm.1' saved [14540/14540]

[admin-node][WARNIN] 

[admin-node][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[admin-node][DEBUG ] Preparing...                ##################################################

[admin-node][DEBUG ] epel-release                ##################################################

[admin-node][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[admin-node][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[admin-node][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[admin-node][DEBUG ] Preparing...                ##################################################

[admin-node][DEBUG ] ceph-release                ##################################################

[admin-node][INFO  ] Running command: yum -y -q install ceph

[admin-node][DEBUG ] Package ceph-0.80.1-2.el6.x86_64 already installed and latest version

[admin-node][INFO  ] Running command: ceph --version

[admin-node][DEBUG ] ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)

[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final

[node1][INFO  ] installing ceph on node1

[node1][INFO  ] Running command: yum clean all

[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[node1][DEBUG ] Cleaning repos: base extras updates

[node1][DEBUG ] Cleaning up Everything

[node1][DEBUG ] Cleaning up list of fastest mirrors

[node1][INFO  ] Running command: yum -y install wget

[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[node1][DEBUG ] Determining fastest mirrors

[node1][DEBUG ]  * base: mirrors.btte.net

[node1][DEBUG ]  * extras: mirrors.btte.net

[node1][DEBUG ]  * updates: mirrors.btte.net

[node1][DEBUG ] Setting up Install Process

[node1][DEBUG ] Resolving Dependencies

[node1][DEBUG ] --> Running transaction check

[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.8.el6 will be updated

[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.11.el6_5 will be an update

[node1][DEBUG ] --> Finished Dependency Resolution

[node1][DEBUG ] 

[node1][DEBUG ] Dependencies Resolved

[node1][DEBUG ] 

[node1][DEBUG ] ================================================================================

[node1][DEBUG ]  Package       Arch            Version                   Repository        Size

[node1][DEBUG ] ================================================================================

[node1][DEBUG ] Updating:

[node1][DEBUG ]  wget          x86_64          1.12-1.11.el6_5           updates          483 k

[node1][DEBUG ] 

[node1][DEBUG ] Transaction Summary

[node1][DEBUG ] ================================================================================

[node1][DEBUG ] Upgrade       1 Package(s)

[node1][DEBUG ] 

[node1][DEBUG ] Total download size: 483 k

[node1][DEBUG ] Downloading Packages:

[node1][DEBUG ] Running rpm_check_debug

[node1][DEBUG ] Running Transaction Test

[node1][DEBUG ] Transaction Test Succeeded

[node1][DEBUG ] Running Transaction

[node1][DEBUG ]   Updating   : wget-1.12-1.11.el6_5.x86_64                                  1/2 

[node1][DEBUG ]   Cleanup    : wget-1.12-1.8.el6.x86_64                                     2/2 

[node1][DEBUG ]   Verifying  : wget-1.12-1.11.el6_5.x86_64                                  1/2 

[node1][DEBUG ]   Verifying  : wget-1.12-1.8.el6.x86_64                                     2/2 

[node1][DEBUG ] 

[node1][DEBUG ] Updated:

[node1][DEBUG ]   wget.x86_64 0:1.12-1.11.el6_5                                                 

[node1][DEBUG ] 

[node1][DEBUG ] Complete!

[node1][INFO  ] adding EPEL repository

[node1][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[node1][WARNIN] --2014-06-07 22:06:57--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[node1][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.23, 209.132.181.24, 209.132.181.25, ...

[node1][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.23|:80... connected.

[node1][WARNIN] HTTP request sent, awaiting response... 200 OK

[node1][WARNIN] Length: 14540 (14K) [application/x-rpm]

[node1][WARNIN] Saving to: `epel-release-6-8.noarch.rpm'

[node1][WARNIN] 

[node1][WARNIN]      0K .......... ....                                       100% 69.6K=0.2s

[node1][WARNIN] 

[node1][WARNIN] 2014-06-07 22:06:58 (69.6 KB/s) - `epel-release-6-8.noarch.rpm' saved [14540/14540]

[node1][WARNIN] 

[node1][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[node1][DEBUG ] Preparing...                ##################################################

[node1][DEBUG ] epel-release                ##################################################

[node1][WARNIN] warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY

[node1][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[node1][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[node1][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[node1][DEBUG ] Preparing...                ##################################################

[node1][DEBUG ] ceph-release                ##################################################

[node1][INFO  ] Running command: yum -y -q install ceph

[node1][WARNIN] warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY

[node1][WARNIN] Importing GPG key 0x0608B895:

[node1][WARNIN]  Userid : EPEL (6) <epel@fedoraproject.org>

[node1][WARNIN]  Package: epel-release-6-8.noarch (installed)

[node1][WARNIN]  From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[node1][WARNIN] Warning: RPMDB altered outside of yum.

[node1][INFO  ] Running command: ceph --version

[node1][WARNIN] Traceback (most recent call last):

[node1][WARNIN]   File "/usr/bin/ceph", line 53, in <module>

[node1][WARNIN]     import argparse

[node1][WARNIN] ImportError: No module named argparse

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version

To fix the error above, run the following command on the node that reported it:

[root@node1 ~]# yum install *argparse* -y
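If more than one node hits the same traceback, the fix can be applied to all of them from the admin-node in one pass (the wildcard above matches the python-argparse package shipped in EPEL):

 for n in node1 node2 node3; do ssh root@$n yum install -y python-argparse; done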

5. Add the initial monitor nodes and gather the keys (ceph-deploy v1.1.3 and later).

[root@admin-node ceph]# ceph-deploy mon create-initial

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1

[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node1][DEBUG ] determining if provided host has same hostname in remote

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] deploying mon to node1

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] remote hostname: node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create the mon path if it does not exist

[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done

[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done

[node1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] create the monitor keyring file

[node1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] ceph-mon: mon.noname-a 10.240.240.211:6789/0 is local, renaming to mon.node1

[node1][DEBUG ] ceph-mon: set fsid to 369daf5a-e844-4e09-a9b1-46bb985aec79

[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1

[node1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] create a done file to avoid re-doing the mon deployment

[node1][DEBUG ] create the init path if it does not exist

[node1][DEBUG ] locating the `service` executable...

[node1][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[node1][WARNIN] /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy.mon][ERROR ] Failed to execute command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors

To fix the error above, run the following command manually on node1, node2, and node3:

[root@node1 ~]# yum install redhat-lsb -y
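Or, as a one-pass sketch from the admin-node (assuming the passwordless SSH set up in Part II is in place):

 for n in node1 node2 node3; do ssh root@$n yum install -y redhat-lsb; done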

Running the command again now starts the monitor nodes successfully:

[root@admin-node ceph]# ceph-deploy mon create-initial

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3

[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node1][DEBUG ] determining if provided host has same hostname in remote

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] deploying mon to node1

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] remote hostname: node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create the mon path if it does not exist

[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done

[node1][DEBUG ] create a done file to avoid re-doing the mon deployment

[node1][DEBUG ] create the init path if it does not exist

[node1][DEBUG ] locating the `service` executable...

[node1][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[node1][DEBUG ] === mon.node1 === 

[node1][DEBUG ] Starting Ceph mon.node1 on node1...already running

[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status

[node1][DEBUG ] ********************************************************************************

[node1][DEBUG ] status for monitor: mon.node1

[node1][DEBUG ] {

[node1][DEBUG ]   "election_epoch": 6, 

[node1][DEBUG ]   "extra_probe_peers": [

[node1][DEBUG ]     "10.240.240.212:6789/0", 

[node1][DEBUG ]     "10.240.240.213:6789/0"

[node1][DEBUG ]   ], 

[node1][DEBUG ]   "monmap": {

[node1][DEBUG ]     "created": "0.000000", 

[node1][DEBUG ]     "epoch": 2, 

[node1][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b", 

[node1][DEBUG ]     "modified": "2014-06-07 22:38:29.435203", 

[node1][DEBUG ]     "mons": [

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.211:6789/0", 

[node1][DEBUG ]         "name": "node1", 

[node1][DEBUG ]         "rank": 0

[node1][DEBUG ]       }, 

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.212:6789/0", 

[node1][DEBUG ]         "name": "node2", 

[node1][DEBUG ]         "rank": 1

[node1][DEBUG ]       }, 

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.213:6789/0", 

[node1][DEBUG ]         "name": "node3", 

[node1][DEBUG ]         "rank": 2

[node1][DEBUG ]       }

[node1][DEBUG ]     ]

[node1][DEBUG ]   }, 

[node1][DEBUG ]   "name": "node1", 

[node1][DEBUG ]   "outside_quorum": [], 

[node1][DEBUG ]   "quorum": [

[node1][DEBUG ]     0, 

[node1][DEBUG ]     1, 

[node1][DEBUG ]