
Basic Installation of Ceph

I. Environment Overview:

This article uses the ceph-deploy tool to install Ceph. ceph-deploy can run on a dedicated admin node, or be installed on any of the cluster nodes.


The system environment is as follows:

1. The OS is redhat-6.5 x86_64 (basic_server install), 3 nodes in total, with NTP time synchronization.

2. SELinux is disabled; the epel and official Ceph repositories are used, Ceph version 0.86.

3. The 3 nodes have mutual SSH trust and /etc/hosts entries configured; each node has 3 disks used as OSDs.

4. The kernel is upgraded to 3.18 and the nodes are rebooted into the new kernel.


II. Installation Steps:

1. On every node, set iptables rules for Ceph, or stop iptables entirely (eth0 is the NIC on the Ceph network):

iptables -A INPUT -i eth0 -p tcp -s 0.0.0.0/0 --dport 6789 -j ACCEPT
iptables -A INPUT -i eth0 -m multiport -p tcp -s 0.0.0.0/0 --dports 6800:7300 -j ACCEPT
service iptables save
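Since these rules must be applied identically on every node, they can be scripted. A minimal sketch (assuming, as in this setup, that eth0 is the cluster-facing NIC); the function prints the commands so you can dry-run them before piping to sh as root:

```shell
#!/bin/sh
# Build the iptables commands for the Ceph ports: 6789 (monitor) and 6800-7300 (OSDs).
ceph_fw_rules() {
    nic="$1"
    echo "iptables -A INPUT -i $nic -p tcp -s 0.0.0.0/0 --dport 6789 -j ACCEPT"
    echo "iptables -A INPUT -i $nic -m multiport -p tcp -s 0.0.0.0/0 --dports 6800:7300 -j ACCEPT"
}

# Dry run: review the output, then pipe it to sh (as root) to apply.
ceph_fw_rules eth0
```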

2. Format and mount the OSD disks

yum -y install xfsprogs
mkfs.xfs /dev/sdb
mkdir /osd{0..2}
# use blkid to look up the UUID of sdb
echo 'UUID=89048e27-ff01-4365-a103-22e95fb2cc93 /osd0 xfs noatime,nobarrier,nodiratime 0 0' >> /etc/fstab

One disk maps to one OSD. Create the osd0, osd1 and osd2 directories on every node, and mount each disk on its corresponding directory.
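The per-disk steps above can be expressed as a loop. A sketch, assuming the three OSD disks are sdb, sdc and sdd (an assumption; adjust to your hardware). It prints the commands rather than executing them, so the fstab lines can be reviewed first; the UUID lookup is deferred to apply time because blkid only returns a UUID after mkfs has run:

```shell
#!/bin/sh
# Generate the format/mount commands for one OSD disk; review, then pipe to sh.
osd_disk_cmds() {
    dev="$1"; n="$2"
    echo "mkfs.xfs -f /dev/$dev"
    echo "mkdir -p /osd$n"
    # resolve the UUID at apply time, after the filesystem exists
    echo "echo \"UUID=\$(blkid -s UUID -o value /dev/$dev) /osd$n xfs noatime,nobarrier,nodiratime 0 0\" >> /etc/fstab"
    echo "mount /osd$n"
}

i=0
for dev in sdb sdc sdd; do   # assumed disk names
    osd_disk_cmds "$dev" "$i"
    i=$((i + 1))
done
```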

3. Install the Ceph deployment tool

# mkdir ceph    # best to work in a dedicated directory, since ceph-deploy generates files in the directory it is run from
# cd ceph
# yum -y install ceph-deploy

4. Create the monitors

ceph-deploy new node1 node2 node3    # this command merely generates the ceph.conf and ceph.mon.keyring files
Then edit ceph.conf with vim and append the following (change as needed):
debug_ms=0
mon_clock_drift_allowed=1
osd_pool_default_size=2    # replica count
osd_pool_default_min_size=1
osd_pool_default_pg_num=128    # PG count
osd_pool_default_pgp_num=128
osd_crush_chooseleaf_type=0
debug_auth=0/0
debug_optracker=0/0
debug_monc=0/0
debug_crush=0/0
debug_buffer=0/0
debug_tp=0/0
debug_journaler=0/0
debug_journal=0/0
debug_lockdep=0/0
debug_objclass=0/0
debug_perfcounter=0/0
debug_timer=0/0
debug_filestore=0/0
debug_context=0/0
debug_finisher=0/0
debug_heartbeatmap=0/0
debug_asok=0/0
debug_throttle=0/0
debug_osd=0/0
debug_rgw=0/0
debug_mon=0/0
osd_max_backfills=4
filestore_split_multiple=8
filestore_fd_cache_size=1024
filestore_queue_committing_max_bytes=1048576000
filestore_queue_max_ops=500000
filestore_queue_max_bytes=1048576000
filestore_queue_committing_max_ops=500000
osd_max_pg_log_entries=100000
osd_mon_heartbeat_interval=30
# performance tuning: filestore
osd_mount_options_xfs=rw,noatime,logbsize=256k,delaylog
# osd_journal_size=20480    # journal size; if not set, the default is 5 GB
osd_op_log_threshold=50
osd_min_pg_log_entries=30000
osd_recovery_op_priority=1
osd_mkfs_options_xfs=-f -i size=2048
osd_mkfs_type=xfs
osd_journal=/var/lib/ceph/osd/$cluster-$id/journal
journal_queue_max_ops=500000
journal_max_write_bytes=1048576000
journal_max_write_entries=100000
journal_queue_max_bytes=1048576000
objecter_inflight_op_bytes=1048576000
objecter_inflight_ops=819200
ms_dispatch_throttle_bytes=1048576000
osd_data=/var/lib/ceph/osd/$cluster-$id
filestore_merge_threshold=40
backfills=1
mon_osd_min_down_reporters=13
mon_osd_down_out_interval=600
rbd_cache_max_dirty_object=0
rbd_cache_target_dirty=235544320
rbd_cache_writethrough_until_flush=false
rbd_cache_size=335544320
rbd_cache_max_dirty=335544320
rbd_cache_max_dirty_age=60
rbd_cache=false
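Most of the long listing above consists of debug switches and tuning knobs; only a handful of lines actually determine cluster layout. If you want just those, a sketch of appending them to the freshly generated ceph.conf (the path is an assumption: ceph-deploy new writes ceph.conf into the current directory):

```shell
#!/bin/sh
# Append the core cluster-level settings from the listing above to ceph.conf.
conf="${1:-ceph.conf}"
cat >> "$conf" <<'EOF'
mon_clock_drift_allowed = 1
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
osd_crush_chooseleaf_type = 0
EOF
```

Ceph accepts both `key=value` and `key = value` forms, so the spacing here is purely cosmetic.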

5. Install Ceph

# install on all nodes:

yum -y install ceph

On the admin node, run:

ceph-deploy mon create node1 node2 node3
ceph-deploy gatherkeys node1    # fetch the keys from a monitor node, used to administer the cluster


6. Create and activate the OSDs

ceph-deploy osd prepare node1:/osd0 node1:/osd1 node1:/osd2 node2:/osd0 node2:/osd1 node2:/osd2 node3:/osd0 node3:/osd1 node3:/osd2
ceph-deploy osd activate node1:/osd0 node1:/osd1 node1:/osd2 node2:/osd0 node2:/osd1 node2:/osd2 node3:/osd0 node3:/osd1 node3:/osd2

ceph-deploy admin node1 node2 node3    # copy the config file and admin key from the admin node to each node
chmod +r /etc/ceph/ceph.client.admin.keyring    (add read permission to this file on all nodes)
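The long node:/osdN argument lists in the prepare and activate commands are error-prone to type by hand; they can be generated. A sketch that prints the full commands for this 3-node, 3-OSD-per-node layout:

```shell
#!/bin/sh
# Build the "node:/dir" argument list used by ceph-deploy osd prepare/activate.
osd_args() {
    args=""
    for node in node1 node2 node3; do
        for n in 0 1 2; do
            args="$args $node:/osd$n"
        done
    done
    echo "$args"
}

echo "ceph-deploy osd prepare$(osd_args)"
echo "ceph-deploy osd activate$(osd_args)"
```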

You can also create the related pools:

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph osd pool create backups 128
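The four pool creations follow one pattern and can be looped. A sketch that prints the commands (pipe to sh to run them against a live cluster); the pool names assume the usual OpenStack set of volumes, images, vms and backups:

```shell
#!/bin/sh
# Print the pool-creation commands, one 128-PG pool per OpenStack service.
pool_cmds() {
    for pool in volumes images vms backups; do
        echo "ceph osd pool create $pool 128"
    done
}
pool_cmds
```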


If the Ceph installation fails and you need to wipe the environment, run the following:

ceph-deploy purge node1 node2 node3        (on all nodes)
ceph-deploy purgedata node1 node2 node3    (on all nodes)
ceph-deploy forgetkeys


Reposted from: https://blog.51cto.com/linuxnote/1788333