Sample CEPH.CONF for a Ceph Storage Cluster
Published: 2022-03-05
Author: Varden. Source: http://www.cnblogs.com/varden/. If this article's content duplicates another, please contact the author. Copyright is shared by the author and cnblogs; reposting is welcome, but this notice must be kept without the author's consent being waived, and a link to the original must appear prominently on the page, otherwise the author reserves the right to pursue legal action.

[global]
fsid = {cluster-id}
mon_initial_members = {hostname}[, {hostname}]
mon_host = {ip-address}[, {ip-address}]

# All clusters have a front-side public network.
# If you have two network interfaces, you can configure a private / cluster
# network for RADOS object replication, heartbeats, backfill,
# recovery, etc.
public_network = {network}[, {network}]
#cluster_network = {network}[, {network}]

# Clusters require authentication by default.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# Choose reasonable numbers for journals, number of replicas
# and placement groups.
osd_journal_size = {n}
osd_pool_default_size = {n}      # Write an object n times.
osd_pool_default_min_size = {n}  # Allow writing n copies in a degraded state.
osd_pool_default_pg_num = {n}
osd_pool_default_pgp_num = {n}

# Choose a reasonable CRUSH leaf type:
# 0 for a 1-node cluster.
# 1 for a multi-node cluster in a single rack.
# 2 for a multi-node, multi-chassis cluster with multiple hosts in a chassis.
# 3 for a multi-node cluster with hosts across racks, etc.
osd_crush_chooseleaf_type = {n}
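To make the template concrete, here is one possible filled-in version. All values below are hypothetical: the fsid is just a freshly generated UUID (e.g. from `uuidgen`), and the hostnames, addresses, and sizes are placeholders for a small three-node cluster, not recommendations.

```ini
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993   ; hypothetical UUID
mon_initial_members = node1, node2, node3      ; hypothetical hostnames
mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3        ; hypothetical addresses

public_network = 10.0.0.0/24
cluster_network = 10.1.0.0/24                  ; optional second NIC network

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd_journal_size = 1024        ; MB
osd_pool_default_size = 3      ; write each object 3 times
osd_pool_default_min_size = 2  ; allow 2 copies while degraded
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128

osd_crush_chooseleaf_type = 1  ; multi-node cluster in a single rack
```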
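For `osd_pool_default_pg_num` / `osd_pool_default_pgp_num`, a commonly cited rule of thumb is roughly 100 placement groups per OSD divided by the replica count, rounded up to a power of two. The helper below is a sketch of that arithmetic; the function name and the 100-PGs-per-OSD target are assumptions, not something this article specifies.

```python
def suggest_pg_num(num_osds: int, pool_size: int, target_pgs_per_osd: int = 100) -> int:
    """Suggest a power-of-two pg_num for a single pool.

    Rule of thumb: (OSD count * target PGs per OSD) / replica count,
    rounded up to the next power of two.
    """
    raw = (num_osds * target_pgs_per_osd) / pool_size
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num


# 9 OSDs, 3 replicas: 9 * 100 / 3 = 300, rounded up to 512.
print(suggest_pg_num(9, 3))
```

Set `pgp_num` to the same value as `pg_num` so that placement actually uses all the PGs you create.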