Connecting k8s to Ceph storage
阿新 • Published: 2019-03-17
Prerequisite: a Ceph cluster is already deployed.
Because the lab environment is limited, in this experiment the Ceph cluster is deployed on the k8s master node.
I. Create a Ceph storage pool
On a mon node of the Ceph cluster, run:
ceph osd pool create k8s-volumes 64 64
Check the replica count:
[root@master ceph]# ceph osd pool get k8s-volumes size
size: 3
Set the PG count according to this formula:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
Round the result to the nearby power of two. For example, with 2 OSDs in total, a replication count of 3, and 1 pool, the formula gives 66.66; the closest power of two is 64, so each pool is assigned 64 PGs.
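The calculation above can be sketched as a small shell helper (a sanity check only, using the example's values of 2 OSDs, replication 3, 1 pool):

```shell
# Compute Total PGs = ((OSDs * 100) / replication) / pools, then round
# to the nearest power of two (values from the example above)
osds=2; replicas=3; pools=1
raw=$(( osds * 100 / replicas / pools ))   # integer division gives 66
# find the powers of two bracketing raw, pick the closer one
lower=1
while [ $(( lower * 2 )) -le "$raw" ]; do lower=$(( lower * 2 )); done
upper=$(( lower * 2 ))
if [ $(( raw - lower )) -le $(( upper - raw )) ]; then pg=$lower; else pg=$upper; fi
echo "$pg"   # 64
```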
II. Install ceph-common on all k8s nodes
1. Configure China-mirror yum repos and the Ceph repo
cp -r /etc/yum.repos.d/ /etc/yum-repos-d-bak
yum install -y wget
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
cat <<EOF > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
2. Install ceph-common
yum -y install ceph-common
3. Copy the config file /etc/ceph/ceph.conf from the Ceph mon node into /etc/ceph on every k8s node
4. Copy /etc/ceph/ceph.client.admin.keyring from the Ceph mon node into /etc/ceph on every k8s node
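Steps 3 and 4 can be scripted with scp; a sketch (the node names are assumptions, and `echo` is prefixed as a dry run so you can review the commands before removing it to actually copy):

```shell
# Dry run: print the scp commands that would push the mon's config and
# keyring to every k8s node (replace the node names with your own,
# and drop "echo" to really copy)
for node in k8s-node1 k8s-node2; do
  echo scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring ${node}:/etc/ceph/
done
```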
5. Get the key on the k8s master node
[root@master ~]# grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
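The extraction pipeline can be tried without a Ceph cluster by writing a sample keyring to /tmp (the key below is simply the decoded form of the base64 output shown above):

```shell
# Sample keyring for illustration; the key is the decoded value of the
# base64 string produced by the command above
cat > /tmp/ceph.client.admin.keyring <<'EOF'
[client.admin]
	key = AQCBdolcqHyZBhAAn9ZYHl+UwbkNyOWLlxd8EA==
EOF
# grep the key line, take the last field without a trailing newline,
# and base64-encode it (this is what the secret's data.key expects)
encoded=$(grep key /tmp/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64)
echo "$encoded"
```

Note the `printf "%s"` in the awk step: it suppresses the trailing newline, which would otherwise change the base64 output.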
6. Create the Ceph secret on the k8s master node
cat <<EOF > /root/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
EOF
kubectl apply -f ceph-secret.yaml
Note: because this k8s cluster was deployed with kubeadm, kube-controller-manager runs as a container, which does not include ceph-common, so an external storage volume provisioner plugin is used instead.
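The external plugin referred to here is commonly the rbd-provisioner from the external-storage project. A minimal deployment sketch, in the same heredoc style as the rest of this post (the image tag and namespace are assumptions, and the ServiceAccount/RBAC objects it needs are omitted for brevity; check the project's docs for the full manifests):

```shell
# Sketch of an external rbd-provisioner Deployment (RBAC omitted;
# image/tag and namespace are assumptions to verify for your cluster)
cat <<EOF > /tmp/rbd-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: quay.io/external_storage/rbd-provisioner:latest
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
EOF
```

If this external provisioner is used, a StorageClass would reference it via `provisioner: ceph.com/rbd` rather than the in-tree `kubernetes.io/rbd`.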
7. Create the storage class
cat <<EOF > /root/ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.137:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s-volumes
  userId: admin
  userSecretName: ceph-secret
EOF
kubectl apply -f ceph-storageclass.yaml
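To exercise the storage class, a PersistentVolumeClaim can reference it by name; a sketch in the same heredoc style (the claim name and size are illustrative):

```shell
# Hypothetical PVC requesting a 1Gi RBD volume from ceph-storage-class
cat <<EOF > /tmp/ceph-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc
spec:
  storageClassName: ceph-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```

Apply it with `kubectl apply -f /tmp/ceph-pvc.yaml`; once bound, the claim's status shows a dynamically provisioned RBD-backed volume.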
8.