
Installing the Harbor Private Registry on Kubernetes 1.11 (Part 1)

1. Introduction

This series documents installing the Harbor private image registry on Kubernetes 1.11, including the Nginx and Traefik proxy configuration and HTTPS setup. Harbor uses shared storage (GlusterFS) as the backend for image data. This part covers setting up a three-node GlusterFS cluster on CentOS 7.

2. Installing the Shared Storage (GlusterFS)

  • Node preparation
Node IP        Hostname      Role
192.168.1.11   gfs-manager   manager
192.168.1.12   gfs-node1     node1
192.168.1.13   gfs-node2     node2
  • Configure /etc/hosts on all nodes (a quick resolution check follows these entries)
192.168.1.11 gfs-manager
192.168.1.12 gfs-node1
192.168.1.13 gfs-node2
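To confirm that name resolution works before continuing, a quick check (hostnames as configured above) can be run on any node:
# Each hostname should resolve to the IP listed in /etc/hosts
getent hosts gfs-manager gfs-node1 gfs-node2
ping -c 1 gfs-node1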
  • Install GlusterFS on all nodes via yum (a firewall note follows these commands)
yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
# Start the GlusterFS service and enable it at boot
systemctl start glusterd.service
systemctl enable glusterd.service
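If firewalld is active on these nodes, the GlusterFS ports must be reachable between them before peer probing; a minimal example (the brick port range 49152-49251 is an assumption based on GlusterFS defaults, adjust to your environment) is:
# Open the management ports and a brick port range, then reload the firewall
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload
# Confirm the daemon is running
systemctl status glusterd.service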
  • Run on the manager node to add the nodes to the cluster
[root@gfs-manager ~]# gluster peer probe gfs-manager
peer probe: success. Probe on localhost not needed
[root@gfs-manager ~]# gluster peer probe gfs-node1
peer probe: success.
[root@gfs-manager ~]# gluster peer probe gfs-node2
peer probe: success.
  • Check the cluster status
[root@gfs-manager ~]# gluster peer status
Number of Peers: 2

Hostname: gfs-node1
Uuid: 25f7804c-2b48-4f88-8658-3b9302d06a19
State: Peer in Cluster (Connected)

Hostname: gfs-node2
Uuid: 0c6196d3-318b-46f9-ac40-29a8212d4900
State: Peer in Cluster (Connected)
  • Check the volume status
[root@gfs-manager ~]# gluster volume info
No volumes present

3. Creating the Data Storage Directory

Assume the directory to be created is /data/gluster/harbordata; it will be mounted as Harbor's storage directory.

  • Create the directory on all nodes
mkdir -p /data/gluster/harbordata
  • Create the GlusterFS volume on the manager node
[root@gfs-manager ~]# gluster volume create harbordata replica 3 gfs-manager:/data/gluster/harbordata gfs-node1:/data/gluster/harbordata gfs-node2:/data/gluster/harbordata force
volume create: harbordata: success: please start the volume to access data
  • Start the harbordata volume on the manager node
[root@gfs-manager ~]# gluster volume start harbordata
volume start: harbordata: success
  • Check the volume status (a client mount test follows the output below)
[root@gfs-manager ~]# gluster volume info
 
Volume Name: harbordata
Type: Replicate
Volume ID: c4fb0a43-c9e5-4a4e-ba98-cf14a7591ecd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gfs-manager:/data/gluster/harbordata
Brick2: gfs-node1:/data/gluster/harbordata
Brick3: gfs-node2:/data/gluster/harbordata
Options Reconfigured:
performance.write-behind: on
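At this point the volume can be mounted from any machine that has the GlusterFS FUSE client installed; a simple verification, assuming a temporary mount point /mnt/harbordata, looks like this:
# Mount the replicated volume via FUSE and write a test file
mkdir -p /mnt/harbordata
mount -t glusterfs gfs-manager:/harbordata /mnt/harbordata
echo test > /mnt/harbordata/hello.txt
# The file should now appear under /data/gluster/harbordata on all three bricks
umount /mnt/harbordata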

4. GlusterFS Parameter Tuning (for reference)

  • Configure the relevant parameters (a verification example follows these commands):
# Enable quota on the specified volume (harbordata is the volume name)
gluster volume quota harbordata enable

# Limit the root directory of harbordata to a maximum of 100 GB
gluster volume quota harbordata limit-usage / 100GB

# Set the cache size
gluster volume set harbordata performance.cache-size 2GB

# Enable asynchronous flush operations (flush-behind)
gluster volume set harbordata performance.flush-behind on

# Set the number of I/O threads
gluster volume set harbordata performance.io-thread-count 16

# Enable write-behind (writes go to cache first, then to disk)
gluster volume set harbordata performance.write-behind on
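After applying the settings, the quota and individual options can be verified (output will vary with actual usage):
# Show the configured quota limit and current usage for the volume
gluster volume quota harbordata list
# Confirm that a specific option took effect
gluster volume get harbordata performance.cache-size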
  • Check the current volume status
[root@gfs-manager ~]# gluster volume info
 
Volume Name: harbordata
Type: Replicate
Volume ID: c4fb0a43-c9e5-4a4e-ba98-cf14a7591ecd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gfs-manager:/data/gluster/harbordata
Brick2: gfs-node1:/data/gluster/harbordata
Brick3: gfs-node2:/data/gluster/harbordata
Options Reconfigured:
performance.write-behind: on
performance.io-thread-count: 16
performance.flush-behind: on
performance.cache-size: 2GB
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
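As a preview of how this volume will be consumed from Kubernetes, the following is a minimal sketch of an Endpoints object and a PersistentVolume using the in-tree glusterfs plugin. The names glusterfs-cluster and harbordata-pv and the 100Gi capacity (matching the quota set above) are illustrative assumptions, not the final Harbor configuration.
# Sketch only: register the Gluster nodes as Endpoints and expose the volume as a PV
kubectl apply -f - <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.1.11
  - ip: 192.168.1.12
  - ip: 192.168.1.13
  ports:
  - port: 1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbordata-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: harbordata
    readOnly: false
EOF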

The installation of Harbor itself will be covered in the following articles.