StatefulSet: Stateful Application Replica Sets
阿新 • Published: 2018-11-09
PetSet -> StatefulSet (renamed in Kubernetes 1.5)
A StatefulSet provides:
1. Stable, unique network identifiers
2. Stable, persistent storage
3. Ordered, graceful deployment and scaling
4. Ordered, graceful deletion and termination
5. Ordered rolling updates
Three components:
headless service
StatefulSet
volumeClaimTemplate
Lab prerequisites:
master: 192.168.68.10
node1: 192.168.68.20
node2: 192.168.68.30
node3: 192.168.68.40
node3 prep: make node3 resolvable on every host (add it to /etc/hosts on the master and the other nodes)
Install the NFS packages on every node.
The NFS directory tree on node3:
[root@node3 /]# tree data
data
└── volumes
├── index.html
├── v1
├── v2
│ └── index.html
├── v3
├── v4
└── v5
Start NFS:
systemctl start nfs
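The original notes do not show the NFS export configuration, which has to exist before the PVs below can mount. A minimal /etc/exports sketch for the directories above; the subnet is an assumption based on the node IPs, and the export options are typical defaults, not taken from the original:

```
# /etc/exports on node3 (assumed subnet and options)
/data/volumes/v1 192.168.68.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.68.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.68.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.68.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.68.0/24(rw,no_root_squash)
```

After editing the file, `exportfs -arv` re-exports the directories without restarting the NFS service.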
Check the existing PVs:
[root@master configmap]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 1d
pv002 7Gi RWO,RWX Retain Bound default/mypvc 1d
pv003 8Gi RWO,RWX Retain Available 1d
pv004 10Gi RWO,RWX Retain Available 1d
pv005 12Gi RWO,RWX Retain Available 1d
[root@master configmap]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv002 7Gi RWO,RWX 1d
Delete the bound PVC:
kubectl get pvc
kubectl delete pvc/mypvc
kubectl get pv
[root@master configmap]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 1d
pv002 7Gi RWO,RWX Retain Released default/mypvc 1d
pv003 8Gi RWO,RWX Retain Available 1d
pv004 10Gi RWO,RWX Retain Available 1d
pv005 12Gi RWO,RWX Retain Available 1d
The `Released default/mypvc` status means the claim has been deleted, but with the `Retain` reclaim policy the PV still holds the old claim reference and cannot be bound again until it is cleaned up.
Delete all the PVs:
kubectl delete pv --all
[root@master configmap]# kubectl delete pv --all
persistentvolume "pv001" deleted
persistentvolume "pv002" deleted
persistentvolume "pv003" deleted
persistentvolume "pv004" deleted
persistentvolume "pv005" deleted
[root@master configmap]# kubectl get pv
No resources found.
Recreate the PVs.
Create five PVs, one per NFS export:
[root@master volumes]# cat pvs-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: node3
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
[root@master volumes]# kubectl apply -f pvs-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 21s
pv002 5Gi RWO Retain Available 21s
pv003 5Gi RWO,RWX Retain Available 21s
pv004 5Gi RWO,RWX Retain Available 21s
pv005 5Gi RWO,RWX Retain Available 21s
The StatefulSet manifest:
[root@master volumes]# cat stateful-demo-1.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
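volumeClaimTemplates creates one PVC per pod, named `<template-name>-<statefulset-name>-<ordinal>`. A quick sketch (plain shell, no cluster needed) of the names this manifest produces:

```shell
# PVC names follow <volumeClaimTemplate-name>-<statefulset-name>-<ordinal>
template="myappdata"
sts="myapp"
pvcs=""
for ordinal in 0 1 2; do
  pvcs="$pvcs ${template}-${sts}-${ordinal}"
done
echo "$pvcs"
```

These are exactly the PVC names that show up in `kubectl get pvc` below.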
[root@master volumes]# kubectl apply -f stateful-demo-1.yaml
service/myapp created
statefulset.apps/myapp created
Check the status:
[root@master volumes]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 6s
myapp-1 1/1 Running 0 5s
myapp-2 1/1 Running 0 3s
[root@master volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv002 5Gi RWO 24s
myappdata-myapp-1 Bound pv004 5Gi RWO,RWX 23s
myappdata-myapp-2 Bound pv001 5Gi RWO,RWX 21s
[root@master volumes]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
myapp ClusterIP None <none> 80/TCP 1m
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Bound default/myappdata-myapp-2 15m
pv002 5Gi RWO Retain Bound default/myappdata-myapp-0 15m
pv003 5Gi RWO,RWX Retain Available 15m
pv004 5Gi RWO,RWX Retain Bound default/myappdata-myapp-1 15m
pv005 5Gi RWO,RWX Retain Available 15m
The output above shows that:
the pods get stable, ordinal names (myapp-0, myapp-1, myapp-2)
each pod automatically got its own 5Gi volume
each PVC was automatically bound to a matching PV
[root@master volumes]# kubectl get sts
NAME DESIRED CURRENT AGE
myapp 3 3 15m
Deletion test:
[root@master volumes]# kubectl delete -f stateful-demo-1.yaml
service "myapp" deleted
statefulset.apps "myapp" deleted
Watching the pods (kubectl get pods -w) during deletion shows:
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 21m
myapp-1 1/1 Running 0 20m
myapp-2 1/1 Running 0 20m
myapp-1 1/1 Terminating 0 21m
myapp-0 1/1 Terminating 0 21m
myapp-2 1/1 Terminating 0 21m
myapp-1 0/1 Terminating 0 21m
myapp-2 0/1 Terminating 0 21m
myapp-0 0/1 Terminating 0 21m
myapp-0 0/1 Terminating 0 21m
myapp-0 0/1 Terminating 0 21m
myapp-2 0/1 Terminating 0 21m
myapp-2 0/1 Terminating 0 21m
myapp-1 0/1 Terminating 0 21m
myapp-1 0/1 Terminating 0 21m
Re-creation test:
[root@master volumes]# kubectl apply -f stateful-demo-1.yaml
service/myapp created
statefulset.apps/myapp created
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 ContainerCreating 0 0s
myapp-0 1/1 Running 0 1s
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 ContainerCreating 0 0s
myapp-1 1/1 Running 0 0s
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 ContainerCreating 0 0s
myapp-2 1/1 Running 0 2s
###################################
This shows that deletion runs in reverse ordinal order (2, 1, 0)
and creation in ordinal order (0, 1, 2).
No matter how the pods are re-created, each one rebinds to its own fixed storage volume.
###################################
###################################
StatefulSets also support rolling updates.
###################################
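Rolling-update behavior is controlled declaratively through `spec.updateStrategy`; the experiments later in these notes set it with `kubectl patch`, which modifies the same fields. A sketch of the relevant fragment of the StatefulSet spec (the comments describe standard behavior; the partition value is just an example):

```yaml
# Fragment of the StatefulSet spec, not a full manifest
updateStrategy:
  type: RollingUpdate        # the default strategy for StatefulSets
  rollingUpdate:
    partition: 0             # only pods with ordinal >= partition are updated
```

With `partition: 0` (the default), a rolling update touches every pod, again in reverse ordinal order.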
Note: every pod name is resolvable in DNS,
e.g. myapp-0 can be looked up directly.
Verify:
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 5m
myapp-1 1/1 Running 0 5m
myapp-2 1/1 Running 0 5m
[root@master ~]# kubectl exec -it myapp-0 /bin/sh
/ # nslookup myapp-0.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-0.myapp.default.svc.cluster.local
Address 1: 10.244.2.65 myapp-0.myapp.default.svc.cluster.local
/ # nslookup myapp-1.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-1.myapp.default.svc.cluster.local
Address 1: 10.244.1.67 myapp-1.myapp.default.svc.cluster.local
/ # nslookup myapp-2.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-2.myapp.default.svc.cluster.local
Address 1: 10.244.2.66 myapp-2.myapp.default.svc.cluster.local
Note: resolution must go through the headless service:
myapp-0: the pod name
myapp: the service name
default: the namespace
Naming pattern:
pod_name.service_name.ns_name.svc.cluster.local
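The pattern can be sketched by assembling the parts for this demo (plain shell, no cluster needed):

```shell
# Build the stable DNS name of a StatefulSet pod:
# pod_name.service_name.namespace.svc.cluster.local
pod="myapp-0"
svc="myapp"      # the headless service name
ns="default"     # the namespace
fqdn="${pod}.${svc}.${ns}.svc.cluster.local"
echo "$fqdn"
```

This matches the names resolved with nslookup above.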
#################
Scale-out test:
Scale the myapp service to 5 replicas:
[root@master volumes]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
Watch:
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 14m
myapp-1 1/1 Running 0 14m
myapp-2 1/1 Running 0 14m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 1s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 ContainerCreating 0 0s
myapp-4 1/1 Running 0 1s
[root@master volumes]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 15m
myapp-1 1/1 Running 0 15m
myapp-2 1/1 Running 0 15m
myapp-3 1/1 Running 0 1m
myapp-4 1/1 Running 0 1m
[root@master volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv002 5Gi RWO 39m
myappdata-myapp-1 Bound pv004 5Gi RWO,RWX 39m
myappdata-myapp-2 Bound pv001 5Gi RWO,RWX 39m
myappdata-myapp-3 Bound pv003 5Gi RWO,RWX 1m
myappdata-myapp-4 Bound pv005 5Gi RWO,RWX 1m
#################
Scale-in test:
[root@master volumes]# kubectl scale sts myapp --replicas=2
or, equivalently:
[root@master volumes]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
statefulset.apps/myapp patched
Watching shows the pods terminate in reverse ordinal order:
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 17m
myapp-1 1/1 Running 0 17m
myapp-2 1/1 Running 0 17m
myapp-3 1/1 Running 0 2m
myapp-4 1/1 Running 0 2m
myapp-4 1/1 Terminating 0 3m
myapp-4 0/1 Terminating 0 3m
myapp-4 0/1 Terminating 0 3m
myapp-4 0/1 Terminating 0 3m
myapp-3 1/1 Terminating 0 3m
myapp-3 0/1 Terminating 0 3m
myapp-3 0/1 Terminating 0 3m
myapp-3 0/1 Terminating 0 3m
myapp-2 1/1 Terminating 0 18m
myapp-2 0/1 Terminating 0 18m
myapp-2 0/1 Terminating 0 18m
myapp-2 0/1 Terminating 0 18m
[root@master volumes]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 19m
myapp-1 1/1 Running 0 19m
[root@master volumes]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
myapp ClusterIP None <none> 80/TCP 19m
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Bound default/myappdata-myapp-2 43m
pv002 5Gi RWO Retain Bound default/myappdata-myapp-0 43m
pv003 5Gi RWO,RWX Retain Bound default/myappdata-myapp-3 43m
pv004 5Gi RWO,RWX Retain Bound default/myappdata-myapp-1 43m
pv005 5Gi RWO,RWX Retain Bound default/myappdata-myapp-4 43m
#################
Update test:
Canary update:
Update only part of the replicas first; if they work correctly, manually continue the update for the rest.
For example, with five pods myapp-0 through myapp-4, to update only the pods with an ordinal greater than 3, set the rolling-update partition:
partition: N updates only the pods with ordinal >= N
partition: 0 updates all pods
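The partition rule above can be sketched as a simple ordinal check (plain shell, no cluster needed):

```shell
# Simulate the StatefulSet partition rule: pods with
# ordinal >= partition receive the new image, the rest keep the old one.
partition=4
updated=""
kept=""
for ordinal in 0 1 2 3 4; do
  if [ "$ordinal" -ge "$partition" ]; then
    updated="$updated myapp-$ordinal"
  else
    kept="$kept myapp-$ordinal"
  fi
done
echo "updated:$updated"
echo "kept:$kept"
```

With partition=4, only myapp-4 is updated, which is exactly what the describe output below confirms.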
Scale up to 5 pods and set the partition to 4:
[root@master volumes]# kubectl patch sts myapp -p '{"spec":{"replicas":5}}'
[root@master volumes]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
Check the update strategy:
[root@master volumes]# kubectl describe sts myapp
Partition: 4
Start the update to v2:
[root@master volumes]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master volumes]# kubectl get sts -o wide    # the controller now reports v2
NAME DESIRED CURRENT AGE CONTAINERS IMAGES
myapp 5 5 37m myapp ikubernetes/myapp:v2
kubectl describe pods myapp-0 through myapp-3 still show:
Image: ikubernetes/myapp:v1
while kubectl describe pods myapp-4 shows:
Image: ikubernetes/myapp:v2
To roll the update out to all of the pods, move the partition back to 0:
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
Now every pod is updated to v2.