spotahome-redis-operator Deployment Test
Published: 2020-08-13
Environment:
k8s master:
q13756v
k8s nodes:
stark11
stark12
stark13
Installing the operator
git clone https://github.com/OT-CONTAINER-KIT/redis-operator   # host: q13756v, path: /home/liwenxin/redis-operator/
cd redis-operator/
helm upgrade redis-operator ./helm/redis-operator --install --namespace redis-operator
# Check the result:
helm list -A | grep redis-operator
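Beyond `helm list`, the install can be sanity-checked at the Kubernetes level. A minimal sketch, assuming `kubectl` access to the same cluster (exact resource names may differ by chart version):

```shell
# Confirm the operator pod is up
kubectl get pods -n redis-operator

# Confirm the operator's Redis CRD was registered
kubectl get crd | grep redis.opstreelabs.in
```

If the CRD is missing, the `Redis` custom resources created below will be rejected by the API server.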
Deploying test instances
Modify the helm template
Edit helm/redis-setup/templates/redis-setup.yaml to add support for the redisConfig field:
---
apiVersion: redis.opstreelabs.in/v1alpha1
kind: Redis
metadata:
  name: {{ .Values.name }}
  labels:
    app.kubernetes.io/name: {{ .Values.name }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/component: middleware
spec:
  mode: {{ .Values.setupMode }}
  {{- if .Values.cluster.size }}
  size: {{ .Values.cluster.size }}
  {{- end }}
  global:
    image: "{{ .Values.global.image }}:{{ .Values.global.tag }}"
    imagePullPolicy: "{{ .Values.global.imagePullPolicy }}"
    {{- if .Values.global.password }}
    password: {{ .Values.global.password | quote }}
    {{- end }}
    {{- if .Values.global.redisConfig }}
    redisConfig:
{{ toYaml .Values.global.redisConfig | indent 4 }}
    {{- end }}
    {{- if .Values.global.resources }}
    resources:
{{ toYaml .Values.global.resources | indent 6 }}
    {{- end }}
  {{- if .Values.cluster.master }}
  master:
    service:
      type: {{ .Values.cluster.master.serviceType }}
  {{- end }}
  {{- if .Values.cluster.slave }}
  slave:
    service:
      type: {{ .Values.cluster.slave.serviceType }}
  {{- end }}
  redisExporter:
    enabled: {{ .Values.exporter.enabled }}
    image: "{{ .Values.exporter.image }}:{{ .Values.exporter.tag }}"
    imagePullPolicy: {{ .Values.exporter.imagePullPolicy }}
    {{- if .Values.exporter.resources }}
    resources:
{{ toYaml .Values.exporter.resources | indent 6 }}
    {{- end }}
  {{- if .Values.storageSpec }}
  storage:
{{ toYaml .Values.storageSpec | indent 4 }}
  {{- end }}
  {{- if .Values.priorityClassName }}
  priorityClassName: {{ .Values.priorityClassName }}
  {{- end }}
  {{- if .Values.nodeSelector }}
  nodeSelector:
{{ toYaml .Values.nodeSelector | indent 4 }}
  {{- end }}
  {{- if .Values.affinity }}
  affinity:
{{ toYaml .Values.affinity | indent 4 }}
  {{- end }}
  {{- if .Values.securityContext }}
  securityContext:
{{ toYaml .Values.securityContext | indent 4 }}
  {{- end }}
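For reference, with the cluster values used below, the added `redisConfig` branch would render into the Redis custom resource roughly as follows. This is a hypothetical sketch of the `helm template` output, not captured from a real run, and the exact indentation depends on the chart:

```yaml
# Fragment of the rendered Redis custom resource (sketch)
apiVersion: redis.opstreelabs.in/v1alpha1
kind: Redis
metadata:
  name: redis-cluster
spec:
  mode: cluster
  size: 3
  global:
    image: "quay.io/opstree/redis:2.0"
    imagePullPolicy: "IfNotPresent"
    password: "Opstree@1234"
    redisConfig:
      timeout: "0"
      tcp-keepalive: "300"
```

The operator then injects these `redisConfig` keys into the redis.conf of the managed instances.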
Cluster mode
Change the defaults in ./helm/redis-setup/cluster-values.yaml to:
---
name: redis-cluster
setupMode: cluster
cluster:
  size: 3
  master:
    serviceType: ClusterIP
  slave:
    serviceType: ClusterIP
global:
  image: quay.io/opstree/redis
  tag: "2.0"
  imagePullPolicy: IfNotPresent
  password: "Opstree@1234"
  redisConfig:
    timeout: "0"
    tcp-keepalive: "300"
  resources:
    requests:
      cpu: "10"
      memory: 20Gi
    limits:
      cpu: "10"
      memory: 20Gi
exporter:
  enabled: true
  image: quay.io/opstree/redis-exporter
  tag: "1.0"
  imagePullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 100m
      memory: 128Mi
# priorityClassName: "-"
nodeSelector: {}
#   memory: medium
storageSpec:
  volumeClaimTemplate:
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
      selector: {}
securityContext: {}
#   runAsUser: 1000
affinity: {}
# nodeAffinity:
#   requiredDuringSchedulingIgnoredDuringExecution:
#     nodeSelectorTerms:
#     - matchExpressions:
#       - key: disktype
#         operator: In
#         values:
#         - ssd
Create a Redis cluster as needed:
REDIS_CLUSTER_NAME=redis-cluster
helm upgrade ${REDIS_CLUSTER_NAME} ./helm/redis-setup \
  -f ./helm/redis-setup/cluster-values.yaml \
  --set name=${REDIS_CLUSTER_NAME} \
  --set setupMode="cluster" \
  --set cluster.size=3 \
  --install --namespace redis-operator
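After the upgrade, the rollout can be followed with standard kubectl commands. A sketch, assuming the operator creates its workloads in the release namespace (the `redis-cluster` prefix follows the `name` value above):

```shell
# Watch the operator bring up the master/slave pods
kubectl get pods -n redis-operator -w

# Inspect the custom resource and the services the operator created
kubectl get redis -n redis-operator
kubectl get svc -n redis-operator | grep redis-cluster
```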
Standalone mode
Change the defaults in ./helm/redis-setup/standalone-values.yaml to:
---
name: redis
setupMode: standalone
cluster: {}
global:
  image: quay.io/opstree/redis
  tag: "2.0"
  imagePullPolicy: IfNotPresent
  password: "Opstree@1234"
  redisConfig:
    timeout: "0"
    tcp-keepalive: "300"
  resources:
    requests:
      cpu: "10"
      memory: 20Gi
    limits:
      cpu: "10"
      memory: 20Gi
exporter:
  enabled: true
  image: quay.io/opstree/redis-exporter
  tag: "1.0"
  imagePullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 100m
      memory: 128Mi
# priorityClassName: "-"
nodeSelector: {}
#   memory: medium
storageSpec:
  volumeClaimTemplate:
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
      selector: {}
securityContext: {}
#   runAsUser: 1000
affinity: {}
# nodeAffinity:
#   requiredDuringSchedulingIgnoredDuringExecution:
#     nodeSelectorTerms:
#     - matchExpressions:
#       - key: disktype
#         operator: In
#         values:
#         - ssd
Create a standalone Redis instance as needed:
REDIS_NAME=redis
helm upgrade ${REDIS_NAME} ./helm/redis-setup \
  -f ./helm/redis-setup/standalone-values.yaml \
  --set name=${REDIS_NAME} \
  --set setupMode="standalone" \
  --install --namespace redis-operator
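For the standalone instance, a quick connectivity check from a client pod inside the cluster might look like this. The service name `redis-standalone.redis-operator.svc` is an assumption based on the `app: redis-standalone` selector used in the NodePort manifests later in this post; verify the actual name with `kubectl get svc -n redis-operator`:

```shell
# PING should answer PONG if the instance is up and the password matches
redis-cli -h redis-standalone.redis-operator.svc -a Opstree@1234 ping
```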
Access verification
On the master node:
q13756v.add.bjyt.qihoo.net
## View the custom resource instances
kubectl get redis -A
## Enter a client pod:
kubectl exec -it $(kubectl get po | grep centos7 | grep Running | awk '{print $1}') -- bash
(Install redis inside the pod to get the client: yum install -y redis)
### Connect to Redis
#### Default (ClusterIP)
redis-cli -h redis-cluster-master.redis-operator.svc -a Opstree@1234 -c
## auth "Opstree@1234"
set name "xiaoming"
exit
redis-cli -h redis-cluster-slave.redis-operator.svc -a Opstree@1234 -c
## auth "Opstree@1234"
get name
exit
#### NodePort (see the NodePort setup steps below)
redis-cli -h rg1-ceph101 -a Opstree@1234 -p 30379 -c
get name
exit
External Redis access via NodePort
Because the operator only creates ClusterIP-type services, NodePort services must be created manually:
apiVersion: v1
kind: Service
metadata:
  namespace: redis-operator
  name: nodeport-redis-cluster-master
  annotations:
    prometheus.io/port: "9121"
    prometheus.io/scrape: "true"
    redis.opstreelabs.in: "true"
  labels:
    app: redis-cluster-master
    role: master
spec:
  ports:
  - name: redis-cluster-master
    port: 6379
    protocol: TCP
    targetPort: 6379
    nodePort: 30379
  - name: redis-exporter
    port: 9121
    protocol: TCP
    targetPort: 9121
    nodePort: 30121
  selector:
    app: redis-cluster-master
    role: master
  sessionAffinity: None
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  namespace: redis-operator
  name: nodeport-redis-cluster-slave
  annotations:
    prometheus.io/port: "9121"
    prometheus.io/scrape: "true"
    redis.opstreelabs.in: "true"
  labels:
    app: redis-cluster-slave
    role: slave
spec:
  ports:
  - name: redis-cluster-slave
    port: 6379
    protocol: TCP
    targetPort: 6379
    nodePort: 30479
  - name: redis-exporter
    port: 9121
    protocol: TCP
    targetPort: 9121
    nodePort: 30221
  selector:
    app: redis-cluster-slave
    role: slave
  sessionAffinity: None
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9121"
    prometheus.io/scrape: "true"
    redis.opstreelabs.in: "true"
  labels:
    app: redis-standalone
    role: standalone
  name: redis
  namespace: redis-operator
spec:
  ports:
  - name: redis-standalone
    port: 6379
    protocol: TCP
    targetPort: 6379
    nodePort: 30579
  - name: redis-exporter
    port: 9121
    protocol: TCP
    targetPort: 9121
    nodePort: 30321
  selector:
    app: redis-standalone
    role: standalone
  sessionAffinity: None
  type: NodePort
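Assuming the manifests above are saved to a file (the filename here is arbitrary), they can be applied and spot-checked like this:

```shell
kubectl apply -f redis-nodeport-services.yaml

# Each service should now show TYPE NodePort with the 303xx/304xx/305xx ports
kubectl get svc -n redis-operator | grep -i nodeport
```

Note that NodePort values must fall in the cluster's allowed range (30000-32767 by default), which the ports above do.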
Cleanup
Note: steps 2-3 are only needed if you changed the storage quota; otherwise skip directly to step 4:
1. Remove the existing release (if the storage quota was changed):
helm uninstall ${REDIS_CLUSTER_NAME} -n redis-operator
2. Delete the existing storage (if the storage quota was changed). First list the PVCs:
kubectl get pvc -n redis-operator --no-headers | awk '{print $1}'
Once the list is confirmed, delete them:
kubectl get pvc -n redis-operator --no-headers | awk '{print $1}' | xargs kubectl delete pvc -n redis-operator