Jenkins-k8s-helm-harbor-gitlab-mysql-nfs microservice release platform in practice
Building a Jenkins microservice release platform on K8S
What we will implement:
- Release workflow design walkthrough
- Prepare the base environment
- K8s environment (deploy Ingress Controller, CoreDNS, Calico/Flannel)
- Deploy the Gitlab code repository
- Configure local Git, push test code, and create a project in Gitlab
- Deploy the Pinpoint distributed-tracing system (modify the Dockerfile beforehand, build and push the image)
- Deploy the Harbor image registry (with the Helm chart repository enabled)
- Deploy the Helm package manager on the master node (configure a local Helm repository, upload Helm charts)
- Deploy K8S storage (NFS, Ceph); provide automatic PV provisioning from the master node
- Deploy the MySQL cluster (import the microservice databases)
- Deploy EFK log collection (addendum)
- Deploy the Prometheus monitoring system (addendum)
- Deploy Jenkins in Kubernetes
- Jenkins Pipeline and parameterized builds
- Dynamic Jenkins agent provisioning in K8S
- Build a custom Jenkins-Slave image
- Build a Jenkins CI system on Kubernetes
- Integrate Helm into the Pipeline to release the microservice project
Release workflow design walkthrough
Machine environment
This environment implements automated building and releasing of microservices; the work is carried out by the components listed below. There are many ways to automate releases; if anything is missing, please leave a comment.
IP address | Hostname | Services |
---|---|---|
192.168.25.223 | k8s-master01 | Kubernetes master node + Jenkins |
192.168.25.225 | k8s-node01 | Kubernetes node |
192.168.25.226 | k8s-node02 | Kubernetes node |
192.168.25.227 | gitlab-nfs | Gitlab, NFS, Git |
192.168.25.228 | harbor | Harbor, MySQL, Docker, Pinpoint |
Prepare the base environment
K8s environment (deploy Ingress Controller, CoreDNS, Calico/Flannel)
Deployment commands
Single-master version:
ansible-playbook -i hosts single-master-deploy.yml -uroot -k
Multi-master version:
ansible-playbook -i hosts multi-master-deploy.yml -uroot -k
Deployment control
If a particular stage fails, you can rerun just that stage.
For example, to run only the add-ons deployment:
ansible-playbook -i hosts single-master-deploy.yml -uroot -k --tags addons
Example playbooks: https://github.com/ansible/ansible-examples
Deploy the Gitlab code repository
Install Docker
Uninstall old versions:
$ sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
SET UP THE REPOSITORY
$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
INSTALL DOCKER ENGINE
$ sudo yum install docker-ce docker-ce-cli containerd.io -y
$ sudo systemctl start docker && sudo systemctl enable docker
$ sudo docker run hello-world
Deploy Gitlab
docker run -d \
--name gitlab \
-p 8443:443 \
-p 9999:80 \
-p 9998:22 \
-v $PWD/config:/etc/gitlab \
-v $PWD/logs:/var/log/gitlab \
-v $PWD/data:/var/opt/gitlab \
-v /etc/localtime:/etc/localtime \
passzhang/gitlab-ce-zh:latest
Access URL: http://IP:9999
On first visit you will be asked to set the administrator password; then log in with the default admin username root and the password you just set.
Configure local Git, push test code, and create a project in Gitlab
https://github.com/passzhang/simple-microservice
Branch overview:
dev1: delivered code
dev2: Dockerfiles for image builds
dev3: K8S resource manifests
dev4: adds microservice tracing
master: final release
Pull the master branch and push it to the private repository:
git clone https://github.com/PassZhang/simple-microservice.git
# cd into the simple-microservice directory
# Edit .git/config and point the push URL at the local Gitlab instance
vim /root/simple-microservice/.git/config
...
[remote "origin"]
url = http://192.168.25.227:9999/root/simple-microservice.git
fetch = +refs/heads/*:refs/remotes/origin/*
...
# After cloning, update the database connection settings (xxx-service/src/main/resources/application-fat.yml); in this test the database address is changed to 192.168.25.228:3306.
# Update the database address before pushing the code.
cd simple-microservice
git config --global user.email "[email protected]"
git config --global user.name "passzhang"
git add .
git commit -m 'all'
git push origin master
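If you prefer not to edit .git/config by hand, the url line can be rewritten with sed. A minimal sketch against a throwaway copy of the config (the /tmp path is illustrative):

```shell
# Create a sample .git/config like the one in the clone (illustrative path)
mkdir -p /tmp/demo/.git
cat > /tmp/demo/.git/config <<'EOF'
[remote "origin"]
    url = https://github.com/PassZhang/simple-microservice.git
    fetch = +refs/heads/*:refs/remotes/origin/*
EOF
# Point origin at the local Gitlab instead of github.com
sed -i 's#url = .*#url = http://192.168.25.227:9999/root/simple-microservice.git#' /tmp/demo/.git/config
grep 'url' /tmp/demo/.git/config
```

The same result can be had with `git remote set-url origin <url>` inside the clone.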
Deploy the Pinpoint distributed-tracing system (modify the Dockerfile beforehand, build and push the image)
Deploy the Harbor image registry (with the Helm chart repository enabled)
Install Docker and docker-compose
# wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum install docker-ce -y
# systemctl start docker && systemctl enable docker
curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
Extract the offline installer and deploy
# tar zxvf harbor-offline-installer-v1.9.1.tgz
# cd harbor
-----------
# vi harbor.yml
hostname: 192.168.25.228
http:
  port: 8088
-----------
# ./prepare
# ./install.sh --with-chartmuseum --with-clair
# docker-compose ps
The --with-chartmuseum flag enables chart storage.
Mark the registry as trusted in Docker
Since Harbor is not configured with HTTPS, Docker must also be told to trust it as an insecure registry.
# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["192.168.25.228:8088"]
}
# systemctl restart docker
# After configuring the registry, make sure the K8S master and all Docker nodes can reach it; each of them needs the same change to daemon.json.
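A syntax error in daemon.json will keep the Docker daemon from starting at all, so it is worth validating the file before restarting. A small sketch (the /tmp path is illustrative):

```shell
# Write the registry config to a scratch location
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["192.168.25.228:8088"]
}
EOF
# json.tool exits non-zero on a syntax error, so this gates the restart
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```

Only run `systemctl restart docker` once the check prints "daemon.json OK".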
Deploy the Helm package manager on the master node (configure a local Helm repository, upload Helm charts)
Install the Helm CLI
# wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
# tar zxvf helm-v3.0.0-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/bin/
Configure China-local chart mirrors
# helm repo add stable http://mirror.azure.cn/kubernetes/charts
# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# helm repo list
Install the push plugin
# helm plugin install https://github.com/chartmuseum/helm-push
If the download fails due to network restrictions, extract the bundled archive instead:
# tar zxvf helm-push_0.7.1_linux_amd64.tar.gz
# mkdir -p /root/.local/share/helm/plugins/helm-push
# chmod +x bin/*
# mv bin plugin.yaml /root/.local/share/helm/plugins/helm-push
Add the Harbor chart repo
# helm repo add --username admin --password Harbor12345 myrepo http://192.168.25.228:8088/chartrepo/ms
Push and install a chart
# helm push ms-0.1.0.tgz --username=admin --password=Harbor12345 http://192.168.25.228:8088/chartrepo/ms
# helm install ms --username=admin --password=Harbor12345 --version 0.1.0 myrepo/ms
Deploy K8S storage (NFS, Ceph); provide automatic PV provisioning from the master node
First set up an NFS server to provide storage for K8S.
# yum install nfs-utils -y
# vi /etc/exports
/ifs/kubernetes *(rw,no_root_squash)
# mkdir -p /ifs/kubernetes
# systemctl start nfs
# systemctl enable nfs
Also install the nfs-utils package on every node so that mounting works.
K8S has no built-in dynamic provisioner for NFS, so the nfs-client-provisioner plugin must be installed first.
The configuration files are as follows:
[root@k8s-master1 nfs-storage-class]# tree
.
├── class.yaml
├── deployment.yaml
└── rbac.yaml
0 directories, 3 files
rbac.yaml
[root@k8s-master1 nfs-storage-class]# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
class.yaml
[root@k8s-master1 nfs-storage-class]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true"
deployment.yaml
[root@k8s-master1 nfs-storage-class]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.25.227
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.25.227
            path: /ifs/kubernetes
# When deploying, remember to change the server address to your NFS server.
# cd nfs-client
# vi deployment.yaml # set the NFS address and shared directory to yours
# kubectl apply -f .
# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-df88f57df-bv8h7 1/1 Running 0 49m
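To verify that dynamic provisioning works end to end, a small test claim can be applied against the managed-nfs-storage class; the claim name here is illustrative:

```yaml
# test-claim.yaml -- a PV should be provisioned automatically for this claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Mi
```

After `kubectl apply -f test-claim.yaml`, `kubectl get pvc test-claim` should report STATUS Bound, and a matching directory should appear under /ifs/kubernetes on the NFS server.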
Deploy the MySQL cluster (import the microservice databases)
# yum install mariadb-server -y
# systemctl start mariadb.service
# mysqladmin -uroot password '123456'
Or create it with Docker:
docker run -d --name db -p 3306:3306 -v /opt/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7 --character-set-server=utf8
Finally, import the microservice databases.
[root@cephnode03 db]# pwd
/root/simple-microservice/db
[root@cephnode03 db]# ls
order.sql product.sql stock.sql
[root@cephnode03 db]# mysql -uroot -p123456 <order.sql
[root@cephnode03 db]# mysql -uroot -p123456 <product.sql
[root@cephnode03 db]# mysql -uroot -p123456 <stock.sql
# After the import, grant remote access to the databases
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.25.%' IDENTIFIED BY '123456';
Deploy EFK log collection (addendum)
Deploy the Prometheus monitoring system (addendum)
Deploy Jenkins in Kubernetes
Reference: https://github.com/jenkinsci/kubernetes-plugin/tree/fc40c869edfd9e3904a9a56b0f80c5a25e988fa1/src/main/kubernetes
We deploy Jenkins directly in Kubernetes. Persistent storage must be prepared first; the NFS storage set up above is used here, but other backends such as Ceph work too. Let's get started.
Jenkins YAML files
[root@k8s-master1 jenkins]# tree
.
├── deployment.yml
├── ingress.yml
├── rbac.yml
├── service-account.yml
└── service.yml
0 directories, 5 files
rbac.yml
[root@k8s-master1 jenkins]# cat rbac.yml
---
# Create a ServiceAccount named jenkins
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
# Create a Role named jenkins that allows managing Pod resources in the core API group
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
# Bind the jenkins Role to the jenkins ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
service-account.yml
[root@k8s-master1 jenkins]# cat service-account.yml
# In GKE need to get RBAC permissions first with
# kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>|--group=<group-name>]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
ingress.yml
[root@k8s-master1 jenkins]# cat ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  rules:
  - host: jenkins.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 80
service.yml
[root@k8s-master1 jenkins]# cat service.yml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    name: jenkins
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
    nodePort: 30006
  - name: agent
    port: 50000
    protocol: TCP
deployment.yml
[root@k8s-master1 jenkins]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home
spec:
  storageClassName: "managed-nfs-storage"
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
Login URL: the domain configured in the Ingress, http://jenkins.test.com
Switch the plugin mirror:
The default plugin source is hosted abroad and is often unreachable; switch to a China-local mirror:
cd jenkins_home/updates
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && \
sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json
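To see what the substitution does, here it is applied to a single sample line in the shape of a default.json entry (the plugin path is made up for illustration):

```shell
# Rewrite the update-center download host to the Tsinghua mirror
echo '"url": "http://updates.jenkins-ci.org/download/plugins/git.hpi"' \
  | sed 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g'
# -> "url": "https://mirrors.tuna.tsinghua.edu.cn/jenkins/plugins/git.hpi"
```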
Jenkins Pipeline and parameterized builds
Jenkins parameterized build flow
Jenkins Pipeline is a suite of plugins that supports implementing continuous integration and delivery pipelines in Jenkins;
- Pipeline models simple-to-complex delivery pipelines through a dedicated syntax;
- Declarative: follows a Groovy-like syntax. pipeline { }
- Scripted: supports most Groovy features; a very expressive and flexible tool. node { }
- A Jenkins Pipeline definition is written into a text file called a Jenkinsfile.
Reference: https://jenkins.io/doc/book/pipeline/syntax/
Our environment needs pipeline scripts, so let's first create a test pipeline.
Install the Pipeline plugin: Jenkins home ------> Manage Jenkins ------> Manage Plugins ------> Available ------> filter for "pipeline" and install it; Pipeline jobs are then available.
Enter the following script in the pipeline to test:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying'
            }
        }
    }
}
The test result:
The log output:
Console output
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/pipeline-test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] echo
Building
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
Testing
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] echo
Deploying
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
SUCCESS in the output means the test passed.
Dynamic Jenkins agent provisioning in K8S
We have now tested a pipeline script, but the Jenkins master has limited resources; running a large batch of jobs on it could overwhelm the machine. Instead we use Jenkins slaves: the master schedules the jobs, while the actual build and compile work is handed to the slave agents.
The traditional Jenkins Master/Slave architecture
The Jenkins Master/Slave architecture on K8S
Add the Kubernetes plugin
Kubernetes plugin: runs dynamic Jenkins agents in a Kubernetes cluster.
Plugin page: https://github.com/jenkinsci/kubernetes-plugin
Add a Kubernetes cloud
We now connect Jenkins to Kubernetes so that Jenkins can talk to the cluster and run commands in it automatically. Add a Kubernetes cloud as follows:
Jenkins home ------> Manage Jenkins ------> Configure System ------> Cloud ------> Add a new cloud ------> Kubernetes
Configure the Kubernetes cloud: since Jenkins itself runs as a pod inside Kubernetes, it can reach the API server through the in-cluster service DNS name, so enter https://kubernetes.default as the Kubernetes address; don't forget to click "Test Connection" afterwards.
For the Jenkins address, likewise enter the in-cluster DNS name http://jenkins.default. The Kubernetes cloud is now configured.
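Both in-cluster addresses follow the standard Kubernetes service DNS scheme, `<service>.<namespace>[.svc.cluster.local]`; a quick sketch of how they are formed:

```shell
# Service DNS names inside the cluster (short and fully qualified forms)
svc=jenkins; ns=default
echo "http://${svc}.${ns}"                    # Jenkins URL used in the cloud config
echo "http://${svc}.${ns}.svc.cluster.local"  # fully qualified equivalent
```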
Build a custom Jenkins-Slave image and push it to the registry
Required files:
[root@k8s-master1 jenkins-slave]# tree
.
├── Dockerfile      # builds the Jenkins-slave image
├── helm            # helm CLI: lets the Jenkins-slave pod install charts from the Helm repository
├── jenkins-slave   # startup script required by the slave
├── kubectl         # kubectl CLI: lets the Jenkins-slave pod create pods and query their status
├── settings.xml    # Maven settings required by the slave
└── slave.jar       # the Jenkins slave agent jar
0 directories, 6 files
The Jenkins-slave Dockerfile:
FROM centos:7
LABEL maintainer passzhang
RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
yum clean all && \
rm -rf /var/cache/yum/* && \
mkdir -p /usr/share/jenkins
COPY slave.jar /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
COPY helm kubectl /usr/bin/
ENTRYPOINT ["jenkins-slave"]
Reference: https://github.com/jenkinsci/docker-jnlp-slave
Reference: https://plugins.jenkins.io/kubernetes
Push the Jenkins-slave image to the Harbor registry
[root@k8s-master1 jenkins-slave]#
docker build -t jenkins-slave:jdk-1.8 .
docker tag jenkins-slave:jdk-1.8 192.168.25.228:8088/library/jenkins-slave:jdk-1.8
docker login 192.168.25.228:8088 # log in to the private registry
docker push 192.168.25.228:8088/library/jenkins-slave:jdk-1.8 # push the image to the private registry
After this, run a test pipeline to check that a Jenkins-slave agent is spawned and works correctly.
Test pipeline script:
pipeline {
    agent {
        kubernetes {
            label "jenkins-slave"
            yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: 192.168.25.228:8088/library/jenkins-slave:jdk-1.8
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying'
            }
        }
    }
}
Deployment screenshot:
Integrate Helm into the Pipeline to release the microservice project
Deployment steps:
Checkout code ——> Compile ——> Unit test ——> Build image ——> Deploy to K8S with Helm and test
Create a new Jenkins job named k8s-deploy-spring-cloud
Add the pipeline script:
#!/usr/bin/env groovy
// Required plugins: Git Parameter/Git/Pipeline/Config File Provider/kubernetes/Extended Choice Parameter
// Common
def registry = "192.168.25.228:8088"
// Project
def project = "ms"
def git_url = "http://192.168.25.227:9999/root/simple-microservice.git"
def gateway_domain_name = "gateway.test.com"
def portal_domain_name = "portal.test.com"
// Credentials
def image_pull_secret = "registry-pull-secret"
def harbor_registry_auth = "9d5822e8-b1a1-473d-a372-a59b20f9b721"
def git_auth = "2abc54af-dd98-4fa7-8ac0-8b5711a54c4a"
// ConfigFileProvider ID
def k8s_auth = "f1a38eba-4864-43df-87f7-1e8a523baa35"
pipeline {
    agent {
        kubernetes {
            label "jenkins-slave"
            yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: "${registry}/library/jenkins-slave:jdk-1.8"
    imagePullPolicy: Always
    volumeMounts:
    - name: docker-cmd
      mountPath: /usr/bin/docker
    - name: docker-sock
      mountPath: /var/run/docker.sock
    - name: maven-cache
      mountPath: /root/.m2
  volumes:
  - name: docker-cmd
    hostPath:
      path: /usr/bin/docker
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: maven-cache
    hostPath:
      path: /tmp/m2
"""
        }
    }
    parameters {
        gitParameter branch: '', branchFilter: '.*', defaultValue: '', description: 'Branch to release', name: 'Branch', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH'
        extendedChoice defaultValue: 'none', description: 'Microservices to release', \
            multiSelectDelimiter: ',', name: 'Service', type: 'PT_CHECKBOX', \
            value: 'gateway-service:9999,portal-service:8080,product-service:8010,order-service:8020,stock-service:8030'
        choice (choices: ['ms', 'demo'], description: 'Deployment template', name: 'Template')
        choice (choices: ['1', '3', '5', '7', '9'], description: 'Replica count', name: 'ReplicaCount')
        choice (choices: ['ms'], description: 'Namespace', name: 'Namespace')
    }
    stages {
        stage('Checkout code'){
            steps {
                checkout([$class: 'GitSCM',
                    branches: [[name: "${params.Branch}"]],
                    doGenerateSubmoduleConfigurations: false,
                    extensions: [], submoduleCfg: [],
                    userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]
                ])
            }
        }
        stage('Compile') {
            // Build the selected services
            steps {
                sh """
                mvn clean package -Dmaven.test.skip=true
                """
            }
        }
        stage('Build images') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${harbor_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                    sh """
                    docker login -u ${username} -p '${password}' ${registry}
                    for service in \$(echo ${Service} |sed 's/,/ /g'); do
                        service_name=\${service%:*}
                        image_name=${registry}/${project}/\${service_name}:${BUILD_NUMBER}
                        cd \${service_name}
                        if ls |grep biz &>/dev/null; then
                            cd \${service_name}-biz
                        fi
                        docker build -t \${image_name} .
                        docker push \${image_name}
                        cd ${WORKSPACE}
                    done
                    """
                    configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]){
                        sh """
                        # Create the image pull secret (ignore the error if it already exists)
                        kubectl create secret docker-registry ${image_pull_secret} --docker-username=${username} --docker-password=${password} --docker-server=${registry} -n ${Namespace} --kubeconfig admin.kubeconfig || true
                        # Add the private chart repository
                        helm repo add --username ${username} --password ${password} myrepo http://${registry}/chartrepo/${project}
                        """
                    }
                }
            }
        }
        stage('Deploy to K8S with Helm') {
            steps {
                sh """
                common_args="-n ${Namespace} --kubeconfig admin.kubeconfig"
                for service in \$(echo ${Service} |sed 's/,/ /g'); do
                    service_name=\${service%:*}
                    service_port=\${service#*:}
                    image=${registry}/${project}/\${service_name}
                    tag=${BUILD_NUMBER}
                    helm_args="\${service_name} --set image.repository=\${image} --set image.tag=\${tag} --set replicaCount=${ReplicaCount} --set imagePullSecrets[0].name=${image_pull_secret} --set service.targetPort=\${service_port} myrepo/${Template}"
                    # Is this a first-time deployment?
                    if helm history \${service_name} \${common_args} &>/dev/null;then
                        action=upgrade
                    else
                        action=install
                    fi
                    # Enable ingress for the externally facing services
                    if [ \${service_name} == "gateway-service" ]; then
                        helm \${action} \${helm_args} \
                            --set ingress.enabled=true \
                            --set ingress.host=${gateway_domain_name} \
                            \${common_args}
                    elif [ \${service_name} == "portal-service" ]; then
                        helm \${action} \${helm_args} \
                            --set ingress.enabled=true \
                            --set ingress.host=${portal_domain_name} \
                            \${common_args}
                    else
                        helm \${action} \${helm_args} \${common_args}
                    fi
                done
                # Check pod status
                sleep 10
                kubectl get pods \${common_args}
                """
            }
        }
    }
}
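The Service checkbox value is a comma-separated list of name:port pairs; the script splits it with sed and peels the two halves apart with shell parameter expansion. A standalone sketch of just that parsing logic:

```shell
# Same parsing as in the deploy stage, outside Jenkins
Service="gateway-service:9999,portal-service:8080"
for service in $(echo ${Service} | sed 's/,/ /g'); do
  service_name=${service%:*}   # drop the suffix after the last ':' -> name
  service_port=${service#*:}   # drop the prefix up to the first ':' -> port
  echo "${service_name} -> ${service_port}"
done
# -> gateway-service -> 9999
# -> portal-service -> 8080
```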
The run output:
Click Build; the first few parameterized builds may fail because Jenkins only discovers the parameters during a run. Build once more so all parameters are populated, and it will succeed.
Release gateway-service and check the pod logs:
+ kubectl get pods -n ms --kubeconfig admin.kubeconfig
NAME READY STATUS RESTARTS AGE
eureka-0 1/1 Running 0 3h11m
eureka-1 1/1 Running 0 3h10m
eureka-2 1/1 Running 0 3h9m
ms-gateway-service-66d695c486-9x9mc 0/1 Running 0 10s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
# On success, the pod information is printed
Release the remaining services and check the result:
+ kubectl get pods -n ms --kubeconfig admin.kubeconfig
NAME READY STATUS RESTARTS AGE
eureka-0 1/1 Running 0 3h14m
eureka-1 1/1 Running 0 3h13m
eureka-2 1/1 Running 0 3h12m
ms-gateway-service-66d695c486-9x9mc 1/1 Running 0 3m1s
ms-order-service-7465c47d79-lbxgd 0/1 Running 0 10s
ms-portal-service-7fd6c57955-jkgkk 0/1 Running 0 11s
ms-product-service-68dbf5b57-jwpv9 0/1 Running 0 10s
ms-stock-service-b8b9895d6-cb72b 0/1 Running 0 10s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Check Eureka:
All the service modules are now registered in Eureka.
Open the front-end page:
Products are returned by the query, which means the database connection works and the business runs normally. Done!
Summary of the plugins used
- Jenkins plugins
- Git & gitParameter
- Kubernetes
- Pipeline
- Kubernetes Continuous Deploy
- Config File Provider
- Extended Choice Parameter
- CI/CD environment highlights
- Elastic scaling of slaves
- Image-isolated build environments
- Pipeline releases that are easy to maintain
- Parameterized builds in Jenkins support CI/CD for more complex environments