k8s Hands-On Project Tutorial
This tutorial assumes basic k8s knowledge. A cluster can be built from binary files, but the official recommendation is container-based deployment; here we use kubeadm for a quick setup.
We build the cluster with kubeadm 1.13 (the steps may differ slightly between versions).
Five servers running CentOS 7 (the minimum supported version).
The hostnames are server1, server2, server3, server4, server5.
1: Update the system: yum update -y
2: Set each host's name: vim /etc/hostname
3: Disable SELinux: vim /etc/selinux/config and set SELINUX=disabled
4: Disable the firewall (optional; you can open the required ports yourself instead): systemctl disable firewalld
6: Reboot the server: shutdown -r now
7: Optionally set up passwordless SSH login between the servers
8: Install Docker: yum install docker -y
8.1: Enable it at boot: systemctl enable docker
8.2: Configure an HTTP proxy for Docker, otherwise the k8s images cannot be pulled:
vim /etc/systemd/system/multi-user.target.wants/docker.service (a systemd drop-in via systemctl edit docker also works)
Add a line under [Service] (add a matching HTTPS_PROXY line if needed), then run systemctl daemon-reload and restart Docker:
Environment=HTTP_PROXY=http://10.99.32.2:1080
9: Install kubeadm:
9.1: Add the Kubernetes yum repo: vim /etc/yum.repos.d/kubernetes.repo and add the following content:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
9.2: Configure an outbound proxy for yum (required here, otherwise the packages cannot be downloaded):
vim /etc/yum.conf and add or edit the line:
proxy=http://yourhost:yourport
9.3: Install the packages: yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
9.4: Enable and start the kubelet: systemctl enable kubelet && systemctl start kubelet
10: kubeadm prerequisites:
10.1: On the master node, let bridged traffic pass through iptables (optional; it may already be set)
vim /etc/sysctl.d/k8s.conf and add these two lines:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
then apply them: sysctl --system
10.2: Turn off all swap: swapoff -a
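Note that swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entries in /etc/fstab should be commented out as well. A minimal sketch (the sed pattern assumes standard fstab formatting):

```shell
# Turn swap off immediately (the kubelet refuses to run with swap enabled by default)
swapoff -a
# Comment out every active swap entry so swap stays off after a reboot
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]])|#\1|' /etc/fstab
```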
11: Start Docker and initialize the cluster with kubeadm (sometimes the cgroup driver must be changed to match Docker's; not needed here):
systemctl start docker
kubeadm init — when the output ends with a kubeadm join command, initialization succeeded.
If you use the Calico network plugin, a pod IP range must be specified:
kubeadm init --pod-network-cidr=192.168.0.0/16
12: Configure the admin client: copy the config file as the init output instructs, after which kubectl is usable. To verify, run
kubectl get nodes. Seeing node information means it works; the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?" means the config file has not been set up.
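The "copy the config file" step refers to the admin kubeconfig written by kubeadm init; the commands it prints look like this (standard output for a default kubeadm install):

```shell
# Make kubectl on the master talk to the new cluster as the admin user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```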
13: Install the slave (worker) nodes: the steps are the same as for the master; in 1.13 the slaves no longer need images pre-pulled. After installing, run swapoff -a, then join using the command printed by your own kubeadm init, e.g.: kubeadm join 10.99.32.3:6443 --token euoczm.lhfb8w6ngx98aj3z --discovery-token-ca-cert-hash sha256:d094ed1b6769f25247e6b1586541f7dbee59272cddb93bb35e054472e40984e4
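If the join command has been lost, the token can be listed with kubeadm token list (or recreated with kubeadm token create), and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate. This is the standard SHA-256-of-the-public-key scheme kubeadm uses; the path assumes a default kubeadm install:

```shell
# Recompute the hash kubeadm expects in --discovery-token-ca-cert-hash.
# /etc/kubernetes/pki/ca.crt is the default CA path written by kubeadm init.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

Prefix the printed value with sha256: when passing it to kubeadm join.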
14: After all nodes have joined, kubectl get nodes on the master lists every machine, but in NotReady state.
15: Install the network, here using the Calico plugin:
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml
16: Run kubectl get nodes again; once every node shows Ready, the cluster is up.
Part 2: Setting up PVs
1: Use NFS as the backing store for PV/PVC (dynamic PVC provisioning is also possible)
1.1: Install the NFS packages on every node: yum -y install nfs-utils rpcbind
1.2: Create a shared directory on the master: mkdir /nfsdisk
1.3: Configure the NFS server: vim /etc/exports and add the following:
/nfsdisk 10.99.32.3(rw,sync,fsid=0,no_root_squash) 10.99.32.10(rw,sync,fsid=0,no_root_squash) 10.99.32.12(rw,sync,fsid=0,no_root_squash) 10.99.32.31(rw,sync,fsid=0,no_root_squash) 10.99.32.32(rw,sync,fsid=0,no_root_squash)
The IP addresses are those of the clients that need read/write access to the directory.
1.4: Enable and start the NFS service: systemctl enable nfs && systemctl start nfs (rpcbind should be running as well: systemctl enable rpcbind && systemctl start rpcbind)
1.5: Refresh the exports: exportfs -rv; seeing "exporting 10.99.32.3:/nfsdisk" means the configuration is correct
1.6: NFS must also be started on every client node, otherwise creating the PV will fail:
systemctl enable nfs && systemctl start nfs
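Before wiring NFS into Kubernetes it is worth verifying the export is reachable from a worker node. A quick manual check (the IPs follow the example above):

```shell
# On any client node: list the exports offered by the NFS server
showmount -e 10.99.32.3
# Optionally do a test mount to confirm read/write access
mkdir -p /mnt/nfstest
mount -t nfs 10.99.32.3:/nfsdisk /mnt/nfstest
touch /mnt/nfstest/hello && rm /mnt/nfstest/hello
umount /mnt/nfstest
```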
1.7: Configure the PV and PVC; one PVC can be shared by several deployments.
Create a file pv.yaml with the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 150Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.99.32.3
    path: /nfsdisk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 150Gi
  volumeName: nfs-pv
1.8: Create the PV and PVC: kubectl create -f pv.yaml
1.9: Check that they were created and bound: kubectl get pv and kubectl get pvc
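To confirm the claim is actually usable before deploying real services, a throwaway pod can mount it (the pod name and mount path here are arbitrary; exec in and write a file under /mnt/data to verify):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-pvc
```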
Part 3: Setting up the application services. There are two main concerns:
1: Mounting volumes (for anything that stores its own data, such as databases, Redis, and other stateful services)
2: Service configuration: port mappings are needed for external and internal access; here we use simple NodePort mappings, while more complex and advanced setups would use a service mesh instead
3: Once a deployment yaml file is written, apply it with kubectl create -f xxx.yaml
MySQL server:
apiVersion: v1
kind: Service
metadata:
  name: mysql-cs
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - name: mysql
      port: 3306
      nodePort: 31718
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - name: mysql
      port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7.20
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "0"
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: config
              mountPath: /etc/mysql/conf.d/
          resources:
            requests:
              cpu: 800m
              memory: 1Gi
            limits:
              cpu: 1000m
              memory: 2Gi
          livenessProbe:
            exec:
              command: ["mysqladmin", "-uroot", "-p123456", "ping"]
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command: ["mysql", "-h", "127.0.0.1", "-uroot", "-p123456", "-e", "SELECT 1"]
            initialDelaySeconds: 5
            periodSeconds: 3
            timeoutSeconds: 2
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc
        - name: config
          configMap:
            name: mysql
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
data:
  my.cnf: |
    [mysqld]
    sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
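Once the pod is Running, the database can be reached either inside the cluster through the mysql service or from outside through NodePort 31718 (the node IP below is the example master from earlier; the password is the MYSQL_ROOT_PASSWORD value):

```shell
# From inside the cluster: run a one-off client pod against the mysql service
kubectl run mysql-client --image=mysql:5.7.20 -it --rm --restart=Never -- \
  mysql -h mysql -uroot -p123456 -e "SELECT 1"
# From outside the cluster: connect to any node on the NodePort
mysql -h 10.99.32.3 -P 31718 -uroot -p123456 -e "SELECT 1"
```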
RabbitMQ server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3.7.2-management-alpine
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: root
            - name: RABBITMQ_DEFAULT_PASS
              value: awd123456789
            - name: RABBITMQ_DEFAULT_VHOST
              value: "/"
          ports:
            - name: rabbitmq
              containerPort: 5672
            - name: management
              containerPort: 15672
          volumeMounts:
            - name: data
              mountPath: /var/lib/rabbitmq
              subPath: rabbitmq
          resources:
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 800m
              memory: 1024Mi
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-manager
spec:
  type: NodePort
  ports:
    - name: management
      port: 15672
      nodePort: 31717
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
    - port: 5672
      name: rabbitmq
  selector:
    app: rabbitmq
Redis server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:4.0.6-alpine
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
              subPath: redis
          resources:
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 800m
              memory: 1024Mi
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
    - name: redis
      port: 6379
      nodePort: 31715
  selector:
    app: redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cs
  labels:
    app: redis
spec:
  ports:
    - port: 6379
      name: redis
  selector:
    app: redis
Part 4: Deploying your own application:
1: First package your application as an image, then push it to a public or private registry
2: Write the deployment yaml file, using the image you built
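Step 1 in shell form, assuming a Dockerfile in the project root and a Docker Hub account; the registry account, image name, and tag below are placeholders:

```shell
# Build the application image from the project's Dockerfile
docker build -t yourname/myapp:1.0 .
# Log in and push to the registry (substitute your private registry address if you use one)
docker login
docker push yourname/myapp:1.0
```

The deployment yaml then mirrors the MySQL example above, with image: yourname/myapp:1.0 and the ports, volumes, and Service mappings your application needs.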