Single-node deployment of k8s + docker
k8s logical architecture diagram
Master components
etcd
etcd is a database whose goal is to be a highly available, distributed key-value store; it is implemented in Go. In a distributed system, managing and sharing the configuration of the various services, and service discovery, are basic but important problems. In K8s, etcd persistently stores all resource objects in the cluster, such as Node, Service, Pod, RC and Namespace; the API Server provides a wrapper API around etcd, and these APIs are essentially create/read/update/delete operations on cluster resource objects plus interfaces for watching resource changes.
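As a minimal illustration of etcd's key-value model (a sketch using the etcd v2 `etcdctl` syntax that this document uses later; the key names are hypothetical):

```shell
# Store configuration under a key (hypothetical key path)
etcdctl set /demo/config '{"replicas": 3}'
# Read it back
etcdctl get /demo/config
# Block and watch the key for changes made by other clients
etcdctl watch /demo/config
```

This set/get/watch pattern is exactly what the API Server builds on: full reads plus change notifications.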
API Server
Provides the single entry point for operating on resource objects: all other components must go through the API it exposes to operate on resource data. Through a "full query" + "change watch" pattern on the relevant resource data, components can keep their local view of the cluster in sync in near real time.
Controller Manager
The management and control center inside the cluster. Its main purpose is to automate failure detection and recovery for the Kubernetes cluster: for example, replicating or removing Pods according to an RC definition so that the number of Pod instances matches the RC's replica count, and creating and updating a Service's Endpoints object according to the Service-to-Pod mapping. Other work, such as Node discovery, management and status monitoring, and cleaning up the disk space used by dead containers and locally cached images, is also done by the Controller Manager.
Scheduler
The cluster scheduler, responsible for assigning Pods to cluster nodes.
Node components
Kubelet
Responsible for the Pods and their containers on its own Node: it creates, starts and monitors containers according to Pod specifications and reports node status back to the API Server.
Proxy
Implements the Service proxy and a software-mode load balancer.
##########################Installation and configuration##########################
1. Install and configure etcd
1.1. Install etcd
yum install etcd -y
1.2. Edit etcd.service
vi /usr/lib/systemd/system/etcd.service
# Change the configuration as follows:
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
1.3. Edit etcd.conf
vi /etc/etcd/etcd.conf
# Change the configuration as follows:
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.169.21.128:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.169.21.128:2379,http://127.0.0.1:2379"
ETCD_NAME="default"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.169.21.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.169.21.128:2379"
ETCD_INITIAL_CLUSTER="default=http://192.169.21.128:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_STRICT_RECONFIG_CHECK="true"
1.4. Start etcd
systemctl daemon-reload
systemctl start etcd
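Before moving on it is worth confirming etcd is actually healthy; a quick check (assuming the listen addresses configured above):

```shell
systemctl status etcd --no-pager   # should show active (running)
etcdctl cluster-health             # should report: cluster is healthy
curl http://127.0.0.1:2379/version # should return etcd's version JSON
```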
2. Install and configure flannel
2.1. Install flannel
yum install flannel -y
2.2. Edit the flanneld configuration
vi /etc/sysconfig/flanneld
# Change the configuration as follows:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.169.21.128:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/flannel/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
2.3. Write the flannel network configuration into etcd:
etcdctl set /flannel/network/config '{"Network":"172.17.0.0/16"}'
etcdctl set /flannel/network/subnets/172.17.10.0-24 '{"PublicIP":"192.169.21.128"}'
Note: the /flannel/network prefix here must match FLANNEL_ETCD_PREFIX="/flannel/network" in /etc/sysconfig/flanneld.
2.4. Start flannel
# more /usr/lib/systemd/system/flanneld.service
systemctl start flanneld
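Once flanneld is running, it writes the subnet it leased into /run/flannel/subnet.env; a quick sanity check:

```shell
systemctl status flanneld --no-pager
# FLANNEL_SUBNET here is what docker will later use as its --bip
cat /run/flannel/subnet.env
```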
3. Install and configure docker
3.1. For installing docker itself, refer to other documentation.
3.2. Edit the docker.service configuration
vi /usr/lib/systemd/system/docker.service
# import flannel configuration
EnvironmentFile=-/etc/sysconfig/flanneld
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET}
3.3. Start docker
systemctl daemon-reload
systemctl start docker
3.4. Check that flannel0 and docker0 are on the same network, i.e. that their subnets match
ip a
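A more targeted check than eyeballing the full `ip a` output (a sketch; the interface names assume the flannel0/docker0 defaults):

```shell
# Both addresses should fall inside the 172.17.0.0/16 network set in etcd
ip -4 addr show flannel0 | awk '/inet /{print "flannel0:", $2}'
ip -4 addr show docker0  | awk '/inet /{print "docker0: ", $2}'
```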
4. Download and install k8s (v1.7.15)
4.1. Download
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.7.md#downloads-for-v1715
Server
https://dl.k8s.io/v1.7.15/kubernetes-server-linux-amd64.tar.gz
Node
https://dl.k8s.io/v1.7.15/kubernetes-node-linux-amd64.tar.gz
4.2. Installation steps
cd /opt
mkdir -p kubernetes/{bin,cfg,server,node}
cd /opt/kubernetes/server
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd /opt/kubernetes/node
tar -zxvf kubernetes-node-linux-amd64.tar.gz
cd /opt/kubernetes/bin
mv /opt/kubernetes/server/kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl} /opt/kubernetes/bin/
mv /opt/kubernetes/node/kubernetes/node/bin/{kubelet,kube-proxy} /opt/kubernetes/bin/
4.3. Start the k8s master components
·Start kube-apiserver
./kube-apiserver --address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range='10.10.10.1/24' --log_dir=/usr/local/kubernete_test/logs/kube --kubelet_port=10250 --v=0 --logtostderr=false --etcd_servers=http://192.169.21.128:2379 --allow_privileged=false
·Start kube-controller-manager
./kube-controller-manager --v=0 --logtostderr=false --log_dir=/usr/local/kubernete_test/logs/kube --master=192.169.21.128:8080
·Start kube-scheduler
./kube-scheduler --master='192.169.21.128:8080' --v=0 --log_dir=/usr/local/kubernete_test/logs/kube
4.4. Check that the master components started correctly
./kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
scheduler Healthy ok
4.5. Start the k8s node components
·Start kube-proxy
./kube-proxy --logtostderr=false --v=0 --master=http://192.169.21.128:8080
·Start kubelet
./kubelet --logtostderr=false --v=0 --allow-privileged=false --log_dir=/usr/local/kubernete_test/logs/kube --address=0.0.0.0 --port=10250 --hostname_override=192.169.21.128 --api_servers=http://192.169.21.128:8080
4.6. Set the cluster context on the node
./kubectl config set-cluster test-cluster --server=http://192.169.21.128:8080
./kubectl config set-context test-cluster --cluster=test-cluster
./kubectl config use-context test-cluster
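To confirm the context took effect, kubectl's standard config subcommands can be used:

```shell
./kubectl config current-context   # should print: test-cluster
./kubectl config view              # shows the cluster-to-server mapping
```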
4.7. Pull the kubernetes/pause image
docker pull docker.io/kubernetes/pause
4.8. Check that K8S started successfully
./kubectl get nodes
NAME STATUS AGE VERSION
192.169.21.128 Ready 21h v1.7.15
5. Deploy an nginx cluster
5.1. Create the nginx Pods
./kubectl run nginx --image=nginx --port=80 --replicas=5
./kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-1423793266-5nlrd 0/1 ContainerCreating 0 22s
nginx-1423793266-h03r2 0/1 ContainerCreating 0 23s
nginx-1423793266-jmb6m 0/1 ContainerCreating 0 23s
nginx-1423793266-wj64l 0/1 ContainerCreating 0 23s
nginx-1423793266-xzqdd 0/1 ContainerCreating 0 22s
5.2. Create nginx-service.yaml
vi nginx-service.yaml
# The configuration file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: nginx
5.3. Create the Service for the Pods
./kubectl create -f ./nginx-service.yaml
./kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.10.10.1 <none> 443/TCP 22h
nginx 10.10.10.228 <none> 80/TCP 14s
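As a final smoke test, the Service should answer on port 80 from the node (the CLUSTER-IP below is the one shown above; yours will differ):

```shell
curl -s http://10.10.10.228:80 | head -n 5
# Expect the beginning of the nginx welcome page HTML
```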
This completes the single-node k8s nginx cluster deployment.