
k8s Setup Guide

What is Kubernetes

Kubernetes is a container cluster management system open-sourced by Google. It builds on Docker, and with Kubernetes you can conveniently manage the containers running on many Docker hosts.

Its main features:

1) Abstracts multiple Docker hosts into one pool of resources and manages containers as a cluster, including task scheduling, resource management, elastic scaling, and rolling upgrades.

2) Uses an orchestration system (YAML files) to build container clusters quickly, provides load balancing, and solves the problems of linking and communication between containers.

3) Automatically manages and heals containers. Simply put: if a cluster is created with ten containers and one of them exits abnormally, Kubernetes restarts or reschedules containers so that exactly ten are always running, and kills any extras.

Kubernetes roles:

1) Pod

A Pod is the smallest unit of operation in kubernetes; a Pod consists of one or more containers.

The containers of a Pod always run on the same host and share the same volumes, network, and namespace.

2) ReplicationController (RC)

An RC manages Pods. An RC can control one or more Pods; once it is created, the system creates as many Pods as the defined replica count. At runtime, if the number of Pods drops below that count, stopped Pods are restarted or new ones are scheduled; if there are too many, the extras are killed. The number of running Pods can also be scaled dynamically.

An RC is associated with its Pods through labels. During a rolling upgrade, the RC replaces the Pods to be updated one at a time, as sketched below.
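
As an illustration, a minimal sketch of such a rolling upgrade using the kubectl of this era; the RC name my-rc and the target image are hypothetical, not part of this setup:

# Replace the Pods managed by my-rc one at a time with a new image,
# waiting 10s between Pods (names here are assumptions for illustration)
kubectl rolling-update my-rc --image=nginx:1.9.1 --update-period=10s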

3) Service

A Service defines an abstract resource over a logical set of Pods whose containers provide the same functionality. Membership of the set is determined by the defined label selector. When a Service is created, it is assigned a Cluster IP; this IP plus the defined port gives the set a single unified access endpoint with load balancing.

4) Label

A Label is a key/value pair used to distinguish Pods, Services, and RCs.

A Pod, Service, or RC can carry multiple labels, but each label key may appear only once.

Labels are mainly used to route requests arriving at a Service to the backend set of Pods that provide the service; a minimal sketch follows.
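
A minimal YAML sketch of this wiring, with hypothetical names (nginx-svc, label app=nginx): the Service's selector must match the labels carried by the backend Pods.

# Write a hypothetical Service manifest; requests to it are forwarded to
# every Pod labeled app=nginx
cat <<EOF > nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - port: 80
EOF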

Kubernetes components:

1) kubectl

The client command-line tool. It formats received commands and sends them to kube-apiserver; it is the operational entry point for the whole system.

2) kube-apiserver

The control entry point of the whole system, exposing its interface as a REST API service.

3) kube-controller-manager

Runs the system's background tasks, including tracking node status, maintaining the Pod count, and managing the association between Pods and Services.

4) kube-scheduler

Responsible for node resource management; it accepts Pod-creation tasks from kube-apiserver and assigns them to nodes.

5) etcd

Responsible for service discovery and configuration sharing between nodes.

6) kube-proxy

Runs on every compute node as the network proxy for Pods. It periodically fetches Service information from etcd and applies the corresponding policies.

7) kubelet

Runs on every compute node as the agent. It accepts the Pod tasks assigned to its node and manages the containers, periodically collects container status, and reports back to kube-apiserver.

8) DNS

An optional DNS service that creates a DNS record for every Service object, so that all Pods can reach Services by name (see the quick check below).
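
As a quick check once the DNS add-on runs, a Service name should resolve from inside any Pod; a sketch assuming a hypothetical running Pod named busybox (with nslookup available) and a Service named nginx-svc:

# Resolve a Service name from inside the cluster
kubectl exec busybox -- nslookup nginx-svc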

Basic deployment steps:

1) Install docker on the minion nodes

2) Configure cross-host container communication on the minion nodes

3) Deploy and start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler components on the master node

4) Deploy and start the kubelet and kube-proxy components on the minion nodes

Note: if docker is not installed on a minion host, starting kubelet reports errors like the following:

W0116 23:36:24.205672    2589 server.go:585] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.

W0116 23:36:24.205751    2589 server.go:547] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.

I0116 23:36:24.205817    2589 plugins.go:71] No cloud provider specified.
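
To rule this out ahead of time, confirm that docker is installed and running on each minion before starting kubelet; a minimal check:

# Install docker if the package is missing, then make sure the daemon is up
rpm -q docker || yum install -y docker
systemctl is-active docker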


1. Environment and preparation:

1.1 Host operating system

  The hosts run 64-bit CentOS 7.3; details below.

[root@localhost ~]# uname -a

Linux localhost.localdomain 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@localhost ~]# cat /etc/redhat-release

CentOS Linux release 7.3.1611 (Core)

1.2 Host information

  Three machines are used to deploy the k8s runtime environment:

 

Role (components)         Hostname      IP
Master, etcd, registry    k8s-master    10.0.251.148
Node1                     k8s-node-1    10.0.251.153
Node2                     k8s-node-2    10.0.251.155

 

  Set the hostname on each of the three machines:

On the master, run:

[root@localhost ~]# hostnamectl --static set-hostname k8s-master

On node1, run:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1

On node2, run:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2

  On all three machines, add the hosts entries by running the following:

echo '10.0.251.148    k8s-master

10.0.251.148    etcd

10.0.251.148    registry

10.0.251.153    k8s-node-1

10.0.251.155    k8s-node-2' >> /etc/hosts
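
To confirm that every alias resolves before moving on, a quick check on each machine (using the names defined above):

# Each name should map to the IP configured in /etc/hosts
getent hosts k8s-master etcd registry k8s-node-1 k8s-node-2
ping -c 1 etcd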

1.3 Disable the firewall on all three machines

systemctl disable firewalld.service

systemctl stop firewalld.service
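
To verify the firewall is really off on each machine:

systemctl is-active firewalld   # should print "inactive"
firewall-cmd --state            # should print "not running"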

 

2. Deploy etcd

k8s depends on etcd at runtime, so etcd must be deployed first. Here it is installed with yum:

[root@k8s-master ~]# yum install etcd -y

The default configuration file of the yum-installed etcd is /etc/etcd/etcd.conf. Edit it and change the settings shown below (ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS are the values that differ from the defaults):

[root@k8s-master ~]# vi /etc/etcd/etcd.conf

 

# [member]

ETCD_NAME=master

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

#ETCD_WAL_DIR=""

#ETCD_SNAPSHOT_COUNT="10000"

#ETCD_HEARTBEAT_INTERVAL="100"

#ETCD_ELECTION_TIMEOUT="1000"

#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"

#ETCD_MAX_SNAPSHOTS="5"

#ETCD_MAX_WALS="5"

#ETCD_CORS=""

#

#[cluster]

#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."

#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"

#ETCD_INITIAL_CLUSTER_STATE="new"

#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"

#ETCD_DISCOVERY=""

#ETCD_DISCOVERY_SRV=""

#ETCD_DISCOVERY_FALLBACK="proxy"

#ETCD_DISCOVERY_PROXY=""

Start etcd and verify its status:

[root@k8s-master ~]# systemctl start etcd

[root@k8s-master ~]# etcdctl set testdir/testkey0 0

0

[root@k8s-master ~]# etcdctl get testdir/testkey0

0

[root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379

cluster is healthy

[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379

cluster is healthy

Further reading: for multi-node etcd cluster deployment, see http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html

 

3. Deploy the master

3.1 Install Docker

[root@k8s-master ~]# yum install docker

Edit the Docker configuration file so that images can be pulled from the local registry:

[root@k8s-master ~]# vim /etc/sysconfig/docker

 

#/etc/sysconfig/docker

 

# Modify these options if you want to change the way the docker daemon runs

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

if [ -z "${DOCKER_CERT_PATH}" ]; then

    DOCKER_CERT_PATH=/etc/docker

fi

OPTIONS='--insecure-registry registry:5000'

Enable the service at boot and start it:

[root@k8s-master ~]# chkconfig docker on

[root@k8s-master ~]# service docker start
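
Once docker is up, it is worth confirming that the insecure-registry option took effect (the exact docker info layout varies by version, but registry:5000 should appear in the insecure registries section):

docker info | grep -i -A1 insecure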

3.2 Install kubernetes

[root@k8s-master ~]# yum install kubernetes

3.3 Configure and start kubernetes

The following components need to run on the kubernetes master:

Kubernetes API Server

Kubernetes Controller Manager

Kubernetes Scheduler

Accordingly, change the settings indicated below in the following configuration files:

3.3.1 /etc/kubernetes/apiserver

[root@k8s-master ~]# vim /etc/kubernetes/apiserver

 

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

 

# The address on the local server to listen to.

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

 

# The port on the local server to listen on.

KUBE_API_PORT="--port=8080"

 

# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

 

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

 

# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

 

# default admission control policies

#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

 

# Add your own!

KUBE_API_ARGS=""

3.3.2 /etc/kubernetes/config

[root@k8s-master ~]# vim /etc/kubernetes/config

 

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

 

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

 

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

 

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://k8s-master:8080"

Enable the services at boot and start them:

[root@k8s-master ~]# systemctl enable kube-apiserver.service

[root@k8s-master ~]# systemctl start kube-apiserver.service

[root@k8s-master ~]# systemctl enable kube-controller-manager.service

[root@k8s-master ~]# systemctl start kube-controller-manager.service

[root@k8s-master ~]# systemctl enable kube-scheduler.service

[root@k8s-master ~]# systemctl start kube-scheduler.service
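
With the three master components started, a quick functional check against the insecure API port should succeed before continuing (output details vary by version):

# The apiserver should answer with its version, and the control-plane
# components should report Healthy
curl http://k8s-master:8080/version
kubectl -s http://k8s-master:8080 get componentstatuses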

 

4. Deploy the nodes

4.1 Install docker

  See section 3.1.

4.2 Install kubernetes

  See section 3.2.

4.3 Configure and start kubernetes

  The following components need to run on each kubernetes node:

Kubelet

Kubernetes Proxy

Accordingly, change the settings indicated below in the following configuration files:

4.3.1 /etc/kubernetes/config

[root@k8s-node-1 ~]# vim /etc/kubernetes/config

 

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

 

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

 

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

 

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://k8s-master:8080"

4.3.2 /etc/kubernetes/kubelet

[root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet

 

###

# kubernetes kubelet (minion) config

 

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=0.0.0.0"

 

# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

 

# You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

 

# location of theapi-server

KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

 

# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

 

# Add your own!

KUBELET_ARGS=""

Enable the services at boot and start them:

[root@k8s-node-1 ~]# systemctl enable kubelet.service

[root@k8s-node-1 ~]# systemctl start kubelet.service

[root@k8s-node-1 ~]# systemctl enable kube-proxy.service

[root@k8s-node-1 ~]# systemctl start kube-proxy.service
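
Before checking from the master, confirm on the node itself that both services came up:

systemctl is-active kubelet kube-proxy   # both should print "active"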

4.4 Check status

  On the master, view the cluster's nodes and their status:

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node

NAME         STATUS    AGE

k8s-node-1   Ready     3m

k8s-node-2   Ready     16s

[root@k8s-master ~]# kubectl get nodes

NAME         STATUS    AGE

k8s-node-1   Ready     3m

k8s-node-2   Ready     43s

At this point a kubernetes cluster has been set up, but it cannot yet work properly; continue with the following steps.

 

5. Create the overlay network: Flannel

5.1 Install Flannel

  Run the following command on both the master and the nodes to install it:

[root@k8s-master ~]# yum install flannel

The installed version is 0.0.5.

5.2 Configure Flannel

On both the master and the nodes, edit /etc/sysconfig/flanneld and change FLANNEL_ETCD_ENDPOINTS as shown below:

[root@k8s-master ~]# vi /etc/sysconfig/flanneld

[[email protected]~]# vi /etc/sysconfig/flanneld

 

# Flanneld configuration options

 

# etcd url location.  Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

 

# etcd config key.  This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/atomic.io/network"

 

# Any additional options that you want to pass

#FLANNEL_OPTIONS=""

5.3 Configure the flannel key in etcd

Flannel stores its configuration in etcd so that multiple Flannel instances stay consistent, which requires setting the following etcd entry. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.)

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

{ "Network": "10.0.0.0/16" }
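
The key can be read back to confirm it was stored:

[root@k8s-master ~]# etcdctl get /atomic.io/network/config
{ "Network": "10.0.0.0/16" }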

5.4 Start

  After starting Flannel, docker and the kubernetes services must be restarted in turn.

  On the master, run:

systemctl enable flanneld.service

systemctl start flanneld.service

service docker restart

systemctl restart kube-apiserver.service

systemctl restart kube-controller-manager.service

systemctl restart kube-scheduler.service

  On the nodes, run:

systemctl enable flanneld.service

systemctl start flanneld.service

service docker restart

systemctl restart kubelet.service

systemctl restart kube-proxy.service
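
After the restarts, a few checks on any machine show whether flannel is handing out subnets (flannel0 is the interface created by the default UDP backend; the name varies with the backend):

# The subnet allocated to this host
cat /run/flannel/subnet.env
# The flannel interface should carry an address from 10.0.0.0/16
ip addr show flannel0
# One subnet entry per host should be registered in etcd
etcdctl ls /atomic.io/network/subnets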

 

 

6. Validate configuration files

When you are unsure whether a declared configuration file is written correctly, you can use the following command to validate it:

 

$ kubectl create -f ./hello-world.yaml --validate
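
For completeness, a minimal sketch of what such a manifest could look like (the content of hello-world.yaml here is a hypothetical example, not from the original setup); once written, the command above validates it against the API schema before creating the Pod:

# Write a minimal hypothetical Pod manifest
cat <<EOF > hello-world.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello
    image: nginx
EOF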