
Setting up a k8s 1M 2N (1 master, 2 node) cluster


Introduction

CentOS now includes Kubernetes in its package repositories, so this installation is done with yum.

Environment

Machines and environment

This lab uses three machines, set up as 1 master and 2 nodes.
The OS is CentOS 7.4, with firewalld, SELinux, and iptables disabled (the commands are sketched after the component list below).
The master runs four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
Each node runs three components: kube-proxy, kubelet, and flannel.

1. kube-apiserver: runs on the master node and accepts user requests.
2. kube-scheduler: runs on the master node and handles resource scheduling, i.e. deciding which node a pod is placed on.
3. kube-controller-manager: runs on the master node and contains the Replication Manager, Endpoints Controller, Namespace Controller, Node Controller, and other controllers.
4. etcd: a distributed key-value store that holds the shared resource objects of the whole cluster.
5. kubelet: runs on each node and maintains the pods running on that particular host.
6. kube-proxy: runs on each node and acts as a service proxy.
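
The environment section above says firewalld and SELinux are disabled; a minimal sketch of the commands on CentOS 7, run on all three machines:

systemctl stop firewalld
systemctl disable firewalld
setenforce 0                                                            # turn SELinux off for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # keep it off after a reboot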

Installation and deployment

The clocks on the three machines must stay synchronized, so install NTP (run the following commands on all three machines):

# yum -y install ntp
# ntpdate time1.aliyun.com
# systemctl start ntpd
# systemctl enable ntpd
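
Once ntpd is running, you can confirm it is actually syncing by listing its upstream peers:

# ntpq -p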

Master deployment

Install the master services

yum -y install kubernetes-master etcd  

Configure etcd on the master

[root@localhost ~]# cat /etc/etcd/etcd.conf 
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"   # data directory for this member: node ID, cluster ID, initial cluster configuration and snapshots; WAL files also live here unless --wal-dir is set
#ETCD_WAL_DIR=""   # directory for this member's WAL files; if set, WAL files are stored separately from the other data
ETCD_LISTEN_PEER_URLS="http://192.168.56.200:2380"   # URLs to listen on for peer traffic (communication with the other etcd members)
ETCD_LISTEN_CLIENT_URLS="http://192.168.56.200:2379,http://127.0.0.1:2379"   # URLs to listen on for client traffic
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"   # member name
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.200:2380"   # peer URLs advertised to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.200:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.56.200:2380,etcd2=http://192.168.56.201:2380,etcd3=http://192.168.56.202:2380"   # all members of the initial cluster
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"   # token identifying the initial cluster
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
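
ETCD_INITIAL_CLUSTER above lists etcd2 and etcd3 on the two node machines, so etcd on each node needs a matching configuration. A minimal sketch of the changed lines in /etc/etcd/etcd.conf on the first node (192.168.56.201); the second node uses ETCD_NAME="etcd3" and 192.168.56.202:

ETCD_LISTEN_PEER_URLS="http://192.168.56.201:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.56.201:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.201:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.56.200:2380,etcd2=http://192.168.56.201:2380,etcd3=http://192.168.56.202:2380"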

Configure kube-apiserver

[root@localhost ~]# cat /etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.200:2379,http://192.168.56.201:2379,http://192.168.56.202:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

Configure controller-manager (optional; not configured in this lab). These options live in /etc/kubernetes/controller-manager on the master:

# Add your own!  
#KUBE_CONTROLLER_MANAGER_ARGS=""  
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"  

Configure the common config file

[root@localhost ~]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

If port 8080 is already in use, another port can be used instead; see the sketch below.
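
A minimal sketch of moving the apiserver to port 8081 (an example value): the port has to change in /etc/kubernetes/apiserver and in KUBE_MASTER everywhere it is referenced, i.e. /etc/kubernetes/config on the master and on every node.

# /etc/kubernetes/apiserver
KUBE_API_PORT="--port=8081"

# /etc/kubernetes/config (127.0.0.1 on the master, 192.168.56.200 on the nodes)
KUBE_MASTER="--master=http://127.0.0.1:8081"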

Start the services

systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager 
systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager  
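
Once the services are up, a quick sanity check from the master (the exact output depends on the packaged versions, and etcd only reports all three members healthy after the node machines below are set up):

etcdctl cluster-health
curl http://127.0.0.1:8080/version        # the apiserver answers on the insecure port
kubectl get componentstatuses             # scheduler, controller-manager and etcd should report Healthy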

etcd network configuration

Define the network configuration in etcd; the flannel service on each node will pull this configuration.
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
# the key can live under any directory you choose, as long as it matches the FLANNEL_ETCD_PREFIX configured on the nodes

Or put it in a script:

[root@localhost ~]# cat etc.sh 
etcdctl mkdir /atomic.io/network  
etcdctl mk /atomic.io/network/config "{ \"Network\": \"172.17.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

Create the network that flannel will allocate subnets from:
# run this only against etcd on the master
etcdctl mk /coreos.com/network/config '{"Network": "10.1.0.0/16"}'
# to recreate it, delete the existing key first
etcdctl rm /coreos.com/network/ --recursive
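
Whichever variant is used, you can read the key back to confirm what flannel will see (shown for the /atomic.io/network prefix the nodes below query):

etcdctl get /atomic.io/network/config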

Node deployment

yum -y install kubernetes-node etcd flannel docker 

Modify the node config

This file is the same on all nodes.

[root@localhost ~]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.56.200:8080"

Configure kubelet

The listing below is from the first node; the one line that differs on the second node is shown after it.

[root@localhost ~]# cat /etc/kubernetes/kubelet 
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.56.201"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.56.200:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
#KUBELET_ARGS=""
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"
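
On the second node the only line that differs in this file is the hostname override:

KUBELET_HOSTNAME="--hostname-override=192.168.56.202"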

Modify flannel

Point flannel at the etcd service by editing /etc/sysconfig/flanneld:
[root@localhost ~]# cat /etc/sysconfig/flanneld 
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.56.200:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network" # this prefix must match the key path written into etcd above

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""


Start the services

systemctl restart flanneld docker
systemctl start kubelet kube-proxy
systemctl enable flanneld kubelet kube-proxy
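
flannel writes the subnet it leased from etcd to a file that the docker service reads on start-up; checking it confirms flannel reached etcd (the path is flannel's default and is assumed here):

cat /run/flannel/subnet.env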

Checking the network interfaces now shows both a docker0 and a flannel.1 interface:

[root@localhost ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.30.31.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:ddff:fe03:591c  prefixlen 64  scopeid 0x20<link>
        ether 02:42:dd:03:59:1c  txqueuelen 0  (Ethernet)
        RX packets 69  bytes 4596 (4.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.201  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::d203:ac67:53b0:897b  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::30c1:3975:3246:cc1f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f1:de:cb  txqueuelen 1000  (Ethernet)
        RX packets 810383  bytes 126454887 (120.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 796437  bytes 163368198 (155.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.30.31.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::54ee:7aff:fe11:ba95  prefixlen 64  scopeid 0x20<link>
        ether 56:ee:7a:11:ba:95  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 1719  bytes 89584 (87.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1719  bytes 89584 (87.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Verification

[root@localhost ~]# kubectl get node
NAME             STATUS    AGE
192.168.56.201   Ready     4d
192.168.56.202   Ready     4d
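
As a final smoke test, you can schedule a workload and confirm it lands on the nodes. This sketch uses the old kubectl run syntax shipped with these packages, and nginx is only an example image:

kubectl run nginx --image=nginx --replicas=2 --port=80
kubectl get pods -o wide      # both pods should reach Running, spread across 192.168.56.201 and 192.168.56.202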
