Kubernetes Environment Setup (Manual Approach)
Contents
1. Environment preparation (2 hosts)
2. Installation procedure
3. Troubleshooting
4. Summary
1. Environment Preparation (2 Hosts)
OS: CentOS 7.3 x64
Network: private LAN (VPC)
Hosts:
master: 172.16.0.17
minion-1: 172.16.0.7
1.1. Update the hosts files
Modify the hosts file on both master and minion-1 so that each host can be reached by hostname; this makes later updates and migrations easier:
echo "172.16.0.17 k8s-master
172.16.0.7 k8s-minion-1" >> /etc/hosts
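A quick sanity check that name resolution works (run from the master; swap the hostname to test the reverse direction):
ping -c 2 k8s-minion-1    # should resolve to 172.16.0.7 and receive replies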
1.2. Disable SELinux
Edit /etc/sysconfig/selinux and change
SELINUX=enforcing to SELINUX=disabled
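The file change only takes effect after a reboot; as a sketch, the same change can be scripted and applied to the running system right away (setenforce 0 only switches to permissive mode for the current session):
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux    # persistent, effective after reboot
setenforce 0    # permissive immediately, until reboot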
1.3. Configure iptables
If firewalld is running, stop and disable it first; then install iptables-services, set it to start on boot, and open the ports listed below.
Install iptables:
yum install iptables-services        # install
systemctl start iptables.service     # start the firewall so the configuration takes effect
systemctl enable iptables.service    # start the firewall on boot
Ports to open:
-- master --
2379: etcd client port; open it to the master and minions (otherwise flanneld may fail to start)
2380: etcd peer port; open it to the master and minions
8080: kube-apiserver external service port
-- minion --
10250: kubelet listening port (without it the minion cannot be discovered)
iptables reference:
#master
iptables -P INPUT DROP
iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 2379 -j ACCEPT
iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 2380 -j ACCEPT
iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 8080 -j ACCEPT
#minion
iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 10250 -j ACCEPT
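Note that with the default INPUT policy set to DROP you will also want ACCEPT rules for SSH (port 22), loopback, and established connections before applying this on a remote machine. Rules inserted this way live only in memory and are lost on reboot; with iptables-services installed they can be persisted to /etc/sysconfig/iptables:
service iptables save    # write the current rules so they survive a reboot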
To run without any host firewall instead, disable both services (the service unit installed by the iptables-services package is named iptables, not iptables-services):
systemctl disable iptables firewalld
systemctl stop iptables firewalld
2. Installation Procedure
(master&minion): indicates a step that must be performed on both the master and the minion;
2.1. Add the package repository (master&minion)
Create the file /etc/yum.repos.d/virt7-docker-common-release.repo:
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
2.2. Install the required packages (master&minion)
Package list and versions
If you need a specific version of a package, download it from the project's official site;
- kubernetes-1.5.2
- etcd-3.2.11
- flannel-0.7.1
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
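To confirm which versions were actually pulled in, query the RPM database (a quick check; the package names match what the command above installs):
rpm -q kubernetes etcd flannel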
2.3. Edit the shared Kubernetes config (master&minion)
vim /etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
#ETCD
KUBE_ETCD_SERVERS="--etcd_servers=http://k8s-master:2379"
2.4. Edit the etcd config (master)
vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
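Once etcd is running (step 2.9), its health can be checked from either host; a sketch using the v2 etcdctl bundled with these packages:
etcdctl --endpoints=http://k8s-master:2379 cluster-health    # expect "cluster is healthy"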
2.5. Edit the apiserver config (master)
vim /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
2.6. Edit the flanneld config (master)
vim /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://k8s-master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
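flanneld reads its overlay network definition from etcd under FLANNEL_ETCD_PREFIX and will fail to start until that key exists. A minimal sketch to create it once etcd is up (the 172.30.0.0/16 range is an assumption; use any range that overlaps neither the host network nor the service network):
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config '{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'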
2.7. Edit the kubelet config on the node (minion)
vim /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=k8s-minion-1"
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
# Add your own!
KUBELET_ARGS=""
2.8. Edit the flanneld config on the node (minion)
vim /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://k8s-master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
2.9. Start the services (master&minion)
master
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
minion
for SERVICES in kube-proxy kubelet flanneld docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
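If any unit fails, its journal is the first place to look; for example (kube-apiserver here is just an illustration, substitute the failing unit):
journalctl -u kube-apiserver --no-pager -n 50    # last 50 log lines for the unit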
2.10. Configure kubectl (minion)
If all services started successfully, set up the global kubectl context:
kubectl config set-cluster default-cluster --server=http://k8s-master:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
2.11. Verify the installation
Check that all nodes have been discovered and report a healthy status:
kubectl get nodes
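If everything is wired up correctly, the node list should look roughly like this (illustrative output for this two-host setup):
NAME           STATUS    AGE
k8s-minion-1   Ready     2m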
3. Troubleshooting
A pod stays in ContainerCreating with: open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory
Fix:
Run yum install *rhsm* on both the master and the minions; the missing certificate is shipped by the rhsm packages and is needed to pull the pod-infrastructure image from registry.access.redhat.com.
The container starts, but Docker keeps logging restarting failed docker container and the service has no endpoints
Fix:
kubectl delete the service and the deployment, then recreate them
Services and pods are healthy, but the flannel gateways cannot reach each other: getsockopt: connection timed out
Fix:
Firewall rules on the Kubernetes nodes are blocking the pod IPs; flush the iptables rules:
for SERVICES in kube-proxy kubelet flanneld docker; do
systemctl stop $SERVICES
systemctl status $SERVICES
done
iptables -L -n
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -L -n
service iptables save
systemctl restart iptables
for SERVICES in kube-proxy kubelet flanneld docker; do
systemctl restart $SERVICES
systemctl status $SERVICES
done
Pod creation fails: No API token found for service account
open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory / No API token found for service account
Fix:
To get your setup working, do the same thing local-up-cluster.sh does.
Generate a signing key:
openssl genrsa -out /tmp/serviceaccount.key 2048
Update /etc/kubernetes/apiserver:
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"
Update /etc/kubernetes/controller-manager:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"
Pods cannot reach the apiserver from inside the cluster (no usable secret):
x509: cannot validate certificate for 10.254.0.1 because it doesn't contain any IP SANs
1. Copy the system openssl.cnf, then edit it and add:
----------------------------
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
# which IP to list depends on the --service-cluster-ip-range setting
# (the kubernetes service takes the first address of that range)
IP.1 = 10.254.0.1
2. Generate the certificates using the custom config:
openssl genrsa -out ca.key 2048    # create the CA key used by the next command
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-master-url.com" -days 5000 -out ca.crt -config openssl.cnf
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=k8s-master-in" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 5000 -extensions v3_req -extfile openssl.cnf
openssl x509 -noout -text -in ./server.crt    # inspect the result; verify the SAN section
3. Update the Kubernetes configs
----------------------
vim /etc/kubernetes/apiserver (append to KUBE_API_ARGS):
--client_ca_file=/var/run/kubernetes/ca.crt --tls-private-key-file=/var/run/kubernetes/server.key --tls-cert-file=/var/run/kubernetes/server.crt
----------------------------
vim /etc/kubernetes/controller-manager (append to KUBE_CONTROLLER_MANAGER_ARGS):
--service_account_private_key_file=/var/run/kubernetes/server.key --root-ca-file=/var/run/kubernetes/ca.crt
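The flags above point at /var/run/kubernetes/; the files generated in step 2 must be copied there before restarting (a sketch, assuming the certificates sit in the current directory; the kube user is created by the kubernetes RPMs):
mkdir -p /var/run/kubernetes
cp ca.crt server.key server.crt /var/run/kubernetes/
chown kube:kube /var/run/kubernetes/*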
4. Delete the old secrets and restart the services
kubectl get secrets --all-namespaces
kubectl delete secret default-XXX
systemctl restart XXXX
4. Summary
This article has documented the basic workflow of a manual Kubernetes deployment; a real development or production environment requires considerably more care. Meanwhile, the rapid growth of the container ecosystem has produced many platforms and tools that automate Kubernetes cluster deployment, such as kubeadm and Rancher; for teams just moving to a distributed, service-oriented architecture, these can greatly reduce the initial friction. In a follow-up we will look at Rancher, the full-stack container management platform we have been using heavily of late, and use it to deploy Kubernetes quickly.