k8s+docker+calico
enp0s3: 192.168.112.149 (master)
enp0s3: 192.168.112.42  (node1)
enp0s3: 192.168.112.249 (node2)
Note: all three machines run Ubuntu Xenial 16.04 (LTS).
I. Install Docker
Note: Docker requires a 64-bit operating system and a kernel of version 3.10 or newer.
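Both prerequisites can be checked from a shell before installing anything (a small sketch; the version comparison leans on sort -V):

```shell
# Check the two Docker prerequisites noted above
arch=$(uname -m)                 # expect x86_64 on a 64-bit OS
kver=$(uname -r | cut -d- -f1)   # kernel version, e.g. 4.4.0
echo "arch=$arch kernel=$kver"
# version compare via sort -V: if 3.10 sorts first, the running kernel is new enough
if [ "$(printf '3.10\n%s\n' "$kver" | sort -V | head -n1)" = "3.10" ]; then
    echo "kernel is >= 3.10, OK"
else
    echo "kernel is older than 3.10, upgrade before installing Docker"
fi
```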
1. Update package information, make sure APT can fetch over https, and install the CA certificates
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
2. Add the new GPG key
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
3. Add the Docker apt source
echo 'deb https://apt.dockerproject.org/repo ubuntu-xenial main' | sudo tee /etc/apt/sources.list.d/docker.list
Note: this is the source for Ubuntu Xenial 16.04 (LTS); each release has its own source, see the official documentation for details.
4. Update the package index
sudo apt-get update
5. Verify that APT resolves the repository correctly
apt-cache policy docker-engine
Note: the policy output should list apt.dockerproject.org as the source of docker-engine; only then is the repository configured correctly.
6. Update the package index again
sudo apt-get update
7. Install the extra kernel packages (needed for the aufs storage driver)
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
8. Install Docker
sudo apt-get install docker-engine
9. Start the Docker daemon
sudo service docker start
10. Run the hello-world image to verify that Docker works correctly
sudo docker run hello-world
II. Steps to run on every node
1. Pull the images (note: if you are not the root user, remember to use sudo)
docker pull calico/node:v0.23.0
docker pull tristan129/pause-amd64:3.0
docker tag tristan129/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
Extract the kubernets_wangjun.tar archive from the attachment into the home directory of every node.
cp master/calico-node.service /etc/systemd/
systemctl enable /etc/systemd/calico-node.service
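The calico-node.service unit itself comes from the tarball. For reference, a typical unit for calicoctl 0.23 looks roughly like the following (a sketch only; the EnvironmentFile path and restart settings are assumptions, so use the shipped file):

```ini
[Unit]
Description=Calico per-node agent
Requires=docker.service
After=docker.service

[Service]
User=root
EnvironmentFile=/etc/network-environment
ExecStart=/usr/bin/calicoctl node --ip=${NODE_IP} --detach=false
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```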
cp master/calicoctl.0.23.0 master/kubectl.1.4.3 master/kubelet.1.4.3 /usr/bin/
cd /usr/bin
ln -snf kubelet.1.4.3 kubelet
ln -snf kubectl.1.4.3 kubectl
ln -snf calicoctl.0.23.0 calicoctl
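The versioned-binary-plus-symlink pattern above makes a later upgrade a one-line change. A self-contained sketch of the same pattern, run in a scratch directory instead of /usr/bin:

```shell
# Demonstrate the versioned binary + symlink pattern in a scratch dir
dir=$(mktemp -d)
cd "$dir"
touch kubelet.1.4.3                # stands in for the real binary
ln -snf kubelet.1.4.3 kubelet      # "kubelet" now resolves to the 1.4.3 file
readlink kubelet                   # prints: kubelet.1.4.3
# an upgrade is just re-pointing the link:
touch kubelet.1.5.0
ln -snf kubelet.1.5.0 kubelet
readlink kubelet                   # prints: kubelet.1.5.0
```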
sudo mkdir -p /etc/kubernetes/{manifests,ssl}
Edit /etc/profile and add:
export MASTER_NODE_IP='<fill in the master IP>'
export NODE_IP='<fill in the node IP>'
III. TLS certificates
Generate the certificates on the master node:
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 1000 -out ca.pem -subj "/CN=kube-ca"
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
Note: the openssl.cnf used here is in the master/ directory of the provided tarball.
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 1000 -extensions v3_req -extfile openssl.cnf
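Since the tarball's openssl.cnf is not reproduced here, the following self-contained sketch shows the same CA/apiserver-certificate flow with a minimal stand-in config. The SAN entries (the kubernetes service names and the 10.100.0.1 service IP) are assumptions for illustration; use the shipped openssl.cnf for the real cluster:

```shell
# Self-contained run of the certificate flow above, in a scratch dir
dir=$(mktemp -d)
cd "$dir"

# Minimal stand-in for the tarball's openssl.cnf (SAN values are placeholders)
cat > openssl.cnf <<'EOF'
[req]
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_ca]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.100.0.1
IP.2 = 192.168.112.149
EOF

# CA key and self-signed CA certificate
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 1000 -out ca.pem \
    -subj "/CN=kube-ca" -config openssl.cnf -extensions v3_ca

# apiserver key, CSR, and CA-signed certificate with the v3_req SANs
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
    -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out apiserver.pem -days 1000 -extensions v3_req -extfile openssl.cnf

# sanity-check the chain; prints: apiserver.pem: OK
openssl verify -CAfile ca.pem apiserver.pem
```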
cp *.pem /etc/kubernetes/ssl/
cp master/kubernetes-master.manifest /etc/kubernetes/manifests/
cp master/calico-etcd.manifest /etc/kubernetes/manifests/
Start the service:
systemctl start calico-node
At this point calico-node will not come up yet, because it cannot reach etcd on port 6666 (that etcd instance is only started later, by the kubelet).
Check the calico-node service status:
sudo systemctl status calico-node.service
You can also look at the log in /var/log/syslog:
Nov 22 15:56:08 master calicoctl[3045]: ERROR: Could not connect to etcd at 192.168.112.123:6666: Connection to etcd failed due to MaxRetryError("HTTPConnectionPool(host='192.168.112.123', port=6666): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f962f0e1510>: Failed to establish a new connection: [Errno 111] Connection refused',))",)
Nov 22 15:56:08 master systemd[1]: calico-node.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 15:56:08 master systemd[1]: calico-node.service: Unit entered failed state.
Nov 22 15:56:08 master systemd[1]: calico-node.service: Failed with result 'exit-code'.
Nov 22 15:56:18 master systemd[1]: calico-node.service: Service hold-off time over, scheduling restart.
Nov 22 15:56:18 master systemd[1]: Stopped Calico per-node agent.
Nov 22 15:56:18 master systemd[1]: Started Calico per-node agent.
Nov 22 15:56:18 master calicoctl[3066]: ERROR: Could not connect to etcd at 192.168.112.123:6666: Connection to etcd failed due to MaxRetryError("HTTPConnectionPool(host='192.168.112.123', port=6666): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6d36d0e510>: Failed to establish a new connection: [Errno 111] Connection refused',))",)
Nov 22 15:56:18 master systemd[1]: calico-node.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 15:56:18 master systemd[1]: calico-node.service: Unit entered failed state.
Nov 22 15:56:18 master systemd[1]: calico-node.service: Failed with result 'exit-code'.
cp master/kubelet.service /etc/systemd/
systemctl enable /etc/systemd/kubelet.service
systemctl start kubelet.service
At this point docker ps should show the apiserver, proxy, scheduler, controller-manager and pause containers, plus two etcd containers (one used by the kubernetes cluster, the other by calico).
On the node:
cd client_node/
Generate the private key and certificate signing request:
openssl genrsa -out worker-key.pem 2048
export WORKER_IP=<fill in the node IP>
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
mv worker-key.pem /etc/kubernetes/ssl
chmod 600 /etc/kubernetes/ssl/worker-key.pem
chown root:root /etc/kubernetes/ssl/worker-key.pem
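The worker-openssl.cnf ships in client_node/; it is what pulls ${WORKER_IP} into the certificate as a subject alternative name. A typical version looks like this (an assumption for reference; check the shipped file):

```ini
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP
```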
worker.csr must now be sent to the master node for signing:
scp worker.csr ${MASTER_NODE_IP}:~/master/
IV. Sign the CSR on the master node
export WORKER_IP=<fill in the node IP>
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
Then copy apiserver-key.pem, apiserver.pem, ca.pem and worker.pem with scp to the node's /etc/kubernetes/ssl/ (worker-key.pem is already there from the previous step).
V. On the node
mkdir -p /etc/cni/net.d
mkdir -p /opt/cni/bin
cp client_node/10-calico.conf /etc/cni/net.d/    (replace ${MASTER_NODE_IP} inside the file)
cp client_node/cni_network.tar /opt/cni/bin/ and extract it there
cp client_node/kubelet.service /etc/systemd/
cp client_node/network-environment /etc/    (replace ${MASTER_NODE_IP} and ${NODE_IP} inside the file)
export ETCD_AUTHORITY=${MASTER_NODE_IP}:6666    (replace ${MASTER_NODE_IP})
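The 10-calico.conf copied above is the CNI network config the kubelet hands to the calico plugin. A typical file from that calico release looks roughly like this (a sketch; the shipped file is authoritative — note etcd_authority pointing at the master's calico etcd on port 6666):

```json
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_authority": "${MASTER_NODE_IP}:6666",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    },
    "policy": {
        "type": "k8s"
    }
}
```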
calicoctl pool remove 192.168.0.0/16
calicoctl pool add 172.26.0.0/16
calicoctl pool show
systemctl enable /etc/systemd/kubelet.service
systemctl start kubelet.service
At this point docker ps should show that the proxy container is up.
VI. Install the add-on components (dashboard and kube-dns), on the master
cd addon
kubectl create -f kubernetes-dashboard.yaml
kubectl create -f kubdns-rc.yaml
kubectl create -f kubdns-svc.yaml
Done.
From any node in the cluster you can now reach cluster IPs and pod IPs.
calicoctl status shows that the nodes form a full BGP mesh. The full mesh can be turned off so that the nodes act as RR clients of a layer-3 switch serving as route reflector. For containers to reach applications outside the cluster (or to be reached via SNAT), enable outgoing NAT on the pool:
calicoctl pool add 172.26.0.0/16 --nat-outgoing
Now your containers can reach external applications.
If external clients need to access your services, use NodePort or LoadBalancer. V2 of this guide will cover the BGP route-reflector architecture and master high availability.