
Setting Up Knative on Minikube in China, with a Demo

1. What Is Serverless? What Is Knative?

What is Serverless? Below is the definition the CNCF gives for the Serverless architecture:

“Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment”

From this definition, a Serverless architecture should have the following characteristics:

  • The infrastructure for building and running applications is provided for you
  • No service state management is required
  • A sufficiently fine-grained deployment model
  • Scalable, and billed according to actual usage

Of these characteristics, Kubernetes already provides excellent support for everything except the sufficiently fine-grained deployment model. Fortunately, whether to make Kubernetes fully support the Serverless architecture or simply to make Google Cloud more attractive to developers, Google announced Knative at Google Cloud Next 2018, calling it "a Kubernetes-based platform to build, deploy, and manage modern Serverless architectures". Knative's main role is depicted in the figure below:

(figure: the role of Knative)
Knative aims to provide reusable implementations of "common patterns and best practices". The components currently available are:

  • Build: Cloud-native source to container orchestration
  • Eventing: Management and delivery of events
  • Serving: Request-driven compute that can scale to zero

1.1 Build: The Build System

Knative's build work is designed to run inside Kubernetes, integrating it tightly with the rest of the Kubernetes ecosystem; it also aims to be a universal, standardized build component that can be used across a wide range of scenarios. As the official documentation puts it, the Build system exists to define standardized, portable, reusable, and efficient build methods. Knative provides the Build CRD, which lets users define a build process in a YAML file. A typical Build configuration looks like this:

apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: kaniko-build
spec:
  serviceAccountName: build-bot
  source:
    git:
      url: https://github.com/my-user/my-repo
      revision: master
  template:
    name: kaniko
    arguments:
    - name: IMAGE
      value: us.gcr.io/my-project/my-app
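
Assuming the kaniko build template and the build-bot service account referenced above already exist in the cluster, submitting and inspecting this build would look roughly like the following sketch:

kubectl apply -f build.yaml
kubectl get builds
kubectl describe build kaniko-build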

1.2 Serving: The Serving System

The core job of Serving is to get an application running and serve traffic for it. Its basic features include:

  • Automatically starting and tearing down containers
  • Generating the network-facing objects (Service, Ingress, etc.) from a name
  • Monitoring application requests and scaling automatically
  • Supporting blue-green releases and rollback, simplifying release workflows

Knative Serving is built on top of Kubernetes and Istio: Kubernetes manages the containers (Deployment, Pod), while Istio manages the network routing (VirtualService, DestinationRule).
The figure below shows how the Knative Serving components relate to one another:

(figure: relationships among the Knative Serving components)

1.3 Eventing: The Eventing System

Knative defines a number of event-related concepts:

  • EventSource: an event source, i.e. an external system that can produce events.
  • Feed: binds an EventType and EventSource to a corresponding Channel.
  • Channel: an abstraction layer over messaging; Kafka, RabbitMQ, or Google PubSub can serve as the concrete backend. A channel name is similar to a topic in a messaging cluster and can be used to decouple event sources from functions. When an event occurs it is sinked into a channel, and the data in the channel is then consumed by the functions behind it.
  • Subscription: binds a Channel to the functions behind it; a single Channel can be bound to multiple Knative Services.

Three event sources are currently supported: GitHub (merge events, push events, and so on), Kubernetes (events), and Google PubSub (messaging), with more to be added over time. A sketch of how these pieces wire together follows.
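
The Eventing API was still v1alpha1 when this was written and field names have shifted between releases, so treat the following purely as a hedged sketch (all object names are made up): a Channel backed by the in-memory provisioner, plus a Subscription that delivers the channel's events to a Knative Service.

apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: demo-channel            # illustrative name
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: demo-subscription       # illustrative name
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: demo-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: demo-function       # the Knative Service consuming the events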

1.4 Auto-scaling

Auto-scaling is essentially about making cloud resource usage elastic and enabling pay-per-use billing, so that users get cost-effective cloud services. It has two defining characteristics:

  • Request-driven: scale dynamically with request volume; currently the system's observed concurrent request count is compared against a baseline in the configuration to make scaling decisions.
  • Scale to zero: when there is no traffic, resources are fully released; when requests arrive again, the application is woken back up.

Knative Serving abstracts a set of resource objects for defining and controlling application behavior, implemented as Kubernetes Custom Resource Definitions (CRDs):

  • Service: lifecycle management for an app/function
  • Route: routing management
  • Configuration: defines the desired running state
  • Revision: a point-in-time snapshot of code + configuration; Revisions are immutable, and modifying the code or configuration creates a new Revision

How the four interact is depicted in the figure below; an illustrative Route follows at the end of this section.

(figure: how Service, Route, Configuration, and Revision interact)

A Revision has three lifecycle states:

  • Active: the Revision is up and can handle requests
  • Reserve: after the request count stays at zero for some period, the Revision is marked Reserve, its resources are released, and it scales to zero
  • Retired: the Revision is retired and no longer receives requests

The details of the auto-scaling process itself are not covered here; consult the documentation if you are interested.
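
To make Route and Revision concrete (and the blue-green/rollback point from section 1.2), here is an illustrative v1alpha1 Route that shifts 10% of traffic to a new Revision; the app and Revision names are invented for the example:

apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: my-app                     # invented name
spec:
  traffic:
  - revisionName: my-app-00001     # current Revision keeps 90% of traffic
    percent: 90
  - revisionName: my-app-00002     # new Revision receives 10%
    percent: 10

Rolling back is the same operation in reverse: point 100% of the traffic at the old Revision.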

2. Knative in Practice

Having sketched Knative above, this section walks through a complete deployment. So that readers can reproduce it step by step, it uses a basic cloud server from DiDi Cloud; you can request one and follow the steps below to finish a basic Knative deployment. If you have not registered a DiDi Cloud account yet, you can do so through this link (vouchers available).

2.1 Requesting a Cloud Server

After registering a DiDi Cloud account, request a cloud server with 16 cores, 32 GB of RAM, an 80 GB local disk, and a 500 GB EBS data disk, plus a pay-by-traffic public IP. This generous configuration keeps the rest of the deployment smooth.

First, log in to the server. For security reasons, DiDi Cloud's default login account is dc2-user and direct root login is disabled. The login goes as follows:

$ ssh dc2-user@116.85.49.244
Warning: Permanently added '116.85.49.244' (ECDSA) to the list of known hosts.
dc2-user@116.85.49.244's password:
[dc2-user@10-255-1-243 ~]$
[dc2-user@10-255-1-243 ~]$ sudo su
[root@10-255-1-243 dc2-user]#

Once logged in, switch to the root account with sudo su. When purchasing the server we added a 500 GB data disk; since it has never been mounted, it must be formatted before use. Initialization starts like this:

[root@10-255-1-243 dc2-user]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0   80G  0 disk
└─vda1 253:1    0   80G  0 part /
vdb    253:16   0  500G  0 disk

vdb is the newly purchased EBS disk. The full procedure is described in the mount-a-cloud-disk tutorial; a rough sketch of the commands follows, and the df -h output after that shows the data disk mounted:
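
For reference, a typical sequence is sketched below, assuming the whole disk becomes a single ext4 partition mounted at /data (adapt the device names to your own lsblk output):

# partition, format, mount, and persist the mount across reboots
parted --script /dev/vdb mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/vdb1
mkdir -p /data
mount /dev/vdb1 /data
echo '/dev/vdb1 /data ext4 defaults 0 0' >> /etc/fstab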

[root@10-255-1-243 dc2-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        80G  1.6G   79G    2% /
devtmpfs        3.8G     0  3.8G    0% /dev
tmpfs           3.9G     0  3.9G    0% /dev/shm
tmpfs           3.9G   17M  3.9G    1% /run
tmpfs           3.9G     0  3.9G    0% /sys/fs/cgroup
tmpfs           783M     0  783M    0% /run/user/1001
tmpfs           783M     0  783M    0% /run/user/0
/dev/vdb1       500G   33M  500G    1% /data

The cloud server is now ready; next comes the Knative deployment itself.

2.2 Deploying Knative

Since we bought a bare cloud server, completing the whole Knative deployment takes roughly the following steps:

  • Install Go
  • Install Docker
  • Install kubectl
  • Install Minikube
  • Deploy Istio
  • Deploy Knative Serving / Knative Build

We will work through these components one by one.

2.2.1 Installing Go

To set up the Go environment, first install an older Golang release with yum:

[root@10-255-1-243 dc2-user]# yum install golang
Loaded plugins: fastestmirror
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package golang.x86_64.0.1.8.3-1.el7 will be installed
...
...
Dependency Installed:
  golang-bin.x86_64 0:1.8.3-1.el7                                                                                                           golang-src.noarch 0:1.8.3-1.el7

Complete!
[root@10-255-1-243 dc2-user]# mkdir ~/workspace
[root@10-255-1-243 dc2-user]# echo 'export GOPATH="$HOME/workspace"' >> ~/.bashrc
[root@10-255-1-243 dc2-user]# source ~/.bashrc
[root@10-255-1-243 dc2-user]# go version
go version go1.8.3 linux/amd64

However, kubectl requires Go 1.11 or newer to build, so Golang has to be upgraded:

[root@10-255-1-243 dc2-user]# wget https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
[root@10-255-1-243 dc2-user]# tar vxf go1.11.2.linux-amd64.tar.gz
[root@10-255-1-243 dc2-user]# cd go/src
[root@10-255-1-243 src]# sh all.bash
Building Go cmd/dist using /usr/lib/golang.
Building Go toolchain1 using /usr/lib/golang.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
Building packages and commands for linux/amd64.

##### Testing packages.
ok  	archive/tar	0.021s
...
...
##### API check
Go version is "go1.11.2", ignoring -next /home/dc2-user/go/api/next.txt

ALL TESTS PASSED
---
Installed Go for linux/amd64 in /home/dc2-user/go
Installed commands in /home/dc2-user/go/bin
*** You need to add /home/dc2-user/go/bin to your PATH.
[root@10-255-1-243 src]# export PATH=/home/dc2-user/go/bin:$PATH
[root@10-255-1-243 src]# go version
go version go1.11.2 linux/amd64

Alternatively, you can download the corresponding Go release from this address and install it directly.
Go is now essentially set up.
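
One caveat: the export above only lasts for the current shell. To keep the new toolchain on PATH across logins, persist it the same way GOPATH was persisted earlier:

echo 'export PATH=/home/dc2-user/go/bin:$PATH' >> ~/.bashrc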

2.2.2 Installing Docker

Installing Docker prepares us for the cluster setup that follows:

[root@10-255-1-243 src]# cd -
[root@10-255-1-243 dc2-user]# sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@10-255-1-243 dc2-user]# yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror
Available Packages
Loading mirror speeds from cached hostfile
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
...
[root@10-255-1-243 dc2-user]# yum makecache fast
[root@10-255-1-243 dc2-user]# yum install  -y docker-ce-18.06.0.ce-3.el7
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64.0.18.06.1.ce-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-18.06.1.ce-3.el7.x86_64
--> Processing Dependency: libltdl.so.7()(64bit) for package: docker-ce-18.06.1.ce-3.el7.x86_64
...
...
Complete!
[root@10-255-1-243 dc2-user]# docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:23:03 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:25:29 2018
  OS/Arch:          linux/amd64
  Experimental:     false
[root@10-255-1-243 dc2-user]# service docker start
Redirecting to /bin/systemctl start docker.service

The steps above complete the Docker installation. Optionally, the daemon can be pointed at a registry mirror right away, as sketched below; after that we continue with the remaining components.
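
This optional step simply pre-configures the Docker daemon with the same registry mirror that is passed to minikube start later, so that a plain docker pull also goes through it:

# optional: use a China-reachable registry mirror for the Docker daemon
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker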

2.2.3 Installing kubectl

Knative depends on Kubernetes, and so far we only have a single DC2 cloud server, so a Kubernetes cluster is needed before going any further. With only one machine, Minikube is the obvious choice. Before installing Minikube, install kubectl and the related drivers. (kubectl can also be compiled from source, which requires the Git and Golang environments set up earlier; here it is installed from a yum repository.) The installation goes like this:

[root@10-255-1-243 dc2-user]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@10-255-1-243 dc2-user]# yum install -y kubectl
[root@10-255-1-243 dc2-user]# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 10.254.150.215:8443 was refused - did you specify the right host or port?

This completes the installation of the kubectl tool; the "connection refused" message is expected, since no cluster is running yet. You do not have to install kubectl this way, but take care that the version is correct, or starting Minikube later will fail.
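
For a version check that does not try to contact a cluster at all, pass --client:

kubectl version --client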

2.2.4 Installing Minikube

The next step is to install Minikube itself:

[root@10-255-1-243 dc2-user]# curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v0.30.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
[root@10-255-1-243 dc2-user]# export PATH=/usr/local/bin/:$PATH
[root@10-255-1-243 dc2-user]# minikube version
minikube version: v0.30.1

Starting Minikube also depends on some images hosted outside the Great Firewall, so for a smooth installation those images must be prepared in advance and marked with docker tag under the names Minikube expects. The necessary commands are already collected in a script on GitHub (its general shape is sketched after the transcript below). The preparation goes like this:

[root@10-255-1-243 dc2-user]# wget https://raw.githubusercontent.com/doop-ymc/gcr/master/docker_tag.sh
--2018-11-09 15:11:30--  https://raw.githubusercontent.com/doop-ymc/gcr/master/docker_tag.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1340 (1.3K) [text/plain]
Saving to: 'docker_tag.sh'

100%[===================================>] 1,340       --.-K/s   in 0s

2018-11-09 15:11:31 (116 MB/s) - 'docker_tag.sh' saved [1340/1340]
[root@10-255-1-243 dc2-user]# ls
docker_tag.sh  go  go1.11.2.linux-amd64.tar.gz  go1.11.2.linux-amd64.tar.gz.1  kubernetes-master  master.zip
[root@10-255-1-243 dc2-user]# sh docker_tag.sh
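
For reference, docker_tag.sh is essentially a list of pull-and-retag pairs shaped like the sketch below; the mirror repository and tag shown here are illustrative, not the script's exact contents:

# pull a reachable mirror of each image minikube needs, then retag it
# with the k8s.gcr.io name minikube expects (mirror name illustrative)
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1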

If docker_tag.sh ran without errors, Minikube can now be started with the following command:

[root@10-255-1-243 dc2-user]#  minikube start  --registry-mirror=https://registry.docker-cn.com  --vm-driver=none  --kubernetes-version=v1.12.1   --bootstrapper=kubeadm   --extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
========================================
kubectl could not be found on your path. kubectl is a requirement for using minikube
To install kubectl, please run the following:

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo cp kubectl /usr/local/bin/ && rm kubectl

To disable this message, run the following:

minikube config set WantKubectlDownloadMsg false
========================================
Starting local Kubernetes v1.12.1 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubelet v1.12.1
Downloading kubeadm v1.12.1
Finished Downloading kubeadm v1.12.1
Finished Downloading kubelet v1.12.1
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
	The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks

When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:

	sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
	sudo chown -R $USER $HOME/.kube
	sudo chgrp -R $USER $HOME/.kube

	sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
	sudo chown -R $USER $HOME/.minikube
	sudo chgrp -R $USER $HOME/.minikube

This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.
[root@10-255-1-243 dc2-user]# minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 10.255.1.243
[root@10-255-1-243 dc2-user]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-6c66ffc55b-l2hct                1/1     Running   0          3m53s
kube-system   etcd-minikube                           1/1     Running   0          3m8s
kube-system   kube-addon-manager-minikube             1/1     Running   0          2m54s
kube-system   kube-apiserver-minikube                 1/1     Running   0          2m46s
kube-system   kube-controller-manager-minikube        1/1     Running   0          3m2s
kube-system   kube-proxy-6v65g                        1/1     Running   0          3m53s
kube-system   kube-scheduler-minikube                 1/1     Running   0          3m4s
kube-system   kubernetes-dashboard-6d97598877-6g528   1/1     Running   0          3m52s
kube-system   storage-provisioner                     1/1     Running   0          3m52s

With that, Minikube is essentially ready; next we install the Knative-related components, starting with Istio.

2.2.5 Deploying Istio

Start the installation with the following command:

[root@10-255-1-243 dc2-user]# curl -L https://raw.githubusercontent.com/knative/serving/v0.2.0/third_party/istio-1.0.2/istio.yaml \
   | sed 's/LoadBalancer/NodePort/' \
   | kubectl apply --filename -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0namespace/istio-system created
configmap/istio-galley-configuration created
configmap/istio-statsd-prom-bridge created
...
...
destinationrule.networking.istio.io/istio-policy created
destinationrule.networking.istio.io/istio-telemetry created
[root@10-255-1-243 dc2-user]# kubectl label namespace default istio-injection=enabled
namespace/default labeled
[root@10-255-1-243 dc2-user]# kubectl get pods --namespace istio-system
NAME                                        READY   STATUS              RESTARTS   AGE
istio-citadel-6959fcfb88-scskd              0/1     ContainerCreating   0          58s
istio-cleanup-secrets-xcc7w                 0/1     ContainerCreating   0          59s
istio-egressgateway-5b765869bf-7vxs5        0/1     ContainerCreating   0          58s
istio-galley-7fccb9bbd9-p2r5v               0/1     ContainerCreating   0          58s
istio-ingressgateway-69b597b6bd-pqfq9       0/1     ContainerCreating   0          58s
istio-pilot-7b594977cf-fv467                0/2     ContainerCreating   0          58s
istio-policy-59b7f4ccd5-dqstb               0/2     ContainerCreating   0          58s
istio-sidecar-injector-5c4b6cb6bc-p2nwk     0/1     ContainerCreating   0          57s
istio-statsd-prom-bridge-67bbcc746c-mcb74   0/1     ContainerCreating   0          58s
istio-telemetry-7686cd76bd-8f4l6            0/2     ContainerCreating   0          58s

A few minutes later, every pod's status changes to Running or Completed:

[root@10-255-1-243 dc2-user]# kubectl get pods --namespace istio-system
NAME                                        READY   STATUS      RESTARTS   AGE
istio-citadel-6959fcfb88-scskd              1/1     Running     0          6m11s
istio-cleanup-secrets-xcc7w                 0/1     Completed   0          6m12s
istio-egressgateway-5b765869bf-7vxs5        1/1     Running     0          6m11s
istio-galley-7fccb9bbd9-p2r5v               1/1     Running     0          6m11s
istio-ingressgateway-69b597b6bd-pqfq9       1/1     Running     0          6m11s
istio-pilot-7b594977cf-fv467                2/2     Running     0          6m11s
istio-policy-59b7f4ccd5-dqstb               2/2     Running     0          6m11s
istio-sidecar-injector-5c4b6cb6bc-p2nwk     1/1     Running     0          6m10s
istio-statsd-prom-bridge-67bbcc746c-mcb74   1/1     Running     0          6m11s
istio-telemetry-7686cd76bd-8f4l6            2/2     Running     0          6m11s

Istio is now deployed.

2.2.6 Deploying Knative Serving / Knative Build

Next, deploy the Knative components themselves:

curl -L https://github.com/knative/serving/releases/download/v0.2.0/release-lite.yaml \
  | sed 's/LoadBalancer/NodePort/' \
  | kubectl apply --filename -

The official documentation gives the deployment command above, but because of the Great Firewall it can never succeed as-is: some of the images release-lite.yaml depends on are hosted on gcr.io and similar registries, for example:

    gcr.io/knative-releases/github.com/knative/build/cmd/…@sha256:c1c11fafd337f62eea18a1f02b78e6ae6949779bed72d53d19b2870723a8f104
    gcr.io/knative-releases/github.com/knative/build/cmd/…@sha256:6fa8043ed114920cd61e28db3c942647ba48415fe1208acde2fb2ac0746c9164
    gcr.io/knative-releases/github.com/knative/build/cmd/…@sha256:f94e6413749759bc3f80d33e76c36509d6a63f7b206d2ca8fff167a0bb9c77f2
    ...

Beyond these there are several more that I will not list one by one. The image addresses above carry digest references, so a plain docker tag cannot solve the problem; it fails with a "refusing to create a tag with a digest reference" error:

[root@10-255-1-243 dc2-user]# docker tag doopymc/knative-queue gcr.io/knative-releases/queue@sha256:2e26a33aaf0e21db816fb75ea295a323e8deac0a159e8cf8cffbefc5415f78f1
refusing to create a tag with a digest reference

So another approach is needed. A simple one takes advantage of Docker Hub: it can be pulled from inside China, yet its automated builds can themselves pull foreign images. The trick is to build an image on Docker Hub that uses the target image as its base, and then replace each target image in release-lite.yaml with the address of the corresponding Docker Hub image. An example Dockerfile:

FROM gcr.io/knative-releases/github.com/knative/build/cmd/…@sha256:58775663a5bc0d782c8505a28cc88616a5e08115959dc62fa07af5ad76c54a97
MAINTAINER doop

A Docker Hub build configured this way looks like:

(figure: Docker Hub automated build configuration)
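
With the mirror images built, pointing release-lite.yaml at them is a plain textual substitution, along these lines (the path, digest, and replacement image are placeholders for illustration):

# swap each unreachable digest reference for your Docker Hub mirror
sed -i 's|gcr.io/knative-releases/<path>@sha256:<digest>|docker.io/<your-user>/<mirror-image>:latest|' release-lite.yaml
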
I have already replaced the unusable images in release-lite.yaml this way (only the ones needed here) and published the result on GitHub as well. Installing the Knative components then goes as follows:

[root@10-255-1-243 dc2-user]# wget https://raw.githubusercontent.com/doop-ymc/gcr/master/release-lite.yaml
[root@10-255-1-243 dc2-user]# kubectl apply --filename release-lite.yaml
namespace/knative-build created
clusterrole.rbac.authorization.k8s.io/knative-build-admin created
serviceaccount/build-controller created
clusterrolebinding.rbac.authorization.k8s.io/build-controller-admin created
customresourcedefinition.apiextensions.k8s.io/builds.build.knative.dev created
customresourcedefinition.apiextensions.k8s.io/buildtemplates.build.knative.dev created
...
...
clusterrole.rbac.authorization.k8s.io/prometheus-system unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-system unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-system unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-system unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-system unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-system unchanged
service/prometheus-system-np unchanged
statefulset.apps/prometheus-system unchanged
[root@10-255-1-243 dc2-user]# kubectl get pods --namespace knative-serving
NAME                         READY   STATUS            RESTARTS   AGE
activator-59966ffc65-4l75t   0/2     PodInitializing   0          59s
activator-59966ffc65-98h5c   0/2     PodInitializing   0          59s
activator-59966ffc65-w8kdv   0/2     PodInitializing   0          59s
autoscaler-7b4989466-hpvnz   0/2     PodInitializing   0          59s
controller-6955d8bcc-xn72w   1/1     Running           0          59s
webhook-5f75b9c865-c5pdf     1/1     Running           0          59s

Again, after a few minutes all the pods become Running:

[root@10-255-1-243 dc2-user]# kubectl get pods --namespace knative-serving
NAME                         READY   STATUS    RESTARTS   AGE
activator-59966ffc65-4l75t   2/2     Running   0          8m31s
activator-59966ffc65-98h5c   2/2     Running   0          8m31s
activator-59966ffc65-w8kdv   2/2     Running   0          8m31s
autoscaler-7b4989466-hpvnz   2/2     Running   0          8m31s
controller-6955d8bcc-xn72w   1/1     Running   0          8m31s
webhook-5f75b9c865-c5pdf     1/1     Running   0          8m31s

At this point the Knative deployment is essentially complete. Let's see which pods and svcs now exist across the cluster and what state they are in. First, the Services:

[root@10-255-1-243 dc2-user]# kubectl get svc --all-namespaces
NAMESPACE            NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                   AGE
default              kubernetes                    ClusterIP      10.96.0.1        <none>        443/TCP                                                                                                                   26m
istio-system         istio-citadel                 ClusterIP      10.107.14.76     <none>        8060/TCP,9093/TCP                                                                                                         21m
istio-system         istio-egressgateway           ClusterIP      10.104.246.50    <none>        80/TCP,443/TCP                                                                                                            21m
istio-system         istio-galley                  ClusterIP      10.98.121.169    <none>        443/TCP,9093/TCP                                                                                                          21m
istio-system         istio-ingressgateway          NodePort       10.107.139.191   <none>        80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:32043/TCP,8060:30461/TCP,853:31114/TCP,15030:30980/TCP,15031:31742/TCP   21m
istio-system         istio-pilot                   ClusterIP      10.101.106.132   <none>        15010/TCP,15011/TCP,8080/TCP,9093/TCP                                                                                     21m
istio-system         istio-policy                  ClusterIP      10.108.222.26    <none>        9091/TCP,15004/TCP,9093/TCP                                                                                               21m
istio-system         istio-sidecar-injector        ClusterIP      10.103.23.143    <none>        443/TCP                                                                                                                   21m
istio-system         istio-statsd-prom-bridge      ClusterIP      10.103.76.13     <none>        9102/TCP,9125/UDP                                                                                                         21m
istio-system         istio-telemetry               ClusterIP      10.96.92.153     <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                                                                     21m
istio-system         knative-ingressgateway        LoadBalancer   10.97.114.164    <pending>     80:32380/TCP,443:32390/TCP,31400:32400/TCP,15011:31302/TCP,8060:32414/TCP,853:31653/TCP,15030:32327/TCP,15031:30175/TCP   10m
knative-build        build-controller              ClusterIP      10.103.97.112    <none>        9090/TCP                                                                                                                  10m
knative-build        build-webhook                 ClusterIP      10.110.178.246   <none>        443/TCP                                                                                                                   10m
knative-monitoring   grafana                       NodePort       10.104.107.125   <none>        30802:32144/TCP                                                                                                           10m
knative-monitoring   kube-controller-manager       ClusterIP      None             <none>        10252/TCP                                                                                                                 10m
knative-monitoring   kube-state-metrics            ClusterIP      None             <none>        8443/TCP,9443/TCP                                                                                                         10m
knative-monitoring   node-exporter                 ClusterIP      None             <none>        9100/TCP                                                                                                                  10m
knative-monitoring   prometheus-system-discovery   ClusterIP      None             <none>        9090/TCP                                                                                                                  10m
knative-monitoring   prometheus-system-np          NodePort       10.97.205.54     <none>        8080:32344/TCP                                                                                                            10m
knative-serving      activator-service             NodePort       10.103.75.164    <none>        80:30003/TCP,9090:30015/TCP                                                                                               10m
knative-serving      autoscaler                    ClusterIP      10.101.229.196   <none>        8080/TCP,9090/TCP                                                                                                         10m
knative-serving      controller                    ClusterIP      10.109.222.174   <none>        9090/TCP                                                                                                                  10m
knative-serving      webhook                       ClusterIP      10.101.155.150   <none>        443/TCP                                                                                                                   10m
kube-system          kube-dns                      ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP                                                                                                             26m
kube-system          kubernetes-dashboard          ClusterIP      10.104.60.66     <none>        80/TCP                                                                                                                    26m

Now look at the pods:

[root@10-255-1-243 dc2-user]# kubectl get pod --all-namespaces
NAMESPACE            NAME                                        READY   STATUS             RESTARTS   AGE
istio-system         istio-citadel-6959fcfb88-scskd              1/1     Running            0          22m
istio-system         istio-cleanup-secrets-xcc7w                 0/1     Completed          0          22m
istio-system         istio-egressgateway-5b765869bf-7vxs5        1/1     Running            0          22m
istio-system         istio-galley-7fccb9bbd9-p2r5v               1/1     Running            0          22m
istio-system         istio-ingressgateway-69b597b6bd-pqfq9       1/1     Running            0          22m
istio-system         istio-pilot-7b594977cf-fv467                2/2     Running            0          22m
istio-system         istio-policy-59b7f4ccd5-dqstb               2/2     Running            0          22m
istio-system         istio-sidecar-injector-5c4b6cb6bc-p2nwk     1/1     Running            0          22m
istio-system         istio-statsd-prom-bridge-67bbcc746c-mcb74   1/1     Running            0          22m
istio-system         istio-telemetry-7686cd76bd-8f4l6            2/2     Running            0          22m
istio-system         knative-ingressgateway-84d56577db-flz59     1/1     Running            0          11m
knative-build        build-controller-644d855ff4-t4w72           1/1     Running            0          11m
knative-build        build-webhook-5f68d76c49-wjvx9              1/1     Running            0          11m
knative-monitoring   grafana-787566b4f6-4rlmk                    1/1     Running            0          11m
knative-monitoring   kube-state-metrics-f5446fc8c-2l94v          3/4     ImagePullBackOff   0          11m
knative-monitoring   node-exporter-kbzc6                         2/2     Running            0          11m
knative-monitoring   prometheus-system-0                         1/1     Running            0          11m
knative-monitoring   prometheus-system-1                         1/1     Running            0          11m
knative-serving      activator-59966ffc65-4l75t                  2/2     Running            0          11m
knative-serving      activator-59966ffc65-98h5c                  2/2     Running            0          11m
knative-serving      activator-59966ffc65-w8kdv                  2/2     Running            0          11m
knative-serving      autoscaler-7b4989466-hpvnz                  2/2     Running            0          11m
knative-serving      controller-6955d8bcc-xn72w                  1/1     Running            0          11m
knative-serving      webhook-5f75b9c865-c5pdf                    1/1     Running            0          11m
kube-system          coredns-6c66ffc55b-l2hct                    1/1     Running            0          27m
kube-system          etcd-minikube                               1/1     Running            0          27m
kube-system          kube-addon-manager-minikube                 1/1     Running            0          26m
kube-system          kube-apiserver-minikube                     1/1     Running            0          26m
kube-system          kube-controller-manager-minikube            1/1     Running            0          26m
kube-system          kube-proxy-6v65g                            1/1     Running            0          27m
kube-system          kube-scheduler-minikube                     1/1     Running            0          27m
kube-system          kubernetes-dashboard-6d97598877-6g528       1/1     Running            0          27m
kube-system          storage-provisioner                         1/1     Running            0          27m

As you can see, the pods are basically Running or Completed. Knative is now set up; next, let's run the official sample on it.

3. Knative Demo

3.1 Demo: Accessing an Application

Following the official sample, I lightly modified its service.yaml:

apiVersion: serving.knative.dev/v1alpha1 # Current version of Knative
kind: Service
metadata:
  name: hellodidiyun-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: doopymc/helloworld-go
            env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "hello, didiyun"

Here is the application's startup process:

[root@10-255-1-243 dc2-user]# wget https://raw.githubusercontent.com/doop-ymc/helloworld-go/master/service.yaml
[root@10-255-1-243 dc2-user]# kubectl apply --filename service.yaml
service.serving.knative.dev/hellodidiyun-go created
[root@10-255-1-243 dc2-user]# kubectl get pods
NAME                                               READY   STATUS            RESTARTS   AGE
hellodidiyun-go-00001-deployment-d9489b84b-ws8br   0/3     PodInitializing   0          16s

A few minutes later, it should be pulled up normally:

[root@10-255-1-243 dc2-user]# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
hellodidiyun-go-00001-deployment-d9489b84b-ws8br   3/3     Running   0          58s

Now let's access the application. First, find the service's IP address:

[root@10-255-1-243 dc2-user]#  kubectl get svc knative-ingressgateway --namespace istio-system
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                   AGE
knative-ingressgateway   LoadBalancer   10.97.114.164   <pending>     80:32380/TCP,443:32390/TCP,31400:32400/TCP,15011:31302/TCP,8060:32414/TCP,853:31653/TCP,15030:32327/TCP,15031:30175/TCP   23m

The EXTERNAL-IP is stuck at <pending>, presumably because there is no external LoadBalancer, so use the second method from the sample to obtain an IP:

[root@10-255-1-243 dc2-user]# export IP_ADDRESS=$(kubectl get node  --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system   --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
[root@10-255-1-243 dc2-user]# echo $IP_ADDRESS
10.255.1.243:32380

Next, get the service's access domain:

[root@10-255-1-243 dc2-user]# kubectl get ksvc hellodidiyun-go  --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME              DOMAIN
hellodidiyun-go   hellodidiyun-go.default.example.com

Then access the service:

[root@10-255-1-243 dc2-user]# curl -H "Host: hellodidiyun-go.default.example.com" http://${IP_ADDRESS}
Hello World: hello, didiyun!

It returned Hello World: hello, didiyun!, exactly as expected.

3.2 Demo: Auto-scaling

When introducing Knative above, we mentioned one of its most important mechanisms: auto-scaling. Let's see it in action. A while after the application was last accessed, the hellodidiyun-go pod is slowly terminated:

[root@10-255-1-243 dc2-user]# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
hellodidiyun-go-00001-deployment-d9489b84b-6zssr   3/3     Running   0          2m43s
[root@10-255-1-243 dc2-user]# kubectl get pods
NAME                                               READY   STATUS        RESTARTS   AGE
hellodidiyun-go-00001-deployment-d9489b84b-6zssr   2/3     Terminating   0          5m42s
[root@10-255-1-243 dc2-user]# kubectl get pods
No resources found.

Now issue another request and look at the pod state again:

[root@10-255-1-243 dc2-user]# curl -H "Host: hellodidiyun-go.default.example.com" http://${IP_ADDRESS}
Hello World: hello, didiyun!
[root@10-255-1-243 dc2-user]# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
hellodidiyun-go-00001-deployment-d9489b84b-vmcg4   3/3     Running   0          11s

The service came back up, as expected.
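
To watch scale-out as well, not just scale-to-zero and reactivation, put sustained concurrent load on the service and watch the pod count grow. One option, assuming the hey load generator is installed, is sketched here:

# 50 concurrent connections for 30 seconds, with the Host header
# overridden so the Knative ingress routes to our app
hey -z 30s -c 50 -host hellodidiyun-go.default.example.com http://${IP_ADDRESS}
kubectl get pods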

4. Closing Thoughts

In the past, Serverless architectures could mostly only run and be used on public clouds. With Knative's arrival, I believe more teams will run their own small Serverless services independently. Of course, Knative has not been out for long and surely has plenty of problems; let's go find them together.