
Kubernetes ---- RBAC Authorization Management

RBAC (authorization plugin)

RBAC is role-based access control:

  Permission: the set of operations that may be performed on an accessed object (a Kubernetes resource); granting certain operations to a role completes the authorization;
  Role: a user can be made to assume a role, and since the role holds permissions, the user thereby gains them. Permissions are granted to the role, which, together with a RoleBinding, works at the namespace level and grants permissions scoped to that namespace;
    operations: the actions the role is allowed to perform; rules can only state what is allowed, a denial cannot be expressed;
    subject: the objects on which those operations are performed;
    rolebinding:
      binds a user account or service account to a role;

    clusterrole: once its allowed operations are defined, users bound to the role operate across the whole cluster, not just within a single namespace;
    clusterrolebinding: binds a user account or service account to a cluster role;

Note:
  A user can bind a ClusterRole through a RoleBinding:
  All operations then still stay within a namespace. When there are many namespaces and each needs its own administrator, defining a single ClusterRole and binding it with RoleBindings means every user effectively operates only in their own namespace. Without this approach, N namespaces would require N Roles and N RoleBindings;
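The pattern above can be sketched as follows. This is a minimal illustration, not taken from the session: the namespace `dev`, the role name `ns-admin`, and the user `alice` are all hypothetical.

```yaml
# One cluster-scoped role definition, reused for every namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ns-admin            # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# One RoleBinding per namespace; each grants ns-admin only inside its own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-admin-binding   # hypothetical name
  namespace: dev            # permissions apply only here
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ns-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice               # hypothetical user
```

Each additional namespace gets its own RoleBinding that references the same ClusterRole, so N namespaces need only 1 ClusterRole plus N RoleBindings.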

Creating a role:

$ kubectl create role --help
  Usage:
    kubectl create role NAME --verb=verb --resource=resource.group/subresource [--resource-name=resourcename] [--dry-run]
$ kubectl create role pod-reader --verb=get,list,watch --resource=pods --dry-run -o yaml > role-demo.yaml
$ vim role-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
$ kubectl apply -f role-demo.yaml
$ kubectl get role
NAME         AGE
pod-reader   39s
$ kubectl describe role pod-reader
....
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [get list watch]
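The `--resource-name` flag shown in the usage line above narrows a rule to specific named objects. A minimal sketch (the pod name `mypod` is hypothetical); note that `list` cannot be restricted by `resourceNames`, so it is omitted here:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: single-pod-reader    # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  resourceNames: ["mypod"]   # hypothetical pod; get/watch only work on this object
  verbs: ["get", "watch"]
```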

Creating and binding a RoleBinding:

$ kubectl create rolebinding --help
  Usage:
    kubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname]
    [--serviceaccount=namespace:serviceaccountname] [--dry-run] [options]
$ kubectl create rolebinding kfree-read-pods --role=pod-reader --user=kfree --dry-run -o yaml > rolebinding-demo.yaml
$ vim rolebinding-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kfree-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kfree

$ kubectl apply -f rolebinding-demo.yaml

$ kubectl config use-context kfree@kubernetes
# the user created earlier now has permission to view pods;
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-demo-854b57c687-4hbp4 1/1 Running 0 5h26m
deploy-demo-854b57c687-f7txr 1/1 Running 0 5h26m
deploy-demo-854b57c687-t9bbl 1/1 Running 0 5h26m

Creating and binding a ClusterRole:

$ kubectl create clusterrole --help
  Usage:
    kubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run]
$ kubectl create clusterrole cluster-readers --verb=get,list,watch --resource=pods,deployment --dry-run -o yaml > clusterrole-demo.yaml
$ kubectl apply -f clusterrole-demo.yaml
$ kubectl get clusterrole
....
cluster-readers 
....
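The `--dry-run` output written to clusterrole-demo.yaml should look roughly like this. It is reconstructed rather than captured from the session, and depending on the kubectl version the API group for deployments may be `apps` or `extensions`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-readers      # no namespace field: ClusterRoles are cluster-scoped
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]        # may be "extensions" on older versions
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
```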

Binding:

$ kubectl create clusterrolebinding --help
  Usage:
    kubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname]
    [--serviceaccount=namespace:serviceaccountname] [--dry-run] [options]
$ kubectl create clusterrolebinding kfree-read-all-pods --clusterrole=cluster-readers --user=kfree --dry-run -o yaml > clusterrolebinding-demo.yaml
$ kubectl apply -f clusterrolebinding-demo.yaml
$ kubectl config use-context kfree@kubernetes
# after binding, the deployment and pod resources in all namespaces can be viewed (get,list,watch)
$ kubectl get pods && kubectl get pods -n kube-system
$ kubectl get deploy && kubectl get deploy -n kube-system
Binding a ClusterRole with a RoleBinding:
$ kubectl delete clusterrolebinding kfree-read-all-pods
$ kubectl create rolebinding kfree-read-pods --clusterrole=cluster-readers --user=kfree --dry-run -o yaml > rolebinding-clusterrole-demo.yaml
$ kubectl apply -f rolebinding-clusterrole-demo.yaml
$ kubectl get rolebinding
NAME              AGE
kfree-read-pods   3m
$ kubectl config view
....    
current-context: kfree@kubernetes
....
$ kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "kfree" cannot list resource "pods" in API group "" in the namespace "kube-system"
$ kubectl get pods 
NAME READY STATUS RESTARTS AGE
deploy-demo-854b57c687-4hbp4 1/1 Running 1 18h
deploy-demo-854b57c687-f7txr 1/1 Running 1 18h
deploy-demo-854b57c687-t9bbl 1/1 Running 1 18h
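The rolebinding-clusterrole-demo.yaml generated above should look roughly like this (reconstructed rather than captured). Note that `roleRef` points at a ClusterRole, but because the binding itself is a namespaced RoleBinding, the granted permissions apply only inside its namespace, which matches the Forbidden error for kube-system:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kfree-read-pods
  namespace: default         # scope of the grant, despite the cluster-wide roleRef
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-readers
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kfree
```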