
Setting up the Ingress network type for Kubernetes Services (Part 2)

Deploying the latest version of ingress, personally tested and working on Kubernetes 1.6.2 / 1.6.3 / 1.6.4.

default-backend:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend

controller:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: system:ingress
rules:
- apiGroups:
  - ""
  resources: ["configmaps","secrets","endpoints","events","services"]
  verbs: ["list","watch","create","update","delete","get"]
- apiGroups:
  - ""
  - "extensions"
  resources: ["services","nodes","ingresses","pods","ingresses/status"]
  verbs: ["list","watch","create","update","delete","get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      hostNetwork: true
      serviceAccountName: ingress
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
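
Assuming the two manifests above were saved as default-backend.yaml and ingress-controller.yaml (file names of my choosing), a sketch of applying and checking them:

kubectl create -f default-backend.yaml
kubectl create -f ingress-controller.yaml
# both pods live in kube-system; wait until they are Running
kubectl get pods -n kube-system -o wide | grep -E 'default-http-backend|nginx-ingress-controller'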

After a successful deployment, both the default backend pod and the controller pod show up as Running.


test service:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoheaders
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: gcr.io/google_containers/echoserver:1.0
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-nodeport  # NodePort variant for direct node access; the ClusterIP Service below keeps the name echoheaders-default
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30302
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders

---
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-default
  labels:
    app: echoheaders
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders

---
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-x
  labels:
    app: echoheaders
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders

---
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-y
  labels:
    app: echoheaders
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
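
Assuming the Deployment and the four Services above are saved together as echoheaders.yaml (a file name of my choosing), a sketch of creating and spot-checking them:

kubectl create -f echoheaders.yaml
kubectl get svc -l app=echoheaders
# the NodePort Service can be hit directly on any node, e.g.:
curl http://10.39.1.45:30302/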


ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echoheaders-y
          servicePort: 80
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80

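Assuming the manifest is saved as echomap-ingress.yaml (a name of my choosing), create it and look up the address it gets:

kubectl create -f echomap-ingress.yaml
# the ADDRESS column should show the node where the ingress-controller runs
kubectl get ing echomap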

test:

curl -v http://10.39.1.45/foo -H 'host: foo.bar.com'


curl -v http://10.39.1.45/foo -H 'host: bar.baz.com'


curl -v http://10.39.1.45/bar -H 'host: bar.baz.com'

Note: 10.39.1.45 is the Ingress address, that is, the IP of the node where the ingress-controller runs.
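
Instead of overriding the Host header, the hostnames can also be mapped locally (a sketch, assuming a test machine where editing /etc/hosts as root is acceptable):

# run as root
echo '10.39.1.45 foo.bar.com bar.baz.com' >> /etc/hosts
curl -v http://foo.bar.com/foo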

Next, I combined the Ingress setup with a test of the LoadBalancer Service type. I had never used this type before, having mostly used NodePort, so I wanted to try it out.
The dashboard was already deployed in my cluster, so I used it for the test.

I created a new LoadBalancer-type Service for the dashboard:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard-1
  name: dashboard-1
  namespace: kube-system
spec:
  type: LoadBalancer
  loadBalancerIP: 10.39.1.44
  ports:
  - port: 8001
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard

10.39.1.44 is the IP of my ingress-controller (the load balancer's IP).
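
Assuming the manifest is saved as dashboard-lb.yaml (a name of my choosing), it can be created and inspected like this:

kubectl create -f dashboard-lb.yaml
# TYPE shows LoadBalancer, and a NodePort is allocated alongside port 8001
kubectl get svc dashboard-1 -n kube-system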
The Service's type is LoadBalancer, yet a random NodePort (30617) was generated as well. Access worked via the LoadBalancer IP plus that NodePort, which made me wonder: can a LoadBalancer-type Service only be reached through its loadBalancerIP? Since a NodePort had been allocated, I tried the other nodes' IPs with the same NodePort, and every one of them worked.
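
A quick sketch of that check, using the allocated NodePort 30617 and my node IPs:

# through the loadBalancerIP plus the NodePort ...
curl http://10.39.1.44:30617/
# ... and equally through another node's IP plus the same NodePort
curl http://10.39.1.45:30617/

I then exposed the dashboard through an Ingress as well: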

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: kube-system
spec:
  rules:
  - host: dashboard.io
    http:
      paths:
      - path: /
        backend:
          serviceName: dashboard
          servicePort: 80

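Assuming the Host header is pointed at the controller node (10.39.1.44 here), the dashboard should answer through the Ingress:

curl -v http://10.39.1.44/ -H 'host: dashboard.io'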

Summary:
1. I used to think the LoadBalancer type simply meant accessing the application through the loadBalancerIP. After this test it turns out that when the LoadBalancer Service type is realized via the Ingress (or rather, via nginx), it is essentially the NodePort type underneath.
2. I also used to assume that the hosts/path entries in the Ingress YAML route you straight to the corresponding Service, and they do. However, the Pods behind that Service must actually serve the given path, otherwise the request returns a 404. For example: for a web application served at http://xxxx.xxx.xxx.xxx:8080/app, the request path is /app, so the path in the Ingress YAML should be defined as /app; if the application actually lives at http://xxxx.xxx.xxx.xxx:8080/apps but the Ingress path is still defined as /app, you will get a 404.
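
A minimal sketch of point 2 (host, Service name, and port are hypothetical): the path in the rule only selects the backend; nginx forwards the request with the path unchanged, so the Pods must serve it themselves.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-example               # hypothetical name
spec:
  rules:
  - host: app.example.com         # hypothetical host
    http:
      paths:
      - path: /app                # forwarded as-is: the Pods must serve /app, or the request ends in a 404
        backend:
          serviceName: app-service   # hypothetical Service
          servicePort: 8080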