
Network Load Balancer Support in Kubernetes 1.9

Applications deployed on Amazon Web Services can achieve fault tolerance and ensure scalability, performance, and security by using Elastic Load Balancing (ELB). Incoming application traffic to ELB is distributed across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. In addition to Classic Load Balancer and Application Load Balancer, a new Network Load Balancer was introduced last year. It is capable of handling millions of requests per second while maintaining ultra-low latencies. This guest post by Micah Hausler, who added support for Network Load Balancer in Kubernetes, explains how you can enable that support in your applications running on Kubernetes.

Arun

In September, AWS released the new Network Load Balancer, which for many in the AWS community is an exciting advance in the load balancing space. Some of my favorite features are the preservation of the original source IP without any additional setup, and the ability to handle very long running connections. In this post, we’ll show how to create a Network Load Balancer from a Kubernetes cluster on AWS.

Classic Load Balancing in Kubernetes

I’ve been using Kubernetes on AWS for a year and a half, and have found that the easiest way to route traffic to Kubernetes workloads has been with a Kubernetes LoadBalancer Service. An example configuration for a service might look like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  annotations: {}
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

This would create a Classic ELB that routes TCP traffic on frontend port 80 to port 80 on a pod. The end result is that the client’s source IP is lost and replaced with the ELB’s IP address. Workarounds have included enabling Proxy Protocol, or adding an X-Forwarded-For header on HTTP or HTTPS listeners, via Kubernetes metadata annotations. There are a variety of additional annotations for configuring ELB features such as request logs, ACM certificates, connection draining, and more.
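As a sketch of what that looks like in practice, the Service below enables Proxy Protocol and connection draining on a Classic ELB via annotations. The annotation keys are those supported by the in-tree AWS cloud provider; the service name and timeout value are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  annotations:
    # Wrap connections in the Proxy Protocol so backends can
    # recover the original client IP
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    # Drain in-flight connections when an instance deregisters
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```

Note that Proxy Protocol only helps if the backend (nginx here) is also configured to parse it.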

Network Load Balancing in Kubernetes

As part of the Kubernetes 1.9 release, I added support for using the new Network Load Balancer with Kubernetes services. This is an alpha-level feature and, as of today, is not ready for production clusters or workloads, so make sure you also read the documentation on NLB before trying it out. The only requirement for exposing a service via NLB is to add the annotation service.beta.kubernetes.io/aws-load-balancer-type with the value nlb.

A full example looks like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

To try this for yourself, see Arun’s post on managing a Kubernetes cluster with kops and set the kubernetes-version to 1.9.1.

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--kubernetes-version 1.9.1 \
--yes

Once your cluster is created, you’ll need to grant the Kubernetes master the new permissions to create an NLB. (Once kops officially supports Kubernetes 1.9, this additional step will not be necessary.)
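The NLB is managed through the ELBv2 API, so the master’s instance role needs the corresponding elasticloadbalancing actions. A sketch of an inline IAM policy follows; the exact action list your cluster needs may differ, so treat this as illustrative rather than exhaustive:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:DeregisterTargets"
      ],
      "Resource": "*"
    }
  ]
}
```

With kops, one way to grant this is the additionalPolicies field in the cluster spec, which attaches extra statements to the master role.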

It can take a few minutes for the Network Load Balancer to be created and register the nodes as valid targets (even though the NLB hostname is reported back to Kubernetes). You can check the status in the AWS Console.

If you follow the above example, once the Target Group instances (the Kubernetes nodes) pass the initial setup, you’ll see one node marked as healthy and one as unhealthy.


Nodes are added to an NLB by instance ID but, to explain a bit of Kubernetes networking, traffic from the NLB doesn’t go straight to the pod. Client traffic first hits the kube-proxy on a cluster-assigned nodePort and is then passed on to one of the matching pods in the cluster. When spec.externalTrafficPolicy is set to the default value of Cluster, the kube-proxy may send incoming LoadBalancer traffic to pods on its own node or to pods on other nodes. With this configuration the client IP is visible to the kube-proxy, but by the time the packet arrives at the destination pod, the source IP has been rewritten to the local IP of the kube-proxy.

By changing the spec.externalTrafficPolicy to Local, the kube-proxy will correctly forward the source IP to the end pods, but will only send traffic to pods on the node that the kube-proxy itself is running on. Kube-proxy also opens another port for the NLB health check, so traffic is only directed to nodes that have pods matching the service selector. This could easily result in uneven distribution of traffic, so use a DaemonSet or specify pod anti-affinity to ensure that only one pod for a given service is on a node.
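One way to satisfy that one-pod-per-node constraint is a required pod anti-affinity rule. A minimal sketch, assuming the nginx example above (image tag and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          # Refuse to schedule two app=nginx pods on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
```

A DaemonSet achieves the same effect (exactly one pod per node) without the affinity stanza, at the cost of running the workload on every node.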

At this point, the Network Load Balancer is ready for use!

There are several other differences in the new Network Load Balancer from how Classic ELBs work, so read through the Kubernetes documentation on NLB and the AWS NLB documentation.

Contribute to Kubernetes!

Adding the NLB integration was my first contribution to Kubernetes, and it has been a very rewarding experience. The Kubernetes community organizes itself into Special Interest Groups (SIGs), and the AWS SIG has been very welcoming and supportive. I’m really thankful to all the reviewers and collaborators from SIG-AWS and from Amazon for their insight.

If you’re interested in seeing deeper integration with AWS or NLB specifically, please participate in the community! Come to a SIG-AWS meeting, file feature requests, or report bugs on Github: Kubernetes is only what it is today because of the community!


At the time of writing, Micah Hausler was a Senior Site Reliability Engineer at Skuid where he led the DevOps team and was a contributor to Kubernetes. You can (still) find him at @micahhausler on Twitter, Github, and Kubernetes Slack.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.
