Manage Kubernetes Clusters on AWS Using Kops

Any containerized application typically consists of multiple containers: one for the application itself, one for a database, possibly one for a web server, and so on. During development, it is normal to build and test this multi-container application on a single host. This approach works fine during early dev and test cycles, but a single host becomes a single point of failure in production, where application availability is critical.

In such cases, a multi-container application can be deployed on multiple hosts. Customers may need an external tool to manage such multi-container, multi-host deployments. Container orchestration frameworks provide cluster management, scheduling of containers on different hosts, service discovery and load balancing, crash recovery, and other related functionality. There are multiple options for container orchestration on Amazon Web Services: Amazon ECS, Docker for AWS, and DC/OS.

Another popular option for container orchestration on AWS is Kubernetes. There are multiple ways to run a Kubernetes cluster on AWS. This multi-part blog series provides a brief overview and explains some of these approaches in detail. This first post explains how to create a Kubernetes cluster on AWS using kops.

Kubernetes and Kops overview

Kubernetes is an open source, container orchestration platform. Applications packaged as Docker images can be easily deployed, scaled, and managed in a Kubernetes cluster. Some of the key features of Kubernetes are:

  • Self-healing
    Failed containers are restarted to ensure that the desired state of the application is maintained. If a node in the cluster dies, then its containers are rescheduled on a different node. Containers that do not respond to the application-defined health check are terminated and thus rescheduled.
  • Horizontal scaling
    The number of containers can be scaled up and down automatically based on CPU utilization, or manually using a command.
  • Service discovery and load balancing
    Multiple containers can be grouped together and made discoverable using a DNS name. The service can be load balanced by integrating with the native load balancer provided by the cloud provider.
  • Application upgrades and rollbacks
    Applications can be upgraded to a newer version without impacting the existing one. If something goes wrong, Kubernetes rolls the change back.

Kops, short for Kubernetes Operations, is a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. A rolling upgrade of an older version of Kubernetes to a new version can also be performed. It also manages the cluster add-ons. After the cluster is created, the usual kubectl CLI can be used to manage resources in the cluster.

Download Kops and Kubectl

There is no need to download the Kubernetes binary distribution for creating a cluster using kops. However, you do need to download the kops CLI. It then takes care of downloading the right Kubernetes binary in the cloud, and provisions the cluster.

The different download options for kops are explained at github.com/kubernetes/kops#installing. On macOS, the easiest way to install kops is using the brew package manager.

brew update && brew install kops

The version of kops can be verified using the kops version command, which shows:

Version 1.6.1

In addition, download kubectl. This is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

Make sure to include the directory where kubectl is downloaded in your PATH.
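
On macOS or Linux, a typical way to do this is to make the binary executable and move it into a directory that is already on the PATH (the target directory below is just a common choice):

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client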

IAM user permission

The IAM user that creates the Kubernetes cluster must have the following permissions (a sketch of granting them with the AWS CLI follows the list):

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess
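
As a minimal sketch, these policies can be attached to a dedicated IAM group and user with the AWS CLI; the group and user names used here (kops) are only examples:

aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops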

Create an Amazon S3 bucket for the Kubernetes state store

Kops needs a “state store” to store configuration information about the cluster: for example, the number of nodes, the instance type of each node, and the Kubernetes version. The state is stored during the initial cluster creation, and any subsequent changes to the cluster are persisted to this store as well. As of publication, Amazon S3 is the only supported storage mechanism. Create an S3 bucket and pass it to the kops CLI during cluster creation.

This post uses the bucket name kubernetes-aws-io. Bucket names must be globally unique, so you have to use a different name. Create an S3 bucket:

aws s3api create-bucket --bucket kubernetes-aws-io

I strongly recommend versioning this bucket in case you ever need to revert to or recover a previous version of the cluster state. Versioning can also be enabled using the AWS CLI:

aws s3api put-bucket-versioning --bucket kubernetes-aws-io --versioning-configuration Status=Enabled

For convenience, you can also define a KOPS_STATE_STORE environment variable pointing to the S3 bucket. For example:

export KOPS_STATE_STORE=s3://kubernetes-aws-io

This environment variable is then used by the kops CLI.

DNS configuration

As of Kops 1.6.1, a top-level domain or a subdomain is required to create the cluster. This domain allows the worker nodes to discover the master and the master to discover all the etcd servers. This is also needed for kubectl to be able to talk directly with the master.

This domain may be registered with AWS, in which case a Route 53 hosted zone is created for you. Alternatively, this domain may be at a different registrar. In this case, create a Route 53 hosted zone. Specify the name server (NS) records from the created zone as NS records with the domain registrar.

This post uses a kubernetes-aws.io domain registered at a third-party registrar.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name cluster.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

This shows an output such as the following:

[
"ns-94.awsdns-11.com",
"ns-1962.awsdns-53.co.uk",
"ns-838.awsdns-40.net",
"ns-1107.awsdns-10.org"
]

Create NS records for the domain with your registrar. Different options on how to configure DNS for the cluster are explained at github.com/kubernetes/kops/blob/master/docs/aws.md#configure-dns.
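
Before creating the cluster, it is worth checking that the delegation has propagated. Assuming the dig utility is available, querying the NS records for the subdomain should return the Route 53 name servers listed above:

dig ns cluster.kubernetes-aws.io +short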

Experimental support to create a gossip-based cluster was added in Kops 1.6.2. This post uses a DNS-based approach, as that is more mature and well tested.

Create the Kubernetes cluster

The Kops CLI can be used to create a highly available cluster, with multiple master nodes spread across multiple Availability Zones. Workers can be spread across multiple zones as well. Some of the tasks that happen behind the scenes during cluster creation are:

  • Provisioning EC2 instances
  • Setting up AWS resources such as networks, Auto Scaling groups, IAM users, and security groups
  • Installing Kubernetes.

Start the Kubernetes cluster using the following command:

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--yes

In this command:

  • --zones
    Defines the zones in which the cluster is going to be created. Multiple comma-separated zones can be specified to span the cluster across multiple zones.
  • --name
    Defines the cluster’s name.
  • --state
    Points to the S3 bucket that is the state store.
  • --yes
    Immediately creates the cluster. Otherwise, only the cluster specification is written to the state store, and the cloud resources are provisioned later with the command kops update cluster --yes. If the cluster needs to be edited first, the kops edit cluster command can be used (see the sketch after this list).
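
As a minimal sketch of that preview-then-apply workflow, using the same cluster name and state store as above:

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io

kops edit cluster cluster.kubernetes-aws.io --state s3://kubernetes-aws-io

kops update cluster cluster.kubernetes-aws.io --state s3://kubernetes-aws-io --yes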

The create command shown earlier starts a Kubernetes cluster with a single master and two worker nodes. The master is in one Auto Scaling group and the worker nodes are in a separate group. By default, the master node is an m3.medium instance and the worker nodes are t2.medium instances. Master and worker nodes are assigned separate IAM roles as well.

Wait for a few minutes for the cluster to be created. The cluster can be verified using the command kops validate cluster --state=s3://kubernetes-aws-io. It shows the following output:

Using cluster from kubectl context: cluster.kubernetes-aws.io

Validating cluster cluster.kubernetes-aws.io

INSTANCE GROUPS
NAME                 ROLE      MACHINETYPE    MIN    MAX    SUBNETS
master-us-west-2a    Master    m3.medium      1      1      us-west-2a
nodes                Node      t2.medium      2      2      us-west-2a

NODE STATUS
NAME                                           ROLE      READY
ip-172-20-38-133.us-west-2.compute.internal    node      True
ip-172-20-38-177.us-west-2.compute.internal    master    True
ip-172-20-46-33.us-west-2.compute.internal     node      True

Your cluster cluster.kubernetes-aws.io is ready

It shows the different instances started for the cluster, and their roles. If multiple cluster states are stored in the same bucket, then --name <NAME> can be used to specify the exact cluster name, as shown below.
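
For example, to validate the cluster created above by name:

kops validate cluster --name cluster.kubernetes-aws.io --state s3://kubernetes-aws-io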

Check all nodes in the cluster using the command kubectl get nodes:

NAME                                          STATUS         AGE       VERSION
ip-172-20-38-133.us-west-2.compute.internal   Ready,node     14m       v1.6.2
ip-172-20-38-177.us-west-2.compute.internal   Ready,master   15m       v1.6.2
ip-172-20-46-33.us-west-2.compute.internal    Ready,node     14m       v1.6.2

The internal IP address of each node, its role (master or node), and its uptime are shown. The key information here is the Kubernetes version of each node in the cluster, 1.6.2 in this case.

The kubectl binary added to the PATH earlier is already configured to manage this cluster. Resources such as pods, replica sets, and services can now be created in the usual way.
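
For example, with a 1.6-era client, an nginx deployment can be created and exposed through a load balancer as follows (on newer clients, kubectl run no longer creates a Deployment, and kubectl create deployment would be used instead):

kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get pods,services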

Some of the common options that can be used to override the default cluster creation are:

  • --kubernetes-version
    The version of Kubernetes cluster. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.
  • --master-size and --node-size
    Define the instance type of the master and worker nodes.
  • --master-count and --node-count
    Define the number of master and worker nodes. By default, one master is created in each zone specified by --master-zones. Multiple master nodes can be created either by specifying a higher number with --master-count or by specifying multiple Availability Zones in --master-zones.

A three-master and five-worker node cluster, with master nodes spread across different Availability Zones, can be created using the following command:

kops create cluster \
--name cluster2.kubernetes-aws.io \
--zones us-west-2a,us-west-2b,us-west-2c \
--master-zones us-west-2a,us-west-2b,us-west-2c \
--node-count 5 \
--state s3://kubernetes-aws-io \
--yes

Both clusters share the same state store but have different names. Creating the second cluster also requires an additional Amazon Route 53 hosted zone for its name.

By default, the resources required for the cluster are created directly in the cloud. The --target option can be used to generate AWS CloudFormation templates instead. These templates can then be used by the AWS CLI to create the resources at your convenience.
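
A rough sketch of that flow is shown below; support for the CloudFormation target has varied across kops versions, so treat the exact flags as illustrative:

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--target=cloudformation \
--out=.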

Get a complete list of options for cluster creation with kops create cluster --help.

More details about the cluster can be seen using the command kubectl cluster-info:

Kubernetes master is running at https://api.cluster.kubernetes-aws.io
KubeDNS is running at https://api.cluster.kubernetes-aws.io/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the client and server version using the command kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Both client and server version are 1.6 as shown by the Major and Minor attribute values.

Upgrade the Kubernetes cluster

Kops can be used to create a Kubernetes 1.4.x, 1.5.x, or an older version of the 1.6.x cluster using the --kubernetes-version option. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.

Or, you may have used kops to create a cluster a while ago, and now want to upgrade to the latest recommended version of Kubernetes. Kops supports rolling cluster upgrades where the master and worker nodes are upgraded one by one.

As of kops 1.6.1, upgrading a cluster is a three-step process.

First, check and apply the latest recommended Kubernetes update.

kops upgrade cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

The --yes option immediately applies the changes. Omitting the --yes option shows only the changes that would be applied.

Second, update the state store to match the cluster state. This can be done using the following command:

kops update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Lastly, perform a rolling update for all cluster nodes using the kops rolling-update command:

kops rolling-update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Previewing the changes before updating the cluster can be done using the same command but without specifying the --yes option. This shows the following output:

NAME                 STATUS        NEEDUPDATE    READY    MIN    MAX    NODES
master-us-west-2a    NeedsUpdate   1             0        1      1      1
nodes                NeedsUpdate   2             0        2      2      2

Specifying --yes updates all nodes in the cluster, the master first and then the workers. There is a 5-minute delay between restarting master nodes, and a 2-minute delay between restarting worker nodes. These intervals can be changed using the --master-interval and --node-interval options, respectively.

Alternatively, a single instance group, such as the worker nodes in the nodes group, can be updated on its own using the --instance-group option, as shown below.
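
For example, to roll only the worker instance group (named nodes in this cluster) with a longer pause between node restarts (the interval value here is an arbitrary example):

kops rolling-update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--instance-group nodes \
--node-interval 4m \
--yes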

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster using the kops command. This ensures that all resources created by the cluster are appropriately cleaned up.

The command to delete the Kubernetes cluster is:

kops delete cluster --state=s3://kubernetes-aws-io --yes

If multiple clusters have been created, then specify the cluster name as in the following command:

kops delete cluster cluster2.kubernetes-aws.io --state=s3://kubernetes-aws-io --yes

Conclusion

This post explained how to manage a Kubernetes cluster on AWS using kops. Kubernetes on AWS users provides a self-published list of companies using Kubernetes on AWS.

Try starting a cluster, create a few Kubernetes resources, and then tear it down. Kops on AWS provides a more comprehensive tutorial for setting up Kubernetes clusters. Kops docs are also helpful for understanding the details.

In addition, the Kops team hosts office hours to help you get started, including guiding you through your first pull request. You can always join the #kops channel on the Kubernetes Slack to ask questions. If nothing works, then file an issue at github.com/kubernetes/kops/issues.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

— Arun
