
Running Bleeding-Edge Kubernetes on AWS with kops

In an earlier blog post, I explained how to set up a Kubernetes cluster on AWS using kops. By default, the kops create cluster command chooses the default Kubernetes version from the stable channel of kops, which works for most developers. If you want a specific version, you can specify it with the --kubernetes-version option. But what if you’d like to use kops to spin up a cluster built from a recently merged Kubernetes PR? (Maybe, like me, you’re too impatient to wait through a test cycle for kops to support it in the channel – I want to try out the feature now!) This guest post by Micah Hausler explains how you can use a development build of Kubernetes to spin up your own cluster using kops.

– Arun

One of the easiest tools for creating, running, and managing Kubernetes clusters on AWS is kops. You can easily create clusters for released versions of Kubernetes, as explained in Arun’s earlier post, Manage Kubernetes Clusters on AWS Using Kops. In this post, we’ll show you how to use kops to create a cluster running a development build of Kubernetes. kops documents the versions of Kubernetes it officially supports, and, at the time of writing, Kubernetes 1.8.4 is the latest supported version. The examples in this post all use versions of Kubernetes not supported by kops, so they are recommended only for development, prototyping, and testing.

Before trying these examples yourself, you’ll need kops, kubectl, gsutil, and the AWS command line interface.

Background – release binaries

When creating a cluster with kops, you have the option of specifying a release of Kubernetes that kops supports, or providing kops an HTTPS URL where the Kubernetes binaries are located.

The Kubernetes release team places release binaries in the Google Cloud Storage bucket kubernetes-releases at the following location: https://storage.googleapis.com/kubernetes-release/release/

You can use the gsutil command line tool to verify that all the required binaries are present at a particular path. (Replace the “https://storage.googleapis.com/” prefix of the URL with “gs://”.)

gsutil ls gs://kubernetes-release/release/v1.9.0/
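
You can also browse the same layout over HTTPS; for example, this fetches the Linux kubectl binary from that release:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl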

Each pull request to Kubernetes is tested with kops and shows up in the GitHub status checks as the “pull-kubernetes-e2e-kops-aws” test.

When you click on “Details” for the kops test, you’ll see the test output and a version string for the specific commit of the given pull request.

The Kubernetes binaries used for this test are also stored in a Google Cloud Storage bucket, named “kubernetes-release-pull”. If you search the raw build log of the kops test for that version string, you’ll see that the release binaries are uploaded to the location:

gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/<version>

So for pull request #56759, you would set the Kubernetes version in kops to the URL https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/<version>, substituting the version string found in the build log.
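
Putting the pieces together, a short shell snippet like this produces the URL to hand to kops (the version string below is a made-up placeholder; copy the real one from your PR’s build log):

# Hypothetical version string; substitute the one from the PR's build log.
VERSION="v1.10.0-alpha.0.1234+abcdef0123456789"
export KUBERNETES_VERSION="https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/${VERSION}/"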

Read the kops docs if you’re interested in using a custom Kubernetes build and uploading your binaries to S3.

Create the cluster

Before you can create a cluster, you will need the proper AWS permissions. (If you are using a cross-account role, set the environment variable AWS_SDK_LOAD_CONFIG=1.) Read through the kops documentation for guidance on creating an AWS user with the required IAM permissions; the short list of managed policies is:

AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
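
As a minimal sketch, you could create a dedicated IAM user and attach those managed policies with the AWS CLI (the user name kops below is just an example, not something kops requires):

aws iam create-user --user-name kops
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess \
              AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
 aws iam attach-user-policy --user-name kops \
  --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
aws iam create-access-key --user-name kops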

Once you have a user with the correct permissions, you’re ready to create your cluster. You’ll need a name for the cluster, an S3 bucket for kops to store the state, and the AWS region and availability zones you want to create the cluster in. The example below uses us-east-1 and Kubernetes 1.9.0.

# The .k8s.local suffix tells kops to use gossip-based discovery instead of Route 53 DNS.
export CLUSTER_NAME="example.cluster.k8s.local"
# A binary location URL rather than a plain version number.
export KUBERNETES_VERSION="https://storage.googleapis.com/kubernetes-release/release/v1.9.0/"
export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-us-east-1}
# Comma-separated list of every availability zone in the region.
export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text | awk -v OFS="," '$1=$1')"
# Random bucket name unless one is already set; /dev/urandom will not block on low entropy.
export S3_BUCKET=${S3_BUCKET:-kops-state-store-$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 32)}
export KOPS_STATE_STORE=s3://$S3_BUCKET

If you are using a new S3 bucket for kops, you’ll need to create it first. Note that us-east-1 is the one region that rejects an explicit LocationConstraint, so the flag is skipped there:

if [ "$AWS_DEFAULT_REGION" = "us-east-1" ]; then
 aws s3api create-bucket --bucket $S3_BUCKET
else
 aws s3api create-bucket --bucket $S3_BUCKET \
  --create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION
fi
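
The kops docs also recommend enabling versioning on the state bucket, so that earlier cluster state can be recovered if something goes wrong:

aws s3api put-bucket-versioning \
 --bucket $S3_BUCKET \
 --versioning-configuration Status=Enabled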

At this point, you are ready to create your cluster! There are a lot of options when creating a cluster, so check the help output for “kops create cluster -h” to see the available options.

kops create cluster \
 --name $CLUSTER_NAME \
 --zones $AWS_AVAILABILITY_ZONES \
 --kubernetes-version $KUBERNETES_VERSION \
 --yes
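
The --yes flag tells kops to apply the changes immediately. If you omit it, kops only previews the resources it would create, and you can apply them afterwards with:

kops update cluster --name $CLUSTER_NAME --yes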

After running kops create cluster, your kubectl context will be updated to point to your new cluster. It will take a few minutes to bring all the resources online; you can check the status of your cluster using the command kops validate cluster.

kops validate cluster
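
Once validation passes, you can confirm that the nodes are running the build you requested:

kubectl get nodes -o wide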

You now have a Kubernetes cluster on AWS!

Get Involved

The kops project has grown a lot in the last year, and it has been great to see the community jump in to contribute. If you are looking for help, join the Kubernetes Slack and ask questions in the #kops channel. If you want to contribute, read the kops documentation on contributing.

Here are all the above steps compiled in a single script:
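
#!/usr/bin/env bash
# All of the steps above, compiled in order.
set -e

export CLUSTER_NAME="example.cluster.k8s.local"
export KUBERNETES_VERSION="https://storage.googleapis.com/kubernetes-release/release/v1.9.0/"
export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-us-east-1}
export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text | awk -v OFS="," '$1=$1')"
export S3_BUCKET=${S3_BUCKET:-kops-state-store-$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 32)}
export KOPS_STATE_STORE=s3://$S3_BUCKET

# us-east-1 rejects an explicit LocationConstraint, so omit the flag there.
if [ "$AWS_DEFAULT_REGION" = "us-east-1" ]; then
 aws s3api create-bucket --bucket $S3_BUCKET
else
 aws s3api create-bucket --bucket $S3_BUCKET \
  --create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION
fi
aws s3api put-bucket-versioning --bucket $S3_BUCKET \
 --versioning-configuration Status=Enabled

kops create cluster \
 --name $CLUSTER_NAME \
 --zones $AWS_AVAILABILITY_ZONES \
 --kubernetes-version $KUBERNETES_VERSION \
 --yes

kops validate cluster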

At the time of writing, Micah Hausler was a Senior Site Reliability Engineer at Skuid, where he led the DevOps team and was a contributor to Kubernetes. You can (still) find him at @micahhausler on Twitter, GitHub, and Kubernetes Slack.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.
