Using AWS CodePipeline to Perform Multi-Region Deployments
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. Now that AWS CodePipeline supports cross-region actions, you can deploy your application across multiple regions from a single pipeline. Deploying your application to multiple regions can improve both latency and availability for your application.
Other AWS services
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Lambda, and your on-premises servers.
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment.
Amazon S3 has a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.
Key AWS CodePipeline concepts
Stage: AWS CodePipeline breaks up your release workflow into a series of stages. For example, there might be a build stage, where code is built and tests are run. There are also deployment stages, where code updates are deployed to runtime environments. You can label each stage in the release process for better tracking, control, and reporting (for example “Source,” “Build,” and “Staging”).
Action: Every pipeline stage contains at least one action, which is a task performed on the pipeline's artifacts. Actions within a stage run in sequence or in parallel, as determined by the stage's configuration.
For more information, see How AWS CodePipeline Works in the AWS CodePipeline User Guide.
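To make the stage and action concepts concrete, here is a minimal, hypothetical fragment of a pipeline definition in CloudFormation (all names are illustrative; the templates used in this walkthrough may differ). Actions with different RunOrder values run in sequence; actions that share a RunOrder value run in parallel:

```yaml
# Illustrative fragment only -- bucket, application, and group names are placeholders.
Stages:
  - Name: Source                    # a labeled stage in the release workflow
    Actions:
      - Name: FetchFromS3
        ActionTypeId: {Category: Source, Owner: AWS, Provider: S3, Version: "1"}
        Configuration: {S3Bucket: my-source-bucket, S3ObjectKey: app.zip}
        OutputArtifacts: [{Name: App}]
  - Name: Deploy
    Actions:
      - Name: DeployPrimary
        RunOrder: 1                 # runs first
        ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CodeDeploy, Version: "1"}
        Configuration: {ApplicationName: MyApp, DeploymentGroupName: MyPrimaryGroup}
        InputArtifacts: [{Name: App}]
      - Name: DeploySecondary
        RunOrder: 2                 # runs after DeployPrimary completes
        ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CodeDeploy, Version: "1"}
        Configuration: {ApplicationName: MyApp, DeploymentGroupName: MySecondaryGroup}
        InputArtifacts: [{Name: App}]
```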
In this blog post, you will learn how to:
- Create a continuous delivery pipeline in AWS CodePipeline, provisioned through AWS CloudFormation.
- Set up pipeline actions to execute in an AWS Region that is different from the region where the pipeline was created.
- Deploy a sample application to multiple regions using an AWS CodeDeploy action in the pipeline.
High-level deployment architecture
The deployment process can be summarized as follows:
- The latest application code is uploaded into an Amazon S3 bucket. Any new revision uploaded to the bucket triggers a pipeline execution.
- For each AWS CodeDeploy action in the pipeline, the application code from the S3 bucket is replicated to the artifact store of the region that is configured for that action.
- Each AWS CodeDeploy action deploys the latest revision of the application from its artifact store to Amazon EC2 instances in the region.
In this blog post, we set the primary region to us-west-2. The secondary regions are us-east-1 and ap-southeast-2.
Note: The resources created by the AWS CloudFormation template might result in charges to your account. The cost depends on how long you keep the AWS CloudFormation stack and its resources running.
We will walk you through the following steps for creating a multi-region deployment pipeline:
- Set up resources to which you will deploy your application using AWS CodeDeploy.
- Set up artifact stores for AWS CodePipeline in Amazon S3.
- Provision AWS CodePipeline with AWS CloudFormation.
- View deployments performed by the pipeline in the AWS Management Console.
- Validate the deployments.
Getting started
Step 1. Set up the resources to which you will deploy your application. As part of this step, you install the AWS CodeDeploy agent on the instances. The AWS CodeDeploy agent is a software package that enables an instance to be used in AWS CodeDeploy deployments. There are two tasks in this step:
- Create Amazon EC2 instances and install the AWS CodeDeploy agent.
- Create an application in AWS CodeDeploy.
The AWS CloudFormation template automates both tasks. We will launch the AWS CloudFormation template in each of the three regions (us-west-2, us-east-1, and ap-southeast-2).
Note: Before you begin, you must have an instance key pair to enable SSH access to the Amazon EC2 instance for that region. For more information, see Amazon EC2 Key Pairs.
To create the EC2 instances and an AWS CodeDeploy application, in the AWS CloudFormation console, launch the AWS CloudFormation template in each of the three regions (us-west-2, us-east-1, and ap-southeast-2). For information about how to launch AWS CloudFormation from the AWS Management Console, see Using the AWS CloudFormation Console.
On the Specify Details page, do the following:
- In Stack name, enter a name for the stack (for example, USEast1CodeDeploy).
- In ApplicationName, enter a name for the application (for example, CrossRegionActionSupport).
- In DeploymentGroupName, enter a name for the deployment group (for example, CrossRegionActionSupportDeploymentGroup).
- In EC2KeyPairName, select an existing key pair to use with the Amazon EC2 instances in that region. For more information, see Amazon EC2 Key Pairs.
- In EC2TagKeyName, enter Name.
- In EC2TagValue, enter NVirginiaCrossRegionInstance.
- Choose Next.
It might take several minutes for AWS CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console. When the stack has been created, you will see a CREATE_COMPLETE message in the Status column on the Overview tab.
You should see new EC2 instances running in each of the three regions (us-west-2, us-east-1, and ap-southeast-2).
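A template that automates these two tasks typically includes an EC2 instance whose user data installs the AWS CodeDeploy agent at boot. A minimal sketch of such a resource follows (the AMI ID is a placeholder and the logical names are illustrative; the actual template linked in this post may differ):

```yaml
# Sketch only -- ImageId must be a valid Amazon Linux AMI for the target region.
CodeDeployInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-0123456789abcdef0    # placeholder AMI ID
    InstanceType: t2.micro
    KeyName: !Ref EC2KeyPairName
    Tags:
      - Key: !Ref EC2TagKeyName       # CodeDeploy targets instances by this tag
        Value: !Ref EC2TagValue
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum update -y
        yum install -y ruby wget
        cd /home/ec2-user
        # Download the CodeDeploy agent installer from the regional resource kit bucket
        wget https://aws-codedeploy-${AWS::Region}.s3.amazonaws.com/latest/install
        chmod +x ./install
        ./install auto
```

Tagging matters here: the CodeDeploy deployment group selects its target instances by the EC2 tag key and value supplied as parameters.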
Step 2. Set up artifact stores for AWS CodePipeline. AWS CodePipeline uses Amazon S3 buckets as an artifact store. These S3 buckets are regional and versioned. All of the artifacts are copied to the same region in which the pipeline action is configured to execute.
To create the artifact stores by using the AWS CloudFormation console, launch this AWS CloudFormation template in each region (us-west-2, us-east-1, and ap-southeast-2).
On the Specify Details page, do the following:
- In Stack name, enter a name for the stack (for example, artifactstore).
- In ArtifactStoreBucketNamePrefix, enter a prefix string of up to 30 characters. Use only lowercase letters, numbers, periods, and hyphens (for example, useast1).
- Choose Next.
It might take several minutes for AWS CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console. When the stack has been created, you will see a CREATE_COMPLETE message in the Status column on the Overview tab.
Now, copy the Amazon S3 bucket names created in each region. You need the bucket names in later steps.
Note: All Amazon S3 buckets, including the bucket used by the Source action in the pipeline, must be version-enabled to track versions that are being uploaded and processed by AWS CodePipeline.
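The versioning requirement can be expressed directly in the template. A bucket resource along these lines would satisfy it (the logical and parameter names are illustrative, not taken from the actual template):

```yaml
# Sketch of a version-enabled regional artifact store bucket.
ArtifactStoreBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Sub "${ArtifactStoreBucketNamePrefix}-artifact-store-${AWS::Region}"
    VersioningConfiguration:
      Status: Enabled   # required so AWS CodePipeline can track object versions
```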
Step 3. Provision AWS CodePipeline with AWS CloudFormation. We will create a new pipeline in AWS CodePipeline with an Amazon S3 bucket for its Source action and AWS CodeDeploy for its Deploy action. Using AWS CloudFormation, we will provision a new Amazon S3 bucket for the Source action and then provision a new pipeline in AWS CodePipeline.
To provision a new S3 bucket in the AWS CloudFormation console, launch this AWS CloudFormation template in our primary region, us-west-2.
On the Specify Details page, do the following:
- In Stack name, enter a name for the stack (for example, code-pipeline-us-west2-source-bucket).
- In SourceCodeBucketNamePrefix, enter a prefix string of up to 30 characters. Use only lowercase letters, numbers, periods, and hyphens (for example, uswest2).
- Choose Next.
It might take several minutes for AWS CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console. When the stack has been created, you will see a CREATE_COMPLETE message in the Status column on the Overview tab.
Download the sample app from s3-app-linux.zip and upload it to the source code bucket.
To provision a new pipeline in AWS CodePipeline
In the AWS CloudFormation console, launch this AWS CloudFormation template in our primary region, us-west-2.
On the Specify Details page, do the following:
- In Stack name, enter a name for the stack (for example, CrossRegionCodePipeline).
- In ApplicationName, enter a name for the application (for example, CrossRegionActionSupport).
- In APSouthEast2ArtifactStoreBucket, enter cross-region-artifact-store-bucket-ap-southeast-2 or enter the name you provided in step 2 for the S3 bucket created in ap-southeast-2.
- In DeploymentGroupName, enter a name for the deployment group (for example, CrossRegionActionSupportDeploymentGroup).
- In S3SourceBucketName, enter code-pipeline-us-west-2-source-bucket or enter the name you provided in step 3.
- In USEast1ArtifactStoreBucket, enter cross-region-artifact-store-bucket-us-east-1 or enter the name you provided in step 2 for the S3 bucket created in us-east-1.
- In USWest2ArtifactStoreBucket, enter cross-region-artifact-store-bucket-us-west-2 or enter the name you provided in step 2 for the S3 bucket created in us-west-2.
- In S3SourceBucketKey, enter s3-app-linux.zip.
- Choose Next.
It might take several minutes for AWS CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console. When the stack has been created, you will see a CREATE_COMPLETE message in the Status column on the Overview tab.
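In the pipeline template, cross-region support rests on two properties: an ArtifactStores list that names one artifact store per region, and a Region property on each deploy action. A sketch of the relevant fragment, assuming the parameter names from the steps above (action names and the role resource are illustrative):

```yaml
# Sketch only -- PipelineRole is assumed to be an IAM role defined elsewhere in the template.
CrossRegionPipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: !GetAtt PipelineRole.Arn
    ArtifactStores:                  # one artifact store per region used by the pipeline
      - Region: us-west-2
        ArtifactStore: {Type: S3, Location: !Ref USWest2ArtifactStoreBucket}
      - Region: us-east-1
        ArtifactStore: {Type: S3, Location: !Ref USEast1ArtifactStoreBucket}
      - Region: ap-southeast-2
        ArtifactStore: {Type: S3, Location: !Ref APSouthEast2ArtifactStoreBucket}
    Stages:
      - Name: Source
        Actions:
          - Name: S3Source
            ActionTypeId: {Category: Source, Owner: AWS, Provider: S3, Version: "1"}
            Configuration:
              S3Bucket: !Ref S3SourceBucketName
              S3ObjectKey: !Ref S3SourceBucketKey
            OutputArtifacts: [{Name: App}]
      - Name: Deploy
        Actions:
          - Name: DeployUsEast1
            Region: us-east-1        # executes in a region other than the pipeline's own
            ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CodeDeploy, Version: "1"}
            Configuration:
              ApplicationName: !Ref ApplicationName
              DeploymentGroupName: !Ref DeploymentGroupName
            InputArtifacts: [{Name: App}]
```

When a deploy action carries a Region, CodePipeline copies the input artifact to that region's artifact store before the action runs, which is the replication described in the high-level architecture.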
Step 4. View deployments performed by our pipeline in the AWS Management Console.
- In the Amazon S3 console, navigate to the source bucket and copy the version ID (for example, in the following screenshot, kTtNtrHIhMt4.cX6YZHZ5lawDVy3R4Aj).
- Go to the AWS CodePipeline console and open the pipeline we just executed. Notice that the version ID is the same across the source S3 bucket, the source action, and all three CodeDeploy actions across three regions (us-west-2, us-east-1, and ap-southeast-2) in the pipeline.
- We can see that the deployment actions ran successfully across all three regions.
Step 5. To validate the deployments, in a browser, type the public IP address of the Amazon EC2 instances provisioned through AWS CodeDeploy in step 1. You should see a deployment page like the one shown here.
Conclusion
You have now created a multi-region deployment pipeline in AWS CodePipeline without having to worry about the mechanics of copying code across regions. AWS CodePipeline handles that copying behind the scenes through the artifact stores in each region. You can now upload new source code changes to the Amazon S3 source bucket in the primary region, and the changes will be deployed automatically and in parallel to the other regions by the AWS CodeDeploy actions configured to execute in each region. Cross-region actions are powerful and are not limited to deploy actions; they can also be used with build and test actions.
Wrapping up
After you’ve finished exploring your pipeline and its associated resources, you can do the following:
- Extend the setup. Add more stages and actions to your pipeline in AWS CodePipeline. For complete AWS CloudFormation sample code, see the GitHub repository.
- Delete the stack in AWS CloudFormation. This deletes the pipeline, its resources, and the stack itself. This is the option to choose if you no longer want to use the pipeline or any of its resources. Cleaning up resources you’re no longer using is important because you don’t want to continue to be charged.
To delete the CloudFormation stack
- Delete the Amazon S3 buckets used as the artifact stores in AWS CodePipeline in the source and destination regions. Although the buckets were created as part of the AWS CloudFormation stack, Amazon S3 does not allow AWS CloudFormation to delete buckets that contain objects. To delete the buckets, open the Amazon S3 console, choose the buckets you created in this setup, and then delete them. For more information, see Delete or Empty a Bucket.
- Follow the steps in the AWS CloudFormation User Guide to delete a stack.
If you have questions about this blog post, start a new thread on the AWS CodePipeline forum or contact AWS Support.