Distributing your AWS OpsWorks for Chef Automate infrastructure
Organizations that manage many nodes across geographically dispersed AWS Regions may wish to reduce latency and load between nodes and servers in their AWS OpsWorks for Chef Automate implementation. Distributing nodes across multiple servers, however, introduces the challenge of ensuring that cookbooks and other configurations are deployed consistently across two or more Chef servers residing in one or more Regions. To accomplish this, customers can use several supplemental AWS services to drive the process of distributing cookbooks to one or more Chef Automate instances.
Overview
One situation large-scale Chef users might run into is that the number of nodes managed by their OpsWorks for Chef Automate (OWCA) server exceeds its capacity. Customers can migrate to a larger Amazon EC2 instance type by restoring their most recent backup, but even the largest instance type has its limits. Additionally, globally-distributed environments may experience latency in communications between nodes and a distant server.
Both problems can be overcome by distributing management of nodes across multiple OpsWorks for Chef Automate instances. However, this approach introduces the challenge of synchronizing cookbooks across multiple OWCA instances. This can be accomplished with AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and optionally AWS Lambda. In this blog post, we show how customers can scale their Chef-managed infrastructure across two Regions.
By using CodePipeline and CodeCommit, a cookbook developer simply pushes changes to a central repository. CodePipeline triggers on this commit and sends the updated repository contents to CodeBuild for processing. With simple scripting, CodeBuild pulls dependencies from Chef Supermarket and uploads the needed cookbooks to each Chef Automate instance in the account (or accounts). By using the chef-client cookbook, nodes check in automatically on a preconfigured schedule. In this blog post, an Invoke stage in the pipeline uses AWS Lambda and AWS Systems Manager to run chef-client on the nodes in the environment. This step is optional, but useful for testing scenarios where it is helpful to deploy changes more rapidly.
Setup
To set this up, we use an AWS CloudFormation template. We will walk through each resource to be added to the template. Before doing so, at least one OpsWorks for Chef Automate instance must be created. The starter kit for each instance contains a private key in the [Starter-Kit]/.chef/ directory. Though this key can be used for authentication from CodeBuild, we recommend that you create a separate user in the Chef Automate console and assign it a public/private key pair. Follow the instructions on Chef.io for reference. At minimum, this user account requires Committer permissions. The private key for this user can be saved in an Amazon S3 bucket in your account, which CodeBuild accesses to authenticate with the Chef Automate server during the cookbook upload process. It's important to ensure that access to this bucket is tightly controlled with appropriate bucket policies. An alternative to consider is keeping the keys encrypted with AWS Key Management Service (AWS KMS), for example as SecureString parameters in AWS Systems Manager Parameter Store.
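As a minimal Boto3 sketch (the bucket name, server name, and local file path below are placeholders, not values from the template), the user's private key could be placed in the key bucket as follows. Storing the object under a per-server prefix matches the layout the buildspec expects later in this post (s3://[KEY_BUCKET]/[CHEF_SERVER]/private.pem), and requesting server-side encryption keeps the key encrypted at rest.
import boto3

s3 = boto3.client('s3')

# Hypothetical names: replace with your key bucket and Chef Automate server name.
KEY_BUCKET = 'my-owca-key-bucket'
CHEF_SERVER = 'my-chef-server-1'

# Upload the key under a per-server prefix with server-side encryption enabled.
with open('private.pem', 'rb') as key_file:
    s3.put_object(
        Bucket=KEY_BUCKET,
        Key='{}/private.pem'.format(CHEF_SERVER),
        Body=key_file,
        ServerSideEncryption='aws:kms'
    )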
Components
Parameters
This template requires only one input parameter, KeyBucket, which corresponds to the Amazon S3 bucket that contains the private keys for each Chef Automate instance to be synchronized with this cookbook repository.
{
"Parameters": {
"KeyBucket": {
"Type": "String",
"Description": "Name of S3 bucket which contains the OWCA private keys.",
"AllowedPattern": "^[a-z0-9][a-z0-9-.]*$",
"MinLength": 3,
"MaxLength": 63,
"ConstraintDescription": "Please provide a valid S3 bucket name."
}
}
}
IAM Permissions
Two AWS Identity and Access Management (IAM) roles will be needed for the pipeline to function correctly.
The first IAM role, BuildRole, will be used by CodeBuild during build tasks, and will need the default permissions for CodeBuild containers. Additionally, the role will need access to the Amazon S3 bucket containing the private keys for authentication with each Chef Automate instance (referenced in the Parameters section of the template).
The second IAM role, FunctionRole, will be used by AWS Lambda to execute chef-client on each instance in the environment. In addition to the Amazon CloudWatch Logs permissions required by Lambda execution roles, this role will require the ability to send commands via AWS Systems Manager.
{
"Resources": {
"BuildRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Principal": {
"Service": [ "codebuild.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
} ]
},
"Path": "/",
"Policies": [ {
"PolicyName": "CodeBuildS3WithCWL",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
{ "Fn::GetAtt": [ "ArtifactBucket", "Arn" ] },
{
"Fn::Join": [ "", [
{ "Fn::GetAtt": [ "ArtifactBucket", "Arn" ] },
"/*"
] ]
},
{
"Fn::Join": [ "", [
"arn:aws:s3:::",
{ "Ref": "KeyBucket" }
] ]
},
{
"Fn::Join": [ "", [
"arn:aws:s3:::",
{ "Ref": "KeyBucket" },
"/*"
] ]
}
]
},
{
"Effect": "Allow",
"Resource": [
{
"Fn::Join": [ "", [
"arn:aws:logs:",
{ "Ref": "AWS::Region" },
":",
{ "Ref": "AWS::AccountId" },
":log-group:/aws/codebuild/*"
] ]
},
{
"Fn::Join": [ "", [
"arn:aws:logs:",
{ "Ref": "AWS::Region" },
":",
{ "Ref": "AWS::AccountId" },
":log-group:/aws/codebuild/*:*"
] ]
}
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
},
{
"Effect": "Allow",
"Resource": [ "arn:aws:s3:::codepipeline-*" ],
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion"
]
},
{
"Effect": "Allow",
"Action": [ "ssm:GetParameters" ],
"Resource": {
"Fn::Join": [ "", [
"arn:aws:ssm:",
{ "Ref": "AWS::Region" },
":",
{ "Ref": "AWS::AccountId" },
":parameter/CodeBuild/*"
] ]
}
}
]
}
} ]
}
},
"FunctionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Principal": {
"Service": [ "lambda.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
} ]
},
"Path": "/",
"Policies": [ {
"PolicyName": "LambdaBasicExecutionWithSSM",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm:SendCommand",
"ssm:GetCommandInvocation"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
{
"Fn::Join": [ "", [
"arn:aws:logs:",
{ "Ref": "AWS::Region" },
":",
{ "Ref": "AWS::AccountId" },
":log-group:/aws/lambda/*"
] ]
},
{
"Fn::Join": [ "", [
"arn:aws:logs:",
{ "Ref": "AWS::Region" },
":",
{ "Ref": "AWS::AccountId" },
":log-group:/aws/lambda/*:*"
] ]
}
]
},
{
"Effect": "Allow",
"Action": [
"codepipeline:PutJobSuccessResult",
"codepipeline:PutJobFailureResult"
],
"Resource": "*"
}
]
}
} ]
}
}
}
}
Amazon S3 bucket
For use in CodePipeline to store artifacts, the ArtifactBucket resource is created and referenced when creating the pipeline itself.
{
"Resources": {
"ArtifactBucket": {
"Type": "AWS::S3::Bucket"
}
}
}
CodeCommit repository
The CodeCommit repository being created will act as the Chef Repo to store cookbooks. The repository structure should adhere to the following format:
.
├── .chef
│   ├── knife.rb
│   ├── ca_certs
│   │   └── opsworks-cm-ca-2016-root.pem
│   └── [CHEF_SERVER_NAME]
│       └── config.yml
├── Berksfile
├── buildspec.yml
└── cookbooks/
{
"Resources": {
"Repo": {
"Type": "AWS::CodeCommit::Repository",
"Properties": {
"RepositoryDescription": "Cookbook repository for multiple region OWCA deployment.",
"RepositoryName": "owca-multi-region-repo"
}
}
}
}
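After the stack is created, the repository can be cloned (for example, over HTTPS using IAM Git credentials) and populated with the structure shown above. The bracketed placeholders, such as [CHEF_SERVER_NAME], should be replaced with the names of your Chef Automate instances.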
.chef
The .chef/ directory structure is slightly different from what is normally included in the OpsWorks for Chef Automate starter kit. The modifications that follow allow the knife utility to use environment variables to determine which Chef Automate instance to communicate with and which certificate file to use.
The knife.rb file below uses the yaml gem to parse configuration data from config.yml. The correct configuration file is determined based on the value of the CHEF_SERVER environment variable.
require 'yaml'
CHEF_SERVER = ENV['CHEF_SERVER'] || "NONE"
current_dir = File.dirname(__FILE__)
base_dir = File.join(File.dirname(File.expand_path(__FILE__)), '..')
env_config = YAML.load_file("#{current_dir}/#{CHEF_SERVER}/config.yml")
log_level :info
log_location STDOUT
node_name 'pivotal' # The Chef user to authenticate as; match the user created earlier
client_key "#{current_dir}/#{CHEF_SERVER}/private.pem"
syntax_check_cache_path File.join(base_dir, '.chef', 'syntax_check_cache')
cookbook_path [File.join(base_dir, 'cookbooks')]
chef_server_url env_config["server"]
ssl_ca_file File.join(base_dir, '.chef', 'ca_certs', 'opsworks-cm-ca-2016-root.pem')
trusted_certs_dir File.join(base_dir, '.chef', 'ca_certs')
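With this configuration in place, the target server is selected entirely through the environment: running knife or berks with CHEF_SERVER set to a server's directory name (as the buildspec commands later in this post do) loads that server's config.yml and private.pem.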
To determine the correct Chef Automate instance to communicate with, each instance should have its own directory underneath .chef/, corresponding to the name of the instance. Within this directory, the config.yml file must follow the format below.
server: 'https://[SERVER_FQDN]/organizations/default'
Currently, each Chef Automate instance directory does not contain the private key needed to communicate with the server. Including authentication information such as SSL/API keys or passwords in files committed to source control systems is not recommended. Instead, steps have been added to the build process to copy the needed keys from S3.
Berksfile
Within the Berksfile, any cookbooks contained in the cookbooks/ directory must be referenced in the format that follows. This will indicate to Berkshelf that the cookbook can be found within the local repository.
# Local Cookbooks
cookbook '[COOKBOOK_NAME]', path: 'cookbooks/[COOKBOOK_NAME]'
Cookbooks that are imported from Chef Supermarket can be included as normal.
source 'https://supermarket.chef.io'

# Supermarket Cookbooks
cookbook '[COOKBOOK_NAME]'
buildspec.yml
The buildspec.yml file provides instructions to CodeBuild for downloading dependencies and uploading the cookbooks to each Chef Automate instance. To use the berks command, ChefDK must be installed during the build process. In this example, the installation package is downloaded from Chef.io. Alternatively, ChefDK can be pre-installed in a custom build environment to reduce build times. As shown in the following example, these are the major steps of the build process:
- Download and install ChefDK.
- Copy the private keys used to authenticate to the two Chef Automate instances from S3.
- Run berks install to download and install any dependencies.
- Upload cookbooks to both Chef Automate instances.
version: 0.2
phases:
install:
commands:
- "wget https://packages.chef.io/files/stable/chefdk/1.5.0/ubuntu/14.04/chefdk_1.5.0-1_amd64.deb"
- "dpkg -i ./chefdk_1.5.0-1_amd64.deb"
build:
commands:
- "aws s3 cp s3://[KEY_BUCKET]/[CHEF_SERVER_1]/private.pem ./.chef/[CHEF_SERVER_1]/private.pem"
- "aws s3 cp s3://[KEY_BUCKET]/[CHEF_SERVER_2/private.pem ./.chef/[CHEF_SERVER_2]/private.pem"
- "CHEF_SERVER=[CHEF_SERVER_1] berks install"
- "CHEF_SERVER=[CHEF_SERVER_2] berks install"
- "CHEF_SERVER=[CHEF_SERVER_1] berks upload --no-ssl-verify"
- "CHEF_SERVER=[CHEF_SERVER_2] berks upload --no-ssl-verify"
post_build:
commands:
- "echo 'Complete'"
This build specification will be ingested by CodeBuild during the build stage of the pipeline.
{
"Resources": {
"BuildProject": {
"Type": "AWS::CodeBuild::Project",
"Properties": {
"Artifacts": { "Type": "CODEPIPELINE" },
"Description": "Installs cookbook dependencies from Berksfile and uploads to one or more OWCA servers.",
"Environment": {
"ComputeType": "BUILD_GENERAL1_SMALL",
"Image": "aws/codebuild/ubuntu-base:14.04",
"Type": "LINUX_CONTAINER"
},
"ServiceRole": {
"Fn::GetAtt": [ "BuildRole", "Arn" ]
},
"Source": { "Type": "CODEPIPELINE" }
}
}
}
}
Lambda function
The Lambda function, ChefClientFunction, uses AWS Systems Manager via Boto3 to call sudo chef-client on a list of instances provided in the function code. The example below uses two instance IDs passed in a list. However, Boto3 can be leveraged further to generate lists of nodes by tags or other relevant properties; one possible starting point is sketched below.
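As a sketch of that extension (the tag key and value here are hypothetical markers, not part of the template), the instance list could be built dynamically from the EC2 API:
import boto3

ec2 = boto3.client('ec2')

def get_chef_node_ids(tag_key='ChefManaged', tag_value='true'):
    # Return the IDs of running instances carrying the marker tag.
    paginator = ec2.get_paginator('describe_instances')
    pages = paginator.paginate(
        Filters=[
            {'Name': 'tag:' + tag_key, 'Values': [tag_value]},
            {'Name': 'instance-state-name', 'Values': ['running']}
        ]
    )
    instance_ids = []
    for page in pages:
        for reservation in page['Reservations']:
            for instance in reservation['Instances']:
                instance_ids.append(instance['InstanceId'])
    return instance_ids
The resulting list could replace the hardcoded InstanceIds in the function that follows. Alternatively, send_command accepts a Targets parameter, which allows tag-based targeting without enumerating instance IDs at all.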
{
"Resources": {
"ChefClientFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"ZipFile": {
"Fn::Join": [ "\n", [
"from __future__ import print_function",
"",
"import boto3",
"",
"ssm = boto3.client('ssm')",
"code_pipeline = boto3.client('codepipeline')",
"",
"def lambda_handler(event,context):",
" job_id = event['CodePipeline.job']['id']",
"",
" try:",
" response = ssm.send_command(",
" InstanceIds=[",
" '[INSTANCE_ID_1]',",
" '[INSTANCE_ID_2]'",
" ],",
" DocumentName='AWS-RunShellScript',",
" Comment='chef-client',",
" Parameters={",
" 'commands': ['sudo chef-client']",
" }",
" )",
"",
" command_id = response['Command']['CommandId']",
" print('SSM Command ID: ' + command_id)",
" print('Command Status: ' + response['Command']['Status'])",
"",
" # Include monitoring of job success/failure as needed.",
"",
" code_pipeline.put_job_success_result(jobId=job_id)",
"",
" return",
" except Exception as e:",
" print(e)",
"",
" code_pipeline.put_job_failure_result(jobId=job_id, failureDetails={'message': e.message, 'type': 'JobFailed'})",
"",
" raise e"
] ]
}
},
"Description": "Executes chef-client on specified nodes.",
"Handler": "index.lambda_handler",
"Role": {
"Fn::GetAtt": [ "FunctionRole", "Arn" ]
},
"Runtime": "python2.7",
"Timeout": "10"
}
}
}
}
Pipeline
Lastly, the pipeline ties each of these components together to deploy a cookbook to both Chef Automate servers simultaneously.
{
"Resources": {
"OWCAPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"ArtifactStore": {
"Location": { "Ref": "ArtifactBucket" },
"Type": "S3"
},
"RoleArn": {
"Fn::Join": [ "", [
"arn:aws:iam::",
{ "Ref": "AWS::AccountId" },
":role/AWS-CodePipeline-Service"
] ]
},
"Stages": [
{
"Actions": [
{
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Provider": "CodeCommit",
"Version": "1"
},
"Configuration": {
"BranchName": "master",
"RepositoryName": {
"Fn::GetAtt": [ "Repo", "Name" ]
}
},
"Name": "ChefRepo",
"OutputArtifacts": [
{ "Name": "Cookbooks" }
]
}
],
"Name": "Source"
},
{
"Actions": [
{
"ActionTypeId": {
"Category": "Build",
"Owner": "AWS",
"Provider": "CodeBuild",
"Version": "1"
},
"Configuration": {
"ProjectName": { "Ref": "BuildProject" }
},
"InputArtifacts": [
{ "Name": "Cookbooks" }
],
"Name": "Berkshelf"
}
],
"Name": "Build"
},
{
"Actions": [
{
"ActionTypeId": {
"Category": "Invoke",
"Owner": "AWS",
"Provider": "Lambda",
"Version": "1"
},
"Configuration": {
"FunctionName": { "Ref": "ChefClientFunction" }
},
"Name": "LambdaChefClient"
}
],
"Name": "Invoke"
}
]
}
}
}
}
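Once assembled, the template can be deployed like any other CloudFormation stack. The following is a minimal Boto3 sketch (the stack name, template file, and bucket name are placeholders); CAPABILITY_IAM is required because the template creates IAM roles.
import boto3

cloudformation = boto3.client('cloudformation')

# Hypothetical names: replace with your template file and key bucket.
with open('owca-pipeline.json') as template_file:
    cloudformation.create_stack(
        StackName='owca-multi-region-pipeline',
        TemplateBody=template_file.read(),
        Parameters=[
            {'ParameterKey': 'KeyBucket', 'ParameterValue': 'my-owca-key-bucket'}
        ],
        Capabilities=['CAPABILITY_IAM']
    )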
Summary
By distributing nodes across multiple OpsWorks for Chef Automate instances, the average workload per instance is reduced, allowing you to manage considerably more nodes as your infrastructure grows. Using several AWS services, it is a simple task to keep cookbooks consistent and easy to manage across servers, Regions, and even AWS accounts.
About the Author
Nick Alteen is a Lab Development Engineer at Amazon Web Services. In his role, he enjoys process automation and configuration management. He also supports customers with best practices and onboarding to OpsWorks services.