
Running FaaS on a Kubernetes Cluster on AWS using Kubeless

Serverless computing allows you to build and run applications and services without provisioning, scaling, or managing any servers. FaaS (Functions as a Service) is the runtime that enables serverless computing by firing off bits of code (functions) as they are needed, freeing the developer from managing infrastructure and enabling the developer to simply write business logic code. With the rise of Kubernetes, several open source FaaS platforms have been created. This two-part post will introduce one such FaaS, Kubeless, and how to get it up and running on a Kubernetes cluster on AWS.


Kubeless is an open source Functions as a Service (FaaS) solution built on top of Kubernetes. Inspired by AWS Lambda, Kubeless aims to bring function-based packaging and the functional programming paradigm to Kubernetes users. Its technical strength lies in being a Kubernetes extension built on the Custom Resource Definition (CRD) API object: Kubeless uses Kubernetes primitives to build a Lambda-like system that lets developers deploy small units of code as functions without worrying about the underlying infrastructure. On a Kubernetes cluster, deploying a function with Kubeless is as straightforward as creating any other Kubernetes resource. You can use Kubeless to expose HTTP webhooks, but you can also deploy functions that are triggered by events happening in your cloud, such as a file being uploaded to a storage bucket or records arriving on a data stream.
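Under the hood, every function is stored as a Function custom resource that the Kubeless controller turns into a Deployment and a Service. As a rough sketch of what the controller works with, the heredoc below writes an illustrative manifest (the file name hello-function.yaml is just an example, the field names follow the kubeless.io/v1beta1 API used by recent releases, and you do not need to create this file to follow the walkthrough; once Kubeless is installed you can inspect the real schema with kubectl get functions -o yaml):

cat << EOF > hello-function.yaml
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello
spec:
  runtime: python3.6
  handler: hello.handler
  function: |
    def handler(event, context):
        return "hello"
EOF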

Since Kubeless is built on top of upstream Kubernetes, it can also be deployed on Amazon EKS, and the functions you deploy can be triggered by a variety of AWS event sources.

In this post we'll show you how to run Kubeless on a Kubernetes cluster on AWS created with kops, and how to deploy a trivial function. In a follow-up post, we will show you how to trigger your Kubeless functions based on events published to Amazon Kinesis streams.

Create a cluster on AWS with kops

kops is one of the provisioning tools available to create a Kubernetes cluster, with very advanced AWS support. Detailed documentation is available. Here we will only show the main steps, so do make sure to check out the full walkthrough, especially if this is your first time using kops. (For more information, read Manage Kubernetes Clusters on AWS Using Kops.)

Get the prerequisites in place and set up your environment: create a kops IAM user with the required permissions, create an S3 bucket that will store your cluster configuration, set two environment variables, and make sure you know which availability zone you are going to use.
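If you do not have a kops IAM user yet, the kops documentation boils down to roughly the following (a sketch; the group and user names are just examples, and the attached managed policies are the ones listed in the kops AWS guide):

aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops

aws iam add-user-to-group --user-name kops --group-name kops

aws iam create-access-key --user-name kops

With the IAM user in place, create the state-store bucket, set the two environment variables, and check your availability zones: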

aws s3api create-bucket --bucket kops-kubeless --region us-east-1

export NAME=kubeless.k8s.local

export KOPS_STATE_STORE=s3://kops-kubeless

aws ec2 describe-availability-zones --region eu-west-1

You are then ready to create your cluster:

kops create cluster --zones eu-west-1a ${NAME}

kops update cluster ${NAME} --yes
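Provisioning takes a few minutes. Before moving on, it is worth confirming that the cluster is healthy and that the nodes have registered (the exact node count depends on the defaults kops chose for you):

kops validate cluster

kubectl get nodes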

Remember to delete it once you are all done:

kops delete cluster --name ${NAME} --yes

Install Helm to Deploy an Ingress Controller

Using an Ingress controller allows us to expose functions to the public internet.

Get the Helm client from its GitHub release page, then create a Tiller service account with the proper RBAC privileges:

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

helm init --service-account tiller --upgrade
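Before installing any chart, you can check that Tiller came up and is reachable (a quick sanity check):

helm version

kubectl get pods --namespace kube-system | grep tiller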

Deploy an nginx Ingress controller with a Load Balancer service:

helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true

Once your Ingress controller is running, you can retrieve the public DNS hostname of its load balancer:

kubectl get svc nginx-ingress-nginx-ingress-controller -o json | jq -r .status.loadBalancer.ingress[0].hostname

Deploy Kubeless

To deploy Kubeless on the cluster, you need to create a dedicated namespace and then post a few manifests to the Kubernetes API server. Those manifests create a custom resource definition that declares a new Function object kind, and launch the Kubeless controller.

The two commands below create the namespace and launch the latest release of Kubeless:

kubectl create ns kubeless

kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.2/kubeless-v1.0.0-alpha.2.yaml
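You can verify that the controller pod is running and that the new custom resource definitions were registered:

kubectl get pods -n kubeless

kubectl get crd | grep kubeless.io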

Finally, to be able to use Kubeless from the command line, you will need to install its CLI. You can get it from the GitHub release page or, if you are a macOS user, install it directly with Homebrew:

$ brew install kubeless

Deploy a function

Let’s create a simple echo function in Python:

cat << EOF > echo.py
def handler(event, context):
    print(event['data'])
    return event['data']
EOF
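Because the handler is plain Python, you can sanity-check it locally before deploying (assuming python3 is available on your machine; the fake event below only carries the data key the handler reads):

python3 -c "import echo; print(echo.handler({'data': 'hello'}, None))"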

Deploy the function:

kubeless function deploy foo --runtime python3.6 --from-file echo.py --handler echo.handler

Soon the function will be ready and a corresponding pod will be running. Note that you can configure autoscaling so that your function scales based on requests or load; a sketch follows the listing below.

$ kubeless function ls

NAME    NAMESPACE    HANDLER        RUNTIME      DEPENDENCIES    STATUS

foo     default      echo.handler    python3.6                   1/1 READY

$ kubectl get pods|grep foo

foo-697454fcd4-n7g5g     1/1 Running 0      1m
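As an example of autoscaling, the Kubeless CLI can attach a Horizontal Pod Autoscaler to the function. The command below is only a sketch; flag names and defaults can vary between releases, so check kubeless autoscale create --help first:

kubeless autoscale create foo --metric cpu --min 1 --max 3 --value 70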

Create a route to the function. Note that the function will be exposed publicly (adding TLS and authentication is possible but not covered here; please see the full documentation).

kubeless trigger http create foo --function-name foo --gateway nginx

Once the function is up, you can call it using the Host header defined in the Ingress object and the public endpoint of the Ingress controller.

$ export FOO_HOST=$(kubectl get ingress foo -o json |jq -r .spec.rules[0].host)

$ export FOO_INGRESS=$(kubectl get svc nginx-ingress-nginx-ingress-controller -o json | jq -r .status.loadBalancer.ingress[0].hostname)

$ curl -d '{"kubeless": "on AWS"}' -H "Host: ${FOO_HOST}" -H "Content-Type:application/json" ${FOO_INGRESS}

{"kubeless": "on AWS"}

Conclusion

Congratulations! If you made it this far, you have a running Kubeless installation in a Kubernetes cluster on AWS. In a follow-up post, we will show you the really exciting part: how to trigger your functions based on cloud events, focusing on AWS Kinesis.

Sebastien Goasguen is a twenty-year open source veteran. A member of the Apache Software Foundation, he worked on Apache CloudStack for several years before diving into the container world. He is the founder of Skippbox, a Kubernetes startup acquired by Bitnami. He is the creator of Kubeless and its current tech lead and product manager. An avid blogger, he enjoys spreading the word about new cutting-edge technologies. Sebastien is the author of the O'Reilly Docker Cookbook and co-author of the Kubernetes Cookbook.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.
