Eric Evans · Dec 6, 2019 · 2 min read

Create a Serverless AWS EKS Cluster using Pulumi

(Photo by Ihor Dvoretskyi, https://unsplash.com/s/photos/kubernetes)

This week at AWS re:Invent 2019, general availability of Fargate support for the Elastic Kubernetes Service (EKS) was announced. Soon afterwards, Pulumi announced compatibility as well. In this post we will create a serverless managed Kubernetes cluster from scratch in AWS in about 30 minutes.

If you haven’t set up Pulumi yet, I would recommend following the instructions here. Once you are done, follow the steps below!

Setting up the VPC

To begin with, let’s set up a VPC. For simple proofs of concept (such as the one outlined in this article) or testing environments, a single NAT Gateway is fine, but it is not recommended for production. For production workloads, it’s recommended to run a NAT Gateway in each availability zone.

// Set up VPC with one NAT Gateway (not recommended for production)
const vpc = new awsx.ec2.Vpc("custom", {
    numberOfNatGateways: 1
});
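For production, the single NAT Gateway above would be replaced with one per availability zone. A minimal sketch of that variant, using the same `awsx.ec2.Vpc` options (the zone count of 3 is an assumption here; match it to your target region):

```typescript
import * as awsx from "@pulumi/awsx";

// Production-leaning VPC: one NAT Gateway per availability zone
// (3 AZs assumed; adjust to the region you deploy into).
const prodVpc = new awsx.ec2.Vpc("custom-prod", {
    numberOfAvailabilityZones: 3,
    numberOfNatGateways: 3,
});
```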

Setting up the EKS Fargate Cluster

You may need to install the @pulumi/eks dependency. You can do this by running the following command:

npm install @pulumi/eks

Now let’s declare our Fargate-enabled EKS cluster and set it up in the VPC we created. Here’s the entire code so far:

import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

// Set up VPC with one NAT Gateway (not recommended for production)
const vpc = new awsx.ec2.Vpc("custom", {
    numberOfNatGateways: 1
});

// Set up a Fargate-enabled EKS cluster
const cluster = new eks.Cluster("custom-cluster", {
    fargate: true,
    deployDashboard: false, // dashboard is deprecated
    vpcId: vpc.id,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
});

// Export the cluster's kubeconfig.
export const kubeconfig = cluster.kubeconfig;

Create the infrastructure by executing the command pulumi up. Don’t be discouraged if this takes a while. For me, the process of infrastructure being created took over 20 minutes, so feel free to make a coffee and come back.

Connecting to the Fargate Cluster

Now that we have our infrastructure up, we can begin performing operations on it. To do this, we need to set up our Kubernetes configuration. This is relatively easy — using the stack output, first export the kubeconfig as shown below:

pulumi stack output kubeconfig > kubeconfig.json

Next, export the KUBECONFIG environment variable so kubectl can locate the cluster:

export KUBECONFIG=./kubeconfig.json

Finally, try a test kubectl command:

kubectl get nodes

If it is successful, you should see something like this:

NAME                                   STATUS   ROLES    AGE   VERSION
fargate-ip-10-0-238-56.ec2.internal    Ready    <none>   14m   v1.14.8-eks
fargate-ip-10-0-243-200.ec2.internal   Ready    <none>   14m   v1.14.8-eks

Congratulations, you now have a Fargate-enabled AWS EKS cluster deployed in your VPC!
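From here, workloads can be deployed to the cluster directly from the same Pulumi program. A minimal sketch (the nginx Deployment below is illustrative, not part of the original walkthrough), using the Kubernetes provider that the `eks.Cluster` resource exposes:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Deploy a small nginx workload onto the Fargate cluster. Pods in the
// "default" namespace match the Fargate profile created by `fargate: true`,
// so they are scheduled onto Fargate automatically.
const appLabels = { app: "nginx" };
const nginx = new k8s.apps.v1.Deployment("nginx", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels },
            spec: { containers: [{ name: "nginx", image: "nginx:1.17" }] },
        },
    },
}, { provider: cluster.provider }); // target the EKS cluster created above
```

Running `pulumi up` again would roll this Deployment out to the cluster alongside the infrastructure.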

Using infrastructure as code, creating a scalable, serverless container orchestration system in the cloud can be done with ease. These technologies help deploy cloud workloads with minimal management overhead and remarkably little setup time.


The information presented in this article is accurate as of 7/19/23. Follow the ScaleSec blog for new articles and updates.