11 January 2024, by Dominik Táskai
Leveraging Dagger for AWS CDK deployments
What is Dagger?
Dagger consists of three main parts at the time of this post: the Dagger Engine, the Dagger SDKs and the recently announced Dagger Cloud. The heart of Dagger is the Dagger Engine, a CI/CD engine that lets you run your pipelines inside containers. One of Dagger's main selling points is that you can develop your pipelines as code in the programming language of your choice, as long as a Dagger SDK is available for that language. SDKs are currently available for Go, Node.js, Python and Elixir, with Rust and Java support in the works.
This architecture lets you easily and iteratively run and test your pipelines locally, and then integrate them into the CI tool of your choice (GitHub Actions, GitLab CI/CD, etc.) without any changes and without vendor lock-in.
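To make this concrete, here is a minimal sketch of what a pipeline written with the Go SDK looks like; it simply runs one command in a container and prints the output. The image tag and command are purely illustrative and not part of the project we build below.

package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to the Dagger Engine and stream its logs to stderr.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run a throwaway command inside a container and capture its stdout.
	out, err := client.Container().
		From("alpine:3.17.2").
		WithExec([]string{"echo", "hello from Dagger"}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}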
What is AWS CDK?
The AWS Cloud Development Kit (AWS CDK) is an open-source framework for defining your AWS infrastructure as code in modern programming languages. AWS CDK is currently available for TypeScript, Python, Java, C#, JavaScript and Go. The infrastructure code that you develop is deployed to AWS through CloudFormation, as your CDK code is "synthesized" into a CloudFormation template upon deployment.
As you can see, both the Dagger SDK and AWS CDK are available for many modern languages; in this post we are going to use Go for our Dagger code and TypeScript for our CDK code.
Our CDK Stack
Our CDK stack is going to be really simple: it only contains a VPC with an EC2 instance placed inside of it and a service role for the instance.
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as cdk from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class Ec2CdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // VPC with public subnets only and no NAT gateways
    const vpc = new ec2.Vpc(this, 'VPC', {
      natGateways: 0,
      subnetConfiguration: [{
        cidrMask: 24,
        name: "asterisk",
        subnetType: ec2.SubnetType.PUBLIC
      }]
    });

    // Service role assumed by the EC2 instance
    const role = new iam.Role(this, 'ec2Role', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com')
    });

    // Amazon Linux 2 AMI for x86_64
    const ami = new ec2.AmazonLinuxImage({
      generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
      cpuType: ec2.AmazonLinuxCpuType.X86_64
    });

    // t3.micro instance placed in the VPC, using the role defined above
    const ec2Instance = new ec2.Instance(this, 'Instance', {
      vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      machineImage: ami,
      role: role
    });
  }
}
Dagger pipeline
The Dagger pipeline is going to consist of three steps: installing the dependencies, linting our CDK code and finally deploying it to AWS.
In the installation step, we first create a container based on the `node:18` image and mount our host directory to the `/src` path inside the container. With our source code mounted into the container, we can run `npm ci` and `npm run build` to install the dependencies and build the CDK project. After running the commands, we check the exit code of the last command, end the execution if any errors occurred, and finally return the directory.
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

// Pin dependency versions
const (
	eslintVersion = "8"
	nodeVersion   = "18"
	awsCliVersion = "2.11.1"
	alpineVersion = "3.17.2"
)

func install(ctx context.Context, d *dagger.Client) *dagger.Directory {
	localPath := "/src"

	// Mount the project into a node container, install the dependencies
	// and build the CDK project.
	install := d.Container().
		From(fmt.Sprintf("node:%v", nodeVersion)).
		WithMountedDirectory(localPath, d.Host().Directory(".")).
		WithWorkdir(localPath).
		WithExec([]string{"npm", "ci"}).
		WithExec([]string{"npm", "run", "build"})

	// Force evaluation and stop the pipeline on any error.
	_, err := install.ExitCode(ctx)
	if err != nil {
		panic(err)
	}

	return install.Directory(localPath)
}
The linting step of our pipeline looks very similar to the installation step above: we run the installation step early on in our function to make sure all the dependencies are installed and, after that, we use the `cytopia/eslint` image to do the linting for us, with our project mounted under the `/data` directory. We once again check the exit code of the lint command and, if nothing goes wrong, we return the directory, which contains our perfectly linted code.
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

const (
	...
)

func lint(ctx context.Context, d *dagger.Client) *dagger.Directory {
	localPath := "/data"

	// Reuse the installation step so all dependencies are in place.
	installed := install(ctx, d)

	// Lint the mounted project with a pinned eslint version; warnings are treated as failures.
	lint := d.Container().
		From(fmt.Sprintf("cytopia/eslint:%v", eslintVersion)).
		WithMountedDirectory(localPath, installed).
		WithExec([]string{".", "--max-warnings=0"})

	_, err := lint.ExitCode(ctx)
	if err != nil {
		panic(err)
	}

	return lint.Directory(localPath)
}

func install(ctx context.Context, d *dagger.Client) *dagger.Directory {
	...
}
To tie everything together, we create a deployment step in our pipeline which will take care of deploying (duh!) our CDK project to an AWS account. Our deployment step is fairly standard: first, we retrieve the required credentials from the environment variables of the host. If you decide to integrate Dagger with a CI tool, you have to make sure these secrets are available to the instance running your CI pipeline. As you might have noticed, Dagger has functionality for handling sensitive information, both when retrieving it from the host and when using it inside a container. This way, the credentials won't be exposed in plaintext logs, written to the filesystem of the container or inserted into the cache.
After getting the credentials, we run the lint step, and thus also the install step, on our code and then create another `node:18`-based container with the credentials mounted as secrets. After that we mount our project to `/build` inside the container and navigate to the directory containing our source code with the `WithWorkdir` function. Then we run the standard commands used for deploying a CDK project and finally check for any errors that might have surfaced during the deployment.
To make sure our pipeline can be run, we wrap our deploy step in a main function, in which we also create the Dagger client used throughout the pipeline.
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

const (
	...
)

func main() {
	ctx := context.Background()

	// Create the Dagger client used throughout the pipeline.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	if err = deploy(ctx, client); err != nil {
		fmt.Println(err)
	}
}

func deploy(ctx context.Context, d *dagger.Client) error {
	// Read the AWS credentials from the host and turn them into secrets.
	accessKey := hostEnv(ctx, d.Host(), "AWS_ACCESS_KEY_ID").Secret()
	secretKey := hostEnv(ctx, d.Host(), "AWS_SECRET_ACCESS_KEY").Secret()
	defaultRegion := hostEnv(ctx, d.Host(), "AWS_DEFAULT_REGION").Secret()
	account := hostEnv(ctx, d.Host(), "CDK_DEFAULT_ACCOUNT").Secret()

	localPath := "/build"

	// Run the lint step (which in turn runs the install step) before deploying.
	linted := lint(ctx, d)

	_, err := d.Container().
		From(fmt.Sprintf("node:%v", nodeVersion)).
		WithSecretVariable("AWS_ACCESS_KEY_ID", accessKey).
		WithSecretVariable("AWS_SECRET_ACCESS_KEY", secretKey).
		WithSecretVariable("AWS_DEFAULT_REGION", defaultRegion).
		WithSecretVariable("CDK_DEFAULT_REGION", defaultRegion).
		WithSecretVariable("CDK_DEFAULT_ACCOUNT", account).
		WithMountedDirectory(localPath, linted).
		WithWorkdir(localPath).
		WithExec([]string{"npm", "install", "-g", "aws-cdk"}).
		WithExec([]string{"npm", "install"}).
		WithExec([]string{"npm", "run", "build"}).
		WithExec([]string{"cdk", "deploy", "--require-approval", "never"}).
		ExitCode(ctx)

	return err
}

func lint(ctx context.Context, d *dagger.Client) *dagger.Directory {
	...
}
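One piece not shown above is the `hostEnv` helper used in `deploy`. A minimal sketch of what it might look like, assuming the Go SDK's `Host.EnvVariable` API and failing fast when a variable is missing; the exact error handling here is an assumption, not taken from the original code:

// Hypothetical helper: read an environment variable from the host and
// fail fast if it is not set, so the pipeline never runs with empty credentials.
func hostEnv(ctx context.Context, host *dagger.Host, varName string) *dagger.HostVariable {
	env := host.EnvVariable(varName)

	value, err := env.Value(ctx)
	if err != nil {
		panic(err)
	}
	if value == "" {
		panic(fmt.Sprintf("environment variable %s is not set", varName))
	}

	return env
}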
Conclusion
As you can see, Dagger is a perfect candidate if you wish to write your CI pipelines as code instead of battling with YAML. It is an especially good fit for developer-oriented teams, as it allows you to write your pipelines in the same language as your project, though it does require some knowledge of working with containers. Dagger is still in its early stages, and if this post has convinced you to give it a go, I'm sure you'll find joy in exploring all the other fascinating features that Dagger offers, including caching and the newly introduced Dagger Cloud.
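As a small taste of the caching features, the install step could mount a Dagger cache volume so npm's download cache survives between pipeline runs. A rough sketch of such a change; the volume name and cache path are illustrative assumptions, not taken from the post:

func install(ctx context.Context, d *dagger.Client) *dagger.Directory {
	localPath := "/src"

	// Cache volume keyed by name; its contents persist between pipeline runs.
	// "npm-cache" and /root/.npm are illustrative choices.
	npmCache := d.CacheVolume("npm-cache")

	install := d.Container().
		From(fmt.Sprintf("node:%v", nodeVersion)).
		WithMountedCache("/root/.npm", npmCache).
		WithMountedDirectory(localPath, d.Host().Directory(".")).
		WithWorkdir(localPath).
		WithExec([]string{"npm", "ci"}).
		WithExec([]string{"npm", "run", "build"})

	_, err := install.ExitCode(ctx)
	if err != nil {
		panic(err)
	}

	return install.Directory(localPath)
}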