
From GitHub to Continuous Deployment in 5 Minutes

James Turner
7 min read · Jul 6, 2021

[Image: the tech line-up]

I’m fed up with centralised Continuous Integration (CI) servers.

No really, I am. I’m locked out (by sysadmins), and I’m thwarted by a lack of resources and contention for the ones that exist.

I’m stuck with installed versions of packages that I don’t want, can’t use, and can’t escape. I intensely dislike pointy-clicky configuration (yes, I’m referring to you, Jenkins). I don’t want to share a CI setup between environments because security DOES matter, and provisioning “system” users for external CI providers such as CircleCI and the like is a security time bomb waiting to go off.

So here’s how I shook off the shackles, did it my way, and gave all my fellow engineers, you included, the bootstrap that is really needed for project lift-off.

Takeaways from this article

In this article you will learn how to go from having nothing but a GitHub repository to having a continuous integration pipeline in AWS where you can run tests and continuously deploy changes (to both the code AND the pipeline itself).

We’re going to make heavy use of the following AWS components:

  • CodePipeline
  • CodeBuild
  • CloudFormation
  • S3

Features

Let’s begin by clarifying what features we want from our unicorn 🦄

  • Environment independent deployments
  • Branch aware deployments
  • Automated tests

Getting going

If you haven’t already, there are a couple of things you’ll need before bootstrapping: the AWS CLI v2 installed and configured with credentials, and a CodeStar connection to your GitHub account in the AVAILABLE state (the bootstrap script looks this connection up).
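If you don’t have a connection yet, one way to create it from the CLI looks like this (a sketch; the connection name is just an example, and you still have to complete the one-time authorisation handshake in the AWS console before the status flips from PENDING to AVAILABLE):

# Create a GitHub connection (it starts out PENDING until authorised in the console)
aws codestar-connections create-connection \
--provider-type GitHub \
--connection-name my-github-connection

# After the console handshake, confirm it shows up as AVAILABLE
aws codestar-connections list-connections \
--provider-type-filter GitHub \
--query "Connections[?ConnectionStatus=='AVAILABLE']"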

What are we deploying?

A diagram always helps to give you a sense of where we’re going to end up. Don’t worry if you don’t understand all the component parts yet.

[Architecture diagram of the finished pipeline]

Bootstrapping

In order to get going you will need an initial bootstrap. You can do this by executing a CloudFormation deploy from your local machine (or wherever you have AWS CLI credentials available) using the AWS CLI v2 you installed. I’ve wrapped this up for you in a shell script, bootstrap.sh, but I’m laying out some of the internals here for you to see.

# ARN of the first AVAILABLE GitHub CodeStar connection
CREDENTIALS_ARN=$(aws codestar-connections list-connections --provider-type-filter GitHub --max-results 10 --query "Connections[?ConnectionStatus=='AVAILABLE']|[0].ConnectionArn" --output text)
# Name of the currently checked-out git branch
BRANCH=$(git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/')
# Project name taken from the working directory
PROJECT_NAME=$(basename `pwd`)
# GitHub owner extracted from the push remote (assumes an SSH-style remote URL)
REPOSITORY_OWNER=$(git remote -v | grep push | cut -d ':' -f2 | cut -d '/' -f1)
REPOSITORY_ID=$REPOSITORY_OWNER/$PROJECT_NAME
aws cloudformation deploy \
--template-file aws/00-pipeline.yml \
--stack-name $PROJECT_NAME-pipeline-$BRANCH \
--parameter-overrides CredentialsArn=$CREDENTIALS_ARN \
BranchName=$BRANCH \
ProjectName=$PROJECT_NAME \
RepositoryId=$REPOSITORY_ID \
--capabilities CAPABILITY_NAMED_IAM

This essentially puts our CodePipeline definition into AWS, which contains all the subsequent build steps that our project will require. This is usually a one-time operation and typically won’t need to be done again (unless you want side-stacks/branches; read on to find out more 😉).
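If you want to check it worked, you can ask CodePipeline for the state of the freshly minted pipeline (a sketch; this assumes the pipeline resource in aws/00-pipeline.yml is named after the stack’s $PROJECT_NAME-pipeline-$BRANCH convention; check the template in the repo for the exact name):

# Show the per-stage state of the new pipeline
aws codepipeline get-pipeline-state \
--name $PROJECT_NAME-pipeline-$BRANCH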

The project setup

Here’s the directory structure we’re going to use.

/
├── aws/
│   ├── 00-pipeline.yml
│   ├── 01-infrastructure.yml
│   └── 02-codebuild.yml
├── src/
│   └── index.js
├── test/
│   └── index.spec.js
├── buildspec.yml
└── package.json

To cover an important convention: we have an aws folder, because yes, our project contains all of its infrastructure too. AND we’ve named our files so that they sort lexicographically, and thus represent the approximate order in which the pipeline will execute them. Anyone should be able to look at this and say “ah hah, that’s the order”.

There’s one other file there that you may not be familiar with: buildspec.yml. Don’t worry about it for now; we’ll run through it in a bit.

Convention

We use a few conventions in our project, and these facilitate a couple of useful things (examples below):

  1. $BRANCH is baked in almost everywhere and is the name of the git branch being built.
  2. $PROJECT_NAME drives the resource naming conventions and should match the (git) project name.
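To make that concrete, here’s what those conventions produce for a hypothetical project (the names are illustrative, derived from the stack-name patterns in the templates below):

# Given PROJECT_NAME=my-app and BRANCH=main, the conventions yield:
#   my-app-pipeline-main         (CloudFormation stack holding the pipeline)
#   my-app-infrastructure-main   (stack holding the artifacts bucket)
#   my-app-codebuild-main        (stack holding the CodeBuild project)
#   my-app-main                  (the CodeBuild project itself)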

Branch Awareness

So we want our pipeline to be aware of branches. One of the main hitches with (and probably complaints about) AWS CodePipeline has been its inability to deal with branching well. Initially this was a cause for concern for me, because I was so used to the wildcard-style branching mechanisms of tools like Jenkins. CodePipeline, on the other hand, asks that you specify (explicitly) the branch on which you want your pipeline to build.

How then do we get different branches to build? The simple answer: have a different pipeline. Creating pipelines is cheap and easy, and I’ve already shown you how to deploy a pipeline using CloudFormation, but in case you skim-read the bits above, here it is again; this time we want to deploy and build our new feature-specific branch. Here we go:

git checkout -b my-feature-branch
# CodePipeline clones from GitHub, so the branch must exist on the remote
git push -u origin my-feature-branch
# $PROJECT_NAME, $CREDENTIALS_ARN, and $REPOSITORY_ID are still set from the bootstrap step
BRANCH=my-feature-branch
aws cloudformation deploy \
--template-file aws/00-pipeline.yml \
--stack-name $PROJECT_NAME-pipeline-$BRANCH \
--parameter-overrides CredentialsArn=$CREDENTIALS_ARN \
BranchName=$BRANCH \
ProjectName=$PROJECT_NAME \
RepositoryId=$REPOSITORY_ID \
--capabilities CAPABILITY_NAMED_IAM

👊 Fist bump! You just deployed a complete set of new infrastructure bound to your NEW branch, alongside the previous infrastructure bound to your previous branch. They are independently deployed and buildable, and pushing to your branch will trigger a build of your changes only. There is one caveat you should be aware of, however: you CANNOT use a branch name that contains a /. I’m sure those of you familiar with git flow will forgive me this one thing.
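One consequence of per-branch pipelines is per-branch cleanup: the stacks don’t delete themselves when the branch is merged. A hedged sketch of the teardown (stack names follow the conventions above; note that CloudFormation won’t delete the infrastructure stack until its S3 bucket has been emptied):

# Tear down the per-branch stacks once the feature branch is done
BRANCH=my-feature-branch
aws cloudformation delete-stack --stack-name $PROJECT_NAME-codebuild-$BRANCH
aws cloudformation delete-stack --stack-name $PROJECT_NAME-infrastructure-$BRANCH
aws cloudformation delete-stack --stack-name $PROJECT_NAME-pipeline-$BRANCH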

Pipeline Stages

[Diagram: pipeline stages 1–8]

1. Source

Defined as the first component of our pipeline, this step gets us off the ground by binding the git clone source operation of AWS CodePipeline to the branch and remote repo we defined during bootstrap.

Name: "Source"
Actions:
- Name: SourceCode
Namespace: "SourceVariables"
ActionTypeId:
Category: Source
Owner: AWS
Provider: CodeStarSourceConnection
Version: 1
Configuration:
BranchName: !Ref BranchName
FullRepositoryId: !Ref RepositoryId
ConnectionArn: !Ref CredentialsArn
OutputArtifacts:
- Name: !Ref ProjectName
RunOrder: 1

2. The pipeline deploys itself 🤯

No really, it does. Once you’ve bootstrapped the pipeline, you can alter the pipeline definition in your repo, commit and push, and the pipeline will run and redeploy itself! Try doing that with Jenkins…
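To see it in action, the whole workflow is just an ordinary commit (a sketch; the change itself is whatever pipeline tweak you fancy):

# Edit the pipeline definition in the repo
vi aws/00-pipeline.yml
git add aws/00-pipeline.yml
git commit -m "Tweak the pipeline"
# The push triggers the pipeline, whose Deploy-Pipeline stage updates its own stack
git push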

We deploy the pipeline, using the pipeline, via CloudFormation:

Name: "Deploy-Pipeline"
Actions:
- Name: "Deploy-CodePipeline"
ActionTypeId:
Category: Deploy
Owner: AWS
Provider: CloudFormation
Version: 1
Configuration:
ActionMode: CREATE_UPDATE
StackName: !Sub "${ProjectName}-pipeline-${BranchName}"
TemplatePath: !Sub "${ProjectName}::aws/00-pipeline.yml"
Capabilities: "CAPABILITY_NAMED_IAM"
RoleArn: !GetAtt DeployRole.Arn
ParameterOverrides: !Sub |
{
"BranchName": "${BranchName}",
"CredentialsArn": "${CredentialsArn}",
"RepositoryId": "${RepositoryId}",
"ProjectName": "${ProjectName}"
}
InputArtifacts:
- Name: !Ref ProjectName
RunOrder: 1

3. Infrastructure

Typically, most projects require a place to put artifacts, so we provision ourselves an S3 bucket where we can place our packaged-up JavaScript for later use. Again, deployed via CloudFormation:

Name: "Deploy-Infrastructure"
Actions:
- Name: "Deploy-Infrastructure"
ActionTypeId:
Category: Deploy
Owner: AWS
Provider: CloudFormation
Version: 1
Configuration:
ActionMode: CREATE_UPDATE
StackName: !Sub "${ProjectName}-infrastructure-${BranchName}"
TemplatePath: !Sub "${ProjectName}::aws/01-infrastructure.yml"
Capabilities: "CAPABILITY_NAMED_IAM"
RoleArn: !GetAtt DeployRole.Arn
ParameterOverrides: !Sub |
{
"BranchName": "${BranchName}"
}
InputArtifacts:
- Name: !Ref ProjectName
RunOrder: 1
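The article doesn’t reproduce aws/01-infrastructure.yml itself (the real one lives in the repo), but a minimal sketch of an artifacts-bucket template might look like this; the bucket name and export name here are my assumptions, not the repo’s:

# aws/01-infrastructure.yml (sketch; see the repo for the real template)
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  BranchName:
    Type: String
Resources:
  ResourcesBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Illustrative name; S3 bucket names must be globally unique
      BucketName: !Sub "my-project-resources-${BranchName}"
Outputs:
  ResourcesBucketName:
    Value: !Ref ResourcesBucket
    Export:
      Name: !Sub "resources-bucket-${BranchName}"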

4–8. CodeBuild and buildspec.yml

This is how we build our project code. It can be thought of as the equivalent of running arbitrary shell commands, much like you would in Jenkins for your build/compile/package steps.

We use a buildspec.yml file to define the build process for our code, running npm test and package operations (amongst others).

But before that we need three things:

  1. A CloudFormation stack that deploys a CodeBuild project
  2. A CodeBuild project resource (sketched below)
  3. A step in our pipeline to run our CodeBuild project.
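Again, the real aws/02-codebuild.yml is in the repo; here’s a minimal sketch of the CodeBuild project resource it needs. The service role is elided, the build image is one plausible choice for a Node.js 12 runtime, and the RESOURCES_BUCKET wiring assumes the export name from my infrastructure sketch above:

# aws/02-codebuild.yml (sketch; role elided, names illustrative)
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  ProjectName:
    Type: String
  BranchName:
    Type: String
Resources:
  BuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      # Must match the ProjectName the Build-And-Package action references
      Name: !Sub "${ProjectName}-${BranchName}"
      ServiceRole: !GetAtt BuildRole.Arn # BuildRole definition omitted here
      Source:
        Type: CODEPIPELINE
        BuildSpec: buildspec.yml
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
        EnvironmentVariables:
          # Assumed wiring for the bucket used by the buildspec's post_build step
          - Name: RESOURCES_BUCKET
            Value:
              Fn::ImportValue: !Sub "resources-bucket-${BranchName}"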

Here we define the CodeBuild CloudFormation deployment and the subsequent CodeBuild build in our pipeline:

Name: "Deploy-CodeBuild"
Actions:
- Name: "Deploy-CodeBuild"
ActionTypeId:
Category: Deploy
Owner: AWS
Provider: CloudFormation
Version: 1
Configuration:
ActionMode: CREATE_UPDATE
StackName: !Sub "${ProjectName}-codebuild-${BranchName}"
TemplatePath: !Sub "${ProjectName}::aws/02-codebuild.yml"
Capabilities: "CAPABILITY_NAMED_IAM"
RoleArn: !GetAtt DeployRole.Arn
ParameterOverrides: !Sub |
{
"ProjectName": "${ProjectName}",
"BranchName": "${BranchName}"
}
InputArtifacts:
- Name: !Ref ProjectName
RunOrder: 1
- Name: "Build-And-Package"
ActionTypeId:
Category: Build
Owner: AWS
Provider: CodeBuild
Version: 1
Configuration:
ProjectName: !Sub ${ProjectName}-${BranchName}
EnvironmentVariables: !Sub |
[
{
"name": "COMMIT_HASH",
"value": "#{SourceVariables.CommitId}",
"type": "PLAINTEXT"
}
]
InputArtifacts:
- Name: !Ref ProjectName
RunOrder: 2

One of the most important things to note from the above is the use of #{SourceVariables.CommitId}. In our very first step (1) we set the namespace of that step with Namespace: "SourceVariables". This lets us use certain special CodePipeline variables, such as the commit ID (hash). Passing this commit hash to our CodeBuild run means we can create immutably packaged artifacts and store them in the artifacts bucket we defined earlier, so you can point to a specific commit-hash representation of your built and deployed code 🥳. Here’s the buildspec.yml:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
  build:
    commands:
      - yum install zip -y
      - echo Build started on `date`
      - npm install
      - npm run test
      - npm prune --production
      - npm run package
  post_build:
    commands:
      - aws s3 cp package.zip s3://$RESOURCES_BUCKET/$COMMIT_HASH/lambdas/

As you can see, the aws s3 cp we perform in post_build places our package.zip into our artifacts bucket, keyed under the commit hash of the build.
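That makes fetching the exact artifact for any commit trivial; a quick sketch (RESOURCES_BUCKET must be set to your bucket’s name):

# Download the packaged artifact produced for the current commit
COMMIT_HASH=$(git rev-parse HEAD)
aws s3 cp s3://$RESOURCES_BUCKET/$COMMIT_HASH/lambdas/package.zip ./package.zip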

That’s it, you’re done.

If you want to dig a little deeper, the repo that facilitates all of this can be found here: https://github.com/james-turner/github-to-ci-in-5-minutes. Go crazy: fork it, submit PRs, leave comments/bugs; all feedback is welcome.

Conclusion

If you managed to stay the course and digest all of this, great. If not, head on down to the TL;DR and run the magic to see for yourself how one single line can change your CI/CD journey for good.

Hopefully you’ll walk away with a reasonable appreciation for how you might want to use AWS CodePipeline, CodeBuild, and CloudFormation to have your own independent CI/CD environment.

See you all in the next article. Happy AWS’ing in the meantime.

TL;DR

The following ditty will deploy a CI pipeline straight to AWS, with automated testing, based on your master branch.

git clone <YOUR_REPO_URL_HERE>
cd <THE_REPO_NAME>
# Run the fork helper script, then commit, push, and bootstrap the pipeline
$(curl https://raw.githubusercontent.com/james-turner/github-to-ci-in-5-minutes/master/fork.sh | sh) && git add . && git commit -m "GO TIME" && git push -u origin $(git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/') && sh bootstrap.sh
