Deploy Docker Swarm on AWS EC2 via CloudFormation templates - Step 5 - Worker Launch Template

In this step we will create a launch template for the EC2 Worker instances.

This post is part of a thread that includes these steps:

  1. Network Setup
  2. Storage
  3. Roles
  4. Manager Instance
  5. Worker Launch Template (this post)
  6. Worker Instances
  7. Docker Swarm
  8. Cleanup

Worker Launch Template

Start in the project directory:

cd ~/swift-aws-ec2-swarm

CloudFormation Template

Create a folder ec2-worker-lt and an ec2-worker-lt.yml file in it.

mkdir -p ec2-worker-lt
touch ec2-worker-lt/ec2-worker-lt.yml
nano ec2-worker-lt/ec2-worker-lt.yml

Copy and paste this code into ec2-worker-lt.yml:

Description: Launch template for Docker Swarm worker instances

Parameters:
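  # LatestAmiId resolves to the latest Amazon Linux 2 AMI at deploy time via SSM Parameter Store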
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2

  HostedZoneId:
    Type: AWS::Route53::HostedZone::Id
    Description: ID of the Route53 HostedZone

  HomeVolumeId:
    Type: AWS::EC2::Volume::Id
    Description: ID of the volume to be mounted as /home

Resources:
  LaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref LatestAmiId

        BlockDeviceMappings: 
          # Docker volume
          - DeviceName: /dev/sdi
            Ebs: 
              Encrypted: true
              DeleteOnTermination: true
              VolumeSize: 100
              VolumeType: gp2          

        UserData:
          Fn::Base64:
            !Sub |
              #!/bin/bash -x

              # !!! DO NOT ENABLE THIS !!! Use in case of boot problems only
              # usermod --password $(echo test123 | openssl passwd -1 -stdin) ec2-user

              # see: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-log-user-data/
              exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

              EC2_INSTANCE_ID=$(ec2-metadata --instance-id | awk '{print $2}')
              EC2_REGION=$(ec2-metadata --availability-zone | awk '{print $2}' | sed 's/.$//')
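              # (sed strips the trailing AZ letter, e.g. us-west-2a -> us-west-2)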

              ## Timezone
              # see: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#change_time_zone
              timedatectl set-timezone America/Los_Angeles

              ## DNS
              PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
              # read the instance's Name tag; it becomes the instance's DNS name
              DNS_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$EC2_INSTANCE_ID" "Name=key,Values=Name" --region $EC2_REGION --output=text | cut -f5)
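              # UPSERT the A record for this instance (create if missing, update otherwise)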
              aws route53 change-resource-record-sets --hosted-zone-id ${HostedZoneId} --change-batch '{
                "Changes": [
                  {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                      "Name": "'$DNS_NAME'.swift.internal.",
                      "Type": "A",
                      "TTL": 60,
                      "ResourceRecords": [
                        {
                          "Value": "'$PRIVATE_IP'"
                        }
                      ]
                    }
                  }
                ]
              }'

              # Add the .swift.internal domain to the list of searchable domains
              echo search swift.internal >> /etc/resolv.conf

              # Amazon Linux specific hack to preserve the domain search config between reboots
              echo 'prepend domain-search "swift.internal";' >> /etc/dhcp/dhclient.conf

              ## Hostname
              # Change hostname to the DNS NAME, which in turn is the name tag of the instance 
              hostnamectl set-hostname $DNS_NAME.swift.internal

              # Amazon EC2 specific hack to preserve hostname between reboots
              echo 'preserve_hostname: true' >> /etc/cloud/cloud.cfg


              ## Attach the EBS volumes
              # see: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html

              # Home
              aws ec2 attach-volume --region $EC2_REGION --instance-id $EC2_INSTANCE_ID --volume-id ${HomeVolumeId} --device /dev/sdh
              while [ ! -e /dev/sdh ]; do 
                echo Waiting for Home EBS volume to attach
                sleep 30
              done

              # Docker
              # Docker volume is already attached as /dev/sdi


              ## Format EBS volumes
              # Check if formatted and if not, format using ext4

              # Home
              # Home volume is shared between manager and worker nodes

              # Docker
              # Docker volume is not shared. Each instance has its own Docker volume 
              device_fs_type=`file -sL /dev/sdi`
              if [[ $device_fs_type != *"ext4"* ]]; then
                  mkfs --type ext4 /dev/sdi
              fi              


              ## Mount EBS file systems

              # home
              mkdir -p /ebs/home
              echo '/dev/sdh /ebs/home ext4 defaults,nofail 0 2' | tee -a /etc/fstab

              # docker
              mkdir -p /ebs/docker
              echo '/dev/sdi /ebs/docker ext4 defaults,nofail 0 2' | tee -a /etc/fstab
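              # (nofail lets the instance finish booting even if a volume is slow to attach)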

              mount --all


              ## Users

              # add users
              # runner
              groupadd --gid 200000 runner 
              useradd --gid runner --uid 200000  runner

              # worker
              useradd --create-home --home-dir /ebs/home/worker worker

              # install software
              yum update -y
              yum install docker git jq htop -y


              ## Docker config
              #see: https://docs.docker.com/engine/security/userns-remap/

              # - Use `/ebs/docker` as data-root (for containers and volumes)
              # - Map container `root` user to host `runner` user             
              cat > /etc/docker/daemon.json <<EOF
              {
                "data-root": "/ebs/docker",
                "userns-remap": "runner"
              }            
              EOF

              # additional config needed for the Docker user namespace mapping
              touch /etc/subuid /etc/subgid
              echo "runner:$(id -u runner):65536" | sudo tee -a /etc/subuid
              echo "runner:$(id -g runner):65536" | sudo tee -a /etc/subgid

              # Enable Docker to run at boot and start it
              systemctl enable docker
              systemctl start docker

              # add users to the docker group
              usermod --append --groups docker ec2-user
              usermod --append --groups docker worker

              # download and install docker compose (optional)
              # platform=$(uname -s)-$(uname -m)
              # wget https://github.com/docker/compose/releases/latest/download/docker-compose-$platform 
              # mv docker-compose-$platform /usr/local/bin/docker-compose
              # chmod -v +x /usr/local/bin/docker-compose 

              ## Install AWS CLI v2
              curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
              unzip awscliv2.zip
              ./aws/install

Outputs:
  LaunchTemplateId:
    Description: Launch template ID
    Value: !Ref LaunchTemplate
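
You can sanity-check the template before deploying it. A quick check, assuming your AWS CLI default profile and region are configured:

aws cloudformation validate-template \
    --template-body file://ec2-worker-lt/ec2-worker-lt.yml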

Scripts

Add a script ec2-worker-lt/deploy-ec2-worker-lt.sh and paste this code in it:

#!/usr/bin/env bash

# switch to parent directory
script_path=`dirname ${BASH_SOURCE[0]}`
pushd $script_path/..

source config/names.sh

echo
echo "Deploying $stack_ec2_worker_lt stack via cloud-formation:"
echo 'https://us-west-2.console.aws.amazon.com/cloudformation/home'
echo

hosted_zone_id=$(aws cloudformation describe-stacks --stack-name $stack_vpc | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "HostedZoneId") | .OutputValue')

# home volume is shared between manager and worker(s)
home_volume_id=$(aws cloudformation describe-stacks --stack-name $stack_ebs | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "HomeVolumeId") | .OutputValue')

set -x

aws cloudformation deploy \
    --template-file ec2-worker-lt/ec2-worker-lt.yml \
    --stack-name $stack_ec2_worker_lt \
    --parameter-overrides \
        HostedZoneId=$hosted_zone_id \
        HomeVolumeId=$home_volume_id

popd

Let's also add a cleanup script ec2-worker-lt/rm-ec2-worker-lt.sh:

#!/usr/bin/env bash

# switch to parent directory
script_path=`dirname ${BASH_SOURCE[0]}`
pushd $script_path/..

source config/names.sh

echo
echo "Destroying $stack_ec2_worker_lt stack via cloud-formation:"
echo 'https://us-west-2.console.aws.amazon.com/cloudformation/home'
echo

set -x

aws cloudformation delete-stack \
    --stack-name $stack_ec2_worker_lt

aws cloudformation wait stack-delete-complete \
    --stack-name $stack_ec2_worker_lt

popd

Make the scripts executable:

chmod +x ec2-worker-lt/deploy-ec2-worker-lt.sh 
chmod +x ec2-worker-lt/rm-ec2-worker-lt.sh

Deploy

Finally, let's run the "deploy" script to create the Worker launch template:

./ec2-worker-lt/deploy-ec2-worker-lt.sh

You should see output similar to this:

Deploying swift-swarm-ec2-worker-lt stack via CloudFormation:
https://us-west-2.console.aws.amazon.com/cloudformation/home

+ aws cloudformation deploy --template-file ec2-worker-lt/ec2-worker-lt.yml --stack-name swift-swarm-ec2-worker-lt --parameter-overrides HostedZoneId=Z07362313E0WMP6Y4DBYT HomeVolumeId=vol-08b4fb87713440e48

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - swift-swarm-ec2-worker-lt
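
If you want to double-check the result, you can read the launch template ID from the stack outputs and inspect its latest version. A minimal sketch, assuming config/names.sh is sourced as in the scripts above:

source config/names.sh
lt_id=$(aws cloudformation describe-stacks --stack-name $stack_ec2_worker_lt | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "LaunchTemplateId") | .OutputValue')
aws ec2 describe-launch-template-versions --launch-template-id $lt_id --versions '$Latest'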

At this point your project structure should look like this:

.
├── config
│   └── names.sh
├── ebs
│   ├── deploy-ebs.sh
│   ├── ebs.yml
│   └── rm-ebs.sh
├── ec2-manager
│   ├── deploy-ec2-manager.sh
│   ├── ec2-manager.yml
│   └── rm-ec2-manager.sh
├── ec2-worker-lt
│   ├── deploy-ec2-worker-lt.sh
│   ├── ec2-worker-lt.yml
│   └── rm-ec2-worker-lt.sh
├── iam
│   ├── deploy-iam-manager.sh
│   ├── deploy-iam-worker.sh
│   ├── iam-manager.yml
│   ├── iam-worker.yml
│   ├── rm-iam-manager.sh
│   └── rm-iam-worker.sh
├── ssh
│   └── ssh-manager.sh
└── vpc
    ├── deploy-vpc.sh
    ├── rm-vpc.sh
    └── vpc.yml

Congratulations!

We are done with Step 5. Worker Launch Template.

Next step: Step 6. Worker Instances