Blue/Green ECS-optimized AMI Update For ECS Instances

If you are running on Amazon EC2 Container Service (ECS), you are familiar with the “ECS-optimized AMI update” notification. Amazon sends it when a new Docker Engine version addresses a certain vulnerability. It means it’s time to update the Amazon ECS-optimized AMI and, effectively, switch to the new version of Docker Engine and the ECS Agent.

What I’d like to show here is a safe method of updating the ECS AMI.  This method is a proven Blue/Green Deployment strategy that keeps your old AMI instances on standby in case the new AMI instances fail under load.  It gives you time to burn in the new Docker/ECS-Agent/AMI stack under real production load.  And when you feel confident – you simply terminate the old instances.


But before we look at the solution, let’s examine the problem.

The Problem – NO Blue/Green!

As of today (Jan-19-2017), if you simply swap the ECS Instances out from underneath your ECS Cluster, they go away for good.  There is no way to safely re-attach them to the ECS Cluster.  Here’s an issue I opened about this on GitHub.  Let me summarize it here:

  1. I feel we should have a way to mark ECS Instances as StandBy and have the ECS Agent not schedule any tasks on them for as long as that status is active.  I don’t think the “Deregister” functionality is sufficient here, because there is no way that I know of to bring deregistered instances back into service.
  2. I also don’t like that a specific version of Docker/ECS Agent is not pinned to a specific version of the Amazon ECS-optimized AMI.  If it were, this would not be an issue: we could always bring a known, good working set of versions back into service.  But as it is now, even an older AMI will pull in the most recent version of the ECS Agent and Docker on instance launch.

However, the good news is that until the above two points are addressed, we have another solution – read on.


Just one week after I reported this issue on GitHub, the AWS team implemented a solution called “Container Instance Draining”. I’m impressed – way to go, AWS ECS Team!  See the GitHub issue update.  You now have two solutions: the one below and the new Container Instance Draining.

Solution – Task Placement Constraints

The solution is to utilize Task Placement Constraints in combination with the ECS Instance platform attribute ecs.ami-id. This combination forces the ECS Agent to place running Tasks only on instances with the specific AMI we designate for a Task. Here’s how it works:

Let’s say you are updating from amzn-ami-2015.09.g-amazon-ecs-optimized (ami-33b48a59) to amzn-ami-2016.09.d-amazon-ecs-optimized (ami-a58760b3).  And let’s say you register 4 NEW AMI instances and 4 OLD AMI instances to your ECS Cluster concurrently.  A Task Placement Constraint can filter these 8 instances by the AMI attribute directly. You can test how this works via the aws ecs list-container-instances API call using the --filter flag:

## List 4 OLD AMI Instances
ubuntu@a01:~$ aws ecs list-container-instances --cluster "my-cluster" --filter "attribute:ecs.ami-id == ami-33b48a59"
{
    "containerInstanceArns": [
        ...
    ]
}

Armed with this knowledge, we can create two Task Definitions, each with a placementConstraints entry pinned to a specific AMI, and then assign the desired task definition to the ECS Service. Then, automagically, the ECS Agent does all the heavy lifting: it moves running Tasks from one set of AMI instances to the other, draining connections and updating the ELB.  The best part is that we can go back and forth – effectively employing a Blue/Green Deployment strategy.
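The whole flip can be sketched with the AWS CLI as well. A minimal sketch, assuming placeholder names my-cluster, my-service, my-task, and a containers.json file holding your existing container definitions (all hypothetical – substitute your own):

```shell
#!/bin/sh
# Hypothetical names: my-cluster, my-service, my-task, containers.json.
OLD_AMI="ami-33b48a59"
NEW_AMI="ami-a58760b3"

# Register one task definition revision pinned to each AMI.
for ami in "$OLD_AMI" "$NEW_AMI"; do
  aws ecs register-task-definition \
    --family my-task \
    --container-definitions file://containers.json \
    --placement-constraints "type=memberOf,expression=attribute:ecs.ami-id == ${ami}"
done

# Flipping the service between the two revisions moves the running tasks
# between the two sets of instances (here: to the latest registered revision).
aws ecs update-service --cluster my-cluster --service my-service \
  --task-definition my-task
```

Pointing --task-definition at a family name without a revision number picks the latest revision; use family:revision to pin an exact one.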

The Runbook (AWS Console)

Here’s the step-by-step process using the AWS Console.  Once you know how to do this manually, it’s possible to automate it.

For this example, let’s pretend we have the following stack:

  1. We have an ECS Cluster with a Service and a Task that already runs under this Service.
  2. There is an Auto Scaling Group with 4 ECS Instances serving this ECS Cluster
  3. Each Task is placed on a single instance; this is achieved via a static Host Port / Container Port mapping (80 -> 8080)
  4. The Service is defined with a Minimum healthy percent = 50 and a Maximum percent = 100
  5. These current ECS instances are running an “OLD AMI” ami-33b48a59
  6. The goal is to safely upgrade this ECS Cluster and switch it to “NEW AMI” ami-a58760b3

Let’s get on with it:

1. Create New Task Definition (OLD AMI)

Go to: AWS Console -> Amazon ECS -> Task Definitions

  1. Click on the Task Definition
  2. Click [x] Next to the latest Task Definition Revision
  3. Click Create Revision
  4. Click (+) next to “Add constraint”
  5. Fill in the following:
    Type: memberOf (already pre-filled/can’t change this)
    Expression: attribute:ecs.ami-id == ami-33b48a59
  6. Click Create

Resulting JSON (relevant section):

  "placementConstraints": [
      "expression": "attribute:ecs.ami-id == ami-33b48a59",
      "type": "memberOf"

2. Update ECS Service with new Task Definition Revision (OLD AMI ID)

Go to: AWS Console -> Amazon ECS -> Clusters

  1. Click on Cluster Name
  2. Under Services Tab – Click On Service Name
  3. This Brings up Service Detail page – Click Update Button
  4. Under Task Definition column pull down the drop list and pick the Task Definition we created in step 1
  5. Click Update Service
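To watch the swap converge from the command line, one option (assuming the same hypothetical cluster and service names as above) is to poll the service’s deployments:

```shell
# Shows the PRIMARY (new) and ACTIVE (draining) deployments side by side.
aws ecs describe-services --cluster my-cluster --services my-service \
  --query 'services[0].deployments[*].[status,taskDefinition,runningCount]' \
  --output table
```

When only a single PRIMARY deployment remains and its runningCount matches the desired count, the swap is complete.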


At this point the ECS Agent will start draining connections to the old Tasks and placing the new revision of tasks onto the same instances (it drops from 4 tasks to 2, then swaps them one at a time).
End result: all tasks are running at the latest revision, still on the OLD AMI.

3. Launch 4 Additional EC2 Instances (NEW AMI)

Go to: AWS Console -> Amazon EC2 -> Launch Configuration

  1. Select [x] Next to your launch configuration
  2. In the detail pane Click Copy launch configuration button
  3. Edit AMI – change from amzn-ami-2015.09.g-amazon-ecs-optimized – ami-33b48a59 to amzn-ami-2016.09.d-amazon-ecs-optimized – ami-a58760b3
  4. Click Yes to confirm AMI change and warnings about possible changes to instance type selection, Spot Instance configuration, storage configuration, and security group configuration
  5. Leave selection on existing instance size/type
  6. Change name to something new (we append a number to basename)
  7. Leave everything else as is
  8. Click Next (Storage) – leave as is
  9. Click Next (Security Groups) – leave as is
  10. Click Review
  11. Click Create launch configuration
  12. Confirm you have a Key Pair
  13. Click Create launch configuration

Go to: AWS Console -> Amazon EC2 -> Auto Scaling Groups

  1. Select [x] next to your Auto Scaling Group (ASG)
  2. Pull Down “Actions”
  3. Select Edit
  4. Change Desired: 8 (from 4); change Max: 8 (from 4); change Launch Configuration to the name you created in the previous step
  5. Click Save

Wait until the 4 new instances are added and their status is InService.
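The same resize and LC swap can be done from the CLI; a sketch assuming the ASG is named my-asg and the new launch configuration my-lc-2 (both placeholder names):

```shell
# Switch the ASG to the new launch configuration and double its size,
# so OLD AMI and NEW AMI instances run side by side.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name my-lc-2 \
  --max-size 8 --desired-capacity 8

# Poll until all 8 instances report InService.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query 'AutoScalingGroups[0].Instances[*].LifecycleState'
```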

At this point you have 8 instances: 4 with the OLD AMI and 4 with the NEW AMI.  The reason this works is that the ASG doesn’t do anything to existing running instances when you change its Launch Configuration (LC).  It just lets them run as-is unless you downsize the ASG, at which point it scales in (terminates) the instances with the old LC – which is exactly what we’ll use in the last step.

An alternative method is to create a whole new ASG with the new LC – this is the way I would do it from now on, as it’s a safer process.  However, changing the LC works as well, and that’s what I did here.

Regardless of the method you use to add the 4 new instances, they should register under the ECS Cluster.  Let’s verify this: go back to the ECS Cluster page and click on the Instances tab – it should show 8 registered instances: 4 OLD AMI instances and 4 NEW AMI instances.  And all tasks are still running on the 4 OLD AMI instances:
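The same check can be scripted with the --filter flag from earlier, counting registered container instances per AMI (my-cluster is again a placeholder name):

```shell
# Count ECS container instances per AMI in the cluster.
for ami in ami-33b48a59 ami-a58760b3; do
  count=$(aws ecs list-container-instances --cluster my-cluster \
    --filter "attribute:ecs.ami-id == ${ami}" \
    --query 'length(containerInstanceArns)')
  echo "${ami}: ${count} instance(s)"
done
# Expect 4 and 4 once the new instances have registered.
```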

Outdated ECS Agent upgrade process

And now our next step is to migrate the Running Tasks to the 4 NEW AMI Instances.

4. Create New Task Definition (NEW AMI)

Go to: AWS Console -> Amazon ECS -> Task Definitions

  1. Click on the Task Definition
  2. Click [x] Next to the latest Task Definition Revision
  3. Click Create Revision
  4. Under “Constraint” Update memberOf to new AMI ID (ami-a58760b3)
  5. Click Create

Resulting JSON (relevant section):

  "placementConstraints": [
      "expression": "attribute:ecs.ami-id == ami-a58760b3",
      "type": "memberOf"

5. Update ECS Service with new Task Definition Revision (NEW AMI ID)

Go to: AWS Console -> Amazon ECS -> Clusters

  1. Click on Cluster Name
  2. Under Services Tab – Click On Service Name
  3. This Brings up Service Detail page – Click Update Button
  4. Under Task Definition column pull down the drop list and pick the Task Definition we created in step 4
  5. Click Update Service

End result:

  1. All tasks are running at latest revision and are placed on the NEW AMI Instances
  2. 4 OLD AMI instances are still in service, and we can switch back to them by updating the ECS Service with the old Task Definition, which is bound to the OLD AMI instances via its constraint
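That rollback is a single service update pointing at the old revision. A sketch, using the placeholder names from before and a hypothetical revision number 7 for the revision constrained to the OLD AMI:

```shell
# Flip the service back to the task definition revision pinned to the
# OLD AMI instances (my-task:7 is a hypothetical family:revision).
aws ecs update-service --cluster my-cluster --service my-service \
  --task-definition my-task:7
```

The ECS Agent then drains the NEW AMI tasks and places the old revision back onto the OLD AMI instances, exactly mirroring the forward switch.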

ECS Agent Rolling Upgrade AMI

6. Finally, Scale the Auto Scaling Group Back to 4 Instances

Once we feel confident that the new AMI/Docker/ECS-Agent stack performs well under production load, we can terminate the old instances by setting the ASG’s Max and Desired back to 4.  This automatically terminates the instances with the OLD Launch Configuration and leaves the instances with the new Launch Configuration active:
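Via the CLI (same placeholder ASG name), this is the mirror of the earlier resize. The ASG’s default termination policy prefers instances running the oldest launch configuration, which is what makes this scale-in pick the OLD AMI instances:

```shell
# Scale back in; the default termination policy removes the instances
# launched from the old launch configuration first.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --max-size 4 --desired-capacity 4
```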



So far I am very happy with Amazon EC2 Container Service. It has served us well for almost a year now. And during our first major upgrade, we found a zero-downtime solution utilizing a classic, well-proven Blue/Green Deployment strategy.

Upgrade Docker Engine to Specific Version

I always recommend upgrading Docker Engine to a specific version that matches the rest of your infrastructure. For example, if you are running the latest Amazon ECS-optimized AMI (currently amzn-ami-2016.09.d-amazon-ecs-optimized), it contains Docker Engine 1.12.6. This writeup shows exactly how to match that version. The steps are specific to Ubuntu 14.04.3 LTS.

1. Update package information

This is to ensure that APT works with the https method, and that CA certificates are installed:

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates

2. Add the new GPG key

FYI: This command downloads the key with the ID 58118E89F3A912897C070ADBF76221572C52609D from the hkp:// keyserver and adds it to the adv keychain. For more info, see the output of man apt-key.

sudo apt-key adv \
--keyserver hkp:// \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D

3. Add specific repo for your distro to docker.list

FYI available repos:

Ubuntu version        Repository
Precise 12.04 (LTS)   deb ubuntu-precise main
Trusty 14.04 (LTS)    deb ubuntu-trusty main
Wily 15.10            deb ubuntu-wily main
Xenial 16.04 (LTS)    deb ubuntu-xenial main

So in our case:

mkdir -p /etc/apt/sources.list.d
echo "deb ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list

4. Update the APT package index

sudo apt-get update

5. Verify that APT is pulling from the right repository

FYI: When you run the following command, an entry is returned for each version of Docker that is available for you to install. Each entry should point at the repository you added in step 3. The version currently installed is marked with ***.

apt-cache policy docker-engine

6. Install the linux-image-extra-* kernel packages

FYI: For Ubuntu Trusty, Wily, and Xenial, install the linux-image-extra-* kernel packages, which allow you to use the aufs storage driver.

sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

7. Finally Install a specific version of Docker Engine

7.1 List all available versions using apt-cache madison

apt-cache madison docker-engine


ubuntu@dev01:~$ apt-cache madison docker-engine
docker-engine | 1.12.6-0~ubuntu-trusty | ubuntu-trusty/main amd64 Packages
docker-engine | 1.12.5-0~ubuntu-trusty | ubuntu-trusty/main amd64 Packages
docker-engine | 1.12.4-0~ubuntu-trusty | ubuntu-trusty/main amd64 Packages
docker-engine | 1.12.3-0~trusty | ubuntu-trusty/main amd64 Packages
docker-engine | 1.12.2-0~trusty | ubuntu-trusty/main amd64 Packages
docker-engine | 1.12.1-0~trusty | ubuntu-trusty/main amd64 Packages
docker-engine | 1.12.0-0~trusty | ubuntu-trusty/main amd64 Packages

7.2 Install docker-engine 1.12.6

FYI: If you already have a newer version installed, you will be prompted to downgrade Docker. Otherwise, the specific version will be installed.

sudo apt-get install docker-engine=1.12.6-0~ubuntu-trusty
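After installing, it can help to pin the package so a routine apt-get upgrade doesn’t move the version out from under you. A sketch (apt-mark hold is a standard APT command):

```shell
# Install the exact version, then hold it so upgrades don't bump it.
sudo apt-get install -y docker-engine=1.12.6-0~ubuntu-trusty
sudo apt-mark hold docker-engine

# Confirm the installed version.
docker --version
```

Run sudo apt-mark unhold docker-engine later when you deliberately want to move to a newer version.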

8. Start the docker daemon

sudo service docker start

9. Verify that docker is installed correctly by running the hello-world image

sudo docker run hello-world

Based on: