To make pipeline samples easier to use, we usually create a section named shared at the beginning of the pipeline YAML file, which contains YAML anchors that later steps reference as aliases.
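
As a reminder of the YAML mechanics involved: &name defines an anchor on a node, *name references it as an alias, and the <<: merge key copies the keys of an anchored mapping into the current one. A minimal, generic sketch (the job and task names here are purely illustrative):

shared:
  - &task-defaults          # anchor: labels this mapping for later reuse
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: cycloid/cycloid-toolkit
        tag: latest

jobs:
  - name: example-job
    plan:
      - task: example-task
        config:
          <<: *task-defaults   # merge key: injects platform and image_resource here
          run:
            path: /bin/true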

Produce a configured stack from stack and config (merge)

When building a pipeline that follows the concept of a stack, this is a way to implement the merge between the defined stack and the config to produce a configured stack.

Add the following sample at the top of your pipeline, in the shared section, adapting the fields to match your needs.

shared:
  - &merge-stack-and-config
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: cycloid/cycloid-toolkit
        tag: latest
    run:
      path: /usr/bin/merge-stack-and-config
    outputs:
    - name: merged-stack
      path: "merged-stack"

Then add this new merge step to any job that needs the configured stack code (usually before a run of Ansible or Terraform):

Ansible example:

- name: run-ansible
...
  plan:
    - do:
      - get: git_config-ansible
        trigger: true
      - get: git_stack-ansible
        trigger: true

      - task: merge-stack-and-config
        config:
          <<: *merge-stack-and-config
          inputs:
          - name: git_config-ansible
            path: "config"
          - name: git_stack-ansible
            path: "stack"
        params:
          CONFIG_PATH: ((config_ansible_path))
          STACK_PATH: ansible
...
      - task: run-ansible
        <<: *run-ansible-from-bastion

Terraform example:

- name: terraform-plan
...
  plan:
    - do:
      - get: git_stack-terraform
        trigger: true
      - get: git_config-terraform
        trigger: true

      - task: merge-stack-and-config
        config:
          <<: *merge-stack-and-config
          inputs:
          - name: git_config-terraform
            path: "config"
          - name: git_stack-terraform
            path: "stack"
        params:
          CONFIG_PATH: ((config_terraform_path))
          STACK_PATH: terraform

      - put: tfstate
        params:
          env_name: ((project))-((env))
          terraform_source: merged-stack/

How to run ansible-playbook on servers

Add the following sample at the top of your pipeline, in the shared section, adapting the fields to match your needs.

shared:
  - &run-ansible
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /usr/bin/ansible-runner
      caches:
        - path: ansible-playbook/roles
      inputs:
      - name: merged-stack
        path: ansible-playbook

Then create an Ansible runner task using this YAML sample, just after getting a configured stack:

  - name: deploy-prometheus-((env))
    build_logs_to_retain: 10
    plan:
    - do:
      - get: git_config-ansible
      - get: git_stack-ansible

      - task: merge-stack-and-config
        config:
          <<: *merge-stack-and-config
...

      - task: ansible-runner
        <<: *run-ansible
        params:
          BASTION_URL: ((bastion_url))
          SSH_PRIVATE_KEY: ((bastion_private_key_pair))
          ANSIBLE_VAULT_PASSWORD: ((ansible_vault_password))
          ANSIBLE_PLAYBOOK_PATH: ansible-playbook
          ANSIBLE_PLAYBOOK_NAME: prometheus.yml
          AWS_DEFAULT_REGION: ((aws_default_region))
          AWS_ACCESS_KEY_ID: ((aws_access_key))
          AWS_SECRET_ACCESS_KEY: ((aws_secret_key))
          EXTRA_VARS:
            env: ((env))
            project: ((project))
            customer: ((customer))

TIP

The Ansible runner script can fetch dependencies from Ansible Galaxy before running Ansible, decrypt vaulted files using a password, and it also contains the Amazon EC2 dynamic inventory script to automatically discover your EC2 instances based on their Amazon tags.

See more information about the Ansible integration here.
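
For the Galaxy dependencies, the usual convention is a standard Ansible Galaxy requirements file shipped alongside the playbook; the exact filename the toolkit looks for is an assumption here, so check the Ansible integration documentation linked above. A minimal sketch:

# ansible-playbook/requirements.yml -- standard Ansible Galaxy format
# (the path and filename are assumptions; role names are illustrative)
- src: geerlingguy.nginx                           # a role pulled from Ansible Galaxy
- src: https://github.com/org/ansible-role-example # a role pulled from a git repository
  scm: git
  version: master
  name: example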

How to clean up old Amazon AMIs after a build

Add the following sample at the top of your pipeline, in the shared section, adapting the fields to match your needs.

shared:
  - &aws-ami-cleaner
    task: aws-ami-cleaner
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /usr/bin/aws-ami-cleaner
      params:
        AWS_ACCESS_KEY_ID: ((aws_access_key))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_key))
        # JSON list of AMI name patterns to clean, passed as a folded string
        AWS_NAME_PATTERNS: >
          [
            "project1_front_prod",
            "project1_batch_prod"
          ]

Then create a dedicated job to call this new task, or simply call it in your existing job after the Amazon AMI build:

- name: clean-worker-ami
  plan:
  - do:
    - get: ((project))-worker-build-ami
      passed:
        - deploy-((project))-((env))
      trigger: true

    - *aws-ami-cleaner

How to clean up old container images from Amazon ECR

Add the following sample at the top of your pipeline, in the shared section, adapting the fields to match your needs.

shared:
  - &aws-ecr-cleaner
    task: aws-ecr-cleaner
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /usr/bin/aws-ecr-cleaner
      params:
        AWS_ACCESS_KEY_ID: ((aws_access_key))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_key))
        REGION: ((aws_default_region))
        DRYRUN: False
        IMAGES_TO_KEEP: 2
        REPOSITORIES_FILTER: 'foo bar'
        # For a global clean with exclude:
        IGNORE_TAGS_REGEX: 'dev|staging|prod|latest-'
        # For a clean on specific tag/env
        FILTER_TAGS_REGEX: '^dev-'

Then create a dedicated job to call this new task, or simply call it in your existing job after the container image push:

- name: clean-worker-ecr
  plan:
  - *aws-ecr-cleaner

How to run Terraform

To run Terraform inside a pipeline, we usually run it on a configured stack. Terraform requires you to define a new entry under resource_types and configure it.

For the configuration, on Amazon infrastructure we usually recommend using the S3 backend to store the Terraform tfstate file.

resource_types:
- name: terraform
  type: docker-image
  source:
    repository: ljfranklin/terraform-resource

resources:
- name: tfstate
  type: terraform
  source:
    storage:
      bucket: ((terraform_storage_bucket_name))
      bucket_path: ((terraform_storage_bucket_path))
      region_name: ((aws_default_region))
      access_key_id: ((terraform_storage_access_key))
      secret_access_key: ((terraform_storage_secret_key))
    vars:
      access_key: ((aws_access_key))
      secret_key: ((aws_secret_key))
      env: ((env))
      project: ((project))
      customer: ((customer))
      aws_region: ((aws_default_region))

To provide manual validation before applying any changes with Terraform, we usually create two jobs.

One runs the plan and lets you review the changes; the other applies them to your infrastructure. With the terraform-resource, plan_only: true makes the put generate a plan without applying it, while plan_run: true in the apply job runs that stored plan.

jobs:
  - name: terraform-plan
    plan:
      - do:
        - get: git_stack-terraform
          trigger: true
        - get: git_config-terraform
          trigger: true

        - task: merge-stack-and-config
          config:
            <<: *merge-stack-and-config
...
        - put: tfstate
          params:
            env_name: ((project))-((env))
            plan_only: true
            terraform_source: merged-stack/

  - name: terraform-apply
    plan:
      - do:
        - get: git_stack-terraform
          passed:
            - terraform-plan
        - get: git_config-terraform
          trigger: false
          passed:
            - terraform-plan
        - get: tfstate
          trigger: false
          passed:
            - terraform-plan

        - task: merge-stack-and-config
          config:
            <<: *merge-stack-and-config
...
        - put: tfstate
          params:
            env_name: ((project))-((env))
            plan_run: true
            terraform_source: merged-stack/

How to destroy the infrastructure created by Terraform

To remove everything created by Terraform, create a dedicated job similar to the Terraform apply one, but using the destroy action instead.

  - name: terraform-destroy
    plan:
      - do:
        - get: git_stack-terraform
        - get: git_config-terraform

        - task: merge-stack-and-config
          config:
            <<: *merge-stack-and-config
...
        - put: tfstate
          params:
            action: destroy
            env_name: ((project))-((env))
            terraform_source: merged-stack/
          get_params:
            action: destroy

How to run Terraform and Ansible using common variables for both

You can follow the example of variable sharing.
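
As a minimal sketch of the idea, a YAML anchor can hold the common values, and the merge key injects them into both the Terraform vars and the Ansible EXTRA_VARS. This reuses the tfstate resource and the run-ansible anchor defined earlier on this page; the playbook name is illustrative:

shared:
  - &common-vars            # variables shared by Terraform and Ansible
    env: ((env))
    project: ((project))
    customer: ((customer))

resources:
- name: tfstate
  type: terraform
  source:
    storage:
      bucket: ((terraform_storage_bucket_name))
      bucket_path: ((terraform_storage_bucket_path))
      region_name: ((aws_default_region))
      access_key_id: ((terraform_storage_access_key))
      secret_access_key: ((terraform_storage_secret_key))
    vars:
      <<: *common-vars      # merge key: injects env, project and customer
      access_key: ((aws_access_key))
      secret_key: ((aws_secret_key))
      aws_region: ((aws_default_region))

jobs:
  - name: deploy-((project))-((env))
    plan:
      - task: ansible-runner
        <<: *run-ansible
        params:
          ANSIBLE_PLAYBOOK_PATH: ansible-playbook
          ANSIBLE_PLAYBOOK_NAME: prometheus.yml
          EXTRA_VARS:
            <<: *common-vars   # the same anchor feeds the Ansible extra vars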