To make some pipeline samples easier to use, we usually create a section named `shared` at the beginning of the pipeline's YAML file, which contains YAML anchors that can be referenced as aliases later in the file.
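As a minimal illustration of the mechanism (the names here are hypothetical, not part of any real stack), an anchor declared under `shared` with `&` can be merged into a later task with `<<: *…`:

```yaml
shared:
  # Declare an anchor (&) holding a reusable task configuration
  - &my-task-config
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: busybox
        tag: latest
    run:
      path: echo
      args: ["hello"]

jobs:
  - name: demo
    plan:
      - task: demo-task
        config:
          # Merge the anchored mapping here via its alias (*)
          <<: *my-task-config
```

The `<<:` merge key copies every key of the anchored mapping into the current one, so keys added next to it (such as `inputs` or `params`) extend or override the shared configuration.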

# Produce a configured stack from stack and config (merge)

While building a pipeline that follows the concept of a stack, this is a way to implement the merge between the defined stack and the config to produce a configured stack.

Add the following sample at the top of your pipeline, in the `shared` section, filling in the highlighted fields to match your needs.

```yaml
shared:
  - &merge-stack-and-config
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: cycloid/cycloid-toolkit
        tag: latest
    run:
      path: /usr/bin/merge-stack-and-config
    outputs:
    - name: merged-stack
      path: "merged-stack"
```

Then add this new merge step to any job that needs the configured stack code (usually before a run of Ansible or Terraform):

Ansible example:

```yaml
- name: run-ansible
...
  plan:
    - do:
      - get: git_config-ansible
        trigger: true
      - get: git_stack-ansible
        trigger: true

      - task: merge-stack-and-config
        config:
          <<: *merge-stack-and-config
          inputs:
          - name: git_config-ansible
            path: "config"
          - name: git_stack-ansible
            path: "stack"
        params:
          CONFIG_PATH: ((config_ansible_path))
          STACK_PATH: ansible
...
      - task: run-ansible
        <<: *run-ansible-from-bastion
```


Terraform example:

```yaml
- name: terraform-plan
...
  plan:
    - do:
      - get: git_stack-terraform
        trigger: true
      - get: git_config-terraform
        trigger: true

      - task: merge-stack-and-config
        config:
          <<: *merge-stack-and-config
          inputs:
          - name: git_config-terraform
            path: "config"
          - name: git_stack-terraform
            path: "stack"
        params:
          CONFIG_PATH: ((config_terraform_path))
          STACK_PATH: terraform

      - put: tfstate
        params:
          plan_only: true
          terraform_source: merged-stack/
```

# How to run ansible-playbook on servers

Add the following sample at the top of your pipeline, in the `shared` section, filling in the highlighted fields to match your needs.

```yaml
shared:
  - &run-ansible
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /usr/bin/ansible-runner
      caches:
        - path: ansible-playbook/roles
      inputs:
      - name: merged-stack
        path: ansible-playbook
```

Then create an Ansible runner task using this sample of YAML just after getting a configured stack:
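A task reusing the `&run-ansible` anchor defined above might look like the sketch below. The parameter names under `params` are assumptions about the ansible-runner script's environment variables; check the Cycloid toolkit documentation for the exact names your version supports.

```yaml
  - task: run-ansible
    # Merge in the shared &run-ansible anchor defined above
    <<: *run-ansible
    params:
      # Illustrative values only: adjust to your playbook layout
      ANSIBLE_PLAYBOOK_PATH: ansible-playbook
      ANSIBLE_PLAYBOOK_NAME: site.yml
```

Because the anchor already carries the `config` key (image, `run` path, caches and inputs), the task step itself only needs to add job-specific parameters.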

TIP

The ansible-runner script is able to fetch dependencies from Ansible Galaxy before running Ansible, decrypt vaulted files using a password, and also contains dynamic inventory plugins/scripts to automatically find your cloud instances based, for example, on tags.

See the Ansible integration documentation for more information.

# How to cleanup old Amazon AMI after a build

Add the following sample at the top of your pipeline, in the `shared` section, filling in the highlighted fields to match your needs.

```yaml
shared:
  - &aws-ami-cleaner
    task: aws-ami-cleaner
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /usr/bin/aws-ami-cleaner
      params:
        AWS_ACCESS_KEY_ID: ((aws_access_key))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_key))
        AWS_NAME_PATTERNS: >
          [
            "project1_front_prod",
            "project1_batch_prod"
          ]
```

Then create a dedicated job to call this new task, or simply call it in your existing job after the build of the Amazon AMI:

```yaml
- name: clean-worker-ami
  plan:
  - do:
    - get: ((project))-worker-build-ami
      passed:
        - deploy-((project))-((env))
      trigger: true

    - *aws-ami-cleaner
```

# How to cleanup old container images from Amazon ECR

Add the following sample at the top of your pipeline, in the `shared` section, filling in the highlighted fields to match your needs.

```yaml
shared:
  - &aws-ecr-cleaner
    task: aws-ecr-cleaner
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /usr/bin/aws-ecr-cleaner
      params:
        AWS_ACCESS_KEY_ID: ((aws_access_key))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_key))
        REGION: ((aws_default_region))
        DRYRUN: False
        IMAGES_TO_KEEP: 2
        REPOSITORIES_FILTER: 'foo bar'
        # For a global clean with exclude:
        IGNORE_TAGS_REGEX: 'dev|staging|prod|latest-'
        # For a clean on specific tag/env
        FILTER_TAGS_REGEX: '^dev-'
```

Then create a dedicated job to call this new task, or simply call it in your existing job after the push of the container image:

```yaml
- name: clean-ecr-images
  plan:
  - *aws-ecr-cleaner
```

# How to run terraform

To run Terraform inside a pipeline, we usually do it on a configured stack. Terraform requires you to define a new entry under `resource_types` and to configure a resource that uses it.

For the configuration, on Amazon infrastructure we usually recommend using the S3 backend for the Terraform tfstate file, or an equivalent for other cloud providers.
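As a sketch, assuming the `ljfranklin/terraform-resource` Concourse resource (which provides the `tfstate` puts used in the job examples; the bucket, key, and credential names are placeholders), the resource type and its S3-backed resource could be declared as:

```yaml
resource_types:
  # Terraform resource type for Concourse
  - name: terraform
    type: docker-image
    source:
      repository: ljfranklin/terraform-resource
      tag: latest

resources:
  - name: tfstate
    type: terraform
    source:
      env_name: ((env))
      # Store the tfstate file in an S3 bucket
      backend_type: s3
      backend_config:
        # Placeholder values: adapt to your own bucket and credentials
        bucket: ((terraform_storage_bucket_name))
        key: ((project))-((env)).tfstate
        region: ((aws_default_region))
        access_key: ((aws_access_key))
        secret_key: ((aws_secret_key))
```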

To provide a manual validation before applying any changes with Terraform, we usually create two jobs: one to run the plan and let you review the changes, then one to apply the changes to your infrastructure.

```yaml
jobs:
  - name: terraform-plan
    plan:
      - do:
        - get: git_stack-terraform
          trigger: true
        - get: git_config-terraform
          trigger: true

        - task: merge-stack-and-config
          config:
            <<: *merge-stack-and-config
...
        - put: tfstate
          params:
            plan_only: true
            terraform_source: merged-stack/

  - name: terraform-apply
    plan:
      - do:
        - get: git_stack-terraform
          passed:
            - terraform-plan
        - get: git_config-terraform
          trigger: false
          passed:
            - terraform-plan
        - get: tfstate
          trigger: false
          passed:
            - terraform-plan

        - task: merge-stack-and-config
          config:
            <<: *merge-stack-and-config
...
        - put: tfstate
          params:
            plan_run: true
            terraform_source: merged-stack/
```

# How to destroy the infrastructure created by Terraform

To remove everything created by Terraform, create a dedicated job similar to the Terraform apply one but using Terraform destroy instead.

```yaml
  - name: terraform-destroy
    plan:
      - do:
        - get: git_stack-terraform
        - get: git_config-terraform

        - task: merge-stack-and-config
          config:
            <<: *merge-stack-and-config
...
        - put: tfstate
          params:
            action: destroy
            terraform_source: merged-stack/
          get_params:
            action: destroy
```

# How to run Terraform and Ansible using common variables for both

You can follow the example of variables sharing.

# Tips using Cycloid toolkit scripts

By using Cycloid, you are probably relying on one of our Cycloid toolkit scripts. One of them is ansible-runner, whose usage might look like this in your pipeline:

```yaml
  - task: run-ansible
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /usr/bin/ansible-runner
```

At some point you might need to debug your ansible-playbook execution, or to execute pre or post actions before calling the Cycloid toolkit ansible-runner script. To achieve that without building a new Docker image, you can:

  • Change the `path` field of your task to another command like `bash` or `sh`
  • Add the `args` field just below `path` to set the custom shell script to execute

For example, the previous ansible-runner usage could be changed into a bash script that displays a custom message before executing the /usr/bin/ansible-runner script:

```yaml
  - task: run-ansible
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        path: /bin/bash
        args:
        - -ec
        - |
          echo "Hello ${CUSTOM_MESSAGE}"
          # Execute ansible-runner afterward
          /usr/bin/ansible-runner
    params:
      CUSTOM_MESSAGE: "World !"
```

This is of course just an example, and the same tip can be applied to any task using a Docker image with shell capability.
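For minimal images that ship `/bin/sh` but not `bash` (an assumption to verify against your particular image), the same pattern works with `sh`:

```yaml
  - task: run-ansible
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cycloid/cycloid-toolkit
          tag: latest
      run:
        # Use sh when bash is not available in the image
        path: /bin/sh
        args:
        - -ec
        - |
          echo "pre-action before the runner"
          /usr/bin/ansible-runner
```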