# Configure the pipeline to run Terraform plan

(Screenshot: pipeline overview after step 3)

To complete this step, we will build on what we saw in Step 2 by adding a new job that runs Terraform when our unittest job succeeds. The goal is to configure a new resource to run Terraform and use it in a dedicated job.

In addition, the Terraform code will be stored in your Git repository, inside the stacks branch. This means we will also configure a git resource in the pipeline to fetch the content of the stacks branch.

Follow the steps below to apply all the changes described in this step.

Let's start with the Terraform code and create a dummy Terraform configuration, in a provider.tf file, for the cloud provider we want to use.

TIP

If you are not comfortable with Terraform, we encourage you to follow the Terraform getting started documentation.

The content of provider.tf specifies the provider configuration (GCP, AWS, AzureRM) and the variables used to configure it. Those variables will be provided by the pipeline.

stack-sample/terraform/provider.tf

Several input variables have been defined to configure the cloud provider, plus two variables to pass the source code Git repository URL that we will use and explain in the next step. The provider.tf usually also contains some generic variables like customer, project and env, as most of our customers use them in Terraform to tag/name cloud resources.
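The exact content depends on your cloud provider. As a minimal sketch, an AWS-flavoured provider.tf could look like the following (the variable names here are assumptions, adapt them to the variables you actually pass from the pipeline):

```hcl
# Cloud provider configuration, fed by pipeline variables.
provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.aws_region
}

# Cloud provider credentials and region.
variable "access_key" {}
variable "secret_key" {}
variable "aws_region" {
  default = "eu-west-1"
}

# Generic variables commonly used to tag/name cloud resources.
variable "customer" {}
variable "project" {}
variable "env" {}

# Source code Git repository, used and explained in the next step.
variable "code_git_repository" {
  default = ""
}
variable "code_git_branch" {
  default = ""
}
```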

Now let's add a Terraform resource to the pipeline. As Terraform is not part of the core resource types of Concourse, the first thing to do is to declare a new terraform resource type in the dedicated root section resource_types.

TIP

When to use a task or a resource in a pipeline?

In our case, we could have provided a container image with Terraform already installed and called the terraform command from a task. A resource is usually there to simplify a task: it makes something more generic and easier to reuse thanks to embedded scripts. A resource also produces versions that can be used as triggers between different jobs.
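For comparison, the task-based alternative would look roughly like the sketch below (the image, tag and paths are assumptions; this guide uses the resource approach instead):

```yaml
# Alternative approach (not used in this guide): run Terraform from a plain
# task, using an image that already ships the terraform binary.
- task: terraform-plan
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: hashicorp/terraform, tag: light}
    inputs:
      - name: git_stack
    run:
      path: sh
      args:
        - -ec
        - |
          cd git_stack/stack-sample/terraform
          terraform init
          terraform plan
```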

If you are interested in creating your own resource, follow the Concourse documentation.

stack-sample/pipeline/pipeline.yml

```yaml
resource_types:
  - name: terraform
    type: docker-image
    source:
      repository: ljfranklin/terraform-resource
      tag: '0.12.24'
```

The source code of this resource can be found here.

From there, a new terraform resource type is available and can be configured. The following sample configures a resource called tfstate of our new terraform type:

stack-sample/pipeline/pipeline.yml

Some explanations: Terraform stores the state of the infrastructure in a file called tfstate. This file can be stored in different backends. In this example, we decided to use our cloud provider's object store as the backend type for this file. The backend_type parameter selects the backend, which is then configured in the backend_config section using pipeline variables defined below. Refer to the dedicated Terraform documentation to learn more about your backend storage configuration.

The vars section defines a collection of Terraform input variables that the pipeline passes to Terraform. The most common ones are the cloud provider credentials that we defined in provider.tf and the extra customer/project/env variables that we previously saw.
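As an illustration, a tfstate resource using an AWS S3 bucket as backend could look like the sketch below (the backend_config keys depend on the backend you choose, and the variable names are assumptions matching the provider.tf sketch above):

```yaml
resources:
- name: tfstate
  type: terraform
  source:
    env_name: ((env))
    # Where the tfstate file is stored.
    backend_type: s3
    backend_config:
      bucket: ((terraform_storage_bucket_name))
      key: ((project))-((env)).tfstate
      region: ((aws_default_region))
      access_key: ((aws_access_key))
      secret_key: ((aws_secret_key))
    # Terraform input variables provided by the pipeline.
    vars:
      access_key: ((aws_access_key))
      secret_key: ((aws_secret_key))
      aws_region: ((aws_default_region))
      customer: ((customer))
      project: ((project))
      env: ((env))
      code_git_repository: ((git_repository))
```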

Let's take a small detour via the variables.sample.yml file to add our new pipeline variables:

stack-sample/pipeline/variables.sample.yml

We added several variables related to our pipeline changes.

  • git: to provide the SSH key used to get the Terraform code from your private Git repository (git_ssh_key), as well as the branch and URL to use (stack_git_branch, git_repository).
  • cloud provider: to configure the cloud provider access and the Terraform tfstate file storage.

As you can see, the value of some variables has a specific format ((myvalue)). Such a value will be provided by a Cycloid credential, like the one you previously created in the section prepare Cycloid credentials. See the pipeline variable documentation for more information on using Cycloid credentials in a pipeline.
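As a sketch, and assuming the AWS-flavoured examples above, the variables.sample.yml could contain something like this (the keys, values and credential paths are assumptions to adapt to your setup):

```yaml
# Generic naming/tagging variables
customer: my-customer
project: stack-sample
env: demo

# Git repository holding the Terraform code (stacks branch)
git_repository: git@github.com:MyUser/stack-sample.git
stack_git_branch: stacks
git_ssh_key: ((git_github.ssh_key))    # value provided by a Cycloid credential

# Cloud provider access and tfstate storage
aws_access_key: ((aws.access_key))     # value provided by a Cycloid credential
aws_secret_key: ((aws.secret_key))     # value provided by a Cycloid credential
aws_default_region: eu-west-1
terraform_storage_bucket_name: my-terraform-states
```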

Getting back to the pipeline file, let's configure a git resource to fetch the dummy Terraform source code from our stacks branch.

stack-sample/pipeline/pipeline.yml

```yaml
resources:
...
- name: git_stack
  type: git
  source:
    uri: ((git_repository))
    branch: ((stack_git_branch))
    private_key: ((git_ssh_key))
    paths:
      - stack-sample/terraform/*
```

We also added a filter on the Terraform path using paths, so that Terraform jobs are triggered only when a commit changes Terraform files.

The standard pipeline behavior with Terraform is to first run terraform plan. It's a convenient way to check that the execution plan for a set of changes matches your expectations, without making any changes to real resources. A second job then applies those changes by calling terraform apply.

Let's create the dedicated job that runs the terraform plan command right after the unittest job (using the passed argument), and keep terraform apply for the next step:

stack-sample/pipeline/pipeline.yml

```yaml
jobs:
...
  - name: terraform-plan
    max_in_flight: 1
    build_logs_to_retain: 10
    plan:
      - do:
        - get: git_stack
          trigger: true
        - get: git_code
          trigger: true
          passed:
            - unittest
        - put: tfstate
          params:
            plan_only: true
            terraform_source: git_stack/stack-sample/terraform
```

An important point is the passed parameter that you can set on a get step. This keyword links jobs together and ensures that we get the code already processed by the unittest job.

This job gets the Terraform code from the git_stack resource, which points to our Git repository on the stacks branch. It then calls put on the Terraform resource (tfstate) with the plan_only parameter to specify that we only want to run terraform plan.

Don't forget to add this job to a group:

stack-sample/pipeline/pipeline.yml

```yaml
groups:
- name: overview
  jobs:
  - unittest
  - terraform-plan
```

Click on the terraform-plan job and check the Terraform logs saying we successfully ran it with 0 changes.

(Screenshot: Terraform plan logs)

Add and commit those changes in Git:

```bash
git add .
git commit -m "Step 3"
git push origin stacks
```

Get back to the Cycloid dashboard and click on the Refresh pipeline button.


# Key points to remember