Sometimes it is useful to create links between stacks. Let's go through a small example with the public Magento stack.

If we look at the Terraform sample file, one of the required arguments is vpc_id.

stack-magento/terraform/magento.tf.sample

```hcl
module "magento" {
  source = "module-magento"
  env    = "($ environment $)"
  vpc_id = "<vpc-id>"
```

We can also see that the Magento stack exposes several Terraform outputs in the outputs.tf file.

Each Terraform output is stored in the remote tfstate file as we saw in the Troubleshooting with a manual run of terraform plan section.

Getting back to our vpc_id parameter: consider that we have an infrastructure stack dedicated to providing the VPC and exposing the VPC ID as a Terraform output.

We can rely on the Terraform remote state data source to link our Magento stack to the infrastructure stack and get the vpc_id from it: https://www.terraform.io/language/state/remote-state-data

To create this link, get the bucket name (infrastructure-terraform-remote-state) and the path (infrastructure/infra/infrastructure-infra.tfstate) of the remote tfstate file used by the infrastructure stack.

Then, create the following provider.tf file:

stack-magento/terraform/provider.tf

```hcl
// Connect this stack to the infrastructure stack to be able to use its outputs directly
data "terraform_remote_state" "infrastructure" {
  backend = "s3"

  config {
    bucket = "infrastructure-terraform-remote-state"
    key    = "infrastructure/infra/infrastructure-infra.tfstate"
    region = "${var.aws_region}"
  }
}
```

With this new file, you should be able to use outputs of the infrastructure stack as Terraform data in the Magento stack or config, like this:

stack-magento/terraform/magento.tf.sample

```hcl
module "magento" {
  source = "module-magento"
  env    = "($ environment $)"
  vpc_id = "${data.terraform_remote_state.infrastructure.vpc_id}"
...
```
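Note: the interpolation above matches the pre-0.12 syntax used in this sample. On Terraform 0.12 and later, remote state outputs are read through the data source's `outputs` attribute, so the same reference would look like this (same sample, newer syntax):

```hcl
module "magento" {
  source = "module-magento"
  env    = "($ environment $)"
  vpc_id = data.terraform_remote_state.infrastructure.outputs.vpc_id
}
```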

# Override a stack with config merge

A stack is usually designed to work with the config using a minimum number of files. But Cycloid doesn't impose a specific behavior: you can define or override everything with the config.

To give an example, we will use the public Magento stack and override a Terraform file, ami.tf.

As briefly described in Private Stack structure, the following behavior is not limited to Terraform or to overriding files; you can also create new files or use this mechanism with Ansible.

To achieve this, simply go to the config repository and create the same relative path as in the stack.

  • Path in the stack: <stack repository>/terraform/module-magento/ami.tf
  • Expected path in config: <config repository>/terraform/<env>/module-magento/ami.tf
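The path mapping above can be sketched with a couple of shell commands. This is a minimal illustration: the repository locations and the "prod" environment name are placeholders, not values from the stack.

```shell
#!/bin/sh
# Sketch: derive the override location in the config repository from the
# stack-relative path. Repo name and env are illustrative placeholders.
STACK_FILE="terraform/module-magento/ami.tf"   # path inside the stack repo
CONFIG_REPO="config"
ENV="prod"

# The override lives at <config repo>/terraform/<env>/<rest of the stack path>.
TARGET="$CONFIG_REPO/terraform/$ENV/${STACK_FILE#terraform/}"
mkdir -p "$(dirname "$TARGET")"
echo "$TARGET"
```

Once the directory exists, copy your modified ami.tf into `$TARGET` and commit it to the config repository.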

Sample of the new file <config repository>/terraform/<env>/module-magento/ami.tf:

```hcl
data "aws_ami" "debian_jessie" {
  most_recent = false
  filter {
    name   = "name"
    values = ["overrided …. "]
  }
}
```

To get more details of the merge, see the merge-catalog-and-config task in the pipeline.

The path can differ between stacks, as it's related to the pipeline design. For a private stack, this path usually contains a sub-directory with the project name. Example with a stack called infrastructure implemented by an env called infra:

Stack path: <stack repository>/stack-infrastructure/terraform/foo.tf

Config path: <config repository>/infrastructure/terraform/infra/foo.tf

# Troubleshooting with a manual run of terraform plan

We don't recommend manually running Terraform for syntax verification. Instead, run the two following commands:

```shell
terraform fmt
terraform validate
```

If you still need to run Terraform manually for debugging purposes, there are 2 steps to follow.

Merge: the merge is linked to the pipeline configuration and the merge-catalog-and-config task. See this example.

But you should usually be able to reproduce it with the following commands:

Example path of a private service catalog for the infrastructure stack

```shell
rsync -av --delete \
  --exclude=".terraform" \
  --exclude="provider.tmp.tf" \
  --exclude="terraform.tfstate*" \
  .../stack/stack-infrastructure/terraform/ ./
rsync -av .../config/infrastructure/terraform/infra/ ./
```

Configure: stacks use Terraform with a remote state file on Amazon S3. To determine the path and the bucket name used to configure Terraform, have a look at the pipeline configuration for the Terraform resource.

Sample of the pipeline.yml file:

```yaml
terraform_storage_bucket_path: "($ project $)/($ environment $)"

resources:
- name: terraform-magento-((env))
  type: terraform
  icon: terraform
  source:
    env_name: ((env))
    backend_type: s3
    backend_config:
      bucket: ((terraform_storage_bucket_name))
      # When using a non-default workspace, the state path will be /workspace_key_prefix/env_name/key
      key: ((project))-((env)).tfstate
      workspace_key_prefix: ((project))
...
jobs:
- put: terraform-magento-((env))
```

The bucket name and path can be found in the declaration of the Terraform resource, and the tfstate file name can be found in the jobs that use this resource (see env_name).

If your stack follows our practices, the path of the final tfstate file should look something like: s3://<orgname>-terraform-remote-state/<project>/<env>/<project>-<env>.tfstate. With this information, you can generate a temporary provider.tmp.tf file to configure your local Terraform to use the remote tfstate file.

```shell
export BUCKET="example-terraform-remote-state"
export ENV="test"
export PROJECT="magento"

# Key you configured in the Cycloid credentials manager.
export AWS_ACCESS_KEY_ID="<Amazon accessKey>"
export AWS_SECRET_ACCESS_KEY="<Amazon secretKey>"
export AWS_REGION="eu-west-1"

echo "terraform {
  backend \"s3\" {
    bucket = \"$BUCKET\"
    key    = \"$PROJECT/$ENV/$PROJECT-$ENV.tfstate\"
    region = \"$AWS_REGION\"
  }
}" > provider.tmp.tf

# Run terraform
terraform init -backend=true
terraform plan -var access_key=$AWS_ACCESS_KEY_ID -var secret_key=$AWS_SECRET_ACCESS_KEY
```

# Guideline to upgrade Terraform in a Cycloid stack

This tutorial gives you some tips on how to upgrade the Terraform version used in Cycloid's stacks to the latest version (at the time of writing, Terraform v1).

If you have any doubts don't hesitate to contact us 😃

# How to upgrade using Cycloid?

Upgrading a stack's terraform in Cycloid consists of 2 steps:

  • Upgrading the Terraform version of the terraform Concourse resource used in the pipeline. You can check the available resource versions on the resource's Docker Hub page. The value is defined in the Concourse file that defines the pipeline, in the resource_types section, as follows:

    ```yaml
    resource_types:
    - name: terraform
      type: docker-image
      source:
        repository: ljfranklin/terraform-resource
        tag: ${terraform_version}
    ```
    • You can change the value either by:
      • editing the running pipeline directly in the Cycloid dashboard. This is not recommended, since it doesn't update the stored pipeline variables, so the value is not persisted;
      • or editing the pipeline variables file and refreshing the pipeline. This file is located in the config repository, in the pipeline/ folder; change the terraform_version variable there.
    • For more info regarding this step, please refer to the pipeline section in our docs.
  • Upgrading the Terraform version in the code, usually stored in the terraform/ folder of the stack branch. How to perform this task will vary according to the stack's current Terraform version, the version to upgrade to, and the Terraform code itself.
    To check the current version, look into the versions.tf file. Please refer to the next sections, depending on the version of your stack, for tips on how to validate the upgrade using the Cycloid dashboard.

# Terraform 0.13 and later

For Terraform 0.13 and later, upgrading can be fairly easy since there are no changes in the tfstate. You just need to update the stack's Terraform code version and, once the new code version is committed to the stack git branch, relaunch the pipeline job terraform plan, keeping in mind that it should match the pipeline's terraform resource version as specified above.
If it passes with no errors for that version upgrade, you just need to launch terraform apply to finish the upgrade for that Terraform version.
Beware, if you're changing a production pipeline, to check that the terraform plan results in no changes on the resources, since this can impact your service.
For tips on how to upgrade Terraform code, please check the procedure to upgrade TF locally section of this readme.

# Terraform <=0.12

For Terraform 0.12 and below, upgrading may require, in addition to changing the code, changing the tfstate as well. So after you have finished the Terraform code update and committed it to the stack git branch, relaunch the pipeline job terraform plan, keeping in mind that it should match the pipeline's terraform resource version as specified above.
If you encounter errors, please check the known errors section of this readme, which gives some tips on how to solve errors you may run into.
If, on the other hand, you didn't find any error during the plan, you just need to launch terraform apply to finish the upgrade for that Terraform version.
Beware, if you're changing a production pipeline, to check that the terraform plan results in no changes on the resources, since this can impact your service.
For tips on how to upgrade Terraform code, please check the upgrade TF locally section of this readme.

# Known errors

  • Performing an upgrade from 0.12 to 0.13 requires some changes in the tfstate. If you change both the pipeline's terraform resource version and the Terraform code version, you may encounter an error related to the providers format in the tfstate, similar to the following:

    ```
    - Failed to instantiate provider "registry.terraform.io/-/aws" to obtain schema: unknown provider "registry.terraform.io/-/aws"
    - Failed to instantiate provider "registry.terraform.io/-/random" to obtain schema: unknown provider "registry.terraform.io/-/random"
    ```
    • To solve this issue, you have to edit the tfstate stored in the remote bucket. You can do this:

      • Manually: by changing the providers that caused the error in the state file, as follows:


         

        ```diff
        --- "provider": "provider[\"registry.terraform.io/-/aws\"]"
        +++ "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]"
        ```
      • With a Terraform command: terraform state replace-provider replaces the provider in the tfstate. This will change the selected tfstate file and create a copy of the original version. Here is an example command:

        ```shell
        terraform state replace-provider -state=/$PATH_TO_TFSTATE/tfstate.json -- -/aws registry.terraform.io/hashicorp/aws
        ```
    • After editing, you need to upload the changed file back to the bucket and relaunch the terraform plan job on the pipeline.
      Tip! Make sure that the uploaded tfstate has the same name as the remotely stored one, since depending on your provider it might be downloaded with a different name.
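The manual edit above can also be scripted. The sketch below runs the substitution with sed on a minimal stand-in file rather than a real state (the file name and content are illustrative); the same command applies to a downloaded copy of your actual tfstate.

```shell
#!/bin/sh
# Sketch: rewrite the legacy provider address in a *local copy* of the tfstate.
# The file created here is a minimal stand-in for a real state file; always
# keep a backup and re-upload the edited file under its original name.
cat > tfstate.json <<'EOF'
{"resources": [{"provider": "provider[\"registry.terraform.io/-/aws\"]"}]}
EOF
cp tfstate.json tfstate.json.backup

# GNU sed in-place syntax; on macOS use: sed -i '' 's|...|...|g' tfstate.json
sed -i 's|registry.terraform.io/-/aws|registry.terraform.io/hashicorp/aws|g' tfstate.json
cat tfstate.json
```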

# Procedure to upgrade locally

This section details a local procedure for upgrading the Terraform code. We'll use our prometheus stack as an example of the files to change.

  1. Install a Terraform version manager (tfenv), which lets you easily switch between Terraform versions

  2. Check the stack's current Terraform version. You can do this:

    • by checking, in the Cycloid console, the tfstate of the pipeline to upgrade in the last successful terraform-apply job

    • OR by checking the terraform resource tag used in the Concourse pipeline:

      ```shell
      grep -A1 'repository: ljfranklin/terraform-resource' pipeline/pipeline.yml
      ```

  3. Install and use the next minor Terraform version using the version manager:

    ```shell
    tfenv install $VERSION
    tfenv use $VERSION
    ```
    • Tip! To list all available versions, you can run: tfenv list-remote
  4. Test locally for changes in the Terraform code. For that, change the Terraform version in terraform/versions.tf. Some useful commands you may need:

    ```shell
    terraform init
    terraform ${TF_version}upgrade   # if available, e.g. terraform 0.13upgrade
    terraform validate
    terraform fmt
    ```
  5. Test remotely for changes/bugs using the Cycloid console. For that:

    • start by changing the terraform resource tag in pipeline/pipeline.yml and committing the changes

    • then, on the pipeline, check the state of the terraform plan job

    • finally, launch the terraform apply job

    • Tip! To get more verbose output from the Terraform jobs, you can add the TF_LOG environment variable to the terraform resource in the pipeline. Just edit the pipeline and add the following env variable to the tfstate resource in the resources section, as follows:

      ```yaml
      - name: tfstate
        type: terraform
        (...)
        env:
          TF_LOG: DEBUG
      ```
  6. Repeat steps 3 to 5 for each minor version until you arrive at the desired version.
    Example upgrade order: 0.12 → 0.13 → 0.14 → 1.0
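The iteration in step 6 can be sketched as a small shell loop. The version numbers are examples, and the real tfenv/terraform commands from steps 3 to 5 are left commented out so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch of step 6: walk through minor versions one at a time.
# Versions below are examples; uncomment the real commands once tfenv is set up.
for v in 0.13.7 0.14.11 1.0.11; do
  echo "upgrading to terraform $v" >> upgrade-order.log
  # tfenv install "$v" && tfenv use "$v"
  # terraform init && terraform validate && terraform plan
done
cat upgrade-order.log
```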

Note!

For more details regarding how to upgrade Terraform code, please check the Terraform official docs and the Terraform changelog.