Stack v4 migration guide
This guide is aimed at stack maintainers. Cycloid released the Component layer feature, which allows you and your users to instantiate multiple stacks in the same environment, enabling composability in your platform.
In order to use components, you need to update your stacks to version 4. This guide
will walk you through all the steps required to migrate your stacks easily.
Why did we increment the stack version?
Stacks in versions 1 to 3 don't use the new special variable component to differentiate config paths
in the .cycloid.yml.
The standard way for most stacks in versions 1 to 3 to build the config path was this convention:
config:
  use-case:
    name: 'My Use case'
    pipeline:
      pipeline:
        path: 'pipeline/pipeline.yml'
      variables:
        path: 'pipeline/variables.sample.yml'
        destination: '($ .project $)/pipeline/($ .env $)/variables.yml'
This causes an issue when using components, since most stacks use the same path without
($ .component $): creating two components from legacy stacks in the same environment
will lead to config overlap and unexpected behavior!
Thus we incremented the version and added a check inside Cycloid: users can't create more than one component from a v1-3 stack in the same environment.
You can however add any number of components using stacks v4 alongside one v1-3 stack in the same environment.
This allows you to use the old stack the old way (one stack, one env) and migrate to components gradually.
We recommend that you migrate all your stacks to v4, as v1-3 stacks will be deprecated in the future.
Prerequisites
First, you need to think about your strategy for migrating stacks. You can:
- Update stacks in-place
- Fork the stacks in the same catalog repository
- Fork the stacks in another catalog repository
If you go for the third strategy, you will have to create another catalog repository and add it to your organization.
Use our existing documentation on the subject.
Strategy
You will have to plan how to migrate your stacks in order to avoid disruption for your users.
We propose three ways to do it, but depending on your use case you may come up with a strategy that better fits your organization.
Feel free to ask our team via our support page for advice on how to manage this migration.
Update stacks in-place
This strategy is the simplest but also the most dangerous. Updating the stack in-place means that any user can hit errors or issues while you migrate.
Only do it if you are aware of the impact on your users.
Fork the stacks in the same catalog repository
You can copy an existing stack (for example, stack-a) within the same catalog repository.
In that case you will have to rename the canonical of the stack in the .cycloid.yml to avoid
a canonical collision (we recommend that you also rename the stack to avoid confusion in the Cycloid Console).
This approach is safer and allows you to ensure that existing components will still work with the old code.
Fork the stacks in another catalog repository
This is similar to the above, but forking into another catalog repository lets your team work on stacks from a dedicated testing organization with its own catalog, and release the stacks when you are ready.
No matter which strategy you choose, the migration process for a stack is the same.
Anticipate impact on terraform resources
While updating your stacks, you will add the ($ .component $) canonical in your code. Most of the time,
project, env and component will be used to differentiate and identify resources.
Updating names and resource tags could lead to resource re-creation in terraform, or alter the names / tags of existing resources.
Evaluate those changes before migrating. If adding the component canonical could break existing infrastructure, we recommend
forking your stacks (using the 2nd or 3rd strategy described above) to keep old environments working as expected.
Also, if you let terraform apply changes automatically on stack update, disable the trigger in the pipeline to avoid unintended breaking changes.
These changes can extend beyond terraform; it's up to your team to assess them.
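As an illustration (a hypothetical resource and naming scheme, not taken from any specific stack), adding the component canonical to an attribute that identifies a resource can change an immutable value, which forces re-creation on some providers:

```hcl
# Hypothetical example: an S3 bucket named from the usual variables.
# The bucket name is immutable, so adding var.component to it forces
# Terraform to destroy and re-create the bucket on the next apply.
resource "aws_s3_bucket" "data" {
  # before: "${var.project}-${var.env}-data"
  bucket = "${var.project}-${var.env}-${var.component}-data"
}
```

Run a terraform plan first and review any destroy/create actions before applying.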
Again, feel free to ask our support team to help you in this transition.
How to migrate
A. Config path in .cycloid.yml
The first thing to address is the config path in the .cycloid.yml.
The config path in the .cycloid.yml is defined by the destination key in each use_case.
It references the path where the config will be written in the config repository when a user instantiates the stack in a component.
---
version: '3'
name: 'My Stack'
canonical: 'my-stack'
author: 'Cycloid'
config:
  # Define use-cases
  default:
    name: 'my default use-case'
    pipeline:
      pipeline:
        path: 'pipeline/pipeline.yml'
      variables:
        path: 'pipeline/variables.sample.yml'
        # The config path where the variables will be written in the config repo
        # for this use case:
        destination: '($ .project $)/pipeline/($ .env $)/variables.yml'
See the stack concept guide if you need a refresher.
To avoid config collisions when using components, you need to add the ($ .component $) special var in the destination attribute of all your use_cases.
You need to put each component in a separate folder, notably for terraform.
Our new convention will be to use ($ .org $)/($ .project $)/($ .env $)/($ .component $)/ as
a base path for each component.
That will organize your configuration in the same pattern as the UI, like this:
.
├── org1
│   ├── project-a
│   │   ├── dev
│   │   │   ├── component-1/pipeline/... // config files...
│   │   │   └── component-1/terraform/...
│   │   ├── prod
│   │   │   └── component-1
│   │   └── staging
│   │       ├── component-1
│   │       └── component-2
│   └── project-b
│       └── prod
│           ├── component-1
│           └── component-2
├── org2
│   └── project-a
│       └── dev
│           └── component-1
...
We added a Cycloid special variable named config_root to simplify the use of the default config path convention.
That means that using ($ .config_root $) is the same as putting ($ .org $)/($ .project $)/($ .env $)/($ .component $) in your configuration. For example, with org my-org, project my-project, env dev and component front, ($ .config_root $) expands to my-org/my-project/dev/front.
We updated our special variables to be more consistent, see the whole list here.
You can implement the path that fits your use case; the bare minimum would be to use the project, environment and component special vars to avoid collisions between components.
If you use another pattern, or if you deal with collisions in another way (for example by using a different config repo per project), you are free to do so. Cycloid won't enforce anything beyond requiring your stack to be updated to v4 if you want to use components.
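For example, a minimal collision-safe destination using only those three special vars (an illustrative layout, not the recommended convention above) could be:

```yaml
# Bare-minimum pattern: project + env + component is enough to keep
# two components from writing their config to the same place.
destination: '($ .project $)/($ .env $)/($ .component $)/pipeline/variables.yml'
```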
We will update the version later in this guide; for now, keep it in its current state.
To take the previous example, here would be the updated .cycloid.yml:
---
version: '3'
name: 'My Stack'
canonical: 'my-stack'
author: 'Cycloid'
config:
  use_case:
    name: 'my default use-case'
    pipeline:
      pipeline:
        path: 'pipeline/pipeline.yml'
      variables:
        path: 'pipeline/variables.sample.yml'
-       destination: '($ .project $)/pipeline/($ .env $)/variables.yml'
+       destination: '($ .org $)/($ .project $)/($ .env $)/($ .component $)/pipeline/variables.yml'
Or better, using our config_path special var:
---
version: '3'
name: 'My Stack'
canonical: 'my-stack'
author: 'Cycloid'
config:
  use_case:
    name: 'my default use-case'
    pipeline:
      pipeline:
        path: 'pipeline/pipeline.yml'
      variables:
        path: 'pipeline/variables.sample.yml'
+       destination: '($ .config_root $)/pipeline/variables.yml'
Note the added pipeline/ folder at the end. We will separate each technology per folder.
Also, note that the minimum character length of use case key identifiers has changed from 2 to 3.
We will do the same for terraform, here is an extended example of a stack with multiple use cases and terraform paths:
---
version: '3'
name: 'My App deployment'
canonical: 'my-app'
author: 'Cycloid'
config:
  deploy-k8s:
    name: 'Deploy myApp on k8s'
    pipeline:
      pipeline:
        path: 'pipeline/deploy-k8s.pipeline.yml' # each use case has its own pipeline and variable.sample file
      variables:
        path: 'pipeline/deploy-k8s.variable.sample.yml'
        destination: '($ .config_root $)/pipeline/variables.yml'
    terraform:
      main:
        path: 'terraform/deploy-vm.main.tf.sample'
        destination: '($ .config_root $)/terraform/main.tf'
  deploy-vm:
    name: 'Deploy myApp in a VM'
    pipeline:
      pipeline:
        path: 'pipeline/deploy-vm.pipeline.yml'
      variables:
        path: 'pipeline/deploy-vm.variable.sample.yml'
        destination: '($ .config_root $)/pipeline/variables.yml'
    ansible: # For ansible too
      web:
        path: 'ansible/environments/inventory.yml.sample'
        destination: '($ .config_root $)/ansible/inventory.yml'
    terraform:
      main:
        path: 'terraform/deploy-vm.main.tf.sample'
        destination: '($ .config_root $)/terraform/main.tf'
Keep version: '3'; we will update it at the end, once the stack is ready to be used as a component.
We still have work to do to make it compatible at this stage.
B. Update the pipeline
The configuration is pulled via the pipeline using the git Concourse resource
and then merged using our merge stack and config script.
So you need to update all the git resources that watch and fetch the config to use the new destination path from the .cycloid.yml.
We will use the following pipeline as an example:
---
# This is a usual pipeline using terraform before the stack v4 update
definitions:
  tasks:
    - &task-merge-stack-and-config
      task: merge-stack-and-config
      config:
        platform: linux
        image_resource:
          type: registry-image
          source:
            repository: cycloid/cycloid-toolkit
            tag: latest
        run:
          path: /usr/bin/merge-stack-and-config
        inputs:
        - name: stack
          path: "stack"
        - name: config
          path: "config"
        outputs:
        - name: merged_stack
          path: "merged_stack"
      params:
        CONFIG_PATH: '($ .project $)/terraform/($ .env $)/terraform/'
        STACK_PATH: '($ .stack_path $)/terraform'
        MERGE_OUTPUT_PATH: "merged_stack/"

resource_types:
- name: terraform
  type: registry-image
  source:
    repository: cycloid/terraform-resource
    tag: '1.8.2'

resources:
- name: tfstate
  type: terraform
  icon: terraform
  source:
    backend_type: http
    backend_config:
      address: '($ .api_url $)/inventory?jwt=($ .inventory_jwt $)'
      skip_cert_verification: true
    env_name: ($ .env $)

- name: stack
  type: git
  icon: github-circle
  source:
    uri: ($ .scs_url $)
    branch: ($ .scs_branch $)
    private_key: ((($ .scs_cred_path $).ssh_key))
    paths:
    - ($ .stack_path $)/terraform/*

- name: config # this is where we declare the resource that will fetch the config
  type: git
  icon: github-circle
  source:
    uri: ($ .cr_url $)
    branch: ($ .cr_branch $)
    private_key: ((($ .cr_cred_path $).ssh_key))
    paths:
    - '/($ .project $)/terraform/($ .env $)/*'

# Define jobs that form the pipeline
jobs:
- name: terraform-plan
  serial: true
  serial_groups: [terraform]
  max_in_flight: 1
  build_logs_to_retain: 10
  plan:
  - do:
    - get: stack
      trigger: true
    - get: config
      trigger: true
    - *task-merge-stack-and-config
    - put: tfstate
      params:
        plan_only: true
        terraform_source: merged_stack/

- name: terraform-apply
  serial: true
  serial_groups: [terraform]
  build_logs_to_retain: 10
  plan:
  - do:
    - get: stack
      trigger: false
      passed:
      - terraform-plan
    - get: config
      trigger: false
      passed:
      - terraform-plan
    - get: tfstate
      trigger: false
      passed:
      - terraform-plan
    - *task-merge-stack-and-config
    - put: tfstate
      params:
        plan_run: true
        terraform_source: merged_stack/

- name: terraform-destroy
  serial: true
  serial_groups: [terraform]
  build_logs_to_retain: 10
  plan:
  - do:
    - get: stack
      trigger: false
    - get: config
      trigger: false
    - *task-merge-stack-and-config
    - put: tfstate
      params:
        action: destroy
        terraform_source: merged_stack/
      get_params:
        action: destroy

groups:
- name: overview
  jobs:
  - terraform-plan
  - terraform-apply
- name: destroy
  jobs:
  - terraform-destroy
You will have to update:
- The CONFIG_PATH of the merge-stack-and-config task.
- The source.paths attribute in the git resource that manages the config.
---
# The same pipeline using terraform after the stack v4 update
definitions:
  tasks:
    # Merge stack and config anchor to use later.
    - &task-merge-stack-and-config
      task: merge-stack-and-config
      config:
        platform: linux
        image_resource:
          type: registry-image
          source:
            repository: cycloid/cycloid-toolkit
            tag: latest
        run:
          path: /usr/bin/merge-stack-and-config
        inputs:
        - name: stack
          path: "stack"
        - name: config
          path: "config"
        outputs:
        - name: merged_stack
          path: "merged_stack"
      params:
        # Update the CONFIG_PATH of the merge-stack-and-config
        CONFIG_PATH: '($ .config_root $)/terraform/'
        STACK_PATH: '($ .stack_path $)/terraform'
        MERGE_OUTPUT_PATH: "merged_stack/"

resource_types:
# Ellipsis...

resources:
- name: config # this is where we declare the resource that will fetch the config
  type: git
  icon: github-circle
  source:
    uri: ($ .cr_url $)
    branch: ($ .cr_branch $)
    private_key: ((($ .cr_cred_path $).ssh_key))
    # Update the path that the resource will watch and clone
    paths:
    - '($ .config_root $)/terraform/*'

# The rest of the example file doesn't need to be updated
At this stage, check every reference to the config path and update it.
Also, anything in your scripts that requires separation between components will need to
fetch the component canonical using the ($ .component $) or ($ .config_root $) vars.
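For example (a hypothetical task script; the variable names are illustrative and would be injected through the pipeline's params, e.g. COMPONENT: ($ .component $)), a script that needs a per-component working directory could build it like this:

```shell
#!/bin/sh
# Hypothetical helper: build a per-component path so two components
# of the same stack never write to the same directory.
# PROJECT, ENV_NAME and COMPONENT are assumed to be injected by the
# pipeline (defaults below are only for standalone testing).
PROJECT="${PROJECT:-my-project}"
ENV_NAME="${ENV_NAME:-dev}"
COMPONENT="${COMPONENT:-component-1}"

WORKDIR="${PROJECT}/${ENV_NAME}/${COMPONENT}/terraform"
mkdir -p "$WORKDIR"
echo "$WORKDIR"
```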
C. Update the terraform code
The terraform code is the trickiest part.
Altering resource attributes like names, ids, tags and other metadata could end up re-creating resources, depending on the provider.
Be careful with stacks that automatically trigger the terraform apply in your pipeline.
You will need to add the component variable to all your modules. Usually they already use the org, project and env ones:
variable "org" {}
variable "project" {}
variable "env" {}
variable "component" {}
You can inject them into the modules by using the special vars directly in the module call inside the main.tf.sample defined in your .cycloid.yml:
module "my_module" {
  source = "./some/module"

  org       = "($ .org $)"
  project   = "($ .project $)"
  env       = "($ .env $)"
  component = "($ .component $)"
}
Then, identify which resources use the org, project and env variables to be identified in your code, and add the component variable to them.
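For instance (a hypothetical resource, assuming the module variables shown above), a resource identified through its tags would gain the component tag:

```hcl
# Hypothetical example: a resource identified through its tags.
# Adding the component tag updates metadata in place for most
# providers, but verify with terraform plan before applying.
resource "aws_instance" "front" {
  ami           = var.ami_id        # assumed module variables
  instance_type = var.instance_type

  tags = {
    Name      = "${var.project}-${var.env}-${var.component}-front"
    project   = var.project
    env       = var.env
    component = var.component
  }
}
```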
D. Update Ansible code
Following the upgrade to version 4, which introduced the component layer, this new layer has been added to Terraform as shown above; it is typically used as an additional tag on your cloud resources.
With this addition, we will also pass the component canonical name to Ansible. This allows you to use it within your Ansible code, as well as in the Ansible inventory for filtering by this additional tag.
To do this, add the component variable to the EXTRA_VARS list in your pipeline template:
params:
  EXTRA_VARS:
    organization: ((organization))
    project: ((project))
    env: ((environment))
    component: ((component))
Here is an example of a host filter using AWS dynamic inventory, including the component tag:
- hosts: :&tag_org_{{ organization }}:&tag_project_{{ project }}:&tag_env_{{ env }}:&tag_component_{{ component }}
E. Update the version
Once you have updated all your code, you can now increment the .cycloid.yml version and test your changes:
---
version: '4'
name: 'My Stack'
canonical: 'my-stack'
author: 'Cycloid'
config:
  use_case:
    name: 'my default use-case'
    pipeline:
      pipeline:
        path: 'pipeline/pipeline.yml'
      variables:
        path: 'pipeline/variables.sample.yml'
        destination: '($ .config_root $)/pipeline/variables.yml'
Examples
To help you understand the changes, here is a list of public stacks that we migrated to components; you can check the diffs to better grasp how to do it: