In this article, we’ll explore moving resources from a single Terraform state file to three separate state files—Development, Preproduction, and Production. The goal is to manage resources across different environments effectively, using GitLab, HashiCorp Vault, and Infrastructure as Code (IaC) with Terraform.
We assume you currently have a single Terraform state file in GitLab, called production, that stores the resources for all environments. The resources in each environment are identical except for their names, and they are defined in separate files (development.tf, preproduction.tf, and production.tf). Examples of the files are below:
# development.tf
resource "aws_vpc" "development_main" {
  cidr_block = var.cidr_block

  tags = {
    Name = "development-main-vpc"
  }
}

resource "aws_subnet" "development_main" {
  vpc_id     = aws_vpc.development_main.id
  cidr_block = var.subnet_cidr_block

  tags = {
    Name = "development-main-subnet"
  }
}

resource "aws_secretsmanager_secret" "development_backend" {
  name = "development-backend-secret"
}

resource "aws_secretsmanager_secret" "development_access_key" {
  name = "development-access-key-secret"
}
# preproduction.tf
resource "aws_vpc" "preproduction_main" {
  cidr_block = var.cidr_block

  tags = {
    Name = "preproduction-main-vpc"
  }
}

resource "aws_subnet" "preproduction_main" {
  vpc_id     = aws_vpc.preproduction_main.id
  cidr_block = var.subnet_cidr_block

  tags = {
    Name = "preproduction-main-subnet"
  }
}

resource "aws_secretsmanager_secret" "preproduction_backend" {
  name = "preproduction-backend-secret"
}

resource "aws_secretsmanager_secret" "preproduction_access_key" {
  name = "preproduction-access-key-secret"
}
# production.tf
resource "aws_vpc" "production_main" {
  cidr_block = var.cidr_block

  tags = {
    Name = "production-main-vpc"
  }
}

resource "aws_subnet" "production_main" {
  vpc_id     = aws_vpc.production_main.id
  cidr_block = var.subnet_cidr_block

  tags = {
    Name = "production-main-subnet"
  }
}

resource "aws_secretsmanager_secret" "production_backend" {
  name = "production-backend-secret"
}

resource "aws_secretsmanager_secret" "production_access_key" {
  name = "production-access-key-secret"
}
We will end up with a single file instead, called resources.tf, which will look like this:
resource "aws_vpc" "main" {
  cidr_block = var.cidr_block

  tags = {
    Name = "main-vpc"
  }
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.subnet_cidr_block

  tags = {
    Name = "main-subnet"
  }
}

resource "aws_secretsmanager_secret" "backend" {
  name = "backend-secret"
}

resource "aws_secretsmanager_secret" "access_key" {
  name = "access-key-secret"
}
I’d suggest using a docker-compose.yml file for ease as well, but you don’t have to. Here is an example if you want to use one:
services:
  terraform:
    image: www.whereever-your-images-are:terraform-1.5.7
    cap_drop:
      - NET_RAW
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
    working_dir: /work/
    entrypoint: terraform
    networks:
      - local
    volumes:
      - ${PWD}:/work
    security_opt:
      - label:disable
      - no-new-privileges:true
    environment:
      - TF_VAR_environment
      - TF_HTTP_USERNAME
      - TF_HTTP_PASSWORD
      - TF_HTTP_ADDRESS=https://gitlab.companyname.com/api/v4/projects/39/terraform/state/${TF_VAR_environment}
      - TF_HTTP_LOCK_ADDRESS=https://gitlab.companyname.com/api/v4/projects/39/terraform/state/${TF_VAR_environment}/lock
      - TF_HTTP_LOCK_METHOD=POST
      - TF_HTTP_UNLOCK_ADDRESS=https://gitlab.companyname.com/api/v4/projects/39/terraform/state/${TF_VAR_environment}/lock
      - TF_HTTP_UNLOCK_METHOD=DELETE
      - GITLAB_TOKEN
      - TERRAFORM_VAULT_AUTH_JWT=notarealtoken
      - TF_VAR_vault_token=${TF_VAR_vaultnp_token}

# The network referenced by the service must also be declared at the
# top level, or compose will refuse to start.
networks:
  local:
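With this file in place, Terraform runs inside the container rather than from a locally installed binary. Since the entrypoint is already set to terraform, only the subcommand and its flags are passed. A sketch, assuming Docker Compose v2:

```shell
# Run Terraform through the compose service defined above;
# --rm removes the container once the command finishes.
docker compose run --rm terraform init
docker compose run --rm terraform plan
```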
Let’s break down the steps:
Prerequisites
- Tools Installed: Ensure you have the following installed:
- Terraform
- Git CLI
- HashiCorp Vault – not needed if you are not using Vault.
- Environment Variables Configured: You should already have a Vault token and GitLab credentials exported. For example:
export VAULT_TOKEN="<your-vault-token>"
export GITLAB_TOKEN="<your-gitlab-token>"
- Access to State File: Make sure the current state file is accessible from GitLab and that you have permission to modify it.
- Environment Variable Set: Set a variable called environment; it will be used to point Terraform at each environment's state file:
export TF_VAR_environment=production
Step 1: Pull the Current State File
What and Why?
Pulling the current state file gives you a local copy of the state data. This file contains the resource IDs that will be required for re-importing the resources into their respective environment-specific state files later.
Start by pulling the current state file locally:
terraform state pull > current_state.tfstate
This file is critical for identifying resource IDs during the migration process.
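The pulled file is plain JSON, so it is worth keeping a timestamped backup and confirming it parses before any state surgery. A small sketch (python3 is used only as a dependency-free JSON checker, and the helper name is mine, not part of Terraform):

```shell
# Back up the pulled state and verify it is valid JSON.
backup_state() {
  cp "$1" "$1.$(date +%Y%m%d%H%M%S).backup" &&
  python3 -m json.tool "$1" > /dev/null &&
  echo "state OK: $1"
}

# usage: backup_state current_state.tfstate
```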
Step 2: Identify Resources to Move
What and Why?
Identifying resources ensures you know which resources need to be migrated to the new state files. This step helps you map out which resources belong to each environment.
Examine the state file to list the resources you need to migrate. Use the following command to list all resources:
terraform state list
For example, you might see:
aws_vpc.development_main
aws_subnet.development_main
aws_secretsmanager_secret.development_backend
aws_secretsmanager_secret.development_access_key
aws_vpc.preproduction_main
aws_subnet.preproduction_main
aws_secretsmanager_secret.preproduction_backend
aws_secretsmanager_secret.preproduction_access_key
aws_vpc.production_main
aws_subnet.production_main
aws_secretsmanager_secret.production_backend
aws_secretsmanager_secret.production_access_key
We will move these resources into environment-specific state files. The production resources can stay where they are, since they are already in the production state file, but the development and preproduction resources need to move out. If you have any data blocks, they will be re-read into each state, so you shouldn’t need to remove them.
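Because every resource name embeds its environment, a grep over a saved copy of the state list is enough to see what belongs where. A sketch, assuming the list was saved with terraform state list > all_resources.txt (the helper name is mine):

```shell
# Split a saved state list into one file per environment to migrate.
# Production entries are left alone: they stay in the current state.
split_by_env() {
  for env in development preproduction; do
    grep "$env" "$1" > "${env}_resources.txt"
  done
}

# usage: split_by_env all_resources.txt
```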
Step 3: Remove Resources from the Current State File
What and Why?
Removing resources from the current state file ensures they are no longer tracked in the production state file. This prevents conflicts when importing them into the new environment-specific state files.
Use terraform state rm to remove resources from the current state file:
terraform state rm aws_vpc.development_main
terraform state rm aws_subnet.development_main
terraform state rm aws_vpc.preproduction_main
terraform state rm aws_subnet.preproduction_main
terraform state rm aws_secretsmanager_secret.development_backend
terraform state rm aws_secretsmanager_secret.development_access_key
terraform state rm aws_secretsmanager_secret.preproduction_backend
terraform state rm aws_secretsmanager_secret.preproduction_access_key
After removal, verify they no longer appear:
terraform state list
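Since terraform state rm cannot be undone without restoring a backup, it is safer to generate the removal commands from the saved state list and review them before running anything. A sketch (the helper name is mine):

```shell
# Emit one "terraform state rm" command per non-production resource,
# so the full list can be reviewed before it is executed.
make_rm_script() {
  grep -E 'development|preproduction' "$1" | sed 's/^/terraform state rm /'
}

# usage: terraform state list > all_resources.txt
#        make_rm_script all_resources.txt > remove.sh
#        sh remove.sh   # after reviewing remove.sh
```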
Step 4: Replace Environment-Specific Files with resources.tf
What and Why?
Instead of maintaining separate Terraform files for each environment with redundant resource definitions, you can consolidate the resources into a single resources.tf file. This file contains all shared resource configurations, and Terraform will apply them against whichever environment's state is selected. Rename the old development.tf, preproduction.tf, and production.tf to development.tf.old, preproduction.tf.old, and production.tf.old (or comment out each resource); they will not be used going forward.
Here is an example of resources.tf:
resource "aws_vpc" "main" {
  cidr_block = var.cidr_block

  tags = {
    Name = "main-vpc"
  }
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.subnet_cidr_block

  tags = {
    Name = "main-subnet"
  }
}

resource "aws_secretsmanager_secret" "backend" {
  name = "backend-secret"
}

resource "aws_secretsmanager_secret" "access_key" {
  name = "access-key-secret"
}
Step 5: Set the Target Environment
What and Why?
Setting the target environment ensures that Terraform operations are directed to the correct environment-specific state file. This is achieved by configuring the backend dynamically based on the environment.
Use an environment variable to specify the target environment for the migration. For example:
export TF_VAR_environment="development"
Update your docker-compose.yml file to use the correct vault token:
- TF_VAR_vault_token=${TF_VAR_vaultnp_token}
If you are not using Vault, comment this line out.
Step 6: Import Resources into the New Development or Preproduction State Files
What and Why?
Importing resources into the new state file lets Terraform track them under the new environment-specific configuration. Use import blocks for better readability and organization.
Create a separate import.tf file and add import blocks for the resources that you removed from the production state and want to import into the development or preproduction state, depending on which you are doing first. The process is the same for both; just remember to set the TF_VAR_environment variable to the correct environment. Look up the IDs in the state file you pulled earlier: find the resources with development in the name, take each ID, and import it into the corresponding new resource.
# import.tf
import {
  to = aws_vpc.main
  id = "vpc-0abcd1234efgh5678"
}

import {
  to = aws_subnet.main
  id = "subnet-0abcd1234ijkl9012"
}

import {
  to = aws_secretsmanager_secret.backend
  id = "backend-1234abcd"
}

import {
  to = aws_secretsmanager_secret.access_key
  id = "accesskey-5678efgh"
}
Here is an example for the development environment:
terraform init
terraform plan
Check that the plan shows the correct number of resources to import, in this case 4. Then run:
terraform apply
Once the apply is complete, you can check in GitLab to see the new state file listed in the Terraform state section, and run terraform state list to confirm the resources are now in the development state file.
Repeat for the preproduction environment, replacing the resource IDs from the state file; this time look for the resources with preproduction in their names, as the IDs will differ. Save over the existing import.tf file, as you don't need to keep the development version:
export TF_VAR_environment="preproduction"
# import.tf
import {
  to = aws_vpc.main
  id = "vpc-0zxyg5678ytre9012"
}

import {
  to = aws_subnet.main
  id = "subnet-0zxyg5678j90210"
}

import {
  to = aws_secretsmanager_secret.backend
  id = "backend-5678zxyt"
}

import {
  to = aws_secretsmanager_secret.access_key
  id = "accesskey-4325ezxy"
}
Once you have saved the import.tf file with the new IDs, run:
terraform init
terraform plan
Check that the plan shows the correct number of resources to import, in this case 4. Then run:
terraform apply
Once the apply is complete, you can check in GitLab to see the new state file listed in the Terraform state section, and run terraform state list to confirm the resources are now in the preproduction state file.
Step 7: Validate and Commit Changes
What and Why?
Validation ensures that the configuration is correct and that Terraform can manage the resources without issues. Committing changes updates your version control system with the new configuration and state file definitions.
After importing resources into each state file, validate the configuration:
terraform validate
Commit the updated Terraform configurations to GitLab:
git add .
git commit -m "Split state files into environment-specific states"
git push origin main
Summary
By splitting the Terraform state files into environment-specific states, you gain better control over resource management and reduce risks of accidental changes across environments. Using GitLab, HashiCorp Vault, and Terraform, this process ensures secure, consistent, and isolated infrastructure management for Development, Preproduction, and Production environments.