
terraform import block

Sometimes you need to import existing infrastructure into Terraform: resources that already exist and that you now want to manage with Terraform, or resources you are migrating from another tool.

A typical symptom is trying to create a resource that already exists outside of Terraform, such as a manually created resource, which fails with an error like this one:

"Error: unexpected status 409 (409 Conflict) with error: RoleDefinitionWithSameNameExists: A custom role with the same name already exists in this directory. Use a different name"

In my case, I had to import a custom role that was created outside of Terraform. Here's how I did it:

  1. Create a new Terraform configuration file for the resource you want to import. In my case, I created a new file called custom_role.tf with the following content:
resource "azurerm_role_definition" "custom_role" {
  name        = "CustomRole"
  scope       = "/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"
  permissions {
    actions     = [
      "Microsoft.Storage/storageAccounts/listKeys/action",
      "Microsoft.Storage/storageAccounts/read"
    ]

    data_actions = []

    not_data_actions = []
  }
  assignable_scopes = [
    "/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"
  ]
}
  2. Add an import block to the same file, pointing at the resource address and the ID of the existing resource. In my case, I added the following block to the custom_role.tf file:
import {
  to = azurerm_role_definition.custom_role
  id = "/providers/Microsoft.Authorization/roleDefinitions/11111111-1111-1111-1111-111111111111|/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"

}
  3. Run the `terraform plan` command to see the changes that Terraform will make to the resource. In my case, the output looked like this:
...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
...
  4. Run the terraform apply command to import the resource into Terraform. In my case, the output looked like this after a long 9 minutes:
...
Apply complete! Resources: 1 imported, 0 added, 1 changed, 0 destroyed.
  5. Verify that the resource was imported successfully by running the terraform show command:
terraform show

You can also use the terraform import command to import existing infrastructure into Terraform, but I prefer the import block because it is declarative, more readable, and easier to review.

With terraform import the command would look like this:

terraform import azurerm_role_definition.custom_role "/providers/Microsoft.Authorization/roleDefinitions/11111111-1111-1111-1111-111111111111|/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"

Conclusion

That's it! You've successfully imported an existing resource into Terraform. Now you can manage it with Terraform just like any other resource.

Happy coding! 🚀

markmap

markmap is a visualisation tool that allows you to create mindmaps from markdown files: the headings and lists of the document become the branches of the map, giving you a visual representation of its structure.

Installation in mkdocs

To install markmap in mkdocs, you need to install the plugin using pip:

pip install mkdocs-markmap

Then, you need to add the following lines to your mkdocs.yml file:

plugins:
  - markmap

Usage

To use markmap, you need to add the following code block to your markdown file:

```markmap  
# Root

## Branch 1

* Branchlet 1a
* Branchlet 1b

## Branch 2

* Branchlet 2a
* Branchlet 2b
```

And this will generate the following mindmap:

(image: the rendered mindmap)

That is the theory, at least; in my mkdocs setup it does not render as expected, and I get the raw Markdown instead:

# Root

## Branch 1

* Branchlet 1a
* Branchlet 1b

## Branch 2

* Branchlet 2a
* Branchlet 2b

Visual Studio Code Extension

There is also a Visual Studio Code extension that allows you to create mindmaps from markdown files. You can install it from the Visual Studio Code marketplace.

    Name: Markdown Preview Markmap Support
    Id: phoihos.markdown-markmap
    Description: Visualize Markdown as Mindmap (A.K.A Markmap) to VS Code's built-in markdown preview
    Version: 1.4.6
    Publisher: phoihos
    VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=phoihos.markdown-markmap

Conclusion

I am not too keen on this plugin because it does not work as expected in my mkdocs setup, but it is a nice tool for documentation.

Implementing policy as code with Open Policy Agent

In this post, I will show you how to implement policy as code with Open Policy Agent (OPA) and Azure.

What is Open Policy Agent?

Open Policy Agent (OPA) is an open-source, general-purpose policy engine that enables you to define and enforce policies across your cloud-native stack. OPA provides a high-level declarative language called Rego that you can use to write policies that are easy to understand and maintain.

Why use Open Policy Agent?

There are several reasons to use OPA:

  • Consistency: OPA allows you to define policies in a single place and enforce them consistently across your cloud-native stack.
  • Flexibility: OPA provides a flexible policy language that allows you to define policies that are tailored to your specific requirements.
  • Auditability: OPA provides a transparent and auditable way to enforce policies, making it easy to understand why a policy decision was made.
  • Integration: OPA integrates with a wide range of cloud-native tools and platforms, making it easy to enforce policies across your entire stack.

Getting started with Open Policy Agent

To get started with OPA, you need to install the OPA CLI and write some policies in Rego.

You can install the OPA CLI by downloading the binary from the OPA GitHub releases page; check the installation guide for more details.

Once you have installed the OPA CLI, you can write policies in Rego. Rego is a high-level declarative language that allows you to define policies in a clear and concise way.

Here's a simple example of a policy that enforces a naming convention for Azure resources:

package azure.resources

default allow = false

allow {
    input.resource.type == "Microsoft.Compute/virtualMachines"
    input.resource.name == "my-vm"
}

This policy allows resources of type Microsoft.Compute/virtualMachines with the name my-vm. You can write more complex policies that enforce a wide range of requirements, such as resource tagging, network security, and access control.
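As a sketch of such a tagging rule, the following policy allows only resources that carry a non-empty environment tag. The shape of the input document is an assumption here: OPA simply evaluates whatever JSON you pass it.

```rego
package azure.resources

default allow = false

# Allow a resource only if it carries a non-empty "environment" tag.
# The input shape (input.resource.tags) is hypothetical.
allow {
    tag := input.resource.tags.environment
    tag != ""
}
```

You can check a decision locally with the OPA CLI, for example `opa eval --data policy.rego --input input.json "data.azure.resources.allow"`.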

Integrating Open Policy Agent with Azure

To integrate OPA with Azure, you can use the Azure Policy service, which allows you to define and enforce policies across your Azure resources. You can use OPA to define custom policies that are not supported by Azure Policy out of the box, or to enforce policies across multiple cloud providers.

Conclusion

Open Policy Agent is a powerful tool that allows you to define and enforce policies across your cloud-native stack. By using OPA, you can ensure that your infrastructure is secure, compliant, and consistent, and that your policies are easy to understand and maintain. I hope this post has given you a good introduction to OPA and how you can use it to implement policy as code in your cloud-native environment.

Additional resources

I have created a GitHub repository with some examples of policies written in Rego that you can use as a starting point for your own policies.

Repository Strategy: How to test Branching Strategy in local repository

If you don't want to experiment on GitHub, GitLab, or Azure DevOps, you can test a branching strategy entirely on your local machine.

Step 1: Create a new local bare repository

To create a new local bare repository, open a terminal window and run the following commands:

mkdir localrepo
cd localrepo
git init --bare my-repo.git

These commands create a new directory called localrepo and initialize a new bare repository called my-repo.git inside it.

Step 2: Create a local repository

To create a new local repository, open a terminal window and run the following commands:

mkdir my-repo
cd my-repo
git init

These commands create a new directory called my-repo and initialize a new repository inside it.

Step 3: Add the remote repository

To add the remote repository to your local repository, run the following command:

git remote add origin ../my-repo.git

In my case, I used an absolute path, c:\users\myuser\localrepo\my-repo.git:

git remote add origin c:\users\myuser\localrepo\my-repo.git

This command adds the remote repository as the origin remote.

Step 4: Create a new file, make the first commit, and push

To create a new file in your local repository, run the following command:

echo "Hello, World!" > hello.txt

This command creates a new file called hello.txt with the content Hello, World!.

To make the first commit to your local repository, run the following commands:

git add hello.txt
git commit -m "Initial commit"

These commands stage the hello.txt file and commit it to the repository with the message Initial commit.

To push the changes to the remote repository, run the following command:

git push -u origin master

This command pushes the changes to the master branch of the remote repository.

Step 5: Create a new branch and push it to the remote repository

To create a new branch in your local repository and publish it, run the following commands:

git checkout -b feature-branch
git push -u origin feature-branch

These commands create a new branch called feature-branch, switch to it, and push it to the remote repository.
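The whole walkthrough can be condensed into one self-contained script. It works in a throwaway temporary directory, so it will not touch any real repository; the identity settings are placeholders:

```shell
#!/bin/sh
set -e

workdir=$(mktemp -d)            # scratch area for the experiment
cd "$workdir"

git init --bare my-repo.git     # the stand-in "remote" repository

mkdir my-repo
cd my-repo
git init -b master              # -b requires git >= 2.28
git config user.email "demo@example.com"   # placeholder identity for committing
git config user.name  "Demo"
git remote add origin ../my-repo.git

echo "Hello, World!" > hello.txt
git add hello.txt
git commit -m "Initial commit"
git push -u origin master

git checkout -b feature-branch
git push -u origin feature-branch

git branch -r                   # lists origin/feature-branch and origin/master
```

Deleting the temporary directory afterwards removes every trace of the experiment.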

Conclusion

By following these steps, you can test your branching strategy in a local repository before pushing changes to a remote repository. This allows you to experiment with different branching strategies and workflows without affecting your production codebase.

Nested Structures with Optional Attributes and Dynamic Blocks

In this post, I will explain how to define nested structures with optional attributes and how to use them with dynamic blocks in Terraform.

Nested Structures

Terraform allows you to define nested structures to represent complex data types in your configuration. Nested structures are useful when you need to group related attributes together or define a data structure that contains multiple fields.

For example, you might define a nested structure to represent a virtual machine with multiple attributes, such as size, image, and network configuration:

variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = string
    })
  })
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the vm variable defines a nested structure with three attributes: size, image, and network. The network attribute is itself a nested structure with two attributes: subnet and security_group.

Optional Attributes

If you have an attribute that is optional, you can define it as an optional attribute in your configuration. Optional attributes are useful when you want to provide a default value for an attribute but allow users to override it if needed.

If optional() is not available, Terraform also lets you declare a variable as type any. This is not recommended in general, because it disables type checking and makes your configuration less maintainable. When you do need any together with null values, you can reduce the risk of errors by providing a good description of the variable and its expected shape.

For example, here is an any-typed version of the vm variable, with the expected shape documented in the description:

variable "vm" {
  description = <<DESCRIPTION
  Virtual machine configuration.
  The following attributes are required:
  - size: The size of the virtual machine.
  - image: The image for the virtual machine.
  - network: The network configuration for the virtual machine.
  The network configuration should have the following attributes:
  - subnet: The subnet for the virtual machine.
  - security_group: The security group for the virtual machine.
  DESCRIPTION
  type = any
  default = null
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}
The preferred approach, on Terraform 1.3 and later, is to keep the explicit object type and mark the attribute with optional():

variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = optional(string)
    })
  })
}


resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the security_group attribute in the network structure is defined as an optional attribute with a default value of null. This allows users to provide a custom security group if needed, or use the default value if no value is provided.
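With this definition, a caller can simply omit the attribute; the values below are illustrative:

```hcl
vm = {
  size  = "Standard_DS1_v2"
  image = "UbuntuServer"
  network = {
    subnet = "subnet-a"
    # security_group is omitted here and defaults to null
  }
}
```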

Dynamic Blocks

Terraform allows you to use dynamic blocks to define multiple instances of a block within a resource or module. Dynamic blocks are useful when you need to create multiple instances of a block based on a list or map of values.

For example, you might use a dynamic block to define multiple network interfaces for a virtual machine:

variable "network_interfaces" {
  type = list(object({
    name    = string
    subnet  = string
    security_group = string
  }))
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = "Standard_DS1_v2"
  image    = "UbuntuServer"

  dynamic "network_interface" {
    for_each = var.network_interfaces
    content {
      name            = network_interface.value.name
      subnet          = network_interface.value.subnet
      security_group  = network_interface.value.security_group
    }
  }
}

In this example, the network_interfaces variable defines a list of objects representing network interfaces with three attributes: name, subnet, and security_group. The dynamic block iterates over the list of network interfaces and creates a network interface block for each object in the list.
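For example, a terraform.tfvars file could supply two interfaces (the names and subnets are made up), and the dynamic block would render one network_interface block per element:

```hcl
network_interfaces = [
  {
    name           = "nic-1"
    subnet         = "subnet-a"
    security_group = "nsg-a"
  },
  {
    name           = "nic-2"
    subnet         = "subnet-b"
    security_group = "nsg-b"
  },
]
```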

Conclusion

In this post, I explained how to define nested structures with optional attributes and use dynamic blocks in Terraform. Nested structures are useful for representing complex data types, while optional attributes allow you to provide default values for attributes that can be overridden by users. Dynamic blocks are useful for creating multiple instances of a block based on a list or map of values. By combining these features, you can create flexible and reusable configurations in Terraform.

Repository Strategy: Branching Strategy

Once we have decided that we will use branches, it is time to define a strategy, so that all developers can work in a coordinated way.

Some of the best known strategies are:

  • Gitflow: It is a branching model designed around the project release. It is a strict branching model that assigns very specific roles to different branches.
  • Feature Branch: It is a strategy where each feature is developed in a separate branch and merged back into the main branch. This strategy is simple and easy to understand.
  • Trunk-Based Development: It is a strategy where all developers integrate into a single shared branch, hiding unfinished work behind feature flags.
  • GitHub Flow: It is a lightweight strategy where short-lived feature branches are created from the main branch and merged back through pull requests.
  • GitLab Flow: It is a strategy that extends the feature-branch approach with environment branches, such as pre-production and production.
  • Microsoft Flow: It is Microsoft's trunk-based strategy (also known as Release Flow), where topic branches are merged into the main branch and releases are cut from dedicated release branches.

Gitflow

Gitflow is a branching model designed around the project release. It is a strict branching model that assigns very specific roles to different branches.

The Gitflow strategy is based on the following branches:

  • main: It is the main branch of the project. It contains the code that is in production.
  • develop: It is the branch where the code is integrated before being released to production.

The Gitflow strategy is based on the following types of branches:

  • feature: It is the branch where the code for a new feature is developed.
  • release: It is the branch where the code is prepared for release.
  • hotfix: It is the branch where the code is developed to fix a bug in production.

The Gitflow strategy is based on the following rules:

  • Feature branches are created from the develop branch.
  • Feature branches are merged into the develop branch.
  • Release branches are created from the develop branch.
  • Release branches are merged into the main and develop branches.
  • Hotfix branches are created from the main branch.
  • Hotfix branches are merged into the main and develop branches.

The Gitflow strategy is based on the following workflow:

  1. Developers create a feature branch from the develop branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the develop branch.
  4. Developers create a release branch from the develop branch.
  5. Developers prepare the code for release in the release branch.
  6. Developers merge the release branch into the main and develop branches.
  7. Developers create a hotfix branch from the main branch.
  8. Developers fix the bug in the hotfix branch.
  9. Developers merge the hotfix branch into the main and develop branches.
```mermaid
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
branch develop
checkout develop
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout develop
merge feature_branch
commit
branch release_branch
checkout release_branch
commit
commit
checkout develop
merge release_branch
checkout main
merge release_branch
commit
branch hotfix_branch
checkout hotfix_branch
commit
commit
checkout develop
merge hotfix_branch
checkout main
merge hotfix_branch
```

The Gitflow strategy has the following advantages:

  • Isolation: Each feature is developed in a separate branch, isolating it from other features.
  • Collaboration: Developers can work on different features at the same time without affecting each other.
  • Stability: The main branch contains the code that is in production, ensuring that it is stable and reliable.

The Gitflow strategy has the following disadvantages:

  • Complexity: The Gitflow strategy is complex and can be difficult to understand for new developers.
  • Overhead: The Gitflow strategy requires developers to create and manage multiple branches, which can be time-consuming.

Feature Branch Workflow

Feature Branch is a strategy where each feature is developed in a separate branch. This strategy is simple and easy to understand.

The Feature Branch strategy is based on the following rules:

  • Developers create a feature branch from the main branch.
  • Developers develop the code for the new feature in the feature branch.
  • Developers merge the feature branch into the main branch.

The Feature Branch strategy is based on the following workflow:

  1. Developers create a feature branch from the main branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the main branch.
```mermaid
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch
```

The Feature Branch strategy has the following advantages:

  • Simplicity: The Feature Branch strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in feature branches are visible to other developers, making it easier to review and merge changes.

The Feature Branch strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same feature can lead to conflicts and merge issues.
  • Complexity: Managing multiple feature branches can be challenging, especially in large projects.

Trunk-Based Development

Trunk-Based Development is a strategy where all developers integrate their work frequently into a single shared branch, the trunk.

The Trunk-Based Development strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature flags to hide unfinished features.
  • Developers merge the code into the main branch when it is ready.

The Trunk-Based Development strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create feature flags to hide unfinished features.
  3. Developers merge the code into the main branch when it is ready.
```mermaid
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit tag:"v1.0.0"
commit
commit
commit tag:"v2.0.0"
```

The Trunk-Based Development strategy has the following advantages:

  • Simplicity: The Trunk-Based Development strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in the main branch are visible to other developers, making it easier to review and merge changes.

The Trunk-Based Development strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same codebase can lead to conflicts and merge issues.
  • Complexity: Managing feature flags can be challenging, especially in large projects.
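In a Terraform codebase, the feature-flag idea can be sketched with a boolean variable that gates an unfinished resource; the names below are illustrative:

```hcl
variable "enable_preview_feature" {
  type    = bool
  default = false # flip to true once the feature is ready to ship
}

resource "azurerm_resource_group" "preview" {
  # count = 0 keeps the unfinished resource out of every plan until the flag is on
  count    = var.enable_preview_feature ? 1 : 0
  name     = "preview-resources"
  location = "East US"
}
```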

GitHub Flow

GitHub Flow is a lightweight strategy where short-lived feature branches are created from the main branch and merged back through pull requests. This strategy is simple and easy to understand.

The GitHub Flow strategy is based on the following rules:

  • The main branch is always kept in a deployable state.
  • Developers create feature branches from the main branch to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.

The GitHub Flow strategy is based on the following workflow:

  1. Developers create a feature branch from the main branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers open a pull request to have the changes reviewed.
  4. Developers merge the feature branch into the main branch.
```mermaid
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch
```

GitLab Flow

GitLab Flow is a strategy that extends the feature-branch approach with environment branches, such as pre-production and production. It is simple and easy to understand and is often used with release branches.

The GitLab Flow strategy is based on the following rules:

  • Developers create feature branches from the main branch to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.
  • Developers create a pre-production branch to make bug fixes before merging changes back to the main branch.
  • Developers merge the pre-production branch into the main branch before going to production.
  • Developers can add as many pre-production branches as needed.
  • Developers can maintain different versions of the production branch.

The GitLab Flow strategy is based on the following workflow:

  1. Developers create a feature branch from the main branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the main branch.
  4. Developers create a pre-production branch from the main branch.
  5. Developers make bug fixes in the pre-production branch.
  6. Developers merge the pre-production branch into the main branch.
  7. Developers create a production branch from the main branch.
  8. Developers merge the production branch into the main branch.
```mermaid
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch
commit
branch pre-production
checkout pre-production
commit
commit
checkout main
merge pre-production
commit
branch production
checkout production
commit
commit
checkout main
merge production
```

Repository Strategy: Fork vs Branch

In this post, I will explain the different ways to contribute to a Git repository: Fork vs Branch.

Fork

A fork is a copy of a repository that you can make changes to without affecting the original repository. When you fork a repository, you create a new repository in your GitHub account that is a copy of the original repository.

The benefits of forking a repository include:

  • Isolation: You can work on your changes without affecting the original repository.
  • Collaboration: You can make changes to the forked repository and submit a pull request to the original repository to merge your changes.
  • Ownership: You have full control over the forked repository and can manage it as you see fit.

The challenges of forking a repository include:

  • Synchronization: Keeping the forked repository up to date with the original repository can be challenging.
  • Conflicts: Multiple contributors working on the same codebase can lead to conflicts and merge issues.
  • Visibility: Changes made to the forked repository are not visible in the original repository until they are merged.

Branch

A branch is a parallel version of a repository that allows you to work on changes without affecting the main codebase. When you create a branch, you can make changes to the code and submit a pull request to merge the changes back into the main branch.

The benefits of using branches include:

  • Flexibility: You can work on different features or bug fixes in separate branches without affecting each other.
  • Collaboration: You can work with other developers on the same codebase by creating branches and submitting pull requests.
  • Visibility: Changes made in branches are visible to other developers, making it easier to review and merge changes.

The challenges of using branches include:

  • Conflicts: Multiple developers working on the same branch can lead to conflicts and merge issues.
  • Complexity: Managing multiple branches can be challenging, especially in large projects.
  • Versioning: Branches are versioned separately, making it harder to track changes across the project.

Fork vs Branch

The decision to fork or branch a repository depends on the project's requirements and the collaboration model.

  • Fork: Use a fork when you want to work on changes independently of the original repository or when you want to contribute to a project that you do not have write access to.

  • Branch: Use a branch when you want to work on changes that will be merged back into the main codebase or when you want to collaborate with other developers on the same codebase.

For my IaC project with Terraform, I will use branches to work on different features and bug fixes and submit pull requests to merge the changes back into the main branch. This approach will allow me to collaborate with other developers and keep the codebase clean and organized.

Repository Strategy: Monorepo vs Multi-repo

In this post, I will explain the repository strategy that I will use for my Infrastructure as Code (IaC) project with Terraform.

Monorepo

A monorepo is a single repository that contains all the code for a project.

The benefits of using a monorepo include:

  • Simplicity: All the code is in one place, making it easier to manage and maintain.
  • Consistency: Developers can easily see all the code related to a project and ensure that it follows the same standards and conventions.
  • Reusability: Code can be shared across different parts of the project, reducing duplication and improving consistency.
  • Versioning: All the code is versioned together, making it easier to track changes and roll back if necessary.

The challenges of using a monorepo include:

  • Complexity: A monorepo can become large and complex, making it harder to navigate and understand.
  • Build times: Building and testing a monorepo can take longer than building and testing smaller repositories.
  • Conflicts: Multiple developers working on the same codebase can lead to conflicts and merge issues.

Multi-repo

A multi-repo is a set of separate repositories that contain the code for different parts of a project.

The benefits of using a multi-repo include:

  • Isolation: Each repository is independent, making it easier to manage and maintain.
  • Flexibility: Developers can work on different parts of the project without affecting each other.
  • Scalability: As the project grows, new repositories can be added to manage the code more effectively.

The challenges of using a multi-repo include:

  • Complexity: Managing multiple repositories can be more challenging than managing a single repository.
  • Consistency: Ensuring that all the repositories follow the same standards and conventions can be difficult.
  • Versioning: Each repository is versioned separately, making it harder to track changes across the project.

Conclusion

For my IaC project with Terraform, I will use a monorepo approach to manage all the Terraform modules and configurations for my project.

Terraform: Configuration Language

After deciding to use Terraform for my Infrastructure as Code (IaC) project, the next step is to understand the Terraform configuration language, which is used to define the desired state of my infrastructure.

Info

I will update this post with more information about Terraform configuration language in the future.

Terraform uses a declarative configuration language to define the desired state of your infrastructure. This configuration language is designed to be human-readable and easy to understand, making it accessible to both developers and operations teams.

Declarative vs. Imperative

Terraform's configuration language is declarative, meaning that you define the desired state of your infrastructure without specifying the exact steps needed to achieve that state. This is in contrast to imperative languages, where you specify the exact sequence of steps needed to achieve a desired outcome.

For example, in an imperative language, you might write a script that creates a virtual machine by executing a series of commands to provision the necessary resources. In a declarative language like Terraform, you would simply define the desired state of the virtual machine (e.g., its size, image, and network configuration) and let Terraform figure out the steps needed to achieve that state.

Configuration Blocks

Terraform uses configuration blocks to define different aspects of your infrastructure. Each block has a specific purpose and contains configuration settings that define how that aspect of your infrastructure should be provisioned.

For example, you might use a provider block to define the cloud provider you want to use, a resource block to define a specific resource (e.g., a virtual machine or storage account), or a variable block to define input variables that can be passed to your configuration.

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "East US"
}

variable "location" {
  type    = string
  default = "East US"
}

Variables

Terraform allows you to define variables that can be used to parameterize your configuration. Variables can be used to pass values into your configuration, making it easier to reuse and customize your infrastructure definitions.

variable "location" {
  type    = string
  default = "East US"
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = var.location
}

locals

Terraform allows you to define local values that can be used within your configuration. Locals are similar to variables but are only available within the current module or configuration block.

variable "location" {
  type    = string
  default = "East US"
}

locals {
  resource_group_name = "example-resources"
}

resource "azurerm_resource_group" "example" {
  name     = local.resource_group_name
  location = var.location
}

Data Sources

Terraform allows you to define data sources that can be used to query external resources and retrieve information that can be used in your configuration. Data sources are read-only and can be used to fetch information about existing resources, such as virtual networks, storage accounts, or database instances.

data "azurerm_resource_group" "example" {
  name = "example-resources"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
}

Functions

try

The try function allows you to provide a default value in case an expression returns an error. This can be useful when working with optional values that may or may not be present.

variable "optional_value" {
  type    = string
  default = null
}

locals {
  # try() falls back only when the expression errors (for example, a missing
  # attribute); a null value is returned as-is, so use coalesce() to default nulls.
  value = try(var.optional_value, "default_value")
}
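try() is most useful when an attribute may be missing entirely, as with an any-typed variable like the vm example from the earlier post; the variable and default names here are assumptions:

```hcl
locals {
  # If var.vm is any-typed and its network object has no security_group
  # attribute, the lookup errors and try() falls back to the default.
  security_group = try(var.vm.network.security_group, "default-nsg")
}
```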

Debugging Terraform

You can use the TF_LOG environment variable to enable debug logging in Terraform. This can be useful when troubleshooting issues with your infrastructure or understanding how Terraform is executing your configuration.

export TF_LOG=DEBUG
terraform plan

You can use the following log levels, in order of decreasing verbosity: TRACE, DEBUG, INFO, WARN, and ERROR.

To persist the logged output to a file:

export TF_LOG_PATH="terraform.log"

To separate the logs for Terraform core and for providers, you can use the TF_LOG_CORE and TF_LOG_PROVIDER environment variables respectively. For example, to enable debug logging for Terraform core or for the provider, you can use:

export TF_LOG_CORE=DEBUG
export TF_LOG_PATH="terraform.log"

or

export TF_LOG_PROVIDER=DEBUG
export TF_LOG_PATH="provider.log"

To disable debug logging, you can unset the TF_LOG environment variable:

unset TF_LOG

Terraform: Set your local environment developer

I will use Ubuntu on WSL 2 as my local environment for my IaC project with Terraform. I will install the following tools:

  • vscode
  • Trunk
  • tenv
  • az cli

az cli

I will use the Azure CLI to interact with Azure resources from the command line. I will install the Azure CLI using the following command:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

vscode

I will use Visual Studio Code as my code editor for my IaC project with Terraform. I will install the following extensions:

  • Terraform
  • Azure Terraform
  • Azure Account
  • Trunk

tenv

tenv is a version manager for Terraform, OpenTofu, and Terragrunt. It lets you install several versions of these tools side by side and switch between them, so each project can pin the version it needs.

Installation

You can install tenv on Ubuntu from the .deb package published with each release (the commands below use curl and jq):

LATEST_VERSION=$(curl --silent https://api.github.com/repos/tofuutils/tenv/releases/latest | jq -r .tag_name)
curl -O -L "https://github.com/tofuutils/tenv/releases/latest/download/tenv_${LATEST_VERSION}_amd64.deb"
sudo dpkg -i "tenv_${LATEST_VERSION}_amd64.deb"

Usage

To install Terraform with tenv, run:

# to install the latest version of Terraform
tenv tf install 
