
Configure your Teams meeting options to improve your collaboration experience

Microsoft Teams is a powerful collaboration tool that allows you to communicate and work with your team members from anywhere in the world. One of the key features of Teams is the ability to schedule and host meetings with your colleagues, clients, or partners.

In this blog post, we will show you how to configure your Teams meeting options to improve your collaboration experience and make your meetings more productive.

What are Teams meeting options?

Teams meeting options are settings that allow you to control various aspects of your meetings, such as who can bypass the lobby, who can present, and who can record the meeting. By configuring these options, you can ensure that your meetings run smoothly and securely.

How to configure your Teams meeting options

To configure your Teams meeting options, follow these steps:

  1. Open the Teams app on your computer or mobile device.
  2. Click on the calendar icon in the left-hand menu to view your calendar.
  3. Click on the "Meetings" tab at the top of the screen to view your upcoming meetings.
  4. Click on the meeting you want to configure to open the meeting details.
  5. Click on the "Meeting options" link at the bottom of the meeting details window.
  6. Configure the meeting options according to your preferences. You can control who can bypass the lobby, who can present, who can record the meeting, and other settings.

Key settings to configure

Here are some key settings that you may want to configure in your Teams meeting options:

  • Who can bypass the lobby: By default, all participants are sent to the lobby when they join a meeting. You can configure this setting to allow participants to bypass the lobby and join the meeting directly.

  • Who can present: By default, only the meeting organizer can present in a meeting. You can configure this setting to allow other participants to present as well.

  • Who can record the meeting: By default, only the meeting organizer can record a meeting. You can configure this setting to allow other participants to record the meeting as well.

  • Automatically admit people: By default, participants need to be admitted to the meeting by the organizer or a presenter. You can configure this setting to automatically admit people to the meeting.

  • Allow attendees to unmute: By default, attendees are muted when they join a meeting. You can configure this setting to allow attendees to unmute themselves.
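
Most of these settings can also be set programmatically through the Microsoft Graph onlineMeeting resource. The following is a minimal sketch rather than an official recipe: it assumes you already have a Graph access token with the OnlineMeetings.ReadWrite delegated permission, and $TOKEN and $MEETING_ID are placeholders for your own values.

# Update lobby, presenter, and mute settings for an existing meeting
curl -X PATCH "https://graph.microsoft.com/v1.0/me/onlineMeetings/$MEETING_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "lobbyBypassSettings": { "scope": "organization", "isDialInBypassed": false },
    "allowedPresenters": "organizer",
    "allowAttendeeToEnableMic": false
  }'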

Conclusion

Configuring your Teams meeting options is a great way to improve your collaboration experience and make your meetings more productive. With the right settings in place, your meetings will run smoothly and securely, letting you focus on the task at hand.

Reduce your attack surface when using Snowflake from Azure

When it comes to data security, reducing your attack surface is a crucial step. This post will guide you on how to minimize your attack surface when using Snowflake with Azure or Power BI.

What is Snowflake?

Snowflake is a cloud-based data warehousing platform that allows you to store and analyze large amounts of data. It is known for its scalability, performance, and ease of use. Snowflake is popular among organizations that need to process large volumes of data quickly and efficiently.

What is an Attack Surface?

An attack surface refers to the number of possible ways an attacker can get into a system and potentially extract data. The larger the attack surface, the more opportunities there are for attackers. Therefore, reducing the attack surface is a key aspect of securing your systems.

How to Reduce Your Attack Surface in Snowflake

  1. Use Azure Private Link: Azure Private Link provides private connectivity from a virtual network to Snowflake, isolating your traffic from the public internet. It significantly reduces the attack surface by ensuring that traffic between Azure and Snowflake doesn't traverse the public internet.

  2. Implement Network Policies: Snowflake allows you to define network policies that restrict access based on IP addresses. By limiting access to trusted IP ranges, you can reduce the potential points of entry for an attacker.

  3. Enable Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide at least two forms of identification before accessing the Snowflake account. This makes it harder for attackers to gain unauthorized access, even if they have your password.

In this blog post, we will show you how to reduce your attack surface when using Snowflake from Azure or Power BI with Azure Private Link.

Default Snowflake architecture

By default, Snowflake is accessible from the public internet: anyone with the right credentials can reach your Snowflake account from anywhere in the world. You can limit access to specific IP addresses, but your account endpoint remains exposed to potential attackers.

The architecture of using Azure Private Link with Snowflake is as follows:

    graph TD
    A[Virtual Network] -->|Private Endpoint| B[Snowflake]
    B -->|Private Link Service| C[Private Link Resource]
    C -->|Private Connection| D[Virtual Network]
    D -->|Subnet| E[Private Endpoint]    


Requirements

Before you can use Azure Private Link with Snowflake, you need to have the following requirements in place:

  • A Snowflake account with ACCOUNTADMIN privileges
  • Business Critical or higher Snowflake edition
  • An Azure subscription with a Resource Group and privileges to create:
    • Virtual Network
    • Subnet
    • Private Endpoint

Step-by-Step Guide

Note

Replace the placeholders with your actual values; the commands below are meant as an orientation guide.

Step 1: Retrieve Details of your Snowflake Account
USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();
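
The function returns a JSON document with your account's Private Link configuration. Take note of the privatelink-pls-id value: it identifies the Snowflake Private Link service that the private endpoint will connect to in Step 3.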
Step 2: Create a Virtual Network

A virtual network is a private network that allows you to securely connect your resources in Azure. To create a virtual network with the Azure CLI, follow these steps:

# the address prefixes are example values; adjust them to your network
az network vnet create \
  --name myVnet \
  --resource-group myResourceGroup \
  --address-prefix 10.0.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefix 10.0.0.0/24
Step 3: Create a Private Endpoint

The first step is to create a private endpoint in Azure. A private endpoint is a network interface that connects your virtual network to the Snowflake service. This allows you to access Snowflake using a private IP address, rather than a public one.

To create a private endpoint with the Azure CLI, follow these steps:

az network private-endpoint create \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --subnet mySubnet \
  --private-connection-resource-id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/privateLinkServices/<Snowflake-service-name> \
  --connection-name mySnowflakeConnection

Check the status of the private endpoint:

az network private-endpoint show \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup
Step 4: Authorize the Private Endpoint

The next step is to authorize the private endpoint to access the Snowflake service.

Retrieve the Resource ID of the private endpoint:

az network private-endpoint show \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup \
  --query id \
  --output tsv

Create a temporary access token that Snowflake can use to authorize the private endpoint:

az account get-access-token --subscription <subscription-id> --query accessToken --output tsv

Authorize the private endpoint in Snowflake:

USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$AUTHORIZE_PRIVATELINK('<resource-id>', '<access-token>');
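
To confirm that the authorization succeeded, you can list the endpoints that are currently authorized for your account:

USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$GET_PRIVATELINK_AUTHORIZED_ENDPOINTS();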

Step 5: Block Public Access

To further reduce your attack surface, you can block public access to your Snowflake account. This ensures that all traffic to and from Snowflake goes through the private endpoint, reducing the risk of unauthorized access.

To block public access to your Snowflake account, you use Snowflake network policies. Follow these steps:

USE ROLE ACCOUNTADMIN;
CREATE NETWORK RULE allow_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('192.168.1.0/24');

CREATE NETWORK RULE block_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('0.0.0.0/0');

CREATE NETWORK POLICY public_network_policy
  ALLOWED_NETWORK_RULE_LIST = ('allow_access_rule')
  BLOCKED_NETWORK_RULE_LIST=('block_access_rule');
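
Note that creating the network policy does not enforce it. To activate it for the whole account, set it as the account-level network policy:

ALTER ACCOUNT SET NETWORK_POLICY = 'public_network_policy';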

It's highly recommended to follow the best practices for network policies in Snowflake. You can find more information here: https://docs.snowflake.com/en/user-guide/network-policies#best-practices

Step 6: Test the Connection

To test the connection between your virtual network and Snowflake, you can use the SnowSQL client.

snowsql -a <account_name> -u <username> -r <role> -w <warehouse> -d <database> -s <schema>
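
To verify that traffic is actually flowing through the private endpoint, you can resolve your account's Private Link URL (the privatelink-account-url value from Step 1) from a machine inside the virtual network; it should resolve to a private IP address from your subnet rather than a public one:

nslookup <privatelink-account-url>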

Internal Stages with Azure Blob Private Endpoints

If you are using Azure Blob Storage as an internal stage in Snowflake, you can also use Azure Private Link to secure the connection between Snowflake and Azure Blob Storage.

It's recommended to use Azure Blob Storage with Private Endpoints to keep your data secure and further reduce your attack surface. See the documentation Azure Private Endpoints for Internal Stages to learn how to configure Azure Blob Storage with Private Endpoints in Snowflake.

Conclusion

Reducing your attack surface is a critical aspect of securing your systems. By using Azure Private Link with Snowflake, you can significantly reduce the risk of unauthorized access and data breaches. Follow the steps outlined in this blog post to set up Azure Private Link with Snowflake and start securing your data today.


Nested Structures with Optional Attributes and Dynamic Blocks

In this post, I will explain how to define nested structures with optional attributes and how to use dynamic blocks in Terraform.

Nested Structures

Terraform allows you to define nested structures to represent complex data types in your configuration. Nested structures are useful when you need to group related attributes together or define a data structure that contains multiple fields.

For example, you might define a nested structure to represent a virtual machine with multiple attributes, such as size, image, and network configuration (the resource attributes in the snippets below are simplified for illustration):

variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = string
    })
  })
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the vm variable defines a nested structure with three attributes: size, image, and network. The network attribute is itself a nested structure with two attributes: subnet and security_group.

Optional Attributes

If you have an attribute that is optional, you can define it as an optional attribute in your configuration. Optional attributes are useful when you want to provide a default value for an attribute but allow users to override it if needed.

If optional attributes are not available (for example, on older Terraform versions), you can fall back to declaring the variable as type any. This is not recommended in general, because it bypasses type checking and makes your configuration less maintainable. It's better to define your variables with the correct type and use optional attributes when needed; in the cases where any and null values are necessary, you can minimize the risk of errors by providing a good description of the variable and its expected values.

For example, you can first declare the vm variable as any, documenting the expected structure in its description:

variable "vm" {
  description = <<DESCRIPTION
  Virtual machine configuration.
  The following attributes are required:
  - size: The size of the virtual machine.
  - image: The image for the virtual machine.
  - network: The network configuration for the virtual machine.
  The network configuration should have the following attributes:
  - subnet: The subnet for the virtual machine.
  - security_group: The security group for the virtual machine.
  DESCRIPTION
  type = any
  default = null
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}
variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = optional(string)
    })
  })
}


resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the security_group attribute in the network structure is declared with optional(string), so it defaults to null when omitted. This allows users to provide a custom security group if needed, or to leave it out entirely.
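
For example, with the typed variable above, a terraform.tfvars file can omit the optional attribute entirely (the values are placeholders):

vm = {
  size  = "Standard_DS1_v2"
  image = "UbuntuServer"
  network = {
    subnet = "my-subnet"
    # security_group is omitted, so it defaults to null
  }
}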

Dynamic Blocks

Terraform allows you to use dynamic blocks to define multiple instances of a block within a resource or module. Dynamic blocks are useful when you need to create multiple instances of a block based on a list or map of values.

For example, you might use a dynamic block to define multiple network interfaces for a virtual machine:

variable "network_interfaces" {
  type = list(object({
    name    = string
    subnet  = string
    security_group = string
  }))
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = "Standard_DS1_v2"
  image    = "UbuntuServer"

  dynamic "network_interface" {
    for_each = var.network_interfaces
    content {
      name            = network_interface.value.name
      subnet          = network_interface.value.subnet
      security_group  = network_interface.value.security_group
    }
  }
}

In this example, the network_interfaces variable defines a list of objects representing network interfaces with three attributes: name, subnet, and security_group. The dynamic block iterates over the list of network interfaces and creates a network interface block for each object in the list.
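
As an illustration, passing the following value (with placeholder names) would render two network_interface blocks:

network_interfaces = [
  {
    name           = "nic-0"
    subnet         = "subnet-frontend"
    security_group = "nsg-frontend"
  },
  {
    name           = "nic-1"
    subnet         = "subnet-backend"
    security_group = "nsg-backend"
  }
]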

Conclusion

In this post, I explained how to define nested structures with optional attributes and use dynamic blocks in Terraform. Nested structures are useful for representing complex data types, while optional attributes allow you to provide default values for attributes that can be overridden by users. Dynamic blocks are useful for creating multiple instances of a block based on a list or map of values. By combining these features, you can create flexible and reusable configurations in Terraform.

Azure Container Registry: Artifact Cache

Azure Container Registry is a managed, private Docker registry service provided by Microsoft. It allows you to build, store, and manage container images and artifacts in a secure environment.

What is Artifact Caching?

Artifact Cache is a feature in Azure Container Registry that allows users to cache container images in a private container registry. It is available in Basic, Standard, and Premium service tiers.

Benefits of Artifact Cache

  • Reliable pull operations: Faster pulls of container images are achievable by caching the container images in ACR.
  • Private networks: Cached registries are available on private networks.
  • Ensuring upstream content is delivered: Artifact Cache allows users to pull images from the local ACR instead of the upstream registry.

Limitations

An image is cached only after at least one pull of that image has completed; the first pull is always served from the upstream registry.

How to Use Artifact Cache in Azure Container Registry Without Credentials

Let's take a look at how you can implement artifact caching in Azure Container Registry.

Step 1: Create a Cache Rule

The first step is to create a cache rule in your Azure Container Registry. This rule specifies the source image that should be cached and the target image that will be stored in the cache.

az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu

Check the cache rule:

az acr cache show -r MyRegistry -n MyRule

Step 2: Pull the Image

Next, you need to pull the image through your registry. The first pull downloads the image from the upstream source and stores it in the cache for future use. Note that the repository name must match the target of the cache rule created in Step 1:

az acr login --name MyRegistry
docker pull myregistry.azurecr.io/ubuntu:latest

Step 3: Clean up the resources

Finally, you can clean up the resources by deleting the cache rule.

az acr cache delete -r MyRegistry -n MyRule

If you need to check other rules, you can use the following command:

az acr cache list -r MyRegistry

Conclusion

Azure Container Registry's Artifact Cache feature provides a convenient way to cache container images in a private registry, improving pull performance and reducing network traffic. By following the steps outlined in this article, you can easily set up and use artifact caching in your Azure Container Registry.

If you need to use the cache with authentication, you can use the following article: Enable Artifact Cache with authentication.

For more detailed information, please visit the official tutorial on the Microsoft Azure website.


Repository Strategy: Branching Strategy

Once we have decided that we will use branches, it is time to define a strategy, so that all developers can work in a coordinated way.

Some of the best known strategies are:

  • Gitflow: A branching model designed around the project release. It is a strict model that assigns very specific roles to different branches.
  • Feature Branch: Each feature is developed in a separate branch. This strategy is simple and easy to understand.
  • Trunk-Based Development: All developers work on a single branch (the trunk), integrating small changes frequently and hiding unfinished work behind feature flags.
  • GitHub Flow: The main branch is always deployable, and new work happens in short-lived feature branches merged back through pull requests.
  • GitLab Flow: Builds on GitHub Flow by adding environment or release branches on top of the main branch.
  • Microsoft Flow (Release Flow): A trunk-based strategy where releases are cut from the trunk into dedicated release branches.

Gitflow

Gitflow is a branching model designed around the project release. It is a strict branching model that assigns very specific roles to different branches.

The Gitflow strategy is based on the following branches:

  • main: It is the main branch of the project. It contains the code that is in production.
  • develop: It is the branch where the code is integrated before being released to production.

The Gitflow strategy is based on the following types of branches:

  • feature: It is the branch where the code for a new feature is developed.
  • release: It is the branch where the code is prepared for release.
  • hotfix: It is the branch where the code is developed to fix a bug in production.

The Gitflow strategy is based on the following rules:

  • Feature branches are created from the develop branch.
  • Feature branches are merged into the develop branch.
  • Release branches are created from the develop branch.
  • Release branches are merged into the main and develop branches.
  • Hotfix branches are created from the main branch.
  • Hotfix branches are merged into the main and develop branches.

The Gitflow strategy is based on the following workflow:

  1. Developers create a feature branch from the develop branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the develop branch.
  4. Developers create a release branch from the develop branch.
  5. Developers prepare the code for release in the release branch.
  6. Developers merge the release branch into the main and develop branches.
  7. Developers create a hotfix branch from the main branch.
  8. Developers fix the bug in the hotfix branch.
  9. Developers merge the hotfix branch into the main and develop branches.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
branch develop
checkout develop
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout develop
merge feature_branch
commit
branch release_branch
checkout release_branch
commit
commit
checkout develop
merge release_branch
checkout main
merge release_branch
commit
branch hotfix_branch
checkout hotfix_branch
commit
commit
checkout develop
merge hotfix_branch
checkout main
merge hotfix_branch

The Gitflow strategy has the following advantages:

  • Isolation: Each feature is developed in a separate branch, isolating it from other features.
  • Collaboration: Developers can work on different features at the same time without affecting each other.
  • Stability: The main branch contains the code that is in production, ensuring that it is stable and reliable.

The Gitflow strategy has the following disadvantages:

  • Complexity: The Gitflow strategy is complex and can be difficult to understand for new developers.
  • Overhead: The Gitflow strategy requires developers to create and manage multiple branches, which can be time-consuming.

Feature Branch Workflow

Feature Branch is a strategy where each feature is developed in a separate branch. This strategy is simple and easy to understand.

The Feature Branch strategy is based on the following rules:

  • Developers create a feature branch from the main branch.
  • Developers develop the code for the new feature in the feature branch.
  • Developers merge the feature branch into the main branch.

The Feature Branch strategy is based on the following workflow:

  1. Developers create a feature branch from the main branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the main branch.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch

The Feature Branch strategy has the following advantages:

  • Simplicity: The Feature Branch strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in feature branches are visible to other developers, making it easier to review and merge changes.

The Feature Branch strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same feature can lead to conflicts and merge issues.
  • Complexity: Managing multiple feature branches can be challenging, especially in large projects.

Trunk-Based Development

Trunk-Based Development is a strategy where all developers work on a single branch (the trunk), integrating small changes frequently. This strategy is simple and easy to understand.

The Trunk-Based Development strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature flags to hide unfinished features.
  • Developers merge the code into the main branch when it is ready.

The Trunk-Based Development strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create feature flags to hide unfinished features.
  3. Developers merge the code into the main branch when it is ready.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit tag:"v1.0.0"
commit
commit
commit tag:"v2.0.0"

The Trunk-Based Development strategy has the following advantages:

  • Simplicity: The Trunk-Based Development strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in the main branch are visible to other developers, making it easier to review and merge changes.

The Trunk-Based Development strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same codebase can lead to conflicts and merge issues.
  • Complexity: Managing feature flags can be challenging, especially in large projects.

GitHub Flow

GitHub Flow is a lightweight strategy where the main branch is always deployable and new work happens in short-lived feature branches that are merged back through pull requests. This strategy is simple and easy to understand.

The GitHub Flow strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature branches to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.

The GitHub Flow strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create a feature branch from the main branch.
  3. Developers develop the code for the new feature in the feature branch.
  4. Developers merge the feature branch into the main branch.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch

GitLab Flow

GitLab Flow builds on GitHub Flow by adding environment or release branches on top of the main branch. This strategy is simple and easy to understand, and it is often used with release branches.

The GitLab Flow strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature branches to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.
  • Developers create a pre-production branch to make bug fixes before merging changes back to the main branch.
  • Developers merge the pre-production branch into the main branch before going to production.
  • Developers can add as many pre-production branches as needed.
  • Developers can maintain different versions of the production branch.

The GitLab Flow strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create a feature branch from the main branch.
  3. Developers develop the code for the new feature in the feature branch.
  4. Developers merge the feature branch into the main branch.
  5. Developers create a pre-production branch from the main branch.
  6. Developers make bug fixes in the pre-production branch.
  7. Developers merge the pre-production branch into the main branch.
  8. Developers create a production branch from the main branch.
  9. Developers merge the production branch into the main branch.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch
commit
branch pre-production
checkout pre-production
commit
commit
checkout main
merge pre-production
commit
branch production
checkout production
commit
commit
checkout main
merge production

Repository Strategy: Fork vs Branch

In this post, I will explain the different ways to contribute to a Git repository: Fork vs Branch.

Fork

A fork is a copy of a repository that you can make changes to without affecting the original repository. When you fork a repository, you create a new repository in your GitHub account that is a copy of the original repository.

The benefits of forking a repository include:

  • Isolation: You can work on your changes without affecting the original repository.
  • Collaboration: You can make changes to the forked repository and submit a pull request to the original repository to merge your changes.
  • Ownership: You have full control over the forked repository and can manage it as you see fit.

The challenges of forking a repository include:

  • Synchronization: Keeping the forked repository up to date with the original repository can be challenging (a common approach is shown in the sketch after this list).
  • Conflicts: Multiple contributors working on the same codebase can lead to conflicts and merge issues.
  • Visibility: Changes made to the forked repository are not visible in the original repository until they are merged.
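
A common way to handle the synchronization challenge is to add the original repository as an extra remote and merge from it periodically. A minimal sketch, assuming the original repository uses a main default branch (the URL is a placeholder):

# Add the original repository as the "upstream" remote
git remote add upstream https://github.com/original-owner/original-repo.git
# Fetch its latest changes and merge them into your local main branch
git fetch upstream
git checkout main
git merge upstream/main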

Branch

A branch is a parallel version of a repository that allows you to work on changes without affecting the main codebase. When you create a branch, you can make changes to the code and submit a pull request to merge the changes back into the main branch.

The benefits of using branches include:

  • Flexibility: You can work on different features or bug fixes in separate branches without affecting each other.
  • Collaboration: You can work with other developers on the same codebase by creating branches and submitting pull requests.
  • Visibility: Changes made in branches are visible to other developers, making it easier to review and merge changes.

The challenges of using branches include:

  • Conflicts: Multiple developers working on the same branch can lead to conflicts and merge issues.
  • Complexity: Managing multiple branches can be challenging, especially in large projects.
  • Versioning: Branches are versioned separately, making it harder to track changes across the project.

Fork vs Branch

The decision to fork or branch a repository depends on the project's requirements and the collaboration model.

  • Fork: Use a fork when you want to work on changes independently of the original repository or when you want to contribute to a project that you do not have write access to.

  • Branch: Use a branch when you want to work on changes that will be merged back into the main codebase or when you want to collaborate with other developers on the same codebase.

For my IaC project with Terraform, I will use branches to work on different features and bug fixes and submit pull requests to merge the changes back into the main branch. This approach will allow me to collaborate with other developers and keep the codebase clean and organized.

Repository Strategy: Monorepo vs Multi-repo

In this post, I will explain the repository strategy that I will use for my Infrastructure as Code (IaC) project with Terraform.

Monorepo

A monorepo is a single repository that contains all the code for a project.

The benefits of using a monorepo include:

  • Simplicity: All the code is in one place, making it easier to manage and maintain.
  • Consistency: Developers can easily see all the code related to a project and ensure that it follows the same standards and conventions.
  • Reusability: Code can be shared across different parts of the project, reducing duplication and improving consistency.
  • Versioning: All the code is versioned together, making it easier to track changes and roll back if necessary.

The challenges of using a monorepo include:

  • Complexity: A monorepo can become large and complex, making it harder to navigate and understand.
  • Build times: Building and testing a monorepo can take longer than building and testing smaller repositories.
  • Conflicts: Multiple developers working on the same codebase can lead to conflicts and merge issues.

Multi-repo

A multi-repo is a set of separate repositories that contain the code for different parts of a project.

The benefits of using a multi-repo include:

  • Isolation: Each repository is independent, making it easier to manage and maintain.
  • Flexibility: Developers can work on different parts of the project without affecting each other.
  • Scalability: As the project grows, new repositories can be added to manage the code more effectively.

The challenges of using a multi-repo include:

  • Complexity: Managing multiple repositories can be more challenging than managing a single repository.
  • Consistency: Ensuring that all the repositories follow the same standards and conventions can be difficult.
  • Versioning: Each repository is versioned separately, making it harder to track changes across the project.

Conclusion

For my IaC project with Terraform, I will use a monorepo approach to manage all the Terraform modules and configurations for my project.
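
As a rough sketch, the layout I have in mind looks like this (the directory names are illustrative):

terraform-monorepo/
├── modules/
│   ├── networking/
│   ├── compute/
│   └── storage/
└── environments/
    ├── dev/
    └── prod/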

Terraform: Configuration Language

After deciding to use Terraform for my Infrastructure as Code (IaC) project, I need to understand the Terraform configuration language in order to define the desired state of my infrastructure.

Info

I will update this post with more information about Terraform configuration language in the future.

Terraform uses a declarative configuration language to define the desired state of your infrastructure. This configuration language is designed to be human-readable and easy to understand, making it accessible to both developers and operations teams.

Declarative vs. Imperative

Terraform's configuration language is declarative, meaning that you define the desired state of your infrastructure without specifying the exact steps needed to achieve that state. This is in contrast to imperative languages, where you specify the exact sequence of steps needed to achieve a desired outcome.

For example, in an imperative language, you might write a script that creates a virtual machine by executing a series of commands to provision the necessary resources. In a declarative language like Terraform, you would simply define the desired state of the virtual machine (e.g., its size, image, and network configuration) and let Terraform figure out the steps needed to achieve that state.

Configuration Blocks

Terraform uses configuration blocks to define different aspects of your infrastructure. Each block has a specific purpose and contains configuration settings that define how that aspect of your infrastructure should be provisioned.

For example, you might use a provider block to define the cloud provider you want to use, a resource block to define a specific resource (e.g., a virtual machine or storage account), or a variable block to define input variables that can be passed to your configuration.

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "East US"
}

variable "location" {
  type    = string
  default = "East US"
}

Variables

Terraform allows you to define variables that can be used to parameterize your configuration. Variables can be used to pass values into your configuration, making it easier to reuse and customize your infrastructure definitions.

variable "location" {
  type    = string
  default = "East US"
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = var.location
}
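
Variable values can be supplied in several ways, for example on the command line or from a .tfvars file (the file name below is a placeholder):

terraform apply -var="location=West Europe"
# or, with a file dev.tfvars containing: location = "West Europe"
terraform apply -var-file="dev.tfvars"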

locals

Terraform allows you to define local values that can be used within your configuration. Locals are similar to variables but are only available within the current module or configuration block.

variable "location" {
  type    = string
  default = "East US"
}

locals {
  resource_group_name = "example-resources"
}

resource "azurerm_resource_group" "example" {
  name     = local.resource_group_name
  location = var.location
}

Data Sources

Terraform allows you to define data sources that can be used to query external resources and retrieve information that can be used in your configuration. Data sources are read-only and can be used to fetch information about existing resources, such as virtual networks, storage accounts, or database instances.

data "azurerm_resource_group" "example" {
  name = "example-resources"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
}

Functions

try

The try function evaluates its arguments in order and returns the value of the first one that does not produce an error. This can be useful when working with optional values that may or may not be present. Note that try catches errors (such as accessing an attribute on a null value), not plain null values themselves.

variable "optional_value" {
  type    = string
  default = null
}

locals {
  value = try(var.optional_value, "default_value")
}

Debugging Terraform

You can use the TF_LOG environment variable to enable debug logging in Terraform. This can be useful when troubleshooting issues with your infrastructure or understanding how Terraform is executing your configuration.

export TF_LOG=DEBUG
terraform plan

You can use the following log levels, in decreasing order of verbosity: TRACE, DEBUG, INFO, WARN, and ERROR.

To persist the logged output to a file:

export TF_LOG_PATH="terraform.log"

To separate the logs for Terraform core and the providers, you can use the TF_LOG_CORE and TF_LOG_PROVIDER environment variables respectively. For example, to enable debug logging for Terraform core or for a provider:

export TF_LOG_CORE=DEBUG
export TF_LOG_PATH="terraform.log"

or

export TF_LOG_PROVIDER=DEBUG
export TF_LOG_PATH="provider.log"

To disable debug logging, you can unset the TF_LOG environment variable:

unset TF_LOG


Terraform: Set up your local development environment

I will use Ubuntu on WSL v2 as my local environment for my IaC project with Terraform. I will install the following tools:

  • vscode
  • Trunk
  • tenv
  • az cli

az cli

I will use the Azure CLI to interact with Azure resources from the command line. I will install the Azure CLI using the following commands:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

vscode

I will use Visual Studio Code as my code editor for my IaC project with Terraform. I will install the following extensions:

  • Terraform
  • Azure Terraform
  • Azure Account
  • Trunk

tenv

tenv is a version manager for OpenTofu, Terraform, and Terragrunt. It provides a simple way to install multiple versions of these tools and switch between them, which is handy when different projects pin different Terraform versions.

Installation

You can install tenv on Ubuntu by downloading the latest release package from GitHub:

LATEST_VERSION=$(curl --silent https://api.github.com/repos/tofuutils/tenv/releases/latest | jq -r .tag_name)
curl -O -L "https://github.com/tofuutils/tenv/releases/latest/download/tenv_${LATEST_VERSION}_amd64.deb"
sudo dpkg -i "tenv_${LATEST_VERSION}_amd64.deb"

Usage

To install the latest version of Terraform, you can use the tenv tf install command:

# to install the latest version of Terraform
tenv tf install 
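
tenv can also pin and switch between specific versions. A short sketch, assuming the subcommands described in the project README (the version number is a placeholder):

# install a specific Terraform version
tenv tf install 1.7.5
# select it as the active version
tenv tf use 1.7.5
# list the locally installed versions
tenv tf list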

References

Starting my IaC project with Terraform

This is not my first IaC project, but I want to share some key considerations that I keep in mind when starting a personal IaC project with Terraform, based on the post What you need to think about when starting an IaC project?

1. Define Your Goals

These are my goals:

  • Automate the provisioning of infrastructure
  • Improve consistency and repeatability
  • Reduce manual effort
  • Enable faster deployments

2. Select the Right Tools

For my project I will use Terraform because I am familiar with it and I like its declarative configuration language.

3. Design Your Infrastructure

In my project, I will use a modular design that separates my infrastructure into different modules, such as networking, compute, and storage. This will allow me to reuse code across different projects and make my infrastructure more maintainable.

4. Version Control Your Code

I will use Git for version control and follow best practices for version control, such as using descriptive commit messages and branching strategies.

5. Automate Testing

As described in Implement compliance testing with Terraform and Azure, I'd like to implement:

  • Compliance testing
  • End-to-end testing
  • Integration testing

6. Implement Continuous Integration/Continuous Deployment (CI/CD)

I will set up my CI/CD pipelines with GitHub Actions.

7. Monitor and Maintain Your Infrastructure

I will use Azure Monitor to monitor my infrastructure and set up alerts to notify me of any issues. I will also regularly review and update my infrastructure code to ensure that it remains up to date and secure.

Of course I don't have all the answers yet, but I will keep you updated on my progress and share my learnings along the way. Stay tuned for more updates on my IaC project with Terraform!