Securely connect Power BI to data sources with a VNet data gateway

In this post, I will show you how to securely connect Power BI to your Azure data services using a Virtual Network (VNet) data gateway.

What is a Virtual Network (VNet) data gateway?

The virtual network (VNet) data gateway lets Microsoft Cloud services connect to your Azure data services inside a VNet without the need for an on-premises data gateway. The VNet data gateway securely communicates with the data source, executes queries, and transmits results back to the service.

The Role of a VNet Data Gateway

A VNet data gateway acts as a bridge that allows for the secure flow of data between the Power Platform and external data sources that reside within a virtual network. This includes services such as SQL databases, file storage solutions, and other cloud-based resources. The gateway ensures that data can be transferred securely and reliably, without exposing the network to potential threats or breaches.

How It Works

graph LR
    User([User]) -->|Semantic Models| SM[Semantic Models]
    User -->|"Dataflows (Gen2)"| DF["Dataflows (Gen2)"]
    User -->|Paginated Reports| PR[Paginated Reports]
    SM --> PPVS[Power Platform VNET Service]
    DF --> PPVS
    PR --> PPVS
    PPVS --> MCVG[Managed Container<br>for VNet Gateway]
    MCVG -->|Interfaces with| SQLDB[(SQL Databases)]
    MCVG -->|Interfaces with| FS[(File Storage)]
    MCVG -->|Interfaces with| CS[(Cloud Services)]
    MCVG -.->|Secured by| SEC{{Security Features}}
    subgraph VNET_123
        SQLDB
        FS
        CS
        SEC
    end
    classDef filled fill:#f96,stroke:#333,stroke-width:2px;
    classDef user fill:#bbf,stroke:#f66,stroke-width:2px,stroke-dasharray: 5, 5;
    class User user
    class SM,DF,PR,PPVS,MCVG,SQLDB,FS,CS,SEC filled

The process begins with a user leveraging Power Platform services like Semantic Models, Dataflows (Gen2), and Paginated Reports. These services are designed to handle various data-related tasks, from analysis to visualization. They connect to the Power Platform VNET Service, which is the heart of the operation, orchestrating the flow of data through the managed container for the VNet gateway.

This managed container is a secure environment specifically designed for the VNet gateway’s operations. It’s isolated from the public internet, ensuring that the data remains protected within the confines of the virtual network. Within this secure environment, the VNet gateway interfaces with the necessary external resources, such as SQL databases and cloud storage, all while maintaining strict security protocols, represented by the security features node in the diagram.

If you need to reach services in other VNets, you can use VNet peering to connect them, and you can reach on-premises resources through ExpressRoute or another VPN solution.

The Benefits

By utilizing a VNet data gateway, organizations can enjoy several benefits:

  • Enhanced Security: The gateway provides a secure path for data, safeguarding sensitive information and complying with organizational security policies.
  • Network Isolation: The managed container and the virtual network setup ensure that the data does not traverse public internet spaces, reducing exposure to vulnerabilities.
  • Seamless Integration: The gateway facilitates smooth integration between Power Platform services and external data sources, enabling efficient data processing and analysis.

Getting Started

To set up a VNet data gateway, follow these steps:

Register Microsoft.PowerPlatform as a resource provider

Before you can create a VNet data gateway, you need to register the Microsoft.PowerPlatform resource provider. This can be done using the Azure portal or the Azure CLI.

az provider register --namespace Microsoft.PowerPlatform
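
If you want to confirm that the registration has completed, a quick check such as the following should work (the registration state is a property of the provider resource):

az provider show --namespace Microsoft.PowerPlatform --query "registrationState" -o tsv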

Associate the subnet to Microsoft Power Platform

Create a VNet in your Azure subscription and a subnet where the VNet data gateway will be deployed. Next, delegate the subnet to the Microsoft.PowerPlatform/vnetaccesslinks service.

Note

  • This subnet can't be shared with other services.
  • Five IP addresses are reserved in the subnet for basic functionality. You need additional IP addresses for the VNet data gateway, plus more for any future gateways you plan to add.
  • You need a role with the Microsoft.Network/virtualNetworks/subnets/join/action permission.

This can be done using the Azure portal or the Azure CLI.

# Create a VNet with address prefix 10.0.0.0/24
az network vnet create --name MyVNet --resource-group MyResourceGroup --location eastus --address-prefixes 10.0.0.0/24

# Create a Network Security Group
az network nsg create --name MyNsg --resource-group MyResourceGroup --location eastus

# Create a subnet delegated to Microsoft.PowerPlatform/vnetaccesslinks and associate the NSG
az network vnet subnet create --name MySubnet --vnet-name MyVNet --resource-group MyResourceGroup --address-prefixes 10.0.0.0/27 --network-security-group MyNsg --delegations Microsoft.PowerPlatform/vnetaccesslinks
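
To double-check that the delegation was applied, you can inspect the subnet (an optional verification step):

az network vnet subnet show --name MySubnet --vnet-name MyVNet --resource-group MyResourceGroup --query "delegations[].serviceName" -o tsv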

Create a VNet data gateway

Note

A Microsoft Power Platform user with the Microsoft.Network/virtualNetworks/subnets/join/action permission on the VNet is required. The Network Contributor role is not necessary.

  1. Sign in to the Power BI homepage.
  2. In the top navigation bar, select the settings gear icon on the right.
  3. From the drop-down, select Manage connections and gateways, under Resources and extensions.
  4. Select Create a virtual network data gateway.
  5. Select the license capacity, subscription, resource group, VNet, and subnet. Only subnets that are delegated to Microsoft Power Platform are displayed in the drop-down list. VNet data gateways require a Power BI Premium capacity license (A4 SKU or higher, or any P SKU) or a Fabric license (any SKU).
  6. A unique name is suggested for the data gateway by default, but you can optionally change it.
  7. Select Save. The VNet data gateway is now displayed in your Virtual network data gateways tab. A VNet data gateway is a managed gateway, and access to it can be controlled for Power Platform users.

Conclusion

The VNet data gateway is a powerful tool that enables secure data transfer between the Power Platform and external data sources residing within a virtual network. By leveraging this gateway, organizations can ensure that their data remains protected and compliant with security standards, all while facilitating seamless integration and analysis of data. If you are looking to enhance the security and reliability of your data connections, consider implementing a VNet data gateway in your environment.

Implementing policy as code with Open Policy Agent

In this post, I will show you how to implement policy as code with Open Policy Agent (OPA) and Azure.

What is Open Policy Agent?

Open Policy Agent (OPA) is an open-source, general-purpose policy engine that enables you to define and enforce policies across your cloud-native stack. OPA provides a high-level declarative language called Rego that you can use to write policies that are easy to understand and maintain.

Why use Open Policy Agent?

There are several reasons to use OPA:

  • Consistency: OPA allows you to define policies in a single place and enforce them consistently across your cloud-native stack.
  • Flexibility: OPA provides a flexible policy language that allows you to define policies that are tailored to your specific requirements.
  • Auditability: OPA provides a transparent and auditable way to enforce policies, making it easy to understand why a policy decision was made.
  • Integration: OPA integrates with a wide range of cloud-native tools and platforms, making it easy to enforce policies across your entire stack.

Getting started with Open Policy Agent

To get started with OPA, you need to install the OPA CLI and write some policies in Rego.

You can install the OPA CLI by downloading the binary from the OPA GitHub releases page; check the installation guide for more details.

Once you have installed the OPA CLI, you can write policies in Rego. Rego is a high-level declarative language that allows you to define policies in a clear and concise way.

Here's a simple example of a policy that enforces a naming convention for Azure resources:

package azure.resources

default allow = false

allow {
    input.resource.type == "Microsoft.Compute/virtualMachines"
    input.resource.name == "my-vm"
}

This policy allows resources of type Microsoft.Compute/virtualMachines with the name my-vm. You can write more complex policies that enforce a wide range of requirements, such as resource tagging, network security, and access control.
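
To see a policy decision locally, you can evaluate the rule with the OPA CLI against a sample input. The file names below (policy.rego, input.json) are placeholders for this sketch, and the input shape matches the input.resource fields the policy above expects. Note that OPA 1.0 and later require the if keyword in rule definitions, so the policy may need minor syntax adjustments on newer versions.

# input.json
{
  "resource": {
    "type": "Microsoft.Compute/virtualMachines",
    "name": "my-vm"
  }
}

# Evaluate the allow rule; prints the decision as JSON
opa eval --data policy.rego --input input.json "data.azure.resources.allow"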

Integrating Open Policy Agent with Azure

To integrate OPA into an Azure workflow, you can use it alongside the Azure Policy service, which lets you define and enforce policies across your Azure resources. OPA is useful for defining custom policies that Azure Policy doesn't support out of the box, or for enforcing the same policies consistently across multiple cloud providers.

Conclusion

Open Policy Agent is a powerful tool that allows you to define and enforce policies across your cloud-native stack. By using OPA, you can ensure that your infrastructure is secure, compliant, and consistent, and that your policies are easy to understand and maintain. I hope this post has given you a good introduction to OPA and how you can use it to implement policy as code in your cloud-native environment.

Additional resources

I have created a GitHub repository with some examples of policies written in Rego that you can use as a starting point for your own policies.

FinOps for Azure: Optimizing Your Cloud Spend

In the cloud era, optimizing cloud spend has become a critical aspect of financial management. FinOps, a set of financial operations practices, empowers organizations to get the most out of their Azure investment. This blog post dives into the core principles of FinOps, explores the benefits it offers, and outlines practical strategies for implementing FinOps in your Azure environment.

Understanding the Cloud Cost Challenge

Traditional IT expenditure followed a capital expenditure (capex) model, where businesses purchased hardware and software upfront. Cloud computing introduces a paradigm shift with the operational expenditure (opex) model. Here, businesses pay for resources as they consume them, leading to variable and unpredictable costs.

FinOps tackles this challenge by providing a framework for managing cloud finances. It encompasses three key pillars:

  • People: Establish a FinOps team or designate individuals responsible for overseeing cloud costs. This team should possess a blend of cloud technical expertise, financial acumen, and business process knowledge.
  • Process: Define processes for budgeting, forecasting, and monitoring cloud expenses. This involves setting spending limits, creating chargeback models for different departments, and regularly reviewing cost reports.
  • Tools: Leverage Azure Cost Management, a suite of tools that provides granular insights into your Azure spending. It enables cost allocation by resource, service, department, or any other relevant dimension.

It's essential to adopt a FinOps mindset that encourages collaboration between finance, IT, and business teams to drive cost efficiency and value realization in the cloud.

It's important to note that FinOps is not just about cost-cutting; it's about optimizing cloud spending to align with business objectives and maximize ROI.

Azure Cost Management: Optimizing Your Azure Spending

Azure Cost Management empowers you to analyze your Azure spending patterns and identify cost-saving opportunities. Here's a glimpse into its key functionalities:

  • Cost Views: Generate comprehensive reports that categorize your Azure spending by various attributes like resource group, service, or department.
  • Cost Alerts: Set up proactive alerts to notify you when your spending exceeds predefined thresholds.
  • Reservations: Purchase reserved instances of frequently used Azure resources for significant upfront discounts.
  • Recommendations: Azure Cost Management analyzes your usage patterns and recommends potential cost-saving measures, such as rightsizing resources or leveraging spot instances.

The Power of Tags and Azure Policy

Tags are metadata labels that you can attach to your Azure resources to categorize and track them effectively. They play a pivotal role in FinOps by enabling you to:

  • Associate costs with specific departments, projects, or applications.
  • Identify unused or underutilized resources for potential cost savings.
  • Simplify cost allocation and chargeback processes.

Azure Policy helps enforce tagging standards and governance rules across your Azure environment. You can define policies that mandate specific tags for all resources, ensuring consistent cost allocation and data accuracy.
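
As a minimal illustration (the resource group name and tag values are placeholders), tags can be applied from the Azure CLI and then used as a cost-allocation dimension in Azure Cost Management:

# Merge cost-allocation tags onto an existing resource group
az tag update \
  --resource-id $(az group show --name MyResourceGroup --query id -o tsv) \
  --operation Merge \
  --tags CostCenter=Finance Environment=Production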

Benefits of Implementing FinOps

A well-defined FinOps strategy empowers you to:

  • Gain Visibility into Cloud Spending: Obtain a clear picture of your Azure expenditures, enabling informed budgeting and cost control decisions.
  • Optimize Cloud Costs: Identify and eliminate wasteful spending through cost-saving recommendations and proactive measures.
  • Improve Cloud Governance: Enforce tagging policies and spending limits to ensure responsible cloud resource utilization.
  • Align Cloud Spending with Business Value: Make data-driven decisions about cloud investments that support your business objectives.

Getting Started with FinOps in Azure

Implementing FinOps doesn't necessitate a complex overhaul. Here's a recommended approach:

  1. Establish a FinOps Team: Assemble a cross-functional team comprising representatives from finance, IT, and business departments.
  2. Set Clear Goals and Objectives: Define your FinOps goals, whether it's reducing costs by a specific percentage or improving budget forecasting accuracy.
  3. Leverage Azure Cost Management: Start by exploring Azure Cost Management to understand your current spending patterns.
  4. Implement Basic Tagging Standards: Enforce basic tagging policies to categorize your Azure resources for cost allocation purposes.
  5. Continuously Monitor and Refine: Regularly review your cloud cost reports and identify areas for further optimization.

By following these steps and embracing a FinOps culture, you can effectively manage your Azure expenses and derive maximum value from your cloud investment.

Toolchain for FinOps in Azure

To streamline your FinOps practices in Azure, consider leveraging the following tools:

graph LR
A[Financial Operations Practices] --> B{Cloud Spend Optimization}
B --> C{Cost Visibility}
B --> D{Cost Optimization}
B --> E{Governance}
C --> F{Azure Cost Management}
D --> G{Azure Advisor}
E --> H{Azure Policy}
F --> I{Cost Views}
F --> J{Cost Alerts}
G --> K{Cost Recommendations}
H --> L{Tag Enforcement}

This toolchain combines Azure Cost Management, Azure Advisor, and Azure Policy to provide a comprehensive suite of capabilities for managing your Azure spending effectively.

I also highly recommend the FinOps toolkit, a set of tools and best practices to help you implement FinOps in your organization. It includes tools for cost allocation, budgeting, and forecasting, along with guidance for FinOps implementation.

Conclusion

FinOps is an essential practice for organizations leveraging Azure. It empowers you to make informed decisions about your cloud finances, optimize spending, and achieve your business goals. As an Azure Solutions Architect, I recommend that you establish a FinOps practice within your organization to unlock the full potential of Azure and achieve financial efficiency in the cloud.

This blog post provides a foundational understanding of FinOps in Azure.

By embracing FinOps, you can transform your cloud cost management practices and drive business success in the cloud era.

Repository Strategy: How to test Branching Strategy in local repository

If you don't want to test in GitHub, GitLab, or Azure DevOps, you can test on your local machine.

Step 1: Create a new local bare repository

To create a new local bare repository, open a terminal window and run the following command:

mkdir localrepo
cd localrepo
git init --bare my-repo.git

This command creates a new directory called localrepo and initializes a new bare repository called my-repo.git inside it.

Step 2: Create a local repository

To create a new local repository, open a terminal window and run the following command:

mkdir my-repo
cd my-repo
git init

This command creates a new directory called my-repo and initializes a new repository inside it.

Step 3: Add the remote repository

To add the remote repository to your local repository, run the following command:

git remote add origin ../my-repo.git

In my case, I used an absolute path, c:\users\myuser\localrepo\my-repo.git:

git remote add origin c:\users\myuser\localrepo\my-repo.git

This command adds the remote repository as the origin remote.

Step 4: Create a new file, make the first commit, and push

To create a new file in your local repository, run the following command:

echo "Hello, World!" > hello.txt

This command creates a new file called hello.txt with the content Hello, World!.

To make the first commit to your local repository, run the following command:

git add hello.txt
git commit -m "Initial commit"

This command stages the hello.txt file and commits it to the repository with the message Initial commit.

To push the changes to the remote repository, run the following command:

git push -u origin master

This command pushes the changes to the master branch of the remote repository.

Step 5: Create a new branch and push it to the remote repository

To create a new branch in your local repository, run the following command:

git checkout -b feature-branch

This command creates a new branch called feature-branch and switches to it.
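
To push the new branch to the local bare repository and set its upstream, you can run:

git push -u origin feature-branch

From here you can create, merge, and delete branches against the local remote to rehearse your branching strategy before applying it on a hosted service.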

Conclusion

By following these steps, you can test your branching strategy in a local repository before pushing changes to a remote repository. This allows you to experiment with different branching strategies and workflows without affecting your production codebase.

Configure your Teams meetings options to improve your collaboration experience

Microsoft Teams is a powerful collaboration tool that allows you to communicate and work with your team members from anywhere in the world. One of the key features of Teams is the ability to schedule and host meetings with your colleagues, clients, or partners.

In this blog post, we will show you how to configure your Teams meetings options to improve your collaboration experience and make your meetings more productive.

What are Teams meetings options?

Teams meetings options are settings that allow you to control various aspects of your meetings, such as who can bypass the lobby, who can present, and who can record the meeting. By configuring these options, you can ensure that your meetings run smoothly and securely.

How to configure your Teams meetings options

To configure your Teams meetings options, follow these steps:

  1. Open the Teams app on your computer or mobile device.
  2. Click on the calendar icon in the left-hand menu to view your calendar.
  3. Click on the "Meetings" tab at the top of the screen to view your upcoming meetings.
  4. Click on the meeting you want to configure to open the meeting details.
  5. Click on the "Meeting options" link at the bottom of the meeting details window.
  6. Configure the meeting options according to your preferences. You can control who can bypass the lobby, who can present, who can record the meeting, and other settings.

Key settings to configure

Here are some key settings that you may want to configure in your Teams meetings options:

  • Who can bypass the lobby: By default, all participants are sent to the lobby when they join a meeting. You can configure this setting to allow participants to bypass the lobby and join the meeting directly.

  • Who can present: By default, only the meeting organizer can present in a meeting. You can configure this setting to allow other participants to present as well.

  • Who can record the meeting: By default, only the meeting organizer can record a meeting. You can configure this setting to allow other participants to record the meeting as well.

  • Automatically admit people: By default, participants need to be admitted to the meeting by the organizer or a presenter. You can configure this setting to automatically admit people to the meeting.

  • Allow attendees to unmute: By default, attendees are muted when they join a meeting. You can configure this setting to allow attendees to unmute themselves.

Conclusion

Configuring your Teams meetings options is a great way to improve your collaboration experience and make your meetings more productive. By configuring these settings, you can ensure that your meetings run smoothly and securely, allowing you to focus on the task at hand.

Reduce your attack surface in Snowflake when using from Azure

When it comes to data security, reducing your attack surface is a crucial step. This post will guide you on how to minimize your attack surface when using Snowflake with Azure or Power BI.

What is Snowflake?

Snowflake is a cloud-based data warehousing platform that allows you to store and analyze large amounts of data. It is known for its scalability, performance, and ease of use. Snowflake is popular among organizations that need to process large volumes of data quickly and efficiently.

What is an Attack Surface?

An attack surface refers to the number of possible ways an attacker can get into a system and potentially extract data. The larger the attack surface, the more opportunities there are for attackers. Therefore, reducing the attack surface is a key aspect of securing your systems.

How to Reduce Your Attack Surface in Snowflake:

  1. Use Azure Private Link: Azure Private Link provides private connectivity from a virtual network to Snowflake, isolating your traffic from the public internet. It significantly reduces the attack surface by ensuring that traffic between Azure and Snowflake doesn't traverse the public internet.

  2. Implement Network Policies: Snowflake allows you to define network policies that restrict access based on IP addresses. By limiting access to trusted IP ranges, you reduce the potential points of entry for an attacker.

  3. Enable Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide at least two forms of identification before accessing the Snowflake account. This makes it harder for attackers to gain unauthorized access, even if they have your password.

In this blog post, we will show you how to reduce your attack surface when using Snowflake from Azure or Power BI by using Azure Private Link.

Default Snowflake architecture

By default, Snowflake is accessible from the public internet, which means anyone with the right credentials can reach your Snowflake account from anywhere in the world. You can limit access to specific IP addresses, but the account endpoint is still exposed to potential attackers.

The architecture of using Azure Private Link with Snowflake is as follows:

    graph TD
    A[Virtual Network] -->|Private Endpoint| B[Snowflake]
    B -->|Private Link Service| C[Private Link Resource]
    C -->|Private Connection| D[Virtual Network]
    D -->|Subnet| E[Private Endpoint]    

Requirements

Before you can use Azure Private Link with Snowflake, you need to have the following requirements in place:

  • A Snowflake account with ACCOUNTADMIN privileges
  • Business Critical or higher Snowflake edition
  • An Azure subscription with a Resource Group and privileges to create:
    • Virtual Network
    • Subnet
    • Private Endpoint

Step-by-Step Guide

Note

Replace the placeholders with your actual values; the commands below are an orientation guide, not a copy-paste script.

Step 1: Retrieve Details of your Snowflake Account

USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();

Step 2: Create a Virtual Network

A virtual network is a private network that allows you to securely connect your resources in Azure. To create a virtual network with azcli, follow these steps:

# The address ranges below are example values; adjust them to your environment.
# Depending on your CLI version, you may also need to adjust the subnet's
# private endpoint network policies (see az network vnet subnet update).
az network vnet create \
  --name myVnet \
  --resource-group myResourceGroup \
  --address-prefixes 10.10.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefixes 10.10.0.0/24

Step 3: Create a Private Endpoint

The first step is to create a private endpoint in Azure. A private endpoint is a network interface that connects your virtual network to the Snowflake service. This allows you to access Snowflake using a private IP address, rather than a public one.

To create a private endpoint with azcli, follow these steps:

# Use the privatelink-pls-id value returned by SYSTEM$GET_PRIVATELINK_CONFIG()
# in Step 1 as the private connection resource ID
az network private-endpoint create \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --subnet mySubnet \
  --private-connection-resource-id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/privateLinkServices/<Snowflake-service-name> \
  --connection-name mySnowflakeConnection

Check the status of the private endpoint:

az network private-endpoint show \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup

Step 4: Authorize the Private Endpoint

The next step is to authorize the private endpoint to access the Snowflake service.

Retrieve the Resource ID of the private endpoint:

az network private-endpoint show \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup \
  --query id -o tsv

Create a temporary access token that Snowflake can use to authorize the private endpoint:

az account get-access-token --subscription <subscription-id>

Authorize the private endpoint in Snowflake:

USE ROLE ACCOUNTADMIN;
select SYSTEM$AUTHORIZE_PRIVATELINK('<resource-id>', '<access-token>');

Step 5: Block Public Access

To further reduce your attack surface, you can block public access to your Snowflake account. This ensures that all traffic to and from Snowflake goes through the private endpoint, reducing the risk of unauthorized access.

To block public access to your Snowflake account, you need to use Network Policy, follow these steps:

USE ROLE ACCOUNTADMIN;
CREATE NETWORK RULE allow_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('192.168.1.99/24');

CREATE NETWORK RULE block_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('0.0.0.0/0');

CREATE NETWORK POLICY public_network_policy
  ALLOWED_NETWORK_RULE_LIST = ('allow_access_rule')
  BLOCKED_NETWORK_RULE_LIST=('block_access_rule');
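
Creating the network policy does not activate it on its own; you still need to attach it at the account (or user) level. A minimal example, assuming the policy name used above:

ALTER ACCOUNT SET NETWORK_POLICY = public_network_policy;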

It's highly recommended to follow the best practices for network policies in Snowflake. You can find more information here: https://docs.snowflake.com/en/user-guide/network-policies#best-practices

Step 6: Test the Connection

To test the connection between your virtual network and Snowflake, you can use the SnowSQL client.

snowsql -a <account_name> -u <username> -r <role> -w <warehouse> -d <database> -s <schema>
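
Assuming your private DNS is configured so the privatelink hostname resolves inside the VNet, you can also check that the account URL returned by SYSTEM$GET_PRIVATELINK_CONFIG() (the privatelink-account-url value) resolves to the private IP of your endpoint:

nslookup <privatelink-account-url>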

Internal Stages with Azure Blob Private Endpoints

If you are using Azure Blob Storage as an internal stage in Snowflake, you can also use Azure Private Link to secure the connection between Snowflake and Azure Blob Storage.

It's recommended to use Azure Blob Storage with private endpoints to keep your data secure and further reduce your attack surface. See the documentation on Azure Private Endpoints for Internal Stages to learn how to configure Azure Blob Storage with private endpoints in Snowflake.

Conclusion

Reducing your attack surface is a critical aspect of securing your systems. By using Azure Private Link with Snowflake, you can significantly reduce the risk of unauthorized access and data breaches. Follow the steps outlined in this blog post to set up Azure Private Link with Snowflake and start securing your data today.

Nested Structures with Optional Attributes and Dynamic Blocks

In this post, I will explain how to define nested structures with optional attributes and how to use dynamic blocks in Terraform.

Nested Structures

Terraform allows you to define nested structures to represent complex data types in your configuration. Nested structures are useful when you need to group related attributes together or define a data structure that contains multiple fields.

For example, you might define a nested structure to represent a virtual machine with multiple attributes, such as size, image, and network configuration:

variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = string
    })
  })
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the vm variable defines a nested structure with three attributes: size, image, and network. The network attribute is itself a nested structure with two attributes: subnet and security_group.

Optional Attributes

If you have an attribute that is optional, you can define it as an optional attribute in your configuration. Optional attributes are useful when you want to provide a default value for an attribute but allow users to override it if needed.

If optional doesn't work for you (for example, on an older Terraform version), Terraform also allows you to declare a variable with the any type. This is not recommended, because it can lead to errors and make your configuration less maintainable; it's better to declare variables with the correct type and use optional attributes where needed. In the cases where you do have to fall back to any and null values, you can minimize the risk of errors by providing a good description of the variable and its expected values.

For example, here is the vm variable declared with type any, documented through its description:

variable "vm" {
  description = <<DESCRIPTION
  Virtual machine configuration.
  The following attributes are required:
  - size: The size of the virtual machine.
  - image: The image for the virtual machine.
  - network: The network configuration for the virtual machine.
  The network configuration should have the following attributes:
  - subnet: The subnet for the virtual machine.
  - security_group: The security group for the virtual machine.
  DESCRIPTION
  type = any
  default = null
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

The same variable declared with an explicit object type, where security_group is marked as optional, looks like this:

variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = optional(string)
    })
  })
}


resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the security_group attribute in the network structure is defined as an optional attribute with a default value of null. This allows users to provide a custom security group if needed, or use the default value if no value is provided.
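
As a usage sketch (the values are placeholders), a caller can simply omit the optional attribute, for example in a terraform.tfvars file:

vm = {
  size  = "Standard_DS1_v2"
  image = "UbuntuServer"
  network = {
    subnet = "subnet-1"
    # security_group is omitted, so it defaults to null
  }
}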

Dynamic Blocks

Terraform allows you to use dynamic blocks to define multiple instances of a block within a resource or module. Dynamic blocks are useful when you need to create multiple instances of a block based on a list or map of values.

For example, you might use a dynamic block to define multiple network interfaces for a virtual machine:

variable "network_interfaces" {
  type = list(object({
    name    = string
    subnet  = string
    security_group = string
  }))
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = "Standard_DS1_v2"
  image    = "UbuntuServer"

  dynamic "network_interface" {
    for_each = var.network_interfaces
    content {
      name            = network_interface.value.name
      subnet          = network_interface.value.subnet
      security_group  = network_interface.value.security_group
    }
  }
}

In this example, the network_interfaces variable defines a list of objects representing network interfaces with three attributes: name, subnet, and security_group. The dynamic block iterates over the list of network interfaces and creates a network interface block for each object in the list.
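
For illustration, the variable could be populated as follows (the names and subnets are placeholders), producing one network_interface block per element:

network_interfaces = [
  {
    name           = "nic-frontend"
    subnet         = "subnet-frontend"
    security_group = "nsg-frontend"
  },
  {
    name           = "nic-backend"
    subnet         = "subnet-backend"
    security_group = "nsg-backend"
  }
]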

Conclusion

In this post, I explained how to define nested structures with optional attributes and use dynamic blocks in Terraform. Nested structures are useful for representing complex data types, while optional attributes allow you to provide default values for attributes that can be overridden by users. Dynamic blocks are useful for creating multiple instances of a block based on a list or map of values. By combining these features, you can create flexible and reusable configurations in Terraform.

Azure Container Registry: Artifact Cache

Azure Container Registry is a managed, private Docker registry service provided by Microsoft. It allows you to build, store, and manage container images and artifacts in a secure environment.

What is Artifact Caching?

Artifact Cache is a feature in Azure Container Registry that allows users to cache container images in a private container registry. It is available in Basic, Standard, and Premium service tiers.

Benefits of Artifact Cache

  • Reliable pull operations: Faster pulls of container images are achievable by caching the container images in ACR.
  • Private networks: Cached registries are available on private networks.
  • Ensuring upstream content is delivered: Artifact Cache allows users to pull images from the local ACR instead of the upstream registry.

Limitations

Caching only occurs after at least one pull of the image has completed; the first pull populates the cache, and subsequent pulls are served from ACR.

How to Use Artifact Cache in Azure Container Registry without credentials

Let's take a look at how you can implement artifact caching in Azure Container Registry.

Step 1: Create a Cache Rule

The first step is to create a cache rule in your Azure Container Registry. This rule specifies the source image that should be cached and the target image that will be stored in the cache.

az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu

Check the cache rule:

az acr cache show -r MyRegistry -n MyRule

Step 2: Pull the Image

Next, pull the image through your registry. The first pull downloads the image from the upstream source and stores it in the cache for future pulls. The repository name matches the target (-t) of the cache rule created above.

docker pull myregistry.azurecr.io/ubuntu:latest
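
Note that the Docker client needs to be authenticated against the registry first (for example with az acr login --name MyRegistry). To confirm that the cached repository now exists in the registry, you can list the repositories:

az acr repository list --name MyRegistry -o table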

Step 3: Clean up the resources

Finally, you can clean up the resources by deleting the cache rule.

az acr cache delete -r MyRegistry -n MyRule

If you need to check other rules, you can use the following command:

az acr cache list -r MyRegistry

Conclusion

Azure Container Registry's Artifact Cache feature provides a convenient way to cache container images in a private registry, improving pull performance and reducing network traffic. By following the steps outlined in this article, you can easily set up and use artifact caching in your Azure Container Registry.

If you need to use the cache with authentication, you can use the following article: Enable Artifact Cache with authentication.

For more detailed information, please visit the official tutorial on the Microsoft Azure website.

Repository Strategy: Branching Strategy

Once we have decided that we will use branches, it is time to define a strategy, so that all developers can work in a coordinated way.

Some of the best known strategies are:

  • Gitflow: It is a branching model designed around the project release. It is a strict branching model that assigns very specific roles to different branches.
  • Feature Branch: It is a strategy where each feature is developed in a separate branch. This strategy is simple and easy to understand.
  • Trunk-Based Development: It is a strategy where all developers work on a single branch (the trunk), keeping changes small and using feature flags to hide unfinished work.
  • GitHub Flow: It is a lightweight strategy built around a single long-lived main branch, with short-lived feature branches merged back through pull requests.
  • GitLab Flow: It is a strategy that combines feature branches on a main branch with environment or release branches, such as pre-production and production.
  • Microsoft Flow: It is Microsoft's release flow, a trunk-based strategy where developers work in short topic branches off main and ship from release branches.

Gitflow

Gitflow is a branching model designed around the project release. It is a strict branching model that assigns very specific roles to different branches.

The Gitflow strategy is based on the following branches:

  • main: It is the main branch of the project. It contains the code that is in production.
  • develop: It is the branch where the code is integrated before being released to production.

The Gitflow strategy is based on the following types of branches:

  • feature: It is the branch where the code for a new feature is developed.
  • release: It is the branch where the code is prepared for release.
  • hotfix: It is the branch where the code is developed to fix a bug in production.

The Gitflow strategy is based on the following rules:

  • Feature branches are created from the develop branch.
  • Feature branches are merged into the develop branch.
  • Release branches are created from the develop branch.
  • Release branches are merged into the main and develop branches.
  • Hotfix branches are created from the main branch.
  • Hotfix branches are merged into the main and develop branches.

The Gitflow strategy is based on the following workflow:

  1. Developers create a feature branch from the develop branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the develop branch.
  4. Developers create a release branch from the develop branch.
  5. Developers prepare the code for release in the release branch.
  6. Developers merge the release branch into the main and develop branches.
  7. Developers create a hotfix branch from the main branch.
  8. Developers fix the bug in the hotfix branch.
  9. Developers merge the hotfix branch into the main and develop branches.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
branch develop
checkout develop
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout develop
merge feature_branch
commit
branch release_branch
checkout release_branch
commit
commit
checkout develop
merge release_branch
checkout main
merge release_branch
commit
branch hotfix_branch
checkout hotfix_branch
commit
commit
checkout develop
merge hotfix_branch
checkout main
merge hotfix_branch
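
As a rough sketch with plain git commands (branch names are illustrative), a Gitflow release cycle looks like this:

# Start a release from develop
git checkout -b release/1.0.0 develop
# ...stabilize, bump versions, fix release-only bugs...

# Ship: merge into main and tag the release
git checkout main
git merge --no-ff release/1.0.0
git tag -a v1.0.0 -m "Release 1.0.0"

# Merge the release back into develop and clean up
git checkout develop
git merge --no-ff release/1.0.0
git branch -d release/1.0.0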

The Gitflow strategy has the following advantages:

  • Isolation: Each feature is developed in a separate branch, isolating it from other features.
  • Collaboration: Developers can work on different features at the same time without affecting each other.
  • Stability: The main branch contains the code that is in production, ensuring that it is stable and reliable.

The Gitflow strategy has the following disadvantages:

  • Complexity: The Gitflow strategy is complex and can be difficult to understand for new developers.
  • Overhead: The Gitflow strategy requires developers to create and manage multiple branches, which can be time-consuming.

Feature Branch Workflow

Feature Branch is a strategy where each feature is developed in a separate branch. This strategy is simple and easy to understand.

The Feature Branch strategy is based on the following rules:

  • Developers create a feature branch from the main branch.
  • Developers develop the code for the new feature in the feature branch.
  • Developers merge the feature branch into the main branch.

The Feature Branch strategy is based on the following workflow:

  1. Developers create a feature branch from the main branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the main branch.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch
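
The same workflow expressed with plain git commands (the branch name is illustrative):

git checkout -b feature/login main   # create the feature branch from main
# ...commit work on the feature...
git checkout main
git merge --no-ff feature/login      # merge the feature back into main
git branch -d feature/login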

The Feature Branch strategy has the following advantages:

  • Simplicity: The Feature Branch strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in feature branches are visible to other developers, making it easier to review and merge changes.

The Feature Branch strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same feature can lead to conflicts and merge issues.
  • Complexity: Managing multiple feature branches can be challenging, especially in large projects.

Trunk-Based Development

The Trunk-Based Development strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature flags to hide unfinished features.
  • Developers merge the code into the main branch when it is ready.

The Trunk-Based Development strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create feature flags to hide unfinished features.
  3. Developers merge the code into the main branch when it is ready.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit tag:"v1.0.0"
commit
commit
commit tag:"v2.0.0"

The Trunk-Based Development strategy has the following advantages:

  • Simplicity: The Trunk-Based Development strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in the main branch are visible to other developers, making it easier to review and merge changes.

The Trunk-Based Development strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same codebase can lead to conflicts and merge issues.
  • Complexity: Managing feature flags can be challenging, especially in large projects.

GitHub Flow

GitHub Flow is a lightweight strategy built around a single long-lived main branch, with short-lived feature branches merged back through pull requests. This strategy is simple and easy to understand.

The GitHub Flow strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature branches to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.

The GitHub Flow strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create a feature branch from the main branch.
  3. Developers develop the code for the new feature in the feature branch.
  4. Developers merge the feature branch into the main branch.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch

GitLab Flow

GitLab Flow builds on a single long-lived main branch and adds environment or release branches on top of it. This strategy is simple and easy to understand, and it is often used with release branches.

The GitLab Flow strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature branches to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.
  • Developers create a pre-production branch to make bug fixes before merging changes back to the main branch.
  • Developers merge the pre-production branch into the main branch before going to production.
  • Developers can add as many pre-production branches as needed.
  • Developers can maintain different versions of the production branch.

The GitLab Flow strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create a feature branch from the main branch.
  3. Developers develop the code for the new feature in the feature branch.
  4. Developers merge the feature branch into the main branch.
  5. Developers create a pre-production branch from the main branch.
  6. Developers make bug fixes in the pre-production branch.
  7. Developers merge the pre-production branch into the main branch.
  8. Developers create a production branch from the main branch.
  9. Developers merge the production branch into the main branch.

gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch
commit
branch pre-production
checkout pre-production
commit
commit
checkout main
merge pre-production
commit
branch production
checkout production
commit
commit
checkout main
merge production

Repository Strategy: Fork vs Branch

In this post, I will explain the different ways to contribute to a Git repository: Fork vs Branch.

Fork

A fork is a copy of a repository that you can make changes to without affecting the original repository. When you fork a repository, you create a new repository in your GitHub account that is a copy of the original repository.

The benefits of forking a repository include:

  • Isolation: You can work on your changes without affecting the original repository.
  • Collaboration: You can make changes to the forked repository and submit a pull request to the original repository to merge your changes.
  • Ownership: You have full control over the forked repository and can manage it as you see fit.

The challenges of forking a repository include:

  • Synchronization: Keeping the forked repository up to date with the original repository can be challenging.
  • Conflicts: Multiple contributors working on the same codebase can lead to conflicts and merge issues.
  • Visibility: Changes made to the forked repository are not visible in the original repository until they are merged.
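
To deal with the synchronization challenge above, a common approach is to add the original repository as an upstream remote and pull its changes into your fork (the URL and branch names are placeholders):

git remote add upstream https://github.com/original-owner/original-repo.git
git fetch upstream
git checkout main
git merge upstream/main   # or: git rebase upstream/main
git push origin main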

Branch

A branch is a parallel version of a repository that allows you to work on changes without affecting the main codebase. When you create a branch, you can make changes to the code and submit a pull request to merge the changes back into the main branch.

The benefits of using branches include:

  • Flexibility: You can work on different features or bug fixes in separate branches without affecting each other.
  • Collaboration: You can work with other developers on the same codebase by creating branches and submitting pull requests.
  • Visibility: Changes made in branches are visible to other developers, making it easier to review and merge changes.

The challenges of using branches include:

  • Conflicts: Multiple developers working on the same branch can lead to conflicts and merge issues.
  • Complexity: Managing multiple branches can be challenging, especially in large projects.
  • Versioning: Branches are versioned separately, making it harder to track changes across the project.

Fork vs Branch

The decision to fork or branch a repository depends on the project's requirements and the collaboration model.

  • Fork: Use a fork when you want to work on changes independently of the original repository or when you want to contribute to a project that you do not have write access to.

  • Branch: Use a branch when you want to work on changes that will be merged back into the main codebase or when you want to collaborate with other developers on the same codebase.

For my IaC project with Terraform, I will use branches to work on different features and bug fixes and submit pull requests to merge the changes back into the main branch. This approach will allow me to collaborate with other developers and keep the codebase clean and organized.