How to set consistent naming in cloud with Azure Naming Tool and Terraform

Consistent naming across cloud resources is crucial for maintainability, clarity, and compliance. The Azure Naming Tool helps you define and enforce naming conventions for Azure resources. In this post, we will explore how to use the Azure Naming Tool together with Terraform to ensure consistent naming across your cloud infrastructure.

What is the Azure Naming Tool?

The Azure Naming Tool is an open-source project that provides a framework for defining and enforcing naming conventions for Azure resources. It allows you to create a set of rules that can be applied to various resource types, ensuring that all resources follow a consistent naming pattern. This is particularly useful in large organizations where multiple teams may be creating resources independently.

Despite its name, the Azure Naming Tool is not limited to Azure: it is a generic tool that can be adapted to any cloud provider if you define the rules accordingly. The Resource Type Editing feature (see its documentation) lets you define the rules for each resource type, or create your own custom resource types for AWS, GCP, or any other cloud provider.

Why Use Consistent Naming?

Consistent naming is essential for several reasons:

  • Clarity: Clear and consistent names make it easier to understand the purpose of each resource.
  • Maintainability: When resources are named consistently, it simplifies management and reduces the risk of errors.
  • Compliance: Many organizations have compliance requirements that mandate specific naming conventions for resources.
  • Automation: Consistent naming allows for easier automation and scripting, as you can predict resource names based on their types and roles.

What is the problem?

The problem lies in the interfaces we use to create resources. The Azure Naming Tool portal is great for defining the rules, but it is not integrated with Terraform, so we need a way to apply the rules defined in the Azure Naming Tool from our Terraform code.

Solution

To solve the problem of integrating the Azure Naming Tool with Terraform, I created a Terraform module that lets you use the naming rules defined in the Azure Naming Tool directly in your Terraform code. The module posts a request to the Azure Naming Tool API to get the name of the resource based on the defined rules.

The following example creates a resource group whose name is generated by the Azure Naming Tool, based on the rules defined in its portal. Under the hood, the module uses the data.external and data.http data sources to call the Azure Naming Tool API and retrieve the name.
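
A minimal configuration sketch is shown below. The module source, input names, and API URL are illustrative placeholders rather than the module's exact interface; check the module's page in the Terraform Registry for the real one.

# Hypothetical usage sketch: source and inputs are placeholders.
module "azurenamingtool" {
  source = "<namespace>/azurenamingtool/<provider>" # replace with the real module source

  api_url       = "https://azurenamingtool.azurewebsites.net" # your Azure Naming Tool instance
  resource_type = "rg"                                        # the resource type to name
}

resource "azurerm_resource_group" "example" {
  name     = module.azurenamingtool.generated_name # name returned by the Azure Naming Tool API
  location = "westeurope"

  tags = {
    environment = "example"
    project     = "azurenamingtool"
  }
}

output "generated_name" {
  value = module.azurenamingtool.generated_name
}

Running terraform apply against a configuration like this produces output similar to the following: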

terraform apply -auto-approve
module.azurenamingtool.data.external.aad_token: Reading...
module.azurenamingtool.data.external.aad_token: Read complete after 0s [id=-]
module.azurenamingtool.data.http.name_generation_post: Reading...
module.azurenamingtool.data.http.name_generation_post: Read complete after 1s [id=https://azurenamingtool.azurewebsites.net/api/ResourceNamingRequests/RequestName]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_resource_group.example will be created
  + resource "azurerm_resource_group" "example" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "rg-spa-dev-auc-025"
      + tags     = {
          + "environment" = "example"
          + "project"     = "azurenamingtool"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + generated_name = "rg-spa-dev-auc-025"
azurerm_resource_group.example: Creating...
azurerm_resource_group.example: Still creating... [00m10s elapsed]
azurerm_resource_group.example: Creation complete after 16s [id=/subscriptions/000000-0000-0000-0000-000000000000/resourceGroups/rg-spa-dev-auc-025]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

generated_name = "rg-spa-dev-auc-025"

You can check the module in the Terraform Registry.

Conclusion

Using the Azure Naming Tool in conjunction with Terraform allows you to enforce consistent naming conventions across your cloud resources. By integrating the Azure Naming Tool API into your Terraform workflows, you can automate the naming process and ensure that all resources follow the defined rules. This not only improves clarity and maintainability but also helps meet compliance requirements.

Enjoy the benefits of consistent naming in your cloud infrastructure with the Azure Naming Tool and Terraform!

Azure Virtual Network Manager: Illustrative relationship between components

graph TD
    subgraph "Azure Scope"
        MG["Management Group / Subscription"]
    end
    subgraph "Azure Virtual Network Manager (AVNM)"
        AVNM_Instance["AVNM Instance"] -- Manages --> MG
        AVNM_Instance -- Contains --> NG1["Network Group A (e.g., Production)"]
        AVNM_Instance -- Contains --> NG2["Network Group B (e.g., Test)"]
    end
    subgraph "Virtual Networks (VNets)"
        VNet1["VNet 1"]
        VNet2["VNet 2"]
        VNet3["VNet 3"]
        VNet4["VNet 4"]
    end
    subgraph "Membership"
        Static["Static Membership"] -- Manually Adds --> NG1
        Static -- Manually Adds --> NG2
        VNet1 -- Member Of --> Static
        VNet4 -- Member Of --> Static
        Dynamic["Dynamic Membership (Azure Policy)"] -- Automatically Adds --> NG1
        Dynamic -- Automatically Adds --> NG2
        Policy["Azure Policy Definition (e.g., 'tag=prod')"] -- Defines --> Dynamic
        VNet2 -- Meets Policy --> Dynamic
        VNet3 -- Meets Policy --> Dynamic
    end
    subgraph "Configurations"
        ConnConfig["Connectivity Config (Mesh / Hub-Spoke)"] -- Targets --> NG1
        SecConfig["Security Admin Config (Rules)"] -- Targets --> NG1
        SecConfig2["Security Admin Config (Rules)"] -- Targets --> NG2
    end
    subgraph "Deployment & Enforcement"
        Deploy["Deployment"] -- Applies --> ConnConfig
        Deploy -- Applies --> SecConfig
        Deploy -- Applies --> SecConfig2
        NG1 -- Receives --> ConnConfig
        NG1 -- Receives --> SecConfig
        NG2 -- Receives --> SecConfig2
        VNet1 -- Enforced --> NG1
        VNet2 -- Enforced --> NG1
        VNet3 -- Enforced --> NG1
        VNet4 -- Enforced --> NG2
    end
    %% Removed 'fill' for better dark mode compatibility, kept colored strokes
    classDef avnm stroke:#f9f,stroke-width:2px;
    classDef ng stroke:#ccf,stroke-width:1px;
    classDef vnet stroke:#cfc,stroke-width:1px;
    classDef config stroke:#ffc,stroke-width:1px;
    classDef policy stroke:#fcc,stroke-width:1px;
    class AVNM_Instance avnm;
    class NG1,NG2 ng;
    class VNet1,VNet2,VNet3,VNet4 vnet;
    class ConnConfig,SecConfig,SecConfig2 config;
    class Policy,Dynamic,Static policy;

Diagram Explanation

  1. Azure Scope: Azure Virtual Network Manager (AVNM) operates within a defined scope, which can be a Management Group or a Subscription. This determines which VNets AVNM can "see" and manage.
  2. AVNM Instance: This is the main Azure Virtual Network Manager resource. Network groups and configurations are created and managed from here.
  3. Network Groups:
    • These are logical containers for your Virtual Networks (VNets).
    • They allow you to group VNets with common characteristics (environment, region, etc.).
    • A VNet can belong to multiple network groups.
  4. Membership: How VNets are added to Network Groups:
    • Static Membership: You add VNets manually, selecting them one by one.
    • Dynamic Membership: Uses Azure Policy to automatically add VNets that meet certain criteria (like tags, names, locations). VNets matching the policy are dynamically added (and removed) from the group.
  5. Virtual Networks (VNets): These are the Azure virtual networks that are being managed.
  6. Configurations: AVNM allows you to apply two main types of configurations to Network Groups:
    • Connectivity Config: Defines how VNets connect within a group (or between groups). You can create topologies like Mesh (all connected to each other) or Hub-and-Spoke (a central VNet connected to several "spoke" VNets).
    • Security Admin Config: Allows you to define high-level security rules that apply to the VNets in a group. These rules can override Network Security Group (NSG) rules, enabling centralized and mandatory security policies.
  7. Deployment & Enforcement:

    • The created configurations (connectivity and security) must be Deployed.
    • During deployment, AVNM translates these configurations and applies them to the VNets that are members of the target network groups in the selected regions.
    • Once deployed, the VNets within the groups receive and apply (Enforced) these configurations, establishing the defined connections and security rules.

    And maybe this post will be published in the official documentation of Azure Virtual Network Manager, who knows? 😉

Test the proxy configuration in an AKS cluster

Variables

export RESOURCE_GROUP=<your-resource-group>
export AKS_CLUSTER_NAME=<your-aks-cluster-name>
export MC_RESOURCE_GROUP_NAME=<your-mc-resource-group-name>
export VMSS_NAME=<your-vmss-name>
export HTTP_PROXYCONFIGURED="http://<your-http-proxy>:8080/"
export HTTPS_PROXYCONFIGURED="https://<your-https-proxy>:8080/"

Get the VMSS instance IDs

# To get the instance IDs of all the instances in the VMSS, use the following command:
az vmss list-instances --resource-group $MC_RESOURCE_GROUP_NAME --name $VMSS_NAME --output table --query "[].instanceId"

# For a single instance, you can get the last instance ID like this (tsv output is easier to parse in a script):
VMSS_INSTANCE_IDS=$(az vmss list-instances --resource-group $MC_RESOURCE_GROUP_NAME --name $VMSS_NAME --output tsv --query "[].instanceId" | tail -1)

Use an instance ID to test HTTPS connectivity through the HTTP proxy

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "curl --proxy $HTTP_PROXYCONFIGURED --head https://mcr.microsoft.com"

Use an instance ID to test HTTPS connectivity through the HTTPS proxy

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "curl --proxy $HTTPS_PROXYCONFIGURED --head https://mcr.microsoft.com"

Use an instance ID to test DNS functionality

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "dig mcr.microsoft.com 443"

Check the waagent logs

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "cat /var/log/waagent.log"

Check the waagent status

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "systemctl status waagent"

Update the proxy configuration

# Short form
az aks update -n $AKS_CLUSTER_NAME -g $RESOURCE_GROUP --http-proxy-config aks-proxy-config.json

# Equivalent long form
az aks update --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --http-proxy-config aks-proxy-config.json
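
For reference, aks-proxy-config.json follows the JSON schema documented for AKS HTTP proxy configuration; the values below are placeholders:

{
  "httpProxy": "http://<your-http-proxy>:8080/",
  "httpsProxy": "https://<your-https-proxy>:8080/",
  "noProxy": [
    "localhost",
    "127.0.0.1"
  ],
  "trustedCa": "<base64-encoded-CA-certificate>"
}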

Azure Virtual Network Manager: A Comprehensive Guide

Azure Virtual Network Manager is a powerful management service that allows you to group, configure, deploy, and manage virtual networks across subscriptions on a global scale. It provides the ability to define network groups for logical segmentation of your virtual networks. You can then establish connectivity and security configurations and apply them across all selected virtual networks in network groups simultaneously.

How Does Azure Virtual Network Manager Work?

The functionality of Azure Virtual Network Manager revolves around a well-defined process:

  1. Scope Definition: During the creation process, you determine the scope of what your Azure Virtual Network Manager will manage. The Network Manager only has delegated access to apply configurations within this defined scope boundary. Although you can directly define a scope on a list of subscriptions, it's recommended to use management groups for scope definition as they provide hierarchical organization to your subscriptions.

  2. Deployment of Configuration Types: After defining the scope, you deploy configuration types including Connectivity and SecurityAdmin rules for your Virtual Network Manager.

  3. Creation of Network Group: Post-deployment, you create a network group which acts as a logical container of networking resources for applying configurations at scale. You can manually select individual virtual networks to be added to your network group (static membership) or use Azure Policy to define conditions for dynamic group membership.

  4. Connectivity and Security Configurations: Next, you create connectivity and/or security configurations to be applied to those network groups based on your topology and security requirements. A connectivity configuration enables you to create a mesh or a hub-and-spoke network topology, while a security configuration lets you define a collection of rules that can be applied globally to one or more network groups.

  5. Deployment of Configurations: Once you've created your desired network groups and configurations, you can deploy the configurations to any region of your choosing.

Azure Virtual Network Manager can be deployed and managed through various platforms such as the Azure portal, Azure CLI, Azure PowerShell, or Terraform.
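
For example, creating an AVNM instance from the Azure CLI looks roughly like this (a sketch: the names and scope below are placeholders, and parameter syntax can vary between CLI versions):

az network manager create \
    --name myAVNM \
    --resource-group myResourceGroup \
    --location westeurope \
    --scope-accesses "Connectivity" "SecurityAdmin" \
    --network-manager-scopes subscriptions="/subscriptions/<subscription-id>"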

Key Benefits of Azure Virtual Network Manager

  • Centralized management of connectivity and security policies globally across regions and subscriptions.
  • Direct connectivity between spokes in a hub-and-spoke configuration without the complexity of managing a mesh network.
  • Highly scalable and highly available service with redundancy and replication across the globe.
  • Ability to create network security rules that override network security group rules.
  • Low latency and high bandwidth between resources in different virtual networks using virtual network peering.
  • Roll out network changes through a specific region sequence and frequency of your choosing.

Use Cases for Azure Virtual Network Manager

Azure Virtual Network Manager is a versatile tool that can be used in a variety of scenarios:

  1. Hub-and-Spoke Network Topology: Azure Virtual Network Manager is ideal for managing hub-and-spoke network topologies where you have a central hub virtual network that connects to multiple spoke virtual networks. You can easily create and manage these configurations at scale using Azure Virtual Network Manager.

  2. Global Connectivity and Security Policies: If you have a global presence with virtual networks deployed across multiple regions, Azure Virtual Network Manager allows you to define and apply connectivity and security policies globally, ensuring consistent network configurations across all regions.

  3. Network Segmentation and Isolation: Azure Virtual Network Manager enables you to segment and isolate your virtual networks based on your organizational requirements. You can create network groups and apply security configurations to enforce network isolation and access control.

  4. Centralized Network Management: For organizations with multiple subscriptions and virtual networks, Azure Virtual Network Manager provides a centralized management solution to manage network configurations, connectivity, and security policies across all subscriptions.

  5. Automated Network Configuration Deployment: By leveraging Azure Policy and Azure Virtual Network Manager, you can automate the deployment of network configurations based on predefined conditions, ensuring consistent network configurations and compliance across your Azure environment.

Example connectivity and security configurations enforcing a network security rule

In Azure Virtual Network Manager:

[Screenshot: the security admin rule as defined in Azure Virtual Network Manager]

In Azure Virtual Network:

[Screenshot: the resulting rule as seen from the Azure Virtual Network]

Example connectivity configuration enforcing a hub-and-spoke network topology

In Azure Virtual Network Manager:

[Screenshot: the hub-and-spoke connectivity configuration in Azure Virtual Network Manager]

In Azure Virtual Network:

[Screenshot: the resulting peerings as seen from the Azure Virtual Network]

Preview Features

At the time of writing, Azure Virtual Network Manager has some features in preview, and these may not be available in all regions. Some of the preview features include:

  • IP address management: Allows you to manage IP addresses by creating and assigning IP address pools to your virtual networks.
  • Virtual network verifier: Enables you to check whether your network policies allow or disallow traffic between your Azure network resources.
  • Routing configurations: The creation of routing configurations is also in preview, which is very interesting for managing traffic between the different networks.

Conclusion

Azure Virtual Network Manager is a powerful service that simplifies the management of virtual networks in Azure. By providing a centralized platform to define and apply connectivity and security configurations at scale, Azure Virtual Network Manager streamlines network management tasks and ensures consistent network configurations across your Azure environment.

For up-to-date information on the regions where Azure Virtual Network Manager is available, refer to Products available by region.

GTD in Outlook with Microsoft To Do

In this post, we will see how to use GTD in Outlook.

First of all, let's define what GTD is. GTD stands for Getting Things Done, a productivity method created by David Allen. The GTD method is based on the idea that you should get things out of your head and into a trusted system, so you can focus on what you need to do.

Important

The GTD method is not about doing more things faster. It's about doing the right things at the right time. This method needs to be aligned with your purpose, objectives, and goals, so you can focus on what really matters to you.

The GTD workflow

The GTD workflow consists of five steps:

  1. Capture: Collect everything that has your attention.
  2. Clarify: Process what you've captured.
  3. Organize: Put everything in its place.
  4. Reflect: Review your system regularly.
  5. Engage: Choose what to do and do it.

The detailed flowchart of the GTD method is shown below:

graph TD;
    subgraph Capture    
    subgraph for_each_item_not_in_Inbox
    AA[Collect everything that has your attention in the Inbox];
    end    
    A[Inbox]
    AA-->A
    end
    subgraph Clarify        
    subgraph for_each_item_in_Inbox
    BB[  What is this?
            What should I do?
            Is this really worth doing?
            How does it fit in with all the other things I have to do?
            Is there any action that needs to be done about it or because of it?]
    end
    end
    A --> BB
    subgraph Organize
    BB --> B{Is it actionable?}
    B -- no --> C{Is it reference material?}
    C -- yes --> D[Archive]
    C -- no --> E{Is it trash?}
    E -- yes --> FF[Trash it]
    E -- no --> GG[Incubate it]
    GG --> HH[Review it weekly]
    B -- yes --> F{Can it be done in one step?}
    F -- no --> G[Project]
    G --> HHH[Define a next action at least]
    HHH --> H[Project Actions]    
    end
    subgraph Engage
    H --> I
    F -- yes --> I{Does it take more than 2 minutes?}
    I -- no --> J[DO IT]
    I -- yes --> K{Is it my responsibility?}
    K -- no --> L[Delegate]
    L --> M[Waiting For]    
    K -- yes --> N{Does it have a specific date?}
    N -- yes --> O[Add to Calendar]
    N -- no --> P[Next Actions]
    O --> Q[Schedule. Do it at a specific time]
    P --> R[Do it as soon as possible]
    end

The review cadence (the Reflect step) is shown in a separate diagram:

graph TD;
subgraph Reflect
    S[**Daily Review**
            - Review your tasks daily
            - Calendar
            - Next  actions
            - Urgent tasks]
    T[**Weekly Review**
            - Review your projects weekly, make sure you're making progress
            - Next actions
            - Waiting for
            - Someday/Maybe
            - Calendar]
    U[**Monthly Review**
            - Focus on your goals
            - Reference archive
            - Someday/Maybe
            - Completed projects]
    V[**Review your purpose annually**
            - Goals and purposes
            - Big projects
            - Historical archive]
    end

It's important to note that the GTD method is not a one-size-fits-all solution. You can adapt it to your needs and preferences. The key is to find a system that works for you and stick to it.

And now, let's see how to use GTD in Outlook and Microsoft To Do.

How to use GTD in Outlook with Microsoft To Do

When it comes to implementing the GTD method in Outlook, the key is to use the right tools and techniques. Microsoft To Do is a great tool for managing your tasks and projects, and it integrates seamlessly with Outlook.

You can use Outlook to implement the GTD method by following these steps:

  1. Capture:
    • Emails: Use the Inbox to collect everything that has your attention.
    • Other things: Use the default Tasks list in Microsoft To Do to capture tasks and projects.
  2. Clarify: Process what you've captured by asking yourself the following questions:
    • What is this?
    • What should I do?
    • Is this really worth doing?
    • How does it fit in with all the other things I have to do?
    • Is there any action that needs to be done about it or because of it?
  3. Organize: Put everything in its place by following these steps:
    • Inbox:
      • Move emails to the appropriate folder or delete them.
      • Categories: Use categories to organize your emails by context, and folders to organize them by project or client.
      • Use search folders to find emails quickly by category; you can clear the categories after processing.
      • Flag emails to add them to To Do.
      • Create rules to automate repetitive tasks when clarifying a given type of email always leads to the same action.
    • Tasks: Organize your tasks and projects in Microsoft To Do.
      • Lists: Create lists for different types of tasks, one per context, or use #tags for contexts within a single list. For example:
        • In the case of lists: Agendas, Anywhere, Calls, Computed, Errands, Home, Office, Waiting For, Someday/Maybe.
        • In the case of tags, one list with: #Agendas, #Anywhere, #Calls, #Computed, #Errands, #Home, #Office, #WaitingFor, #SomedayMaybe.
      • Use tag #nextaction to identify the next task to do.
      • Use tag #urgent to identify urgent tasks.
    • Projects
      • Group Lists: Group lists by category of projects or client.
      • One list per project: Create a list for each project and add tasks to it.
      • Use #nextaction tag to identify the next task in each project.
    • Reference Material:
      • Store reference material in folders, ideally in OneDrive or SharePoint.
      • Use a folder structure to organize your reference material.
      • Use search folders to find it quickly.
      • Use tags to identify the context of the reference material. You can use FileMeta to add tags to files in Windows for file types that are not taggable.
  4. Reflect: Review your system regularly to make sure it's up to date.
    • Daily Review
    • Weekly Review
    • Monthly Review
    • Annual Review
  5. Engage: Choose what to do and do it.
    • Use the My Day bar to see your tasks and events at a glance, or type #nextaction in the search bar to see all your next actions.

These are just some ideas to get you started. You can adapt the GTD method to your needs and preferences. The key is to find a system that works for you and stick to it.

My example of GTD in Outlook with Microsoft To Do:

Outlook:

[Screenshot: my Outlook folder, category, and search-folder setup]

To Do:

[Screenshot: my Microsoft To Do lists and tags]

I'm using a mix of lists and tags with the same names to organize my tasks and projects. I have lists for different types of tasks, such as Agendas, Anywhere, Calls, Computed, Errands, Home, Office, Waiting For, and Someday/Maybe. I also use tags to identify the next action, urgent tasks, and the context of tasks within projects.

In the case of emails, I use categories to organize them by context and folders to organize them by project or client. I also use search folders to find emails quickly by category and to filter by unread. The reason for this is that I can clear the categories after processing, and in the majority of cases I only need a quick review of the emails without converting them into tasks.

By following these steps, you can implement the GTD method in Outlook and Microsoft To Do and improve your productivity and focus.

Good luck! 🍀


How to sign Git commits in Visual Studio Code in Windows Subsystem for Linux (WSL)

In this post, we will see how to sign Git commits in Visual Studio Code.

Prerequisites

  • Visual Studio Code
  • Git
  • gpg
  • gpg-agent
  • gpgconf
  • pinentry-gtk-2
  • Windows Subsystem for Linux (WSL) with Ubuntu 20.04

Steps

1. Install GPG

First, you need to install GPG and its agent tools. You can do this by running the following command:

sudo apt install gpg gpg-agent gpgconf pinentry-gtk2 -y

2. Generate a GPG key

To generate a GPG key, run the following command:

gpg --full-generate-key

You will be asked to enter your name, email, and passphrase. After that, the key will be generated.

3. List your GPG keys

To list your GPG keys, run the following command:

gpg --list-secret-keys --keyid-format LONG

You will see a list of your GPG keys. Copy the key ID of the key you want to use.
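
The output looks something like the following; the key ID is the hexadecimal value after rsa4096/ on the sec line (all values here are made up):

sec   rsa4096/3AA5C34371567BD2 2024-01-02 [SC]
      1A2B3C4D5E6F7A8B9C0D1E2F3A4B5C6D7E8F9A0B
uid                 [ultimate] Petete <petete@something.es>
ssb   rsa4096/42B317FD4BA89E7A 2024-01-02 [E]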

4. Configure Git to use your GPG key

To configure Git to use your GPG key, run the following command:

git config --global user.signingkey YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

5. Configure Git to sign commits by default

To configure Git to sign commits by default, run the following command:

git config --global commit.gpgsign true
git config --global gpg.program $(which gpg)

6. Export the GPG key

To export the GPG key, run the following command:

gpg --armor --export YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

7. Import the key to GitHub

Go to your GitHub account, open the GPG keys section in the settings, create a new GPG key, and paste the exported key.

Configure Visual Studio Code to use GPG

1. Configure gpg-agent

To configure gpg-agent, run the following command:

echo "default-cache-ttl" >> ~/.gnupg/gpg-agent.conf
echo "pinentry-program /usr/bin/pinentry-gtk-2" >> ~/.gnupg/gpg-agent.conf
echo "allow-preset-passphrase" >> ~/.gnupg/gpg-agent.conf

2. Restart the gpg-agent

To restart the gpg-agent, run the following command:

gpgconf --kill gpg-agent
gpgconf --launch gpg-agent

3. Sign a commit

To sign a commit, run the following command:

git commit -S -m "Your commit message"

4. Verify the signature

To verify the signature of a commit, run the following command:

git verify-commit HEAD
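
You can also inspect the signatures of recent commits directly from the log:

git log --show-signature -1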

5. Configure Visual Studio Code to use GPG

To configure Visual Studio Code to use GPG, open the settings by pressing Ctrl + , and search for git.enableCommitSigning. Set the value to true.

6. Sign a commit

Make a commit in Visual Studio Code, and you will see a prompt asking you to enter your GPG passphrase. Enter your passphrase, and the commit will be signed.

That's it! Now you know how to sign Git commits in Visual Studio Code.

Some tips

For all repositories

  • Set your email in the Git configuration:
git config --global user.email "petete@something.es"
  • Set your name in the Git configuration:
git config --global user.name "Petete"
  • Set your GPG key in the Git configuration:
git config --global user.signingkey YOUR_KEY_ID
  • Set your GPG program in the Git configuration:
git config --global gpg.program $(which gpg)

For a specific repository

  • Set your email in the Git configuration:
git config user.email "petete@something.es"
  • Set your name in the Git configuration:
git config user.name "Petete"
  • Set your GPG key in the Git configuration:
git config user.signingkey YOUR_KEY_ID
  • Set your GPG program in the Git configuration:
git config gpg.program $(which gpg)

Conclusion

In this post, we saw how to sign Git commits in Visual Studio Code. This is useful if you want to verify the authenticity of your commits. I hope you found this post helpful. If you have any questions or comments, please let me know. Thank you for reading!

Running Terraform with variable files

Sometimes when working with Terraform we need to manage multiple variable files for different environments or configurations. In this post, I show you a simple way to run Terraform with variable files.

Terraform and variable files

Terraform can load variables from .tfvars files through the --var-file option. For example, suppose we have a variables.tf file with the following definition:

variable "region" {
  type    = string
  default = "westeurope"
}

variable "resource_group_name" {
  type = string
}

We can create a variables.tfvars file with the variable values:

region = "westeurope"
resource_group_name = "my-rg"

And run Terraform with the variable file:

terraform plan --var-file variables.tfvars

Running Terraform with multiple variable files

If we have multiple variable files, we can still run Terraform with all of them in a simple way: a script can look for the .tfvars files in a directory and run Terraform with them.

The catch when running Terraform with multiple variable files is that the --var-file option does not accept an array of files; the flag has to be repeated once per file, so building the Terraform command by hand can be a bit tedious.
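
Repeating the flag by hand looks like this (file names are illustrative):

terraform plan --var-file common.tfvars --var-file dev.tfvars

The function below simply builds this repetition for every .tfvars file it finds in the directory.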

Below is an example of a Bash/pwsh function that runs Terraform with variable files:

Example

terraform_with_var_files.sh
function terraform_with_var_files() {
  if [[ "$1" == "--help" || "$1" == "-h" ]]; then
    echo "Usage: terraform_with_var_files [OPTIONS]"
    echo "Options:"
    echo "  --dir DIR                Specify the directory containing .tfvars files"
    echo "  --action ACTION          Specify the Terraform action (plan, apply, destroy, import)"
    echo "  --auto AUTO              Specify 'auto' for auto-approve (optional)"
    echo "  --resource_address ADDR  Specify the resource address for import action (optional)"
    echo "  --resource_id ID         Specify the resource ID for import action (optional)"
    echo "  --workspace WORKSPACE    Specify the Terraform workspace (default: default)"
    echo "  --help, -h               Show this help message"
    return 0
  fi

  local dir=""
  local action=""
  local auto=""
  local resource_address=""
  local resource_id=""
  local workspace="default"

  while [[ "$#" -gt 0 ]]; do
    case "$1" in
      --dir) dir="$2"; shift ;;
      --action) action="$2"; shift ;;
      --auto) auto="$2"; shift ;;
      --resource_address) resource_address="$2"; shift ;;
      --resource_id) resource_id="$2"; shift ;;
      --workspace) workspace="$2"; shift ;;
      *) echo "Unknown parameter passed: $1"; return 1 ;;
    esac
    shift
  done

  if [[ ! -d "$dir" ]]; then
    echo "El directorio especificado no existe."
    return 1
  fi

  if [[ "$action" != "plan" && "$action" != "apply" && "$action" != "destroy" && "$action" != "import" ]]; then
    echo "Acción no válida. Usa 'plan', 'apply', 'destroy' o 'import'."
    return 1
  fi

  local var_files=()
  for file in "$dir"/*.tfvars; do
    if [[ -f "$file" ]]; then
      var_files+=("--var-file $file")
    fi
  done

  if [[ ${#var_files[@]} -eq 0 ]]; then
    echo "No se encontraron archivos .tfvars en el directorio especificado."
    return 1
  fi

  echo "Inicializando Terraform..."
  eval terraform init
  if [[ $? -ne 0 ]]; then
    echo "La inicialización de Terraform falló."
    return 1
  fi

  echo "Seleccionando el workspace: $workspace"
  eval terraform workspace select "$workspace" || eval terraform workspace new "$workspace"
  if [[ $? -ne 0 ]]; then
    echo "La selección del workspace falló."
    return 1
  fi

  echo "Validando la configuración de Terraform..."
  eval terraform validate
  if [[ $? -ne 0 ]]; then
    echo "La validación de Terraform falló."
    return 1
  fi

  local command="terraform $action ${var_files[@]}"

  if [[ "$action" == "import" ]]; then
    if [[ -z "$resource_address" || -z "$resource_id" ]]; then
      echo "Para la acción 'import', se deben proporcionar la dirección del recurso y el ID del recurso."
      return 1
    fi
    command="terraform $action ${var_files[@]} $resource_address $resource_id"
  elif [[ "$auto" == "auto" && ( "$action" == "apply" || "$action" == "destroy" ) ]]; then
    command="$command -auto-approve"
  fi

  echo "Ejecutando: $command"
  eval "$command"
}

# Usage examples:
# terraform_with_var_files --dir "/path/to/directory" --action "plan" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "apply" --auto "auto" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "destroy" --auto "auto" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "import" --resource_address "resource_address" --resource_id "resource_id" --workspace "workspace"

terraform_with_var_files.ps1
function Terraform-WithVarFiles {
    param (
        [Parameter(Mandatory=$false)]
        [string]$Dir,

        [Parameter(Mandatory=$false)]
        [string]$Action,

        [Parameter(Mandatory=$false)]
        [string]$Auto,

        [Parameter(Mandatory=$false)]
        [string]$ResourceAddress,

        [Parameter(Mandatory=$false)]
        [string]$ResourceId,

        [Parameter(Mandatory=$false)]
        [string]$Workspace = "default",

        [switch]$Help
    )

    if ($Help) {
        Write-Output "Usage: Terraform-WithVarFiles [OPTIONS]"
        Write-Output "Options:"
        Write-Output "  -Dir DIR                Specify the directory containing .tfvars files"
        Write-Output "  -Action ACTION          Specify the Terraform action (plan, apply, destroy, import)"
        Write-Output "  -Auto AUTO              Specify 'auto' for auto-approve (optional)"
        Write-Output "  -ResourceAddress ADDR   Specify the resource address for import action (optional)"
        Write-Output "  -ResourceId ID          Specify the resource ID for import action (optional)"
        Write-Output "  -Workspace WORKSPACE    Specify the Terraform workspace (default: default)"
        Write-Output "  -Help                   Show this help message"
        return
    }

    if (-not (Test-Path -Path $Dir -PathType Container)) {
        Write-Error "The specified directory does not exist."
        return
    }

    if ($Action -notin @("plan", "apply", "destroy", "import")) {
        Write-Error "Invalid action. Use 'plan', 'apply', 'destroy', or 'import'."
        return
    }

    $varFiles = Get-ChildItem -Path $Dir -Filter *.tfvars | ForEach-Object { "--var-file $($_.FullName)" }

    if ($varFiles.Count -eq 0) {
        Write-Error "No .tfvars files found in the specified directory."
        return
    }

    Write-Output "Initializing Terraform..."
    terraform init
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Terraform initialization failed."
        return
    }

    Write-Output "Selecting the workspace: $Workspace"
    terraform workspace select -or-create $Workspace
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Workspace selection failed."
        return
    }

    Write-Output "Validating Terraform configuration..."
    terraform validate
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Terraform validation failed."
        return
    }

    $command = "terraform $Action $($varFiles -join ' ')"

    if ($Action -eq "import") {
        if (-not $ResourceAddress -or -not $ResourceId) {
            Write-Error "For 'import' action, both resource address and resource ID must be provided."
            return
        }
        $command = "$command $ResourceAddress $ResourceId"
    } elseif ($Auto -eq "auto" -and ($Action -eq "apply" -or $Action -eq "destroy")) {
        $command = "$command -auto-approve"
    }

    Write-Output "Executing: $command"
    Invoke-Expression $command
}

# Usage examples:
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "plan" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "apply" -Auto "auto" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "destroy" -Auto "auto" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "import" -ResourceAddress "resource_address" -ResourceId "resource_id" -Workspace "workspace"

To load the function in your Bash terminal, copy and paste the script into your .bashrc, .zshrc, or whichever file applies, and reload your terminal.

To load the function in your pwsh terminal, follow this article: Customizing your shell environment

I hope you find it useful. Cheers!

Develop my first policy for Kubernetes with minikube and gatekeeper

Now that we have our development environment, we can start developing our first policy for Kubernetes with minikube and gatekeeper.

First of all, we need a code editor to write our policy. I recommend Visual Studio Code, but you can use any other editor. There is a Visual Studio Code extension that helps you write policies for gatekeeper; you can install it from the marketplace: Open Policy Agent.

Once you have your editor ready, you can start writing your policy. In this example, we will create a policy that denies the creation of pods with the image nginx:latest.

For that we need two files:

  • constraint.yaml: This file defines the constraint that we want to apply.
  • constraint_template.yaml: This file defines the template that we will use to create the constraint.

Let's start with the constraint_template.yaml file:

constraint_template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenypodswithnginxlatest
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPodsWithNginxLatest
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenypodswithnginxlatest

        violation[{"msg": msg}] {
          input.review.object.spec.containers[_].image == "nginx:latest"
          msg := "Containers cannot use the nginx:latest image"
        }
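
Before loading the template into the cluster, you can exercise the rego logic locally with the opa CLI, if you have it installed. This is a sketch: policy.rego is assumed to contain the rego block from the template above, and input.json mimics the shape of the review object gatekeeper hands to the policy.

# input.json mimics the review object gatekeeper passes to the policy
cat <<EOF > input.json
{"review": {"object": {"spec": {"containers": [{"image": "nginx:latest"}]}}}}
EOF

# Evaluate the violation rule against the input
opa eval --data policy.rego --input input.json "data.k8sdenypodswithnginxlatest.violation"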

Now, let's create the constraint.yaml file:

constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPodsWithNginxLatest
metadata:
  name: deny-pods-with-nginx-latest
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    msg: "Containers cannot use the nginx:latest image"

Now, we can apply the files to our cluster:

# Create the constraint template
kubectl apply -f constraint_template.yaml

# Create the constraint
kubectl apply -f constraint.yaml

Now, we can test the constraint. Let's create a pod with the image nginx:latest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
EOF

We should see an error message like this:

Error from server (Forbidden): error when creating "STDIN": admission webhook "validation.gatekeeper.sh" denied the request: [k8sdenypodswithnginxlatest] Containers cannot use the nginx:latest image

Now, let's create a pod with a different image:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25.5
EOF

We should see a message like this:

pod/nginx-pod created

To clean up, you can delete the pod, the constraint, and the constraint template:

# Delete the pod
kubectl delete pod nginx-pod
# Delete the constraint
kubectl delete -f constraint.yaml

# Delete the constraint template
kubectl delete -f constraint_template.yaml

And that's it! We have developed our first policy for Kubernetes with minikube and gatekeeper. Now you can start developing more complex policies and test them in your cluster.

Happy coding!

How to create a local environment to write policies for Kubernetes with minikube and gatekeeper

minikube in WSL2

Enable systemd in WSL2

sudo nano /etc/wsl.conf

Add the following:

[boot]
systemd=true

Restart WSL2 from the command line:

wsl --shutdown
wsl

Install Docker

Install Docker using the repository.

Minikube

Install minikube

# Download the latest Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Make it executable
chmod +x ./minikube

# Move it to your user's executable PATH
sudo mv ./minikube /usr/local/bin/

# Set the default driver to Docker
minikube config set driver docker

Test minikube

# Enable completion
source <(minikube completion bash)
# Start minikube
minikube start
# Check the status
minikube status
# set context
kubectl config use-context minikube
# get pods
kubectl get pods --all-namespaces

Install OPA Gatekeeper

# Install OPA Gatekeeper
# check version in https://open-policy-agent.github.io/gatekeeper/website/docs/install#deploying-a-release-using-prebuilt-image
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/deploy/gatekeeper.yaml

# wait and check the status
sleep 60
kubectl get pods -n gatekeeper-system

Test constraints

First, we need to create a constraint template and a constraint.

# Create a constraint template
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/templates/k8srequiredlabels_template.yaml

# Create a constraint
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/constraints/k8srequiredlabels_constraint.yaml

Now, we can test the constraint.

# Create a namespace without the required label
kubectl create namespace petete

We should see an error message like this:

Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeeper"}

# Create a namespace with the required label
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: petete
  labels:
    gatekeeper: "true"
EOF
kubectl get namespaces petete

We should see a message like this:

NAME     STATUS   AGE
petete   Active   3s

Conclusion

We have created a local environment to write policies for Kubernetes with minikube and gatekeeper. We have tested the environment with a simple constraint. Now we can write our own policies and test them in our local environment.


Trigger an on-demand Azure Policy compliance evaluation scan

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to trigger a scan with Azure Policy.

What is a scan in Azure Policy

A scan in Azure Policy is a process that evaluates your resources against a set of policies to determine if they are compliant. When you trigger a scan, Azure Policy evaluates your resources and generates a compliance report that shows the results of the evaluation. The compliance report includes information about the policies that were evaluated, the resources that were scanned, and the compliance status of each resource.

You can trigger a scan in Azure Policy using the Azure CLI, PowerShell, or the Azure portal. When you trigger a scan, you can specify the scope of the scan, the policies to evaluate, and other parameters that control the behavior of the scan.

Trigger a scan with the Azure CLI

To trigger a scan with the Azure CLI, you can use the az policy state trigger-scan command. This command triggers a policy compliance evaluation for a scope.

How to trigger a scan with the Azure CLI for the active subscription:

az policy state trigger-scan 

How to trigger a scan with the Azure CLI for a specific resource group:

az policy state trigger-scan --resource-group myResourceGroup
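
Once the evaluation has finished, you can summarize the compliance results for the same scope:

az policy state summarize --resource-group myResourceGroup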

Trigger a scan with PowerShell

To trigger a scan with PowerShell, you can use the Start-AzPolicyComplianceScan cmdlet. This cmdlet triggers a policy compliance evaluation for a scope.

How to trigger a scan with PowerShell for the active subscription:

Start-AzPolicyComplianceScan
$job = Start-AzPolicyComplianceScan -AsJob

How to trigger a scan with PowerShell for a specific resource group:

Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG'
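
When you use -AsJob, you can wait for the evaluation to finish and then query the resulting compliance states with the Az.PolicyInsights cmdlets (a sketch; -Top just limits the output):

$job = Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG' -AsJob
$job | Wait-Job
Get-AzPolicyState -ResourceGroupName 'MyRG' -Top 5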

Conclusion

In this article, we discussed how to trigger a scan with Azure Policy. We covered how to trigger a scan using the Azure CLI and PowerShell. By triggering a scan, you can evaluate your resources against a set of policies to determine if they are compliant. This can help you ensure that your resources are compliant with your organization's standards and best practices.
