
Test the proxy configuration in an AKS cluster

Variables

export RESOURCE_GROUP=<your-resource-group>
export AKS_CLUSTER_NAME=<your-aks-cluster-name>
export MC_RESOURCE_GROUP_NAME=<your-mc-resource-group-name>
export VMSS_NAME=<your-vmss-name>
export HTTP_PROXYCONFIGURED="http://<your-http-proxy>:8080/"
export HTTPS_PROXYCONFIGURED="https://<your-https-proxy>:8080/"

Get the VMSS instance IDs

# To get the instance IDs of all the instances in the VMSS, use the following command:
az vmss list-instances --resource-group $MC_RESOURCE_GROUP_NAME --name $VMSS_NAME --output table --query "[].instanceId"

# For a single instance, you can get the last instance ID like this:
VMSS_INSTANCE_IDS=$(az vmss list-instances --resource-group $MC_RESOURCE_GROUP_NAME --name $VMSS_NAME --query "[-1].instanceId" --output tsv)
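
The tests below target a single instance. If you want to run the same check against every instance, a small dry-run helper can print one az command per instance ID (the function name is my own invention; it only prints the commands so you can review them before running):

```shell
# Hypothetical helper: print the proxy-test command for each VMSS instance ID.
# Assumes MC_RESOURCE_GROUP_NAME, VMSS_NAME and HTTP_PROXYCONFIGURED are exported.
print_proxy_tests() {
  local id
  for id in $1; do   # $1: whitespace-separated instance IDs
    echo "az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME --name $VMSS_NAME --command-id RunShellScript --instance-id $id --scripts \"curl --proxy $HTTP_PROXYCONFIGURED --head https://mcr.microsoft.com\""
  done
}
```

Pipe the printed commands to `bash` (or drop the `echo`) to actually execute them.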

Use an instance ID to test HTTPS connectivity through the HTTP proxy

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "curl --proxy $HTTP_PROXYCONFIGURED --head https://mcr.microsoft.com"
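
The run-command output embeds curl's response headers. A tiny helper (my own sketch, not part of az) can check that the status line indicates success:

```shell
# Succeed if the curl --head output on stdin contains a 2xx or 3xx HTTP status line.
head_request_ok() {
  grep -qE '^HTTP/[0-9.]+[[:space:]]+[23][0-9][0-9]'
}
```

Usage: `printf 'HTTP/2 200\n' | head_request_ok && echo "proxy OK"`.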

Use an instance ID to test HTTPS connectivity through the HTTPS proxy

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "curl --proxy $HTTPS_PROXYCONFIGURED --head https://mcr.microsoft.com"

Use an instance ID to test DNS functionality

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "dig mcr.microsoft.com"

Check the waagent logs

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "cat /var/log/waagent.log"

Check the waagent status

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "systemctl status waagent"

Update the proxy configuration

az aks update --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --http-proxy-config aks-proxy-config.json

Azure Virtual Network Manager: A Comprehensive Guide

Azure Virtual Network Manager is a powerful management service that allows you to group, configure, deploy, and manage virtual networks across subscriptions on a global scale. It provides the ability to define network groups for logical segmentation of your virtual networks. You can then establish connectivity and security configurations and apply them across all selected virtual networks in network groups simultaneously.

How Does Azure Virtual Network Manager Work?

The functionality of Azure Virtual Network Manager revolves around a well-defined process:

  1. Scope Definition: During the creation process, you determine the scope of what your Azure Virtual Network Manager will manage. The Network Manager only has delegated access to apply configurations within this defined scope boundary. Although you can directly define a scope on a list of subscriptions, it's recommended to use management groups for scope definition as they provide hierarchical organization to your subscriptions.

  2. Deployment of Configuration Types: After defining the scope, you deploy configuration types including Connectivity and SecurityAdmin rules for your Virtual Network Manager.

  3. Creation of Network Group: Post-deployment, you create a network group which acts as a logical container of networking resources for applying configurations at scale. You can manually select individual virtual networks to be added to your network group (static membership) or use Azure Policy to define conditions for dynamic group membership.

  4. Connectivity and Security Configurations: Next, you create connectivity and/or security configurations to be applied to those network groups based on your topology and security requirements. A connectivity configuration enables you to create a mesh or a hub-and-spoke network topology, while a security configuration lets you define a collection of rules that can be applied globally to one or more network groups.

  5. Deployment of Configurations: Once you've created your desired network groups and configurations, you can deploy the configurations to any region of your choosing.

Azure Virtual Network Manager can be deployed and managed through various platforms such as the Azure portal, Azure CLI, Azure PowerShell, or Terraform.

Key Benefits of Azure Virtual Network Manager

  • Centralized management of connectivity and security policies globally across regions and subscriptions.
  • Direct connectivity between spokes in a hub-and-spoke configuration without the complexity of managing a mesh network.
  • Highly scalable and highly available service with redundancy and replication across the globe.
  • Ability to create network security rules that override network security group rules.
  • Low latency and high bandwidth between resources in different virtual networks using virtual network peering.
  • Roll out network changes through a specific region sequence and frequency of your choosing.

Use Cases for Azure Virtual Network Manager

Azure Virtual Network Manager is a versatile tool that can be used in a variety of scenarios:

  1. Hub-and-Spoke Network Topology: Azure Virtual Network Manager is ideal for managing hub-and-spoke network topologies where you have a central hub virtual network that connects to multiple spoke virtual networks. You can easily create and manage these configurations at scale using Azure Virtual Network Manager.

  2. Global Connectivity and Security Policies: If you have a global presence with virtual networks deployed across multiple regions, Azure Virtual Network Manager allows you to define and apply connectivity and security policies globally, ensuring consistent network configurations across all regions.

  3. Network Segmentation and Isolation: Azure Virtual Network Manager enables you to segment and isolate your virtual networks based on your organizational requirements. You can create network groups and apply security configurations to enforce network isolation and access control.

  4. Centralized Network Management: For organizations with multiple subscriptions and virtual networks, Azure Virtual Network Manager provides a centralized management solution to manage network configurations, connectivity, and security policies across all subscriptions.

  5. Automated Network Configuration Deployment: By leveraging Azure Policy and Azure Virtual Network Manager, you can automate the deployment of network configurations based on predefined conditions, ensuring consistent network configurations and compliance across your Azure environment.

Example connectivity and security configurations enforcing a network security rule

In Azure Virtual Network Manager:

(screenshot)

In Azure Virtual Network:

(screenshot)

Example Connectivity Configuration enforcing a Hub-and-Spoke Network Topology

In Azure Virtual Network Manager:

(screenshot)

In Azure Virtual Network:

(screenshot)

Preview Features

At the time of writing, Azure Virtual Network Manager has some features in preview, which may not be available in all regions. Some of the preview features include:

  • IP address management: lets you manage IP addresses by creating and assigning IP address pools to your virtual networks.
  • Virtual network verifier: lets you check whether your network policies allow or deny traffic between your Azure network resources.
  • Routing configurations: the creation of routing configurations is also in preview; it is very useful for managing traffic between the different networks.

Conclusion

Azure Virtual Network Manager is a powerful service that simplifies the management of virtual networks in Azure. By providing a centralized platform to define and apply connectivity and security configurations at scale, Azure Virtual Network Manager streamlines network management tasks and ensures consistent network configurations across your Azure environment.

For up-to-date information on the regions where Azure Virtual Network Manager is available, refer to Products available by region.

GTD in Outlook with Microsoft To Do

In this post, we will see how to use GTD in Outlook.

First of all, let's define what GTD is. GTD stands for Getting Things Done, a productivity method created by David Allen. The GTD method is based on the idea that you should get things out of your head and into a trusted system, so you can focus on what you need to do.

Important

The GTD method is not about doing more things faster. It's about doing the right things at the right time. This method needs to be aligned with your purpose, objectives, and goals, so you can focus on what really matters to you.

The GTD workflow

The GTD workflow consists of five steps:

  1. Capture: Collect everything that has your attention.
  2. Clarify: Process what you've captured.
  3. Organize: Put everything in its place.
  4. Reflect: Review your system regularly.
  5. Engage: Choose what to do and do it.

The detailed flowchart of the GTD method is shown below:

graph TD;
    subgraph Capture    
    subgraph for_each_item_not_in_Inbox
    AA[Collect everything that has your attention in the Inbox];
    end    
    A[Inbox]
    AA-->A
    end
    subgraph Clarify        
    subgraph for_each_item_in_Inbox
    BB[  What is this?
            What should I do?
            Is this really worth doing?
            How does it fit in with all the other things I have to do?
            Is there any action that needs to be done about it or because of it?]
    end
    end
    A --> BB
    subgraph Organize
    BB --> B{Is it actionable?}
    B -- no --> C{Is it reference material?}
    C -- yes --> D[Archive]
    C -- no --> E{Is it trash?}
    E -- yes --> FF[Trash it]
    E -- no --> GG[Incubate it]
    GG --> HH[Review it weekly]
    B -- yes --> F{Can it be done in one step?}
    F -- no --> G[Project]
    G --> HHH[Define a next action at least]
    HHH --> H[Project Actions]    
    end
    subgraph Engage
    H --> I
    F -- yes --> I{Does it take more than 2 minutes?}
    I -- no --> J[DO IT]
    I -- yes --> K{Is it my responsibility?}
    K -- no --> L[Delegate]
    L --> M[Waiting For]    
    K -- yes --> N{Does it have a specific date?}
    N -- yes --> O[Add to Calendar]
    N -- no --> P[Next Actions]
    O --> Q[Schedule. Do it at a specific time]
    P --> R[Do it as soon as possible]
    end
graph TD;
subgraph Reflect
    S[**Daily Review**
            - Review your tasks daily
            - Calendar
            - Next  actions
            - Urgent tasks]
    T[**Weekly Review**
            - Review your projects weekly, make sure you're making progress
            - Next actions
            - Waiting for
            - Someday/Maybe
            - Calendar]
    U[**Monthly Review**
            - Focus on your goals
            - Reference archive
            - Someday/Maybe
            - Completed projects]
    V[**Review your purpose annually**
            - Goals and purposes
            - Big projects
            - Historical archive]
    end

It's important to note that the GTD method is not a one-size-fits-all solution. You can adapt it to your needs and preferences. The key is to find a system that works for you and stick to it.

And now, let's see how to use GTD in Outlook and Microsoft To Do.

How to use GTD in Outlook with Microsoft To Do

When it comes to implementing the GTD method in Outlook, the key is to use the right tools and techniques. Microsoft To Do is a great tool for managing your tasks and projects, and it integrates seamlessly with Outlook.

You can use Outlook to implement the GTD method by following these steps:

  1. Capture:
    • Emails: Use the Inbox to collect everything that has your attention.
    • Other things: Use the Microsoft To Do Tasks default list to capture tasks and projects.
  2. Clarify: Process what you've captured by asking yourself the following questions:
    • What is this?
    • What should I do?
    • Is this really worth doing?
    • How does it fit in with all the other things I have to do?
    • Is there any action that needs to be done about it or because of it?
  3. Organize: Put everything in its place by following these steps:
    • Inbox:
      • Move emails to the appropriate folder or delete them.
      • Categories: Use categories to organize your emails by context, and folders to organize them by project or client.
      • Use search folders to find emails quickly by category; you can clear the categories after processing.
      • Flag emails to add them to To Do.
      • Create rules to automate repetitive tasks when one type of email always requires the same action after clarifying.
    • Tasks: Organize your tasks and projects in Microsoft To Do.
      • Lists: Create lists for different types of tasks, one per context, or use #tags for that in a single list. For example:
        • In the case of lists: Agendas, Anywhere, Calls, Computer, Errands, Home, Office, Waiting For, Someday/Maybe.
        • In the case of tags, one list with: #Agendas, #Anywhere, #Calls, #Computer, #Errands, #Home, #Office, #WaitingFor, #SomedayMaybe.
      • Use the #nextaction tag to identify the next task to do.
      • Use the #urgent tag to identify urgent tasks.
    • Projects
      • Group Lists: Group lists by category of projects or client.
      • One list per project: Create a list for each project and add tasks to it.
      • Use #nextaction tag to identify the next task in each project.
    • Reference Material:
      • Store reference material in folders, ideally in OneDrive or SharePoint.
      • Use a folder structure to organize your reference material.
      • Use search folders to find it quickly.
      • Use tags to identify the context of the reference material. You can use FileMeta to add tags to non-taggable files in Windows.
  4. Reflect: Review your system regularly to make sure it's up to date.
    • Daily Review
    • Weekly Review
    • Monthly Review
    • Annual Review
  5. Engage: Choose what to do and do it.
    • Use the My Day bar to see your tasks and events at a glance, or type #nextaction in the search bar to see all your next actions.

These are just some ideas to get you started. You can adapt the GTD method to your needs and preferences. The key is to find a system that works for you and stick to it.

My example of GTD in Outlook with Microsoft To Do:

Outlook:

(screenshot)

To Do:

(screenshot)

I'm using a mix of lists and tags with the same names to organize my tasks and projects. I have lists for different types of tasks, such as Agendas, Anywhere, Calls, Computer, Errands, Home, Office, Waiting For, and Someday/Maybe. I also use tags to identify the next action, urgent tasks, and the context of tasks within projects.

In the case of emails, I use categories to organize them by context and folders to organize them by project or client. I also use search folders to find emails quickly by category and to filter by unread messages. The reason for this is that I can clear the categories after processing, and in the majority of cases I only need a quick review of the emails without converting them into tasks.

By following these steps, you can implement the GTD method in Outlook and Microsoft To Do and improve your productivity and focus.

Good luck! 🍀

How to sign Git commits in Visual Studio Code in Windows Subsystem for Linux (WSL)

In this post, we will see how to sign Git commits in Visual Studio Code.

Prerequisites

  • Visual Studio Code
  • Git
  • gpg
  • gpg-agent
  • gpgconf
  • pinentry-gtk-2
  • Windows Subsystem for Linux (WSL) with Ubuntu 20.04

Steps

1. Install GPG

First, you need to install GPG and its agent tools. You can do this by running the following command:

sudo apt install gpg gpg-agent gpgconf pinentry-gtk2 -y

2. Generate a GPG key

To generate a GPG key, run the following command:

gpg --full-generate-key

You will be asked to enter your name, email, and passphrase. After that, the key will be generated.

3. List your GPG keys

To list your GPG keys, run the following command:

gpg --list-secret-keys --keyid-format LONG

You will see a list of your GPG keys. Copy the key ID of the key you want to use.
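
To grab that key ID in a script, here is a small parsing helper (a sketch of my own) that reads the gpg listing from stdin:

```shell
# Extract the long key ID from `gpg --list-secret-keys --keyid-format LONG` output.
# The ID is the part after the slash on the `sec` line, e.g. rsa4096/3AA5C34371567BD2.
extract_keyid() {
  awk '/^sec/ {split($2, a, "/"); print a[2]; exit}'
}
```

Usage: `KEY_ID=$(gpg --list-secret-keys --keyid-format LONG | extract_keyid)`.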

4. Configure Git to use your GPG key

To configure Git to use your GPG key, run the following command:

git config --global user.signingkey YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

5. Configure Git to sign commits by default

To configure Git to sign commits by default, run the following command:

git config --global commit.gpgsign true
git config --global gpg.program "$(which gpg)"

6. Export the GPG key

To export the GPG key, run the following command:

gpg --armor --export YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

7. Add the key to GitHub

Go to your GitHub account, open the GPG keys section of your settings, create a new GPG key, and paste the exported key.

Configure Visual Studio Code to use GPG

1. Configure gpg-agent

To configure gpg-agent, run the following commands:

# default-cache-ttl requires a value in seconds (600 here; adjust to taste)
echo "default-cache-ttl 600" >> ~/.gnupg/gpg-agent.conf
echo "pinentry-program /usr/bin/pinentry-gtk-2" >> ~/.gnupg/gpg-agent.conf
echo "allow-preset-passphrase" >> ~/.gnupg/gpg-agent.conf

2. Restart the gpg-agent

To restart the gpg-agent, run the following commands:

gpgconf --kill gpg-agent
gpgconf --launch gpg-agent

3. Sign a commit

To sign a commit, run the following command:

git commit -S -m "Your commit message"

4. Verify the signature

To verify the signature of a commit, run the following command:

git verify-commit HEAD

5. Configure Visual Studio Code to use GPG

To configure Visual Studio Code to use GPG, open the settings by pressing Ctrl + , and search for git.enableCommitSigning. Set the value to true.

6. Sign a commit

Make a commit in Visual Studio Code, and you will see a prompt asking you to enter your GPG passphrase. Enter it, and the commit will be signed.

That's it! Now you know how to sign Git commits in Visual Studio Code.

Some tips

For all repositories

  • Set your email in the git configuration:
git config --global user.email "petete@something.es"
  • Set your name in the git configuration:
git config --global user.name "Petete"
  • Set your GPG key in the git configuration:
git config --global user.signingkey YOUR_KEY_ID
  • Set your GPG program in the git configuration:
git config --global gpg.program "$(which gpg)"

For a specific repository

  • Set your email in the git configuration:
git config user.email "petete@something.es"
  • Set your name in the git configuration:
git config user.name "Petete"
  • Set your GPG key in the git configuration:
git config user.signingkey YOUR_KEY_ID
  • Set your GPG program in the git configuration:
git config gpg.program "$(which gpg)"

Conclusion

In this post, we saw how to sign Git commits in Visual Studio Code. This is useful if you want to verify the authenticity of your commits. I hope you found this post helpful. If you have any questions or comments, please let me know. Thank you for reading!

Running Terraform with variable files

Sometimes, when working with Terraform, we need to manage multiple variable files for different environments or configurations. In this post, I show you a simple way to run Terraform with variable files.

Terraform and variable files

Terraform can load variables from .tfvars files using the --var-file option. For example, if we have a variables.tf file with the following definition:

variable "region" {
  type    = string
  default = "westeurope"
}

variable "resource_group_name" {
  type = string
}

We can create a variables.tfvars file with the variable values:

region = "westeurope"
resource_group_name = "my-rg"

And run Terraform with the variable file:

terraform plan --var-file variables.tfvars

Running Terraform with multiple variable files

If we have multiple variable files, we can run Terraform with all of them in one go. To do so, we can create a script that finds the .tfvars files in a directory and runs Terraform with them.

The problem with running Terraform with multiple variable files is that the --var-file option does not accept an array of files, so we need to build the Terraform command with one --var-file flag per file, which can be a bit tedious.
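
The core of the trick can be sketched in a few lines of Bash (the helper name is my own; the full function shown next adds workspace handling and validation). It only prints the command so you can inspect it first:

```shell
# Build one --var-file flag per .tfvars file in a directory and print
# the terraform command that would run (drop the echo to execute it).
tf_plan_with_all_var_files() {
  local dir="$1" f args=()
  for f in "$dir"/*.tfvars; do
    [ -f "$f" ] && args+=("--var-file=$f")
  done
  echo terraform plan "${args[@]}"
}
```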

Below is an example of a Bash/pwsh function that runs Terraform with variable files:

Example

terraform_with_var_files.sh
function terraform_with_var_files() {
  if [[ "$1" == "--help" || "$1" == "-h" ]]; then
    echo "Usage: terraform_with_var_files [OPTIONS]"
    echo "Options:"
    echo "  --dir DIR                Specify the directory containing .tfvars files"
    echo "  --action ACTION          Specify the Terraform action (plan, apply, destroy, import)"
    echo "  --auto AUTO              Specify 'auto' for auto-approve (optional)"
    echo "  --resource_address ADDR  Specify the resource address for import action (optional)"
    echo "  --resource_id ID         Specify the resource ID for import action (optional)"
    echo "  --workspace WORKSPACE    Specify the Terraform workspace (default: default)"
    echo "  --help, -h               Show this help message"
    return 0
  fi

  local dir=""
  local action=""
  local auto=""
  local resource_address=""
  local resource_id=""
  local workspace="default"

  while [[ "$#" -gt 0 ]]; do
    case "$1" in
      --dir) dir="$2"; shift ;;
      --action) action="$2"; shift ;;
      --auto) auto="$2"; shift ;;
      --resource_address) resource_address="$2"; shift ;;
      --resource_id) resource_id="$2"; shift ;;
      --workspace) workspace="$2"; shift ;;
      *) echo "Unknown parameter passed: $1"; return 1 ;;
    esac
    shift
  done

  if [[ ! -d "$dir" ]]; then
    echo "The specified directory does not exist."
    return 1
  fi

  if [[ "$action" != "plan" && "$action" != "apply" && "$action" != "destroy" && "$action" != "import" ]]; then
    echo "Invalid action. Use 'plan', 'apply', 'destroy' or 'import'."
    return 1
  fi

  local var_files=()
  for file in "$dir"/*.tfvars; do
    if [[ -f "$file" ]]; then
      var_files+=("--var-file $file")
    fi
  done

  if [[ ${#var_files[@]} -eq 0 ]]; then
    echo "No .tfvars files found in the specified directory."
    return 1
  fi

  echo "Initializing Terraform..."
  eval terraform init
  if [[ $? -ne 0 ]]; then
    echo "Terraform initialization failed."
    return 1
  fi

  echo "Selecting the workspace: $workspace"
  eval terraform workspace select "$workspace" || eval terraform workspace new "$workspace"
  if [[ $? -ne 0 ]]; then
    echo "Workspace selection failed."
    return 1
  fi

  echo "Validating the Terraform configuration..."
  eval terraform validate
  if [[ $? -ne 0 ]]; then
    echo "Terraform validation failed."
    return 1
  fi

  local command="terraform $action ${var_files[@]}"

  if [[ "$action" == "import" ]]; then
    if [[ -z "$resource_address" || -z "$resource_id" ]]; then
      echo "For the 'import' action, both the resource address and the resource ID must be provided."
      return 1
    fi
    command="terraform $action ${var_files[@]} $resource_address $resource_id"
  elif [[ "$auto" == "auto" && ( "$action" == "apply" || "$action" == "destroy" ) ]]; then
    command="$command -auto-approve"
  fi

  echo "Executing: $command"
  eval "$command"
}

# Usage examples:
# terraform_with_var_files --dir "/path/to/directory" --action "plan" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "apply" --auto "auto" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "destroy" --auto "auto" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "import" --resource_address "resource_address" --resource_id "resource_id" --workspace "workspace"
terraform_with_var_files.ps1
function Terraform-WithVarFiles {
    param (
        [Parameter(Mandatory=$false)]
        [string]$Dir,

        [Parameter(Mandatory=$false)]
        [string]$Action,

        [Parameter(Mandatory=$false)]
        [string]$Auto,

        [Parameter(Mandatory=$false)]
        [string]$ResourceAddress,

        [Parameter(Mandatory=$false)]
        [string]$ResourceId,

        [Parameter(Mandatory=$false)]
        [string]$Workspace = "default",

        [switch]$Help
    )

    if ($Help) {
        Write-Output "Usage: Terraform-WithVarFiles [OPTIONS]"
        Write-Output "Options:"
        Write-Output "  -Dir DIR                Specify the directory containing .tfvars files"
        Write-Output "  -Action ACTION          Specify the Terraform action (plan, apply, destroy, import)"
        Write-Output "  -Auto AUTO              Specify 'auto' for auto-approve (optional)"
        Write-Output "  -ResourceAddress ADDR   Specify the resource address for import action (optional)"
        Write-Output "  -ResourceId ID          Specify the resource ID for import action (optional)"
        Write-Output "  -Workspace WORKSPACE    Specify the Terraform workspace (default: default)"
        Write-Output "  -Help                   Show this help message"
        return
    }

    if (-not (Test-Path -Path $Dir -PathType Container)) {
        Write-Error "The specified directory does not exist."
        return
    }

    if ($Action -notin @("plan", "apply", "destroy", "import")) {
        Write-Error "Invalid action. Use 'plan', 'apply', 'destroy', or 'import'."
        return
    }

    $varFiles = Get-ChildItem -Path $Dir -Filter *.tfvars | ForEach-Object { "--var-file $($_.FullName)" }

    if ($varFiles.Count -eq 0) {
        Write-Error "No .tfvars files found in the specified directory."
        return
    }

    Write-Output "Initializing Terraform..."
    terraform init
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Terraform initialization failed."
        return
    }

    Write-Output "Selecting the workspace: $Workspace"
    terraform workspace select -or-create $Workspace
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Workspace selection failed."
        return
    }

    Write-Output "Validating Terraform configuration..."
    terraform validate
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Terraform validation failed."
        return
    }

    $command = "terraform $Action $($varFiles -join ' ')"

    if ($Action -eq "import") {
        if (-not $ResourceAddress -or -not $ResourceId) {
            Write-Error "For 'import' action, both resource address and resource ID must be provided."
            return
        }
        $command = "$command $ResourceAddress $ResourceId"
    } elseif ($Auto -eq "auto" -and ($Action -eq "apply" -or $Action -eq "destroy")) {
        $command = "$command -auto-approve"
    }

    Write-Output "Executing: $command"
    Invoke-Expression $command
}

# Usage examples:
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "plan" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "apply" -Auto "auto" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "destroy" -Auto "auto" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "import" -ResourceAddress "resource_address" -ResourceId "resource_id" -Workspace "workspace"

To load the function in your bash terminal, copy and paste the script into your .bashrc, .zshrc, or whichever file applies, and reload your terminal.

To load the function in your pwsh terminal, follow this article: Customizing your shell environment

I hope you find it useful. Cheers!

Develop my first policy for Kubernetes with minikube and gatekeeper

Now that we have our development environment, we can start developing our first policy for Kubernetes with minikube and gatekeeper.

First of all, we need a code editor to write our policy. I recommend Visual Studio Code, but you can use any other editor. There is a plugin for Visual Studio Code that helps you write policies for gatekeeper. You can install it from the marketplace: Open Policy Agent.

Once you have your editor ready, you can start writing your policy. In this example, we will create a policy that denies the creation of pods with the image nginx:latest.

For that we need two files:

  • constraint.yaml: This file defines the constraint that we want to apply.
  • constraint_template.yaml: This file defines the template that we will use to create the constraint.

Let's start with the constraint_template.yaml file:

constraint_template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenypodswithnginxlatest
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPodsWithNginxLatest
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenypodswithnginxlatest

        violation[{"msg": msg}] {
          input.review.object.spec.containers[_].image == "nginx:latest"
          msg := "Containers cannot use the nginx:latest image"
        }

Now, let's create the constraint.yaml file:

constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPodsWithNginxLatest
metadata:
  name: deny-pods-with-nginx-latest
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  # Note: this template does not read input.parameters, so no parameters are needed here.
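
As a side note, the template above hardcodes the image name. A variant (a sketch, not the template used in this post) could take the image from the constraint's parameters instead, which would make the `parameters` block in a constraint meaningful:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyimage
spec:
  crd:
    spec:
      names:
        kind: K8sDenyImage
      validation:
        openAPIV3Schema:
          type: object
          properties:
            image:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyimage

        violation[{"msg": msg}] {
          input.review.object.spec.containers[_].image == input.parameters.image
          msg := sprintf("Containers cannot use the %v image", [input.parameters.image])
        }
```

A constraint of kind K8sDenyImage could then set `parameters: {image: "nginx:latest"}` to reuse the same template for different images.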

Now, we can apply the files to our cluster:

# Create the constraint template
kubectl apply -f constraint_template.yaml

# Create the constraint
kubectl apply -f constraint.yaml

Now, we can test the constraint. Let's create a pod with the image nginx:latest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
EOF

We should see an error message like this:

Error from server (Forbidden): error when creating "STDIN": admission webhook "validation.gatekeeper.sh" denied the request: [k8sdenypodswithnginxlatest] Containers cannot use the nginx:latest image

Now, let's create a pod with a different image:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25.5
EOF

We should see a message like this:

pod/nginx-pod created

To clean up, you can delete the pod, the constraint, and the constraint template:

# Delete the pod
kubectl delete pod nginx-pod
# Delete the constraint
kubectl delete -f constraint.yaml

# Delete the constraint template
kubectl delete -f constraint_template.yaml

And that's it! We have developed our first policy for Kubernetes with minikube and gatekeeper. Now you can start developing more complex policies and test them in your cluster.

Happy coding!

How to create a local environment to write policies for Kubernetes with minikube and gatekeeper

minikube in wsl2

Enable systemd in WSL2

sudo nano /etc/wsl.conf

Add the following:

[boot]
systemd=true

Restart WSL2 from the command line:

wsl --shutdown
wsl

Install docker

Install docker using repository

Minikube

Install minikube

# Download the latest Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Make it executable
chmod +x ./minikube

# Move it to your user's executable PATH
sudo mv ./minikube /usr/local/bin/

# Set the default driver to Docker
minikube config set driver docker

Test minikube

# Enable completion
source <(minikube completion bash)
# Start minikube
minikube start
# Check the status
minikube status
# set context
kubectl config use-context minikube
# get pods
kubectl get pods --all-namespaces

Install OPA Gatekeeper

# Install OPA Gatekeeper
# check version in https://open-policy-agent.github.io/gatekeeper/website/docs/install#deploying-a-release-using-prebuilt-image
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/deploy/gatekeeper.yaml

# wait and check the status
sleep 60
kubectl get pods -n gatekeeper-system

Test constraints

First, we need to create a constraint template and a constraint.

# Create a constraint template
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/templates/k8srequiredlabels_template.yaml

# Create a constraint
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/constraints/k8srequiredlabels_constraint.yaml

Now, we can test the constraint.

# Try to create a namespace without the required label
kubectl create namespace petete

We should see an error message like this:

Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeeper"}

# Create a namespace with the required label
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: petete
  labels:
    gatekeeper: "true"
EOF
kubectl get namespaces petete

We should see a message like this:

NAME     STATUS   AGE
petete   Active   3s

Conclusion

We have created a local environment to write policies for Kubernetes with minikube and Gatekeeper. We have tested the environment with a simple constraint. Now we can write our own policies and test them in our local environment.

Trigger an on-demand Azure Policy compliance evaluation scan

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to trigger a scan with Azure Policy.

What is a scan in Azure Policy

A scan in Azure Policy is a process that evaluates your resources against a set of policies to determine if they are compliant. When you trigger a scan, Azure Policy evaluates your resources and generates a compliance report that shows the results of the evaluation. The compliance report includes information about the policies that were evaluated, the resources that were scanned, and the compliance status of each resource.

You can trigger a scan in Azure Policy using the Azure CLI, PowerShell, or the Azure portal. When you trigger a scan, you can specify the scope of the scan, the policies to evaluate, and other parameters that control the behavior of the scan.

Trigger a scan with the Azure CLI

To trigger a scan with the Azure CLI, you can use the az policy state trigger-scan command. This command triggers a policy compliance evaluation for a scope.

How to trigger a scan with the Azure CLI for the active subscription:

az policy state trigger-scan 

How to trigger a scan with the Azure CLI for a specific resource group:

az policy state trigger-scan --resource-group myResourceGroup
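Once the evaluation finishes (it can take several minutes), you can check the results. As a sketch, assuming you saved the output of `az policy state summarize --resource-group myResourceGroup` to summary.json — the JSON below is a hypothetical, heavily trimmed example of its shape, used only to show the extraction step:

```shell
# Hypothetical, trimmed example of `az policy state summarize` output
cat > summary.json <<'EOF'
{
  "results": {
    "nonCompliantResources": 2,
    "nonCompliantPolicies": 1
  }
}
EOF

# Pull out the non-compliant resource count
grep -o '"nonCompliantResources": [0-9]*' summary.json | grep -o '[0-9]*$'
```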

Trigger a scan with PowerShell

To trigger a scan with PowerShell, you can use the Start-AzPolicyComplianceScan cmdlet. This cmdlet triggers a policy compliance evaluation for a scope.

How to trigger a scan with PowerShell for the active subscription:

# Run the compliance evaluation synchronously
Start-AzPolicyComplianceScan

# Or run it as a background job
$job = Start-AzPolicyComplianceScan -AsJob

How to trigger a scan with PowerShell for a specific resource group:

Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG'

Conclusion

In this article, we discussed how to trigger a scan with Azure Policy. We covered how to trigger a scan using the Azure CLI and PowerShell. By triggering a scan, you can evaluate your resources against a set of policies to determine if they are compliant. This can help you ensure that your resources are compliant with your organization's standards and best practices.

Custom Azure Policy for Kubernetes

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to create a custom Azure Policy for Kubernetes.

How Azure Policy works in Kubernetes

Azure Policy for Kubernetes is an extension of Azure Policy that allows you to enforce policies on your Kubernetes clusters. You can use Azure Policy to define policies that apply to your Kubernetes resources, such as pods, deployments, and services. These policies can help you ensure that your Kubernetes clusters are compliant with your organization's standards and best practices.

Azure Policy for Kubernetes uses Gatekeeper, an open-source policy controller for Kubernetes, to enforce policies on your clusters. Gatekeeper uses the Open Policy Agent (OPA) policy language to define policies and evaluate them against your Kubernetes resources. You can use Gatekeeper to create custom policies that enforce specific rules and effects on your clusters.

graph TD
    A[Azure Policy] -->|Enforce policies| B["add-on azure-policy(Gatekeeper)"]
    B -->|Evaluate policies| C[Kubernetes resources]

Azure Policy for Kubernetes supports the following cluster environments:

  • Azure Kubernetes Service (AKS), through Azure Policy's Add-on for AKS
  • Azure Arc enabled Kubernetes, through Azure Policy's Extension for Arc

Prepare your environment

Before you can create custom Azure Policy for Kubernetes, you need to set up your environment. You will need an Azure Kubernetes Service (AKS) cluster with the Azure Policy add-on enabled. You will also need the Azure CLI and the Azure Policy extension for Visual Studio Code.

To set up your environment, follow these steps:

  1. Create a resource group

    az group create --name myResourceGroup --location spaincentral
    
  2. Create an Azure Kubernetes Service (AKS) cluster with default settings and one node:

    az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
    
  3. Enable Azure Policies for the cluster:

    az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons azure-policy
    
  4. Check the status of the add-on:

    az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles.azurepolicy.enabled
    
  5. Check the status of gatekeeper:

    # Install kubectl and kubelogin
    az aks install-cli --install-location .local/bin/kubectl --kubelogin-install-location .local/bin/kubelogin
    # Get the credentials for the AKS cluster
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing
    # azure-policy pod is installed in kube-system namespace
    kubectl get pods -n kube-system
    # gatekeeper pod is installed in gatekeeper-system namespace
    kubectl get pods -n gatekeeper-system
    
  6. Install vscode and the Azure Policy extension

    code --install-extension ms-azuretools.vscode-azurepolicy
    

Once you have set up your environment, you can create custom Azure Policy for Kubernetes.

How to create a custom Azure Policy for Kubernetes

To create a custom Azure Policy for Kubernetes, you need to define a policy in the Open Policy Agent (OPA) policy language and apply it to your Kubernetes cluster. You can define policies that enforce specific rules and effects on your Kubernetes resources, such as pods, deployments, and services.

Info

It's recommended to review Constraint Templates in How to use Gatekeeper.

To create a custom Azure Policy for Kubernetes, follow these steps:

  1. Define a constraint template for the policy. I will use an existing constraint template from the Gatekeeper library that requires Ingress resources to be HTTPS only:

    gatekeeper-library/library/general/httpsonly/template.yaml
    apiVersion: templates.gatekeeper.sh/v1
    kind: ConstraintTemplate
    metadata:
      name: k8shttpsonly
      annotations:
        metadata.gatekeeper.sh/title: "HTTPS Only"
        metadata.gatekeeper.sh/version: 1.0.2
        description: >-
          Requires Ingress resources to be HTTPS only.  Ingress resources must
          include the `kubernetes.io/ingress.allow-http` annotation, set to `false`.
          By default a valid TLS {} configuration is required, this can be made
          optional by setting the `tlsOptional` parameter to `true`.

          https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
    spec:
      crd:
        spec:
          names:
            kind: K8sHttpsOnly
          validation:
            # Schema for the `parameters` field
            openAPIV3Schema:
              type: object
              description: >-
                Requires Ingress resources to be HTTPS only.  Ingress resources must
                include the `kubernetes.io/ingress.allow-http` annotation, set to
                `false`. By default a valid TLS {} configuration is required, this
                can be made optional by setting the `tlsOptional` parameter to
                `true`.
              properties:
                tlsOptional:
                  type: boolean
                  description: "When set to `true` the TLS {} is optional, defaults
                  to false."
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8shttpsonly

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not https_complete(ingress)
              not tls_is_optional
              msg := sprintf("Ingress should be https. tls configuration and allow-http=false annotation are required for %v", [ingress.metadata.name])
            }

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not annotation_complete(ingress)
              tls_is_optional
              msg := sprintf("Ingress should be https. The allow-http=false annotation is required for %v", [ingress.metadata.name])
            }

            https_complete(ingress) = true {
              ingress.spec["tls"]
              count(ingress.spec.tls) > 0
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            annotation_complete(ingress) = true {
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            tls_is_optional {
              parameters := object.get(input, "parameters", {})
              object.get(parameters, "tlsOptional", false) == true
            }

This constraint template requires Ingress resources to be HTTPS only.

  2. Create an Azure Policy for this constraint template:

    1. Open the restriction template created earlier in Visual Studio Code.
    2. Click on Azure Policy icon in the Activity Bar.
    3. Click on View > Command Palette.
    4. Search for the command "Azure Policy for Kubernetes: Create Policy Definition from Constraint Template or Mutation" and select Base64. This command creates a policy definition from the constraint template:
      Untitled.json
      {
      "properties": {
          "policyType": "Custom",
          "mode": "Microsoft.Kubernetes.Data",
          "displayName": "/* EDIT HERE */",
          "description": "/* EDIT HERE */",
          "policyRule": {
          "if": {
              "field": "type",
              "in": [
              "Microsoft.ContainerService/managedClusters"
              ]
          },
          "then": {
              "effect": "[parameters('effect')]",
              "details": {
              "templateInfo": {
                  "sourceType": "Base64Encoded",
                  "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3
Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgICAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
              },
              "apiGroups": [
                  "/* EDIT HERE */"
              ],
              "kinds": [
                  "/* EDIT HERE */"
              ],
              "namespaces": "[parameters('namespaces')]",
              "excludedNamespaces": "[parameters('excludedNamespaces')]",
              "labelSelector": "[parameters('labelSelector')]",
              "values": {
                  "tlsOptional": "[parameters('tlsOptional')]"
              }
              }
          }
          },
          "parameters": {
          "effect": {
              "type": "String",
              "metadata": {
              "displayName": "Effect",
              "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
              },
              "allowedValues": [
              "audit",
              "deny",
              "disabled"
              ],
              "defaultValue": "audit"
          },
          "excludedNamespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace exclusions",
              "description": "List of Kubernetes namespaces to exclude from policy evaluation."
              },
              "defaultValue": [
              "kube-system",
              "gatekeeper-system",
              "azure-arc"
              ]
          },
          "namespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace inclusions",
              "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
              },
              "defaultValue": []
          },
          "labelSelector": {
              "type": "Object",
              "metadata": {
              "displayName": "Kubernetes label selector",
              "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
              },
              "defaultValue": {},
              "schema": {
              "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
              "type": "object",
              "properties": {
                  "matchLabels": {
                  "description": "matchLabels is a map of {key,value} pairs.",
                  "type": "object",
                  "additionalProperties": {
                      "type": "string"
                  },
                  "minProperties": 1
                  },
                  "matchExpressions": {
                  "description": "matchExpressions is a list of values, a key, and an operator.",
                  "type": "array",
                  "items": {
                      "type": "object",
                      "properties": {
                      "key": {
                          "description": "key is the label key that the selector applies to.",
                          "type": "string"
                      },
                      "operator": {
                          "description": "operator represents a key's relationship to a set of values.",
                          "type": "string",
                          "enum": [
                          "In",
                          "NotIn",
                          "Exists",
                          "DoesNotExist"
                          ]
                      },
                      "values": {
                          "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                          "type": "array",
                          "items": {
                          "type": "string"
                          }
                      }
                      },
                      "required": [
                      "key",
                      "operator"
                      ],
                      "additionalProperties": false
                  },
                  "minItems": 1
                  }
              },
              "additionalProperties": false
              }
          },
          "tlsOptional": {
              "type": "Boolean",
              "metadata": {
              "displayName": "/* EDIT HERE */",
              "description": "/* EDIT HERE */"
              }
          }
          }
      }
      }
      
    5. Fill in the fields marked "/* EDIT HERE */" in the policy definition JSON file with the appropriate values, such as the display name, description, API groups, and kinds. In this case, set apiGroups to ["extensions", "networking.k8s.io"] and kinds to ["Ingress"].
    6. Save the policy definition JSON file.
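The long templateInfo.content value in the generated JSON is nothing more than the constraint template YAML encoded as Base64, so you can regenerate or inspect it yourself. A sketch, using a one-line stand-in for the real template.yaml so the snippet is self-contained:

```shell
# Stand-in for the real constraint template file
printf 'kind: ConstraintTemplate\n' > template.yaml

# Encode it the same way the VS Code command does (single line, no wrapping)
CONTENT=$(base64 -w 0 template.yaml)
echo "$CONTENT"

# Decoding returns the original YAML, which is handy for reviewing
# the template embedded in an existing policy definition
echo "$CONTENT" | base64 -d
```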

This is the complete policy:

https-only.json
{
  "properties": {
    "policyType": "Custom",
    "mode": "Microsoft.Kubernetes.Data",
    "displayName": "Require HTTPS for Ingress resources",
    "description": "This policy requires Ingress resources to be HTTPS only. Ingress resources must include the `kubernetes.io/ingress.allow-http` annotation, set to `false`. By default a valid TLS configuration is required, this can be made optional by setting the `tlsOptional` parameter to `true`.",
    "policyRule": {
      "if": {
        "field": "type",
        "in": [
          "Microsoft.ContainerService/managedClusters"
        ]
      },
      "then": {
        "effect": "[parameters('effect')]",
        "details": {
          "templateInfo": {
            "sourceType": "Base64Encoded",
            "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgICAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
          },
          "apiGroups": [
            "extensions",
            "networking.k8s.io"
          ],
          "kinds": [
            "Ingress"
          ],
          "namespaces": "[parameters('namespaces')]",
          "excludedNamespaces": "[parameters('excludedNamespaces')]",
          "labelSelector": "[parameters('labelSelector')]",
          "values": {
            "tlsOptional": "[parameters('tlsOptional')]"
          }
        }
      }
    },
    "parameters": {
      "effect": {
        "type": "String",
        "metadata": {
          "displayName": "Effect",
          "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
        },
        "allowedValues": [
          "audit",
          "deny",
          "disabled"
        ],
        "defaultValue": "audit"
      },
      "excludedNamespaces": {
        "type": "Array",
        "metadata": {
          "displayName": "Namespace exclusions",
          "description": "List of Kubernetes namespaces to exclude from policy evaluation."
        },
        "defaultValue": [
          "kube-system",
          "gatekeeper-system",
          "azure-arc"
        ]
      },
      "namespaces": {
        "type": "Array",
        "metadata": {
          "displayName": "Namespace inclusions",
          "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
        },
        "defaultValue": []
      },
      "labelSelector": {
        "type": "Object",
        "metadata": {
          "displayName": "Kubernetes label selector",
          "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
        },
        "defaultValue": {},
        "schema": {
          "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
          "type": "object",
          "properties": {
            "matchLabels": {
              "description": "matchLabels is a map of {key,value} pairs.",
              "type": "object",
              "additionalProperties": {
                "type": "string"
              },
              "minProperties": 1
            },
            "matchExpressions": {
              "description": "matchExpressions is a list of values, a key, and an operator.",
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "key": {
                    "description": "key is the label key that the selector applies to.",
                    "type": "string"
                  },
                  "operator": {
                    "description": "operator represents a key's relationship to a set of values.",
                    "type": "string",
                    "enum": [
                      "In",
                      "NotIn",
                      "Exists",
                      "DoesNotExist"
                    ]
                  },
                  "values": {
                    "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                    "type": "array",
                    "items": {
                      "type": "string"
                    }
                  }
                },
                "required": [
                  "key",
                  "operator"
                ],
                "additionalProperties": false
              },
              "minItems": 1
            }
          },
          "additionalProperties": false
        }
      },
      "tlsOptional": {
        "type": "Boolean",
        "metadata": {
          "displayName": "TLS Optional",
          "description": "Set to true to make TLS optional"
        }
      }
    }
  }
}

Now you have created a custom Azure Policy for Kubernetes that enforces the HTTPS-only constraint. To ensure that all Ingress resources in your cluster are HTTPS only, create a policy definition from this JSON and assign it to the management group, subscription, or resource group where the AKS cluster is located.
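To do this from the command line, note that `az policy definition create` takes the rule and the parameters separately (`--rules` and `--params`), not the whole file. A sketch with hypothetical names, using a trimmed stand-in for https-only.json so the split step is self-contained:

```shell
# Trimmed stand-in for the generated https-only.json (hypothetical content)
cat > https-only.json <<'EOF'
{"properties":{"policyRule":{"if":{},"then":{}},"parameters":{"effect":{}}}}
EOF

# Split the file into the pieces the CLI expects
python3 - <<'PYEOF'
import json

with open("https-only.json") as f:
    doc = json.load(f)

# --rules wants properties.policyRule, --params wants properties.parameters
with open("rules.json", "w") as f:
    json.dump(doc["properties"]["policyRule"], f)
with open("params.json", "w") as f:
    json.dump(doc["properties"]["parameters"], f)
PYEOF

# Then create the definition and assign it (names are examples):
# az policy definition create --name k8s-https-only \
#     --display-name "Require HTTPS for Ingress resources" \
#     --mode Microsoft.Kubernetes.Data \
#     --rules rules.json --params params.json
# az policy assignment create --name k8s-https-only-assignment \
#     --policy k8s-https-only --resource-group myResourceGroup
```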

Conclusion

In this article, we discussed how to create a custom Azure Policy for Kubernetes. We showed you how to define a policy in the Open Policy Agent (OPA) policy language and apply it to your Kubernetes cluster. We also showed you how to create a constraint template for the policy and create an Azure Policy for the constraint template. By following these steps, you can create custom policies that enforce specific rules and effects on your Kubernetes resources.

Do you need to check if a private endpoint is connected to an external private link service in Azure or just don't know how to do it?

Check this blog post to learn how to do it: Find cross-tenant private endpoint connections in Azure

This is a copy of the script used in the blog post in case it disappears:

scan-private-endpoints-with-manual-connections.ps1
$ErrorActionPreference = "Stop"

class SubscriptionInformation {
    [string] $SubscriptionID
    [string] $Name
    [string] $TenantID
}

class TenantInformation {
    [string] $TenantID
    [string] $DisplayName
    [string] $DomainName
}

class PrivateEndpointData {
    [string] $ID
    [string] $Name
    [string] $Type
    [string] $Location
    [string] $ResourceGroup
    [string] $SubscriptionName
    [string] $SubscriptionID
    [string] $TenantID
    [string] $TenantDisplayName
    [string] $TenantDomainName
    [string] $TargetResourceId
    [string] $TargetSubscriptionName
    [string] $TargetSubscriptionID
    [string] $TargetTenantID
    [string] $TargetTenantDisplayName
    [string] $TargetTenantDomainName
    [string] $Description
    [string] $Status
    [string] $External
}

$installedModule = Get-Module -Name "Az.ResourceGraph" -ListAvailable
if ($null -eq $installedModule) {
    Install-Module "Az.ResourceGraph" -Scope CurrentUser
}
else {
    Import-Module "Az.ResourceGraph"
}

$kqlQuery = @"
resourcecontainers | where type == 'microsoft.resources/subscriptions'
| project  subscriptionId, name, tenantId
"@

$batchSize = 1000
$skipResult = 0

$subscriptions = @{}

while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {
        $s = [SubscriptionInformation]::new()
        $s.SubscriptionID = $row.subscriptionId
        $s.Name = $row.name
        $s.TenantID = $row.tenantId

        $subscriptions.Add($s.SubscriptionID, $s) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

"Found $($subscriptions.Count) subscriptions"

function Get-SubscriptionInformation($SubscriptionID) {
    if ($subscriptions.ContainsKey($SubscriptionID)) {
        return $subscriptions[$SubscriptionID]
    } 

    Write-Warning "Using fallback subscription information for '$SubscriptionID'"
    $s = [SubscriptionInformation]::new()
    $s.SubscriptionID = $SubscriptionID
    $s.Name = "<unknown>"
    $s.TenantID = [Guid]::Empty.ToString()
    return $s
}

$tenantCache = @{}
$subscriptionToTenantCache = @{}

function Get-TenantInformation($TenantID) {
    $domain = $null
    if ($tenantCache.ContainsKey($TenantID)) {
        $domain = $tenantCache[$TenantID]
    } 
    else {
        try {
            $tenantResponse = Invoke-AzRestMethod -Uri "https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='$TenantID')"
            $tenantInformation = ($tenantResponse.Content | ConvertFrom-Json)

            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = $tenantInformation.displayName
            $ti.DomainName = $tenantInformation.defaultDomainName

            $domain = $ti
        }
        catch {
            Write-Warning "Failed to get domain information for '$TenantID'"
        }

        if ($null -eq $domain) {
            Write-Warning "Using fallback domain information for '$TenantID'"
            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = "<unknown>"
            $ti.DomainName = "<unknown>"

            $domain = $ti
        }

        $tenantCache.Add($TenantID, $domain) | Out-Null
    }

    return $domain
}

function Get-TenantFromSubscription($SubscriptionID) {
    $tenant = $null
    if ($subscriptionToTenantCache.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptionToTenantCache[$SubscriptionID]
    }
    elseif ($subscriptions.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptions[$SubscriptionID].TenantID
        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }
    else {
        try {

            # Calling ARM for a subscription we cannot access returns a 401 whose
            # WWW-Authenticate header points at the owning tenant's login authority;
            # extract the 36-character tenant GUID that follows the login URL.
            $subscriptionResponse = Invoke-AzRestMethod -Path "/subscriptions/$($SubscriptionID)?api-version=2022-12-01"
            $startIndex = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.IndexOf("https://login.windows.net/")
            $tenantID = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.Substring($startIndex + "https://login.windows.net/".Length, 36)

            $tenant = $tenantID
        }
        catch {
            Write-Warning "Failed to get tenant from subscription '$SubscriptionID'"
        }

        if ([string]::IsNullOrEmpty($tenant)) {
            Write-Warning "Using fallback tenant information for '$SubscriptionID'"

            $tenant = [Guid]::Empty.Guid
        }

        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }

    return $tenant
}

$kqlQuery = @"
resources
| where type == "microsoft.network/privateendpoints"
| where isnotnull(properties) and properties contains "manualPrivateLinkServiceConnections"
| where array_length(properties.manualPrivateLinkServiceConnections) > 0
| mv-expand properties.manualPrivateLinkServiceConnections
| extend status = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.status
| extend description = coalesce(properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.description, "")
| extend privateLinkServiceId = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceId
| extend privateLinkServiceSubscriptionId = tostring(split(privateLinkServiceId, "/")[2])
| project id, name, location, type, resourceGroup, subscriptionId, tenantId, privateLinkServiceId, privateLinkServiceSubscriptionId, status, description
"@

$batchSize = 1000
$skipResult = 0

$privateEndpoints = New-Object System.Collections.ArrayList

while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {

        $si1 = Get-SubscriptionInformation -SubscriptionID $row.SubscriptionID
        $ti1 = Get-TenantInformation -TenantID $row.TenantID

        $si2 = Get-SubscriptionInformation -SubscriptionID $row.PrivateLinkServiceSubscriptionId
        $tenant2 = Get-TenantFromSubscription -SubscriptionID $si2.SubscriptionID
        $ti2 = Get-TenantInformation -TenantID $tenant2

        $peData = [PrivateEndpointData]::new()
        $peData.ID = $row.ID
        $peData.Name = $row.Name
        $peData.Type = $row.Type
        $peData.Location = $row.Location
        $peData.ResourceGroup = $row.ResourceGroup

        $peData.SubscriptionName = $si1.Name
        $peData.SubscriptionID = $si1.SubscriptionID
        $peData.TenantID = $ti1.TenantID
        $peData.TenantDisplayName = $ti1.DisplayName
        $peData.TenantDomainName = $ti1.DomainName

        $peData.TargetResourceId = $row.PrivateLinkServiceId
        $peData.TargetSubscriptionName = $si2.Name
        $peData.TargetSubscriptionID = $si2.SubscriptionID
        $peData.TargetTenantID = $ti2.TenantID
        $peData.TargetTenantDisplayName = $ti2.DisplayName
        $peData.TargetTenantDomainName = $ti2.DomainName

        $peData.Description = $row.Description
        $peData.Status = $row.Status

        if ($ti2.DomainName -eq "MSAzureCloud.onmicrosoft.com") {
            $peData.External = "Managed by Microsoft"
        }
        elseif ($si2.TenantID -eq [Guid]::Empty.Guid) {
            $peData.External = "Yes"
        }
        else {
            $peData.External = "No"
        }

        $privateEndpoints.Add($peData) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

$privateEndpoints | Format-Table
$privateEndpoints | Export-Csv "private-endpoints.csv" -Delimiter ';' -NoTypeInformation -Force

"Found $($privateEndpoints.Count) private endpoints with manual connections"

if ($privateEndpoints.Count -ne 0) {
    Start-Process "private-endpoints.csv"
}

Conclusion

Now you know how to check whether a private endpoint in Azure is connected to a private link service in another (external) tenant.
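As a quick follow-up, here is a minimal sketch of filtering the script's output down to the externally connected endpoints. The sample rows below are hypothetical stand-ins for the exported data; in practice you would pipe `$privateEndpoints` directly, or `Import-Csv` the exported file.

```powershell
# Hypothetical sample rows standing in for the script's real output
$rows = @(
    [pscustomobject]@{ Name = "pe-app"; External = "No" }
    [pscustomobject]@{ Name = "pe-partner"; External = "Yes" }
    [pscustomobject]@{ Name = "pe-paas"; External = "Managed by Microsoft" }
)

# Keep only the endpoints flagged as connecting into an unknown tenant
$externalOnly = $rows | Where-Object { $_.External -eq "Yes" }
$externalOnly | Format-Table
```

The same `Where-Object` filter works on the CSV produced above, which is handy when you only want to review the connections that actually cross a tenant boundary.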

That's all folks!