Automatic secret rotation in Azure Key Vault

Summary

Rotating secrets manually is tedious and error-prone. Azure Key Vault supports automated rotation using Event Grid + Azure Functions. Here I show you how to implement it step by step.

Why rotate secrets?

  • Compliance: PCI-DSS and SOC 2 require periodic rotation
  • Security: limits the exposure window if a secret leaks
  • Best practice: NIST recommends rotating every 90 days

Keys vs Secrets

  • Cryptographic keys: have native rotation via a rotation policy
  • Secrets (passwords, API keys): require Event Grid + a Function App

This article covers secrets. For keys, see Configure key rotation.

Automatic rotation architecture

Azure Key Vault uses Event Grid to send a notification when a secret is about to expire:

graph LR
    A[Key Vault Secret] -->|30 days before expiry| B[Event Grid]
    B -->|SecretNearExpiry event| C[Function App]
    C -->|Generates new secret| D[External Service/SQL]
    C -->|Updates secret| A

Process:

1. Key Vault publishes a SecretNearExpiry event 30 days before expiration
2. Event Grid calls the Function App via HTTP POST
3. The function generates a new secret and updates the service
4. The function updates Key Vault with a new version of the secret
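
If you want to wire the subscription up by hand instead of relying on the template used in step 2 below, it looks roughly like this (a sketch: the vault name, resource group, and function path are placeholders):

# Key Vault resource ID (placeholder names)
KV_ID=$(az keyvault show --name my-keyvault --query id -o tsv)

# Subscribe the rotation function to SecretNearExpiry events
az eventgrid event-subscription create \
  --name secret-expiry-to-rotation-func \
  --source-resource-id $KV_ID \
  --endpoint-type azurefunction \
  --endpoint "/subscriptions/{sub}/resourceGroups/my-rg/providers/Microsoft.Web/sites/my-func/functions/AKVSQLRotation" \
  --included-event-types Microsoft.KeyVault.SecretNearExpiry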

Implementation: rotating a SQL Server password

1. Create a secret with an expiration date

# Variables
RG="my-rg"
KV_NAME="my-keyvault"
SECRET_NAME="sql-admin-password"
SQL_SERVER="my-sql-server"

# Create the secret with a 90-day expiration
EXPIRY_DATE=$(date -u -d "+90 days" +'%Y-%m-%dT%H:%M:%SZ')

az keyvault secret set \
  --vault-name $KV_NAME \
  --name $SECRET_NAME \
  --value "InitialP@ssw0rd!" \
  --expires $EXPIRY_DATE

2. Deploy the rotation Function App

Use Microsoft's official template:

# Deploy the ARM template with a preconfigured Function App
az deployment group create \
  --resource-group $RG \
  --template-uri https://raw.githubusercontent.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp/main/ARM-Templates/Function/azuredeploy.json \
  --parameters \
    sqlServerName=$SQL_SERVER \
    keyVaultName=$KV_NAME \
    functionAppName="${KV_NAME}-rotation-func" \
    secretName=$SECRET_NAME \
    repoUrl="https://github.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp.git"

This template deploys:

  • A Function App with a managed identity
  • An Event Grid subscription to the SecretNearExpiry event
  • A Key Vault access policy for the function
  • Preconfigured rotation code
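
To verify the deployment, you can check the Function App and the Event Grid subscription on the vault (a quick check reusing the variables above):

# The Function App should be running
az functionapp show --resource-group $RG --name "${KV_NAME}-rotation-func" --query state

# List the event subscriptions attached to the vault
KV_ID=$(az keyvault show --name $KV_NAME --query id -o tsv)
az eventgrid event-subscription list --source-resource-id $KV_ID -o table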

3. The function code (included in the template)

The C# function included in the template does the following:

  • Receives the SecretNearExpiry event from Event Grid
  • Extracts the secret name and version
  • Generates a new random password
  • Updates SQL Server with the new password
  • Creates a new version of the secret in Key Vault

// Simplified code (the template includes the full implementation)
[FunctionName("AKVSQLRotation")]
public static void Run([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log)
{
    // The event subject carries the secret name; the topic identifies the vault
    var secretName = eventGridEvent.Subject;
    var keyVaultName = ExtractVaultName(eventGridEvent.Topic);

    // Rotate the password
    SecretRotator.RotateSecret(log, secretName, keyVaultName);
}

Implementation: rotating Storage Account keys

For services with two sets of credentials (primary/secondary keys):

# Deploy the template for Storage Account key rotation
az deployment group create \
  --resource-group $RG \
  --template-uri https://raw.githubusercontent.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell/master/ARM-Templates/Function/azuredeploy.json \
  --parameters \
    storageAccountName=$STORAGE_ACCOUNT \
    keyVaultName=$KV_NAME \
    functionAppName="${KV_NAME}-storage-rotation"

# Create the secret with rotation metadata
EXPIRY_DATE=$(date -u -d "+60 days" +'%Y-%m-%dT%H:%M:%SZ')
STORAGE_KEY=$(az storage account keys list -n $STORAGE_ACCOUNT --query "[0].value" -o tsv)

az keyvault secret set \
  --vault-name $KV_NAME \
  --name storageKey \
  --value "$STORAGE_KEY" \
  --tags CredentialId=key1 ProviderAddress="/subscriptions/{sub}/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT" ValidityPeriodDays=60 \
  --expires $EXPIRY_DATE

Dual-key strategy:

1. Key1 is stored in Key Vault
2. The SecretNearExpiry event triggers the rotation
3. The function regenerates Key2 on the Storage Account
4. It updates the Key Vault secret with Key2
5. The next rotation alternates back to Key1
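
The same alternation can be sketched with plain CLI commands (illustrative only; the deployed function implements this logic itself):

# Which key is currently stored? (read from the secret's tags)
CURRENT=$(az keyvault secret show --vault-name $KV_NAME --name storageKey --query "tags.CredentialId" -o tsv)
if [ "$CURRENT" = "key1" ]; then NEXT="key2"; else NEXT="key1"; fi

# Regenerate the other key and store the new value in Key Vault
az storage account keys renew --resource-group $RG --account-name $STORAGE_ACCOUNT --key $NEXT
NEW_KEY=$(az storage account keys list --resource-group $RG --account-name $STORAGE_ACCOUNT --query "[?keyName=='$NEXT'].value" -o tsv)
az keyvault secret set --vault-name $KV_NAME --name storageKey --value "$NEW_KEY" --tags CredentialId=$NEXT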

Monitoring rotations

# List the versions of a secret
az keyvault secret list-versions \
  --vault-name $KV_NAME \
  --name $SECRET_NAME \
  --query "[].{Version:id, Created:attributes.created, Expires:attributes.expires}"

# Show the last update
az keyvault secret show \
  --vault-name $KV_NAME \
  --name $SECRET_NAME \
  --query "attributes.{Updated:updated, Expires:expires, Enabled:enabled}"

# View rotation logs in the Function App
az monitor app-insights query \
  --app ${KV_NAME}-rotation-func \
  --analytics-query "traces | where message contains 'Rotation' | top 10 by timestamp desc"

Email notifications

# Action group for alerts
az monitor action-group create \
  --resource-group $RG \
  --name secret-rotation-alerts \
  --short-name SecRot \
  --email-receiver EmailAdmin admin@company.com

# Alert rule
az monitor metrics alert create \
  --resource-group $RG \
  --name secret-rotation-failed \
  --scopes /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.KeyVault/vaults/$KV_NAME \
  --condition "count SecretRotationFailed > 0" \
  --action secret-rotation-alerts

Best practices

  • Overlap period: keep the previous version valid for 7-30 days
  • Testing: rotate in dev/staging environments first
  • Documentation: record which services use each secret
  • Backup: export critical secrets to encrypted offline storage
  • Notifications: configure alerts for failed rotations

Hardcoded secrets

Rotation is useless if you have secrets hardcoded in code or config files. Use Key Vault references (@Microsoft.KeyVault(SecretUri=...) in App Settings).
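
For example, an App Service setting can reference the vault instead of embedding the value (a sketch; my-webapp is a placeholder, and the app's managed identity needs get permission on secrets):

az webapp config appsettings set \
  --resource-group $RG \
  --name my-webapp \
  --settings SQL_PASSWORD="@Microsoft.KeyVault(SecretUri=https://${KV_NAME}.vault.azure.net/secrets/${SECRET_NAME}/)"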

Official templates

Microsoft provides complete ARM templates for different scenarios:

  • SQL password rotation
  • Storage Account key rotation
  • Adaptable to other services (Cosmos DB, Redis, external APIs)

Azure App Service Deployment Slots: zero-downtime deployments

Summary

Deployment Slots in Azure App Service let you deploy new versions of your application without downtime. They are the simplest way to implement blue-green deployments in Azure.

¿Qué son los Deployment Slots?

Deployment Slots are live environments within your App Service where you can deploy different versions of your application. Each slot:

  • Has its own URL
  • Can have independent configuration
  • Allows instant swaps between slots
  • Shares the same App Service plan

Typical use case

# Variables
RG="my-rg"
APP_NAME="my-webapp"
LOCATION="westeurope"

# Create the App Service plan (Standard tier or higher)
az appservice plan create \
  --name ${APP_NAME}-plan \
  --resource-group $RG \
  --location $LOCATION \
  --sku S1

# Create the App Service
az webapp create \
  --name $APP_NAME \
  --resource-group $RG \
  --plan ${APP_NAME}-plan

# Create the staging slot
az webapp deployment slot create \
  --name $APP_NAME \
  --resource-group $RG \
  --slot staging

Workflow de despliegue

1. Deploy to staging:

# Deploy the code to the staging slot
az webapp deployment source config-zip \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --src app.zip

2. Validate in staging:

Slot URL: https://{app-name}-staging.azurewebsites.net

3. Swap to production:

# Direct swap staging -> production
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production

# Swap with preview (recommended)
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production \
  --action preview

# After validating, complete the swap
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production \
  --action swap

The swap process

During a swap, Azure follows this process:

1. Applies the target slot's settings to the source slot
2. Waits for all instances to restart and warm up
3. If all instances are healthy, switches the routing
4. The target slot (production) ends up running the new app with no downtime

Total time: 1-5 minutes depending on warmup. Production suffers NO downtime.

Sticky vs. swappable configuration

Not all configuration is exchanged during a swap:

Sticky (does not move with the code):

  • App settings marked as "Deployment slot setting"
  • Connection strings marked as "Deployment slot setting"
  • Custom domains
  • Non-public certificates and TLS/SSL settings
  • Scale settings
  • IP restrictions
  • Always On, diagnostic settings, CORS

Swappable (moves with the code):

  • General settings (framework version, 32/64-bit)
  • App settings not marked as slot settings
  • Handler mappings
  • Public certificates
  • WebJobs content
  • Hybrid connections
  • Virtual network integration

Configuring sticky settings:

From the Azure portal:

1. Go to Configuration > Application settings on the slot
2. Add or edit the app setting
3. Check the Deployment slot setting checkbox
4. Apply

# Create an app setting in the staging slot
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings DATABASE_URL="staging-connection-string"

# To mark it as sticky (slot-specific) from the CLI, use --slot-settings
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --slot-settings DATABASE_URL="staging-connection-string"

Auto-swap para CI/CD

# Configure auto-swap from staging to production
az webapp deployment slot auto-swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --auto-swap-slot production

With auto-swap enabled:

1. Push to staging → automatic deployment
2. Automatic slot warmup
3. Swap to production without manual intervention

Customizing the warmup path:

# Configure a custom warmup URL
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_PATH="/health/ready"

# Accept only specific HTTP status codes
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_STATUSES="200,202"

Best practices

  • Use staging for testing: always validate in staging before the swap
  • Configure health checks: Azure verifies the slot before swapping
  • Keep parity: staging should mirror production (same configuration, a similar test DB)
  • Fast rollback: if something breaks, swap back immediately (see the sketch after this list)
  • Limit slots: at most 2-3 slots per app (staging, pre-production)
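
Rolling back is just a swap in the opposite direction, and a pending swap-with-preview can be cancelled (a sketch reusing the variables above):

# Roll back: swap the old code (now in staging) back into production
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production

# Cancel a pending swap-with-preview
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --action reset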

Cost savings

Slots share the App Service plan's resources. You don't pay extra for a staging slot, but you need the Standard tier or higher.

Monitoring the swap

# List the app's deployment slots
az webapp deployment slot list \
  --resource-group $RG \
  --name $APP_NAME

# Tail logs during the swap
az webapp log tail \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging

Azure Cost Management: plan your 2025 budget

Summary

January is the month to set up budgets and cost alerts in Azure. With Azure Cost Management you can set limits, receive notifications, and avoid surprises on your bill. In this post you'll see how to create budgets, configure automatic alerts, and use Azure Advisor to optimize spending from the first day of the year.
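
As a preview, a monthly subscription budget can be created from the CLI (a sketch; the name, amount, and dates are placeholders):

# Monthly cost budget of 500 (billing currency) for calendar year 2025
az consumption budget create \
  --budget-name monthly-budget-2025 \
  --amount 500 \
  --category cost \
  --time-grain monthly \
  --start-date 2025-01-01 \
  --end-date 2025-12-31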

GTD in Outlook with Microsoft To Do

In this post, we will see how to use GTD in Outlook.

First of all, let's define what GTD is. GTD stands for Getting Things Done, a productivity method created by David Allen. The GTD method is based on the idea that you should get things out of your head and into a trusted system, so you can focus on what you need to do.

Important

The GTD method is not about doing more things faster. It's about doing the right things at the right time. This method needs to be aligned with your purpose, objectives, and goals, so you can focus on what really matters to you.

The GTD workflow

The GTD workflow consists of five steps:

  1. Capture: Collect everything that has your attention.
  2. Clarify: Process what you've captured.
  3. Organize: Put everything in its place.
  4. Reflect: Review your system regularly.
  5. Engage: Choose what to do and do it.

The detailed flowchart of the GTD method is shown below:

graph TD;
    subgraph Capture    
    subgraph for_each_item_not_in_Inbox
    AA[Collect everything that has your attention in the Inbox];
    end    
    A[Inbox]
    AA-->A
    end
    subgraph Clarify        
    subgraph for_each_item_in_Inbox
    BB[  What is this?
            What should I do?
            Is this really worth doing?
            How does it fit in with all the other things I have to do?
            Is there any action that needs to be done about it or because of it?]
    end
    end
    A --> BB
    subgraph Organize
    BB --> B{Is it actionable?}
    B -- no --> C{Is it reference material?}
    C -- yes --> D[Archive]
    C -- no --> E{Is it trash?}
    E -- yes --> FF[Trash it]
    E -- no --> GG[Incubate it]
    GG --> HH[Review it weekly]
    B -- yes --> F{Can it be done in one step?}
    F -- no --> G[Project]
    G --> HHH[Define a next action at least]
    HHH --> H[Project Actions]    
    end
    subgraph Engage
    H --> I
    F -- yes --> I{Does it take more than 2 minutes?}
    I -- no --> J[DO IT]
    I -- yes --> K{Is it my responsibility?}
    K -- no --> L[Delegate]
    L --> M[Waiting For]    
    K -- yes --> N{Does it have a specific date?}
    N -- yes --> O[Add to Calendar]
    N -- no --> P[Next Actions]
    O --> Q[Schedule. Do it at a specific time]
    P --> R[Do it as soon as possible]
    end
graph TD;
subgraph Reflect
    S[**Daily Review**
            - Review your tasks daily
            - Calendar
            - Next  actions
            - Urgent tasks]
    T[**Weekly Review**
            - Review your projects weekly, make sure you're making progress
            - Next actions
            - Waiting for
            - Someday/Maybe
            - Calendar]
    U[**Monthly Review**
            - Focus on your goals
            - Reference archive
            - Someday/Maybe
            - Completed projects]
    V[**Review your purpose annually**
            - Goals and purposes
            - Big projects
            - Historical archive]
    end

It's important to note that the GTD method is not a one-size-fits-all solution. You can adapt it to your needs and preferences. The key is to find a system that works for you and stick to it.

And now, let's see how to use GTD in Outlook and Microsoft To Do.

How to use GTD in Outlook with Microsoft To Do

When it comes to implementing the GTD method in Outlook, the key is to use the right tools and techniques. Microsoft To Do is a great tool for managing your tasks and projects, and it integrates seamlessly with Outlook.

You can use Outlook to implement the GTD method by following these steps:

  1. Capture:
    • Emails: Use the Inbox to collect everything that has your attention.
    • Other things: Use Microsoft To Do's default Tasks list to capture tasks and projects.
  2. Clarify: Process what you've captured by asking yourself the following questions:
    • What is this?
    • What should I do?
    • Is this really worth doing?
    • How does it fit in with all the other things I have to do?
    • Is there any action that needs to be done about it or because of it?
  3. Organize: Put everything in its place by following these steps:
    • Inbox:
      • Move emails to the appropriate folder or delete them.
      • Categories: Use categories to organize your emails by context, and folders to organize them by project or client.
      • Use search folders to find emails quickly by category; you can clear categories after processing.
      • Flag emails to add them to To Do.
      • Create rules to automate repetitive tasks when clarifying a type of email always leads to the same action.
    • Tasks: Organize your tasks and projects in Microsoft To Do.
      • Lists: Create lists for different types of tasks, one per context, or use #tags for contexts within a single list. For example:
        • In the case of lists: Agendas, Anywhere, Calls, Computed, Errands, Home, Office, Waiting For, Someday/Maybe.
        • In the case of tags, one list with: #Agendas, #Anywhere, #Calls, #Computed, #Errands, #Home, #Office, #WaitingFor, #SomedayMaybe.
      • Use the #nextaction tag to identify the next task to do.
      • Use the #urgent tag to identify urgent tasks.
    • Projects
      • Group lists: Group lists by project category or client.
      • One list per project: Create a list for each project and add tasks to it.
      • Use #nextaction tag to identify the next task in each project.
    • Reference Material:
      • Store reference material in folders, ideally in OneDrive or SharePoint.
      • Use a folder structure to organize your reference material.
      • Use search folders to find it quickly.
      • Use tags to identify the context of the reference material. You can use FileMeta to add tags to non-taggable files in Windows.
  4. Reflect: Review your system regularly to make sure it's up to date.
    • Daily Review
    • Weekly Review
    • Monthly Review
    • Annual Review
  5. Engage: Choose what to do and do it.
    • Use the My Day bar to see your tasks and events at a glance, or type #nextaction in the search bar to see all your next actions.

These are just some ideas to get you started. You can adapt the GTD method to your needs and preferences. The key is to find a system that works for you and stick to it.

My example of GTD in Outlook with Microsoft To Do:

Outlook: (screenshot)

To Do: (screenshot)

I'm using a mix of lists and tags with the same names to organize my tasks and projects. I have lists for different types of tasks, such as Agendas, Anywhere, Calls, Computed, Errands, Home, Office, Waiting For, and Someday/Maybe. I also use tags to identify the next action, urgent tasks, and the context of tasks within projects.

In the case of emails, I use categories to organize them by context and folders to organize them by project or client. I also use search folders to find emails quickly by category and to filter by unread. The reason for this is that I can clear categories after processing, and in the majority of cases I only need a quick review of the emails without converting them into tasks.

By following these steps, you can implement the GTD method in Outlook and Microsoft To Do and improve your productivity and focus.

Good luck! 🍀

How to sign Git commits in Visual Studio Code in Windows Subsystem for Linux (WSL)

In this post, we will see how to sign Git commits in Visual Studio Code.

Prerequisites

  • Visual Studio Code
  • Git
  • gpg
  • gpg-agent
  • gpgconf
  • pinentry-gtk-2
  • Windows Subsystem for Linux (WSL) with Ubuntu 20.04

Steps

1. Install GPG

First, you need to install GPG and its agents. You can do this by running the following command:

sudo apt install gpg gpg-agent gpgconf pinentry-gtk2 -y

2. Generate a GPG key

To generate a GPG key, run the following command:

gpg --full-generate-key

You will be asked to enter your name, email, and passphrase. After that, the key will be generated.

3. List your GPG keys

To list your GPG keys, run the following command:

gpg --list-secret-keys --keyid-format LONG

You will see a list of your GPG keys. Copy the key ID of the key you want to use.

4. Configure Git to use your GPG key

To configure Git to use your GPG key, run the following command:

git config --global user.signingkey YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

5. Configure Git to sign commits by default

To configure Git to sign commits by default, run the following command:

git config --global commit.gpgsign true
git config --global gpg.program "$(which gpg)"

6. Export the GPG key

To export the GPG key, run the following command:

gpg --armor --export YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

7. Import the key to GitHub

Go to your GitHub account, open the GPG keys section, create a new GPG key, and paste the exported key.
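
If you use the GitHub CLI, the key can also be uploaded from the terminal (assuming gh is installed and authenticated):

# Export the public key to a file and add it to your GitHub account
gpg --armor --export YOUR_KEY_ID > key.asc
gh gpg-key add key.asc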

Configure Visual Studio Code to use GPG

1. Configure gpg-agent

To configure gpg-agent, run the following command:

echo "default-cache-ttl" >> ~/.gnupg/gpg-agent.conf
echo "pinentry-program /usr/bin/pinentry-gtk-2" >> ~/.gnupg/gpg-agent.conf
echo "allow-preset-passphrase" >> ~/.gnupg/gpg-agent.conf

2. Restart the gpg-agent

To restart the gpg-agent, run the following command:

gpgconf --kill gpg-agent
gpgconf --launch gpg-agent

3. Sign a commit

To sign a commit, run the following command:

git commit -S -m "Your commit message"

4. Verify the signature

To verify the signature of a commit, run the following command:

git verify-commit HEAD

5. Configure Visual Studio Code to use GPG

To configure Visual Studio Code to use GPG, open the settings by pressing Ctrl + , and search for git.enableCommitSigning. Set the value to true.

6. Sign a commit

Make a commit in Visual Studio Code, and you will see a prompt asking you to enter your GPG passphrase. Enter your passphrase, and the commit will be signed.

That's it! Now you know how to sign Git commits in Visual Studio Code.

Some tips

For all repositories

  • Establish your email in git configuration:
git config --global user.email "petete@something.es"
  • Establish your name in git configuration:
git config --global user.name "Petete"
  • Establish your GPG key in git configuration:
git config --global user.signingkey YOUR_KEY_ID
  • Establish your GPG program in git configuration:
git config --global gpg.program "$(which gpg)"

For a specific repository

  • Establish your email in git configuration:
git config user.email "petete@something.es"
  • Establish your name in git configuration:
git config user.name "Petete"
  • Establish your GPG key in git configuration:
git config user.signingkey YOUR_KEY_ID
  • Establish your GPG program in git configuration:
git config gpg.program "$(which gpg)"

Conclusion

In this post, we saw how to sign Git commits in Visual Studio Code. This is useful if you want to verify the authenticity of your commits. I hope you found this post helpful. If you have any questions or comments, please let me know. Thank you for reading!

Running Terraform with variable files

Sometimes, when working with Terraform, we need to manage multiple variable files for different environments or configurations. In this post, I show you how to run Terraform with variable files in a simple way.

Terraform and variable files

Terraform can load variables from .tfvars files via the --var-file option. For example, if we have a variables.tf file with the following definition:

variable "region" {
  type    = string
  default = "westeurope"
}

variable "resource_group_name" {
  type = string
}

We can create a variables.tfvars file with the variable values:

region = "westeurope"
resource_group_name = "my-rg"

And run Terraform with the variable file:

terraform plan --var-file variables.tfvars

Running Terraform with multiple variable files

If we have multiple variable files, we can run Terraform with all of them quite easily. To do so, we can create a script that finds the .tfvars files in a directory and runs Terraform with them.

The problem with running Terraform with multiple variable files is that the --var-file option does not accept an array of files. We therefore have to build the Terraform command with all the variable files ourselves, which can be a bit tedious.
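
With two files, for example, the command has to be built like this (illustrative file names):

# Each file needs its own --var-file flag
terraform plan --var-file common.tfvars --var-file prod.tfvars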

Below is an example of a Bash/pwsh function that runs Terraform with all the variable files it finds:

Example

terraform_with_var_files.sh
function terraform_with_var_files() {
  if [[ "$1" == "--help" || "$1" == "-h" ]]; then
    echo "Usage: terraform_with_var_files [OPTIONS]"
    echo "Options:"
    echo "  --dir DIR                Specify the directory containing .tfvars files"
    echo "  --action ACTION          Specify the Terraform action (plan, apply, destroy, import)"
    echo "  --auto AUTO              Specify 'auto' for auto-approve (optional)"
    echo "  --resource_address ADDR  Specify the resource address for import action (optional)"
    echo "  --resource_id ID         Specify the resource ID for import action (optional)"
    echo "  --workspace WORKSPACE    Specify the Terraform workspace (default: default)"
    echo "  --help, -h               Show this help message"
    return 0
  fi

  local dir=""
  local action=""
  local auto=""
  local resource_address=""
  local resource_id=""
  local workspace="default"

  while [[ "$#" -gt 0 ]]; do
    case "$1" in
      --dir) dir="$2"; shift ;;
      --action) action="$2"; shift ;;
      --auto) auto="$2"; shift ;;
      --resource_address) resource_address="$2"; shift ;;
      --resource_id) resource_id="$2"; shift ;;
      --workspace) workspace="$2"; shift ;;
      *) echo "Unknown parameter passed: $1"; return 1 ;;
    esac
    shift
  done

  if [[ ! -d "$dir" ]]; then
    echo "The specified directory does not exist."
    return 1
  fi

  if [[ "$action" != "plan" && "$action" != "apply" && "$action" != "destroy" && "$action" != "import" ]]; then
    echo "Invalid action. Use 'plan', 'apply', 'destroy' or 'import'."
    return 1
  fi

  local var_files=()
  for file in "$dir"/*.tfvars; do
    if [[ -f "$file" ]]; then
      var_files+=("--var-file $file")
    fi
  done

  if [[ ${#var_files[@]} -eq 0 ]]; then
    echo "No .tfvars files found in the specified directory."
    return 1
  fi

  echo "Initializing Terraform..."
  eval terraform init
  if [[ $? -ne 0 ]]; then
    echo "Terraform initialization failed."
    return 1
  fi

  echo "Selecting workspace: $workspace"
  eval terraform workspace select "$workspace" || eval terraform workspace new "$workspace"
  if [[ $? -ne 0 ]]; then
    echo "Workspace selection failed."
    return 1
  fi

  echo "Validating Terraform configuration..."
  eval terraform validate
  if [[ $? -ne 0 ]]; then
    echo "Terraform validation failed."
    return 1
  fi

  local command="terraform $action ${var_files[@]}"

  if [[ "$action" == "import" ]]; then
    if [[ -z "$resource_address" || -z "$resource_id" ]]; then
      echo "For the 'import' action, both the resource address and the resource ID must be provided."
      return 1
    fi
    fi
    command="terraform $action ${var_files[@]} $resource_address $resource_id"
  elif [[ "$auto" == "auto" && ( "$action" == "apply" || "$action" == "destroy" ) ]]; then
    command="$command -auto-approve"
  fi

  echo "Ejecutando: $command"
  eval "$command"
}

# Usage examples:
# terraform_with_var_files --dir "/path/to/directory" --action "plan" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "apply" --auto "auto" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "destroy" --auto "auto" --workspace "workspace"
# terraform_with_var_files --dir "/path/to/directory" --action "import" --resource_address "resource_address" --resource_id "resource_id" --workspace "workspace"
terraform_with_var_files.ps1
function Terraform-WithVarFiles {
    param (
        [Parameter(Mandatory=$false)]
        [string]$Dir,

        [Parameter(Mandatory=$false)]
        [string]$Action,

        [Parameter(Mandatory=$false)]
        [string]$Auto,

        [Parameter(Mandatory=$false)]
        [string]$ResourceAddress,

        [Parameter(Mandatory=$false)]
        [string]$ResourceId,

        [Parameter(Mandatory=$false)]
        [string]$Workspace = "default",

        [switch]$Help
    )

    if ($Help) {
        Write-Output "Usage: Terraform-WithVarFiles [OPTIONS]"
        Write-Output "Options:"
        Write-Output "  -Dir DIR                Specify the directory containing .tfvars files"
        Write-Output "  -Action ACTION          Specify the Terraform action (plan, apply, destroy, import)"
        Write-Output "  -Auto AUTO              Specify 'auto' for auto-approve (optional)"
        Write-Output "  -ResourceAddress ADDR   Specify the resource address for import action (optional)"
        Write-Output "  -ResourceId ID          Specify the resource ID for import action (optional)"
        Write-Output "  -Workspace WORKSPACE    Specify the Terraform workspace (default: default)"
        Write-Output "  -Help                   Show this help message"
        return
    }

    if (-not (Test-Path -Path $Dir -PathType Container)) {
        Write-Error "The specified directory does not exist."
        return
    }

    if ($Action -notin @("plan", "apply", "destroy", "import")) {
        Write-Error "Invalid action. Use 'plan', 'apply', 'destroy', or 'import'."
        return
    }

    $varFiles = Get-ChildItem -Path $Dir -Filter *.tfvars | ForEach-Object { "--var-file $($_.FullName)" }

    if ($varFiles.Count -eq 0) {
        Write-Error "No .tfvars files found in the specified directory."
        return
    }

    Write-Output "Initializing Terraform..."
    terraform init
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Terraform initialization failed."
        return
    }

    Write-Output "Selecting the workspace: $Workspace"
    terraform workspace select -or-create $Workspace
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Workspace selection failed."
        return
    }

    Write-Output "Validating Terraform configuration..."
    terraform validate
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Terraform validation failed."
        return
    }

    $command = "terraform $Action $($varFiles -join ' ')"

    if ($Action -eq "import") {
        if (-not $ResourceAddress -or -not $ResourceId) {
            Write-Error "For 'import' action, both resource address and resource ID must be provided."
            return
        }
        $command = "$command $ResourceAddress $ResourceId"
    } elseif ($Auto -eq "auto" -and ($Action -eq "apply" -or $Action -eq "destroy")) {
        $command = "$command -auto-approve"
    }

    Write-Output "Executing: $command"
    Invoke-Expression $command
}

# Usage examples:
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "plan" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "apply" -Auto "auto" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "destroy" -Auto "auto" -Workspace "workspace"
# Terraform-WithVarFiles -Dir "/path/to/directory" -Action "import" -ResourceAddress "resource_address" -ResourceId "resource_id" -Workspace "workspace"

To load the function in your Bash terminal, copy and paste the script into your .bashrc, .zshrc, or equivalent file, and reload your terminal.

To load the function in your pwsh terminal, follow this article: Customizing your shell environment

I hope you find it useful. Cheers!

Develop my first policy for Kubernetes with minikube and gatekeeper

Now that we have our development environment, we can start developing our first policy for Kubernetes with minikube and gatekeeper.

First of all, we need a code editor to write our policy. I recommend Visual Studio Code, but you can use any other editor. There is a Visual Studio Code extension that helps you write policies for Gatekeeper; you can install it from the marketplace: Open Policy Agent.

Once you have your editor ready, you can start writing your policy. In this example, we will create a policy that denies the creation of pods with the image nginx:latest.

For that we need two files:

  • constraint.yaml: This file defines the constraint that we want to apply.
  • constraint_template.yaml: This file defines the template that we will use to create the constraint.

Let's start with the constraint_template.yaml file:

constraint_template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenypodswithnginxlatest
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPodsWithNginxLatest
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenypodswithnginxlatest

        violation[{"msg": msg}] {
          input.review.object.spec.containers[_].image == "nginx:latest"
          msg := "Containers cannot use the nginx:latest image"
        }

Now, let's create the constraint.yaml file:

constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPodsWithNginxLatest
metadata:
  name: deny-pods-with-nginx-latest
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    msg: "Containers cannot use the nginx:latest image"

Now, we can apply the files to our cluster:

# Create the constraint template
kubectl apply -f constraint_template.yaml

# Create the constraint
kubectl apply -f constraint.yaml
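
Before testing, you can confirm that the template and the constraint were registered:

# The template is exposed as a CRD-backed resource
kubectl get constrainttemplates

# The constraint instance created from it
kubectl get k8sdenypodswithnginxlatest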

Now, we can test the constraint. Let's create a pod with the image nginx:latest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
EOF

We must see an error message like this:

Error from server (Forbidden): error when creating "STDIN": admission webhook "validation.gatekeeper.sh" denied the request: [k8sdenypodswithnginxlatest] Containers cannot use the nginx:latest image

Now, let's create a pod with a different image:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25.5
EOF

We must see a message like this:

pod/nginx-pod created

For cleaning up, you can delete the pod, the constraint, and the constraint template:

# Delete the pod
kubectl delete pod nginx-pod
# Delete the constraint
kubectl delete -f constraint.yaml

# Delete the constraint template
kubectl delete -f constraint_template.yaml

And that's it! We have developed our first policy for Kubernetes with minikube and gatekeeper. Now you can start developing more complex policies and test them in your cluster.

Happy coding!

How to create a local environment to write policies for Kubernetes with minikube and gatekeeper

minikube in WSL2

Enable systemd in WSL2

sudo nano /etc/wsl.conf

Add the following:

[boot]
systemd=true

Restart WSL2 from the command line:

wsl --shutdown
wsl

Install docker

Install Docker using the repository

Minikube

Install minikube

# Download the latest Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Make it executable
chmod +x ./minikube

# Move it to your user's executable PATH
sudo mv ./minikube /usr/local/bin/

# Set the default driver to Docker
minikube config set driver docker

Test minikube

# Enable completion
source <(minikube completion bash)
# Start minikube
minikube start
# Check the status
minikube status
# set context
kubectl config use-context minikube
# get pods
kubectl get pods --all-namespaces

Install OPA Gatekeeper

# Install OPA Gatekeeper
# check version in https://open-policy-agent.github.io/gatekeeper/website/docs/install#deploying-a-release-using-prebuilt-image
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/deploy/gatekeeper.yaml

# wait and check the status
sleep 60
kubectl get pods -n gatekeeper-system
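
You can also confirm that Gatekeeper registered its admission webhook:

# Gatekeeper's validating webhook should be listed here
kubectl get validatingwebhookconfigurations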

Test constraints

First, we need to create a constraint template and a constraint.

# Create a constraint template
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/templates/k8srequiredlabels_template.yaml

# Create a constraint
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/constraints/k8srequiredlabels_constraint.yaml

Now, we can test the constraint.

# Create a namespace without the required label
kubectl create namespace petete

We must see an error message like this:

Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeeper"}

# Create a namespace with the required label
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: petete
  labels:
    gatekeeper: "true"
EOF
kubectl get namespaces petete

We must see a message like this:

NAME     STATUS   AGE
petete   Active   3s

Conclusion

We have created a local environment to write policies for Kubernetes with minikube and gatekeeper. We have tested the environment with a simple constraint. Now we can write our own policies and test them in our local environment.

Trigger an on-demand Azure Policy compliance evaluation scan

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to trigger a scan with Azure Policy.

What is a scan in Azure Policy

A scan in Azure Policy is a process that evaluates your resources against a set of policies to determine if they are compliant. When you trigger a scan, Azure Policy evaluates your resources and generates a compliance report that shows the results of the evaluation. The compliance report includes information about the policies that were evaluated, the resources that were scanned, and the compliance status of each resource.

You can trigger a scan in Azure Policy using the Azure CLI, PowerShell, or the Azure portal. When you trigger a scan, you can specify the scope of the scan, the policies to evaluate, and other parameters that control the behavior of the scan.

Trigger a scan with the Azure CLI

To trigger a scan with the Azure CLI, you can use the az policy state trigger-scan command. This command triggers a policy compliance evaluation for a scope.

How to trigger a scan with the Azure CLI for the active subscription:

az policy state trigger-scan 

How to trigger a scan with the Azure CLI for a specific resource group:

az policy state trigger-scan --resource-group myResourceGroup
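
Once the evaluation finishes, the resulting compliance state can also be inspected from the CLI (a quick sketch using the same resource group):

# Summarize compliance for the resource group
az policy state summarize --resource-group myResourceGroup

# List non-compliant resources
az policy state list --resource-group myResourceGroup --filter "complianceState eq 'NonCompliant'"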

Trigger a scan with PowerShell

To trigger a scan with PowerShell, you can use the Start-AzPolicyComplianceScan cmdlet. This cmdlet triggers a policy compliance evaluation for a scope.

How to trigger a scan with PowerShell for the active subscription:

# Run synchronously
Start-AzPolicyComplianceScan

# Or run in the background as a job
$job = Start-AzPolicyComplianceScan -AsJob

How to trigger a scan with PowerShell for a specific resource group:

Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG'

Conclusion

In this article, we discussed how to trigger a scan with Azure Policy. We covered how to trigger a scan using the Azure CLI and PowerShell. By triggering a scan, you can evaluate your resources against a set of policies to determine if they are compliant. This can help you ensure that your resources are compliant with your organization's standards and best practices.

References

Custom Azure Policy for Kubernetes

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to create a custom Azure Policy for Kubernetes.

How Azure Policy works in Kubernetes

Azure Policy for Kubernetes is an extension of Azure Policy that allows you to enforce policies on your Kubernetes clusters. You can use Azure Policy to define policies that apply to your Kubernetes resources, such as pods, deployments, and services. These policies can help you ensure that your Kubernetes clusters are compliant with your organization's standards and best practices.

Azure Policy for Kubernetes uses Gatekeeper, an open-source policy controller for Kubernetes, to enforce policies on your clusters. Gatekeeper uses the Open Policy Agent (OPA) policy language to define policies and evaluate them against your Kubernetes resources. You can use Gatekeeper to create custom policies that enforce specific rules and effects on your clusters.

graph TD
    A[Azure Policy] -->|Enforce policies| B["add-on azure-policy(Gatekeeper)"]
    B -->|Evaluate policies| C[Kubernetes resources]

Azure Policy for Kubernetes supports the following cluster environments:

  • Azure Kubernetes Service (AKS), through Azure Policy's Add-on for AKS
  • Azure Arc enabled Kubernetes, through Azure Policy's Extension for Arc

Prepare your environment

Before you can create custom Azure Policy for Kubernetes, you need to set up your environment. You will need an Azure Kubernetes Service (AKS) cluster with the Azure Policy add-on enabled. You will also need the Azure CLI and the Azure Policy extension for Visual Studio Code.

To set up your environment, follow these steps:

  1. Create a resource group

    az group create --name myResourceGroup --location spaincentral
    
  2. Create an Azure Kubernetes Service (AKS) cluster with default settings and one node:

    az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
    
  3. Enable Azure Policies for the cluster:

    az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons azure-policy
    
  4. Check the status of the add-on:

    az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles.azurepolicy.enabled
    
  5. Check the status of gatekeeper:

    # Install kubectl and kubelogin
    az aks install-cli --install-location .local/bin/kubectl --kubelogin-install-location .local/bin/kubelogin
    # Get the credentials for the AKS cluster
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing
    # azure-policy pod is installed in kube-system namespace
    kubectl get pods -n kube-system
    # gatekeeper pod is installed in gatekeeper-system namespace
    kubectl get pods -n gatekeeper-system
    
  6. Install vscode and the Azure Policy extension

    code --install-extension ms-azuretools.vscode-azurepolicy
    

Once you have set up your environment, you can create custom Azure Policy for Kubernetes.

How to create a custom Azure Policy for Kubernetes

To create a custom Azure Policy for Kubernetes, you need to define a policy in the Open Policy Agent (OPA) policy language and apply it to your Kubernetes cluster. You can define policies that enforce specific rules and effects on your Kubernetes resources, such as pods, deployments, and services.

Info

It's recommended to review Constraint Templates in How to use Gatekeeper.

To create a custom Azure Policy for Kubernetes, follow these steps:

  1. Define a constraint template for the policy. I will use an existing constraint template from the Gatekeeper library that requires Ingress resources to be HTTPS only:

    gatekeeper-library/library/general/httpsonly/template.yaml
    apiVersion: templates.gatekeeper.sh/v1
    kind: ConstraintTemplate
    metadata:
      name: k8shttpsonly
      annotations:
        metadata.gatekeeper.sh/title: "HTTPS Only"
        metadata.gatekeeper.sh/version: 1.0.2
        description: >-
          Requires Ingress resources to be HTTPS only.  Ingress resources must
          include the `kubernetes.io/ingress.allow-http` annotation, set to `false`.
          By default a valid TLS {} configuration is required, this can be made
          optional by setting the `tlsOptional` parameter to `true`.

          https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
    spec:
      crd:
        spec:
          names:
            kind: K8sHttpsOnly
          validation:
            # Schema for the `parameters` field
            openAPIV3Schema:
              type: object
              description: >-
                Requires Ingress resources to be HTTPS only.  Ingress resources must
                include the `kubernetes.io/ingress.allow-http` annotation, set to
                `false`. By default a valid TLS {} configuration is required, this
                can be made optional by setting the `tlsOptional` parameter to
                `true`.
              properties:
                tlsOptional:
                  type: boolean
                  description: >-
                    When set to `true` the TLS {} is optional, defaults to false.
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8shttpsonly

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not https_complete(ingress)
              not tls_is_optional
              msg := sprintf("Ingress should be https. tls configuration and allow-http=false annotation are required for %v", [ingress.metadata.name])
            }

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not annotation_complete(ingress)
              tls_is_optional
              msg := sprintf("Ingress should be https. The allow-http=false annotation is required for %v", [ingress.metadata.name])
            }

            https_complete(ingress) = true {
              ingress.spec["tls"]
              count(ingress.spec.tls) > 0
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            annotation_complete(ingress) = true {
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            tls_is_optional {
              parameters := object.get(input, "parameters", {})
              object.get(parameters, "tlsOptional", false) == true
            }
    

This constraint template requires Ingress resources to be HTTPS only.

  2. Create an Azure Policy definition for this constraint template:

    1. Open the restriction template created earlier in Visual Studio Code.
    2. Click on Azure Policy icon in the Activity Bar.
    3. Click on View > Command Palette.
    4. Search for the command "Azure Policy for Kubernetes: Create Policy Definition from Constraint Template or Mutation" and select Base64; this command will create a policy definition from the constraint template.
      Untitled.json
      {
      "properties": {
          "policyType": "Custom",
          "mode": "Microsoft.Kubernetes.Data",
          "displayName": "/* EDIT HERE */",
          "description": "/* EDIT HERE */",
          "policyRule": {
          "if": {
              "field": "type",
              "in": [
              "Microsoft.ContainerService/managedClusters"
              ]
          },
          "then": {
              "effect": "[parameters('effect')]",
              "details": {
              "templateInfo": {
                  "sourceType": "Base64Encoded",
                  "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgI
CAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
              },
              "apiGroups": [
                  "/* EDIT HERE */"
              ],
              "kinds": [
                  "/* EDIT HERE */"
              ],
              "namespaces": "[parameters('namespaces')]",
              "excludedNamespaces": "[parameters('excludedNamespaces')]",
              "labelSelector": "[parameters('labelSelector')]",
              "values": {
                  "tlsOptional": "[parameters('tlsOptional')]"
              }
              }
          }
          },
          "parameters": {
          "effect": {
              "type": "String",
              "metadata": {
              "displayName": "Effect",
              "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
              },
              "allowedValues": [
              "audit",
              "deny",
              "disabled"
              ],
              "defaultValue": "audit"
          },
          "excludedNamespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace exclusions",
              "description": "List of Kubernetes namespaces to exclude from policy evaluation."
              },
              "defaultValue": [
              "kube-system",
              "gatekeeper-system",
              "azure-arc"
              ]
          },
          "namespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace inclusions",
              "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
              },
              "defaultValue": []
          },
          "labelSelector": {
              "type": "Object",
              "metadata": {
              "displayName": "Kubernetes label selector",
              "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
              },
              "defaultValue": {},
              "schema": {
              "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
              "type": "object",
              "properties": {
                  "matchLabels": {
                  "description": "matchLabels is a map of {key,value} pairs.",
                  "type": "object",
                  "additionalProperties": {
                      "type": "string"
                  },
                  "minProperties": 1
                  },
                  "matchExpressions": {
                  "description": "matchExpressions is a list of values, a key, and an operator.",
                  "type": "array",
                  "items": {
                      "type": "object",
                      "properties": {
                      "key": {
                          "description": "key is the label key that the selector applies to.",
                          "type": "string"
                      },
                      "operator": {
                          "description": "operator represents a key's relationship to a set of values.",
                          "type": "string",
                          "enum": [
                          "In",
                          "NotIn",
                          "Exists",
                          "DoesNotExist"
                          ]
                      },
                      "values": {
                          "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                          "type": "array",
                          "items": {
                          "type": "string"
                          }
                      }
                      },
                      "required": [
                      "key",
                      "operator"
                      ],
                      "additionalProperties": false
                  },
                  "minItems": 1
                  }
              },
              "additionalProperties": false
              }
          },
          "tlsOptional": {
              "type": "Boolean",
              "metadata": {
              "displayName": "/* EDIT HERE */",
              "description": "/* EDIT HERE */"
              }
          }
          }
      }
      }
      
    5. Fill in the fields marked "/* EDIT HERE */" in the policy definition JSON file with the appropriate values, such as the display name, description, API groups, and kinds. In this case you must configure apiGroups: ["extensions", "networking.k8s.io"] and kinds: ["Ingress"].
    6. Save the policy definition JSON file.

This is the complete policy:

```json title="https-only.json"
{
  "properties": {
    "policyType": "Custom",
    "mode": "Microsoft.Kubernetes.Data",
    "displayName": "Require HTTPS for Ingress resources",
    "description": "This policy requires Ingress resources to be HTTPS only. Ingress resources must include the `kubernetes.io/ingress.allow-http` annotation, set to `false`. By default a valid TLS configuration is required, this can be made optional by setting the `tlsOptional` parameter to `true`.",
    "policyRule": {
      "if": {
        "field": "type",
        "in": ["Microsoft.ContainerService/managedClusters"]
      },
      "then": {
        "effect": "[parameters('effect')]",
        "details": {
          "templateInfo": {
            "sourceType": "Base64Encoded",
            "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgICAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
          },
          "apiGroups": ["extensions", "networking.k8s.io"],
          "kinds": ["Ingress"],
          "namespaces": "[parameters('namespaces')]",
          "excludedNamespaces": "[parameters('excludedNamespaces')]",
          "labelSelector": "[parameters('labelSelector')]",
          "values": {
            "tlsOptional": "[parameters('tlsOptional')]"
          }
        }
      }
    },
    "parameters": {
      "effect": {
        "type": "String",
        "metadata": {
          "displayName": "Effect",
          "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
        },
        "allowedValues": ["audit", "deny", "disabled"],
        "defaultValue": "audit"
      },
      "excludedNamespaces": {
        "type": "Array",
        "metadata": {
          "displayName": "Namespace exclusions",
          "description": "List of Kubernetes namespaces to exclude from policy evaluation."
        },
        "defaultValue": ["kube-system", "gatekeeper-system", "azure-arc"]
      },
      "namespaces": {
        "type": "Array",
        "metadata": {
          "displayName": "Namespace inclusions",
          "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
        },
        "defaultValue": []
      },
      "labelSelector": {
        "type": "Object",
        "metadata": {
          "displayName": "Kubernetes label selector",
          "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
        },
        "defaultValue": {},
        "schema": {
          "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
          "type": "object",
          "properties": {
            "matchLabels": {
              "description": "matchLabels is a map of {key,value} pairs.",
              "type": "object",
              "additionalProperties": { "type": "string" },
              "minProperties": 1
            },
            "matchExpressions": {
              "description": "matchExpressions is a list of values, a key, and an operator.",
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "key": {
                    "description": "key is the label key that the selector applies to.",
                    "type": "string"
                  },
                  "operator": {
                    "description": "operator represents a key's relationship to a set of values.",
                    "type": "string",
                    "enum": ["In", "NotIn", "Exists", "DoesNotExist"]
                  },
                  "values": {
                    "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                    "type": "array",
                    "items": { "type": "string" }
                  }
                },
                "required": ["key", "operator"],
                "additionalProperties": false
              },
              "minItems": 1
            }
          },
          "additionalProperties": false
        }
      },
      "tlsOptional": {
        "type": "Boolean",
        "metadata": {
          "displayName": "TLS Optional",
          "description": "Set to true to make TLS optional"
        }
      }
    }
  }
}
```
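The `templateInfo.content` value is simply the Gatekeeper `ConstraintTemplate` YAML encoded as base64. To inspect what the policy actually ships to the cluster, or to regenerate the field after editing the template, something like the following works (a minimal sketch, assuming `jq` and GNU coreutils `base64`; `constraint-template.yaml` is a placeholder file name):

```bash
# Decode the embedded ConstraintTemplate for review
jq -r '.properties.policyRule.then.details.templateInfo.content' https-only.json | base64 -d

# After editing the YAML, re-encode it as a single line (-w0 disables line wrapping)
base64 -w0 constraint-template.yaml
```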

Now you have a custom Azure Policy for Kubernetes that enforces the HTTPS-only constraint. To apply it and ensure that all Ingress resources are HTTPS only, create a policy definition from this JSON and assign it to the management group, subscription, or resource group where the AKS cluster is located, as shown in the sketch below.
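One way to do both steps with the Azure CLI (a minimal sketch; the definition name, file names, and resource group are placeholders, and it assumes you saved the `policyRule` and `parameters` objects from the JSON above into separate files):

```bash
# Create the policy definition
# (rules.json = the "policyRule" object, params.json = the "parameters" object)
az policy definition create \
  --name "k8s-https-only-ingress" \
  --display-name "Require HTTPS for Ingress resources" \
  --mode "Microsoft.Kubernetes.Data" \
  --rules rules.json \
  --params params.json

# Assign it at resource group scope, starting in audit mode
az policy assignment create \
  --name "https-only-ingress" \
  --policy "k8s-https-only-ingress" \
  --resource-group "my-aks-rg" \
  --params '{ "effect": { "value": "audit" }, "tlsOptional": { "value": false } }'
```

For the assignment to have any effect, the AKS cluster must have the Azure Policy add-on enabled (`az aks enable-addons --addons azure-policy`), which installs Gatekeeper and syncs assignments into the cluster.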

Conclusion

In this article, we discussed how to create a custom Azure Policy for Kubernetes. We defined the constraint logic in Rego, the policy language used by Open Policy Agent (OPA) Gatekeeper, wrapped it in a constraint template, and embedded that template in an Azure Policy definition applied to the cluster. By following these steps, you can create custom policies that enforce your own rules and effects on Kubernetes resources.

References