
Azure Container Registry: Geo-replication and webhook automation

Summary

Azure Container Registry (ACR) is more than just a Docker registry. With geo-replication you get minimal latency worldwide, and with webhooks you can automate deployments. Here is the advanced setup.

When should you use ACR vs Docker Hub?

Use ACR if:

  • ✅ You need a private registry in Azure
  • ✅ You want native integration with AKS, App Service, Container Apps
  • ✅ Compliance requires data in a specific region
  • ✅ You need geo-replication

Use Docker Hub if:

  • Public open source images
  • Personal projects without enterprise requirements

Create the ACR with the Premium SKU

# Variables
RG="my-rg"
ACR_NAME="myacr$(date +%s)"
LOCATION="westeurope"

# Create a Premium ACR (required for geo-replication)
az acr create \
  --resource-group $RG \
  --name $ACR_NAME \
  --sku Premium \
  --location $LOCATION \
  --admin-enabled false

Available SKUs:

  • Basic: €4.4/month - No geo-replication, limited webhooks
  • Standard: €22/month - Webhooks, better throughput
  • Premium: €44/month - Geo-replication, Content Trust, Private Link

Geo-replication

Replicate your registry to multiple regions to:

  • Reduce pull latency
  • Get high availability
  • Meet data residency requirements

# Replicate to East US
az acr replication create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --location eastus

# Replicate to Southeast Asia
az acr replication create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --location southeastasia

# List replicas
az acr replication list \
  --resource-group $RG \
  --registry $ACR_NAME \
  --output table

Now your image myacr.azurecr.io/app:v1 is available in all three regions with a single push.

Pushing images

# Log in to ACR
az acr login --name $ACR_NAME

# Tag the image
docker tag my-app:latest ${ACR_NAME}.azurecr.io/my-app:v1.0

# Push
docker push ${ACR_NAME}.azurecr.io/my-app:v1.0

# List repositories
az acr repository list --name $ACR_NAME --output table

# Show tags
az acr repository show-tags \
  --name $ACR_NAME \
  --repository my-app \
  --output table

ACR Tasks: build in Azure

Build images without a local Docker daemon:

# Build from a Dockerfile in a Git repo
az acr build \
  --resource-group $RG \
  --registry $ACR_NAME \
  --image my-app:{{.Run.ID}} \
  --image my-app:latest \
  https://github.com/myorg/myapp.git#main

# Build from a local directory
az acr build \
  --resource-group $RG \
  --registry $ACR_NAME \
  --image my-app:v1.1 \
  .

Webhooks for CI/CD

Trigger an automatic deployment whenever a new push arrives:

# Create the webhook
az acr webhook create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --name deployWebhook \
  --actions push \
  --uri https://my-function-app.azurewebsites.net/api/deploy \
  --scope my-app:*

Webhook payload:

{
  "id": "unique-id",
  "timestamp": "2025-01-22T10:00:00Z",
  "action": "push",
  "target": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "size": 1234,
    "digest": "sha256:abc123...",
    "repository": "my-app",
    "tag": "v1.2"
  },
  "request": {
    "id": "req-id",
    "host": "myacr.azurecr.io",
    "method": "PUT",
    "useragent": "docker/20.10.12"
  }
}
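
To verify the wiring without pushing a real image, ACR can send a test event to the webhook endpoint and show recent deliveries (both commands belong to the standard az acr webhook group):

# Send a test ping event to the webhook
az acr webhook ping \
  --registry $ACR_NAME \
  --name deployWebhook

# Review recent deliveries and their HTTP status codes
az acr webhook list-events \
  --registry $ACR_NAME \
  --name deployWebhook \
  --output table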

Azure Function for auto-deploy

import azure.functions as func
import json
import subprocess

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Parse webhook payload
    webhook_data = req.get_json()

    repository = webhook_data['target']['repository']
    tag = webhook_data['target']['tag']
    image = f"myacr.azurecr.io/{repository}:{tag}"

    # Trigger the deployment (example: kubectl)
    subprocess.run([
        'kubectl', 'set', 'image',
        'deployment/my-app',
        f'app={image}',
        '--record'
    ])

    return func.HttpResponse(f"Deployed {image}", status_code=200)

Security: Managed Identity

# Get the kubelet managed identity of the AKS cluster
AKS_PRINCIPAL_ID=$(az aks show \
  --resource-group $RG \
  --name my-aks \
  --query identityProfile.kubeletidentity.objectId -o tsv)

# Grant the AcrPull role
az role assignment create \
  --assignee $AKS_PRINCIPAL_ID \
  --role AcrPull \
  --scope $(az acr show --resource-group $RG --name $ACR_NAME --query id -o tsv)

Now AKS can pull images without passwords:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        image: myacr.azurecr.io/my-app:v1.0
        # No imagePullSecrets needed!

Content Trust (image signing)

# Enable Content Trust
az acr config content-trust update \
  --resource-group $RG \
  --registry $ACR_NAME \
  --status enabled

# Docker content trust
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://${ACR_NAME}.azurecr.io

# Signed push
docker push ${ACR_NAME}.azurecr.io/my-app:v1.0-signed
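
To confirm that the tag was actually signed, you can inspect its trust data from the Docker CLI (a quick check; docker trust inspect is part of the standard Docker client):

# Show signers and signed digests for the tag
docker trust inspect --pretty ${ACR_NAME}.azurecr.io/my-app:v1.0-signed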

Vulnerability scanning

# Enable Microsoft Defender for Container Registries
az security pricing create \
  --name ContainerRegistry \
  --tier Standard

# View vulnerabilities
az acr repository show \
  --name $ACR_NAME \
  --repository my-app \
  --query "metadata.vulnerabilities"

Retention policy (automatic cleanup)

# Keep only the last 30 days of untagged manifests
az acr config retention update \
  --registry $ACR_NAME \
  --status enabled \
  --days 30 \
  --type UntaggedManifests

# Manually delete the oldest tags
az acr repository show-tags \
  --name $ACR_NAME \
  --repository my-app \
  --orderby time_asc \
  --output tsv \
  | head -n 10 \
  | xargs -I {} az acr repository delete \
      --name $ACR_NAME \
      --image my-app:{} \
      --yes

Import images from Docker Hub

# Import a public image
az acr import \
  --name $ACR_NAME \
  --source docker.io/library/nginx:latest \
  --image nginx:latest

# Import from another ACR
az acr import \
  --name $ACR_NAME \
  --source otheracr.azurecr.io/app:v1 \
  --image app:v1 \
  --username <user> \
  --password <password>

Best practices

  • Tagging strategy: Use semver (v1.2.3) + latest + the commit SHA (see the sketch after this list)
  • Multi-arch images: Build for amd64 and arm64
  • Scan before deploy: Integrate vulnerability scanning into CI
  • Periodic cleanup: 30-90 day retention
  • Private endpoint: Do not expose ACR to the Internet
  • Geo-replication: At least 2 regions for production
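
A minimal sketch of the tagging strategy from the first bullet, assuming the image has just been built locally and the commit SHA is taken from git:

# Tag the same build three times: semver, latest and commit SHA
GIT_SHA=$(git rev-parse --short HEAD)

docker tag my-app:latest ${ACR_NAME}.azurecr.io/my-app:v1.2.3
docker tag my-app:latest ${ACR_NAME}.azurecr.io/my-app:latest
docker tag my-app:latest ${ACR_NAME}.azurecr.io/my-app:${GIT_SHA}

# One push per tag (shared layers are only uploaded once)
docker push ${ACR_NAME}.azurecr.io/my-app:v1.2.3
docker push ${ACR_NAME}.azurecr.io/my-app:latest
docker push ${ACR_NAME}.azurecr.io/my-app:${GIT_SHA}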

Costs

  • Storage: €0.08/GB/month
  • Geo-replication: €44/month per region
  • Build minutes: €0.0008/second

Example: ACR Premium (€44) + 50 GB of storage (~€4) + 2 replicas (2 × €44) = ~€140/month

Azure Private DNS Zones: name resolution inside VNets

Summary

Private DNS Zones let you use custom DNS names inside your VNets without exposing anything to the Internet. Essential for private architectures and hybrid cloud.

What is a Private DNS Zone?

It is a DNS service that only resolves inside linked VNets. Use cases:

  • Custom names for private VMs (db01.internal.company.com)
  • Private Endpoints for PaaS services (mystorageacct.privatelink.blob.core.windows.net)
  • Integration with on-premises DNS (conditional forwarding)
  • Split-horizon DNS (public vs private name)

Create a Private DNS Zone

# Variables
RG="my-rg"
ZONE_NAME="internal.company.com"
VNET_NAME="my-vnet"

# Create the Private DNS Zone
az network private-dns zone create \
  --resource-group $RG \
  --name $ZONE_NAME

# Link it to the VNet (Virtual Network Link)
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --name ${VNET_NAME}-link \
  --virtual-network $VNET_NAME \
  --registration-enabled false

Auto-registration

If --registration-enabled true, Azure automatically creates A/AAAA records when you deploy VMs into the VNet.

Add DNS records

# Record A (IPv4)
az network private-dns record-set a add-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name db01 \
  --ipv4-address 10.0.1.10

# Record CNAME
az network private-dns record-set cname set-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name www \
  --cname db01.internal.company.com

# TXT record (domain verification)
az network private-dns record-set txt add-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name _verification \
  --value "verification-token-12345"

Auto-registration of VMs

# Create a zone with auto-registration
az network private-dns zone create \
  --resource-group $RG \
  --name auto.internal.com

# Link with auto-registration enabled
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name auto.internal.com \
  --name auto-vnet-link \
  --virtual-network $VNET_NAME \
  --registration-enabled true

# Create a VM - it auto-registers
az vm create \
  --resource-group $RG \
  --name myvm01 \
  --vnet-name $VNET_NAME \
  --subnet default \
  --image Ubuntu2204 \
  --admin-username azureuser

The VM is automatically registered as myvm01.auto.internal.com pointing to its private IP.
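
A quick way to confirm the registration is to resolve the name from another VM in the linked VNet (the returned address will be the VM's private IP in your subnet):

# From any VM in the linked VNet
nslookup myvm01.auto.internal.com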

Private Endpoints with DNS

When you create a Private Endpoint for Azure Storage, SQL, etc., you need a Private DNS Zone for name resolution:

# Create the Storage Account
STORAGE_ACCOUNT="mystorageacct$(date +%s)"
az storage account create \
  --name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --sku Standard_LRS \
  --public-network-access Disabled

# Create the Private DNS Zone for Blob
az network private-dns zone create \
  --resource-group $RG \
  --name privatelink.blob.core.windows.net

# Link it to the VNet
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name privatelink.blob.core.windows.net \
  --name blob-dns-link \
  --virtual-network $VNET_NAME \
  --registration-enabled false

# Create the Private Endpoint
az network private-endpoint create \
  --resource-group $RG \
  --name ${STORAGE_ACCOUNT}-pe \
  --vnet-name $VNET_NAME \
  --subnet default \
  --private-connection-resource-id $(az storage account show --name $STORAGE_ACCOUNT --resource-group $RG --query id -o tsv) \
  --group-id blob \
  --connection-name blob-connection

# Create the DNS record automatically (DNS zone group)
az network private-endpoint dns-zone-group create \
  --resource-group $RG \
  --endpoint-name ${STORAGE_ACCOUNT}-pe \
  --name blob-dns-group \
  --private-dns-zone privatelink.blob.core.windows.net \
  --zone-name blob

Now, from inside the VNet:

nslookup mystorageacct.blob.core.windows.net
# Resolves to the private IP 10.0.1.5

DNS forwarding for hybrid cloud

For on-premises clients to resolve names from the Private DNS Zone:

graph LR
    A[On-Premises DNS] -->|Conditional Forwarding| B[Azure DNS Resolver]
    B --> C[Private DNS Zone]
    C --> D[mystorageacct.privatelink.blob.core.windows.net]

Step 1: Create a DNS Private Resolver

# Create a subnet for the resolver
az network vnet subnet create \
  --resource-group $RG \
  --vnet-name $VNET_NAME \
  --name dns-resolver-inbound \
  --address-prefixes 10.0.255.0/28

# Create the DNS Private Resolver
az dns-resolver create \
  --resource-group $RG \
  --name my-dns-resolver \
  --location westeurope \
  --id /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/$VNET_NAME

# Create the inbound endpoint
az dns-resolver inbound-endpoint create \
  --resource-group $RG \
  --dns-resolver-name my-dns-resolver \
  --name inbound-endpoint \
  --location westeurope \
  --ip-configurations '[{"subnet":{"id":"/subscriptions/{sub-id}/resourceGroups/'$RG'/providers/Microsoft.Network/virtualNetworks/'$VNET_NAME'/subnets/dns-resolver-inbound"},"privateIpAllocationMethod":"Dynamic"}]'

Step 2: Configure on-premises DNS

On your on-premises DNS (BIND, Windows DNS, etc.):

# Conditional Forwarder
Zone: privatelink.blob.core.windows.net
Forwarder: 10.0.255.4  # IP of the inbound endpoint
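
If the forwarder is in place, a client on-premises should resolve the private name through the inbound endpoint. A quick test with dig, reusing the storage account from the earlier example:

# Query the inbound endpoint directly from on-premises
dig @10.0.255.4 ${STORAGE_ACCOUNT}.privatelink.blob.core.windows.net +short
# Expected: the private endpoint IP, e.g. 10.0.1.5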

List records

# List all record sets
az network private-dns record-set list \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --output table

# Show a specific record
az network private-dns record-set a show \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --name db01

Troubleshooting DNS

# From a VM inside the VNet
nslookup db01.internal.company.com

# Show the NIC's DNS configuration
az network nic show \
  --resource-group $RG \
  --name myvm-nic \
  --query "dnsSettings"

# Test from the VM with dig
dig db01.internal.company.com

# Flush the DNS cache on a Linux VM
sudo systemd-resolve --flush-caches

Best practices

  • Naming convention: Use .internal, .private or .local for private zones
  • One zone per VNet: Avoid multiple zones with the same name
  • RBAC: Keep DNS permissions separate from network permissions
  • Monitoring: Enable diagnostic logs for auditing
  • Terraform: Manage DNS zones as code
  • Private Endpoint DNS: Use DNS Zone Groups for automatic configuration

Limitations

  • Maximum 25,000 records per zone
  • Maximum 1,000 VNet links per zone
  • No DNSSEC support
  • No zone transfers (AXFR/IXFR)

Costs

  • Hosted zone: €0.45/zone/month
  • Queries: First 1B free, then €0.36/million
  • VNet links: €0.09/link/month

In practice: 1 zone + 5 VNet links = ~€0.90/month

Application Insights: full instrumentation in 10 minutes

Summary

Application Insights gives you complete observability of your application: traces, metrics, logs and dependencies. Here is the minimal setup for .NET, Python and Node.js.

What is Application Insights?

Application Insights is Azure's native APM (Application Performance Monitoring). It automatically captures:

  • Requests: HTTP requests with duration and status code
  • Dependencies: Calls to databases, external APIs, Redis, etc.
  • Exceptions: Full stack traces
  • Custom events: Whatever you want to track
  • User telemetry: Sessions, page views, user flows

Create the resource

# Variables
RG="my-rg"
LOCATION="westeurope"
APPINSIGHTS_NAME="my-appinsights"

# Create the Log Analytics Workspace (required)
az monitor log-analytics workspace create \
  --resource-group $RG \
  --workspace-name my-workspace \
  --location $LOCATION

WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group $RG \
  --workspace-name my-workspace \
  --query id -o tsv)

# Create Application Insights
az monitor app-insights component create \
  --app $APPINSIGHTS_NAME \
  --location $LOCATION \
  --resource-group $RG \
  --workspace $WORKSPACE_ID

Get the connection string

# Connection string (the recommended method)
CONN_STRING=$(az monitor app-insights component show \
  --resource-group $RG \
  --app $APPINSIGHTS_NAME \
  --query connectionString -o tsv)

echo $CONN_STRING
# InstrumentationKey=abc-123;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/

Instrumentation by language

.NET 6+

# Install the NuGet package
dotnet add package Microsoft.ApplicationInsights.AspNetCore

Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Add Application Insights
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"];
});

var app = builder.Build();

appsettings.json:

{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=abc-123;..."
  }
}

Python (Flask/FastAPI)

# Install the SDK
pip install opencensus-ext-azure
pip install opencensus-ext-flask  # Or opencensus-ext-fastapi

from flask import Flask
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
from opencensus.trace.samplers import ProbabilitySampler
import logging

app = Flask(__name__)

# Middleware for auto-instrumentation (requests are exported as traces)
middleware = FlaskMiddleware(
    app,
    exporter=AzureExporter(connection_string='InstrumentationKey=abc-123;...'),
    sampler=ProbabilitySampler(rate=1.0)
)

# Logger
logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=abc-123;...'
))

@app.route('/')
def hello():
    logger.info('Home page accessed')
    return 'Hello World'

Node.js

# Install the SDK
npm install applicationinsights

const appInsights = require('applicationinsights');

appInsights.setup('InstrumentationKey=abc-123;...')
    .setAutoDependencyCorrelation(true)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true)
    .start();

// Your Express code
const express = require('express');
const app = express();

app.get('/', (req, res) => {
    appInsights.defaultClient.trackEvent({name: 'HomePage'});
    res.send('Hello World');
});

Custom telemetry

Custom tracking

// .NET
using Microsoft.ApplicationInsights;

private readonly TelemetryClient _telemetry;

public MyService(TelemetryClient telemetry)
{
    _telemetry = telemetry;
}

public void ProcessOrder(Order order)
{
    // Track an event
    _telemetry.TrackEvent("OrderProcessed", new Dictionary<string, string>
    {
        {"OrderId", order.Id},
        {"Amount", order.Total.ToString()}
    });

    // Track a metric
    _telemetry.TrackMetric("OrderValue", order.Total);

    // Track trace (log)
    _telemetry.TrackTrace($"Processing order {order.Id}", SeverityLevel.Information);
}
# Python
from opencensus.trace import tracer as tracer_module
from opencensus.ext.azure.trace_exporter import AzureExporter

tracer = tracer_module.Tracer(
    exporter=AzureExporter(connection_string='...'),
)

with tracer.span(name='ProcessOrder'):
    # Your logic here
    tracer.add_attribute_to_current_span('orderId', order_id)
    tracer.add_attribute_to_current_span('amount', amount)

Useful Log Analytics queries

// Slowest requests (P95)
requests
| where timestamp > ago(1h)
| summarize percentile(duration, 95) by name
| order by percentile_duration_95 desc

// Exceptions by type
exceptions
| where timestamp > ago(24h)
| summarize count() by type, outerMessage
| order by count_ desc

// Failing dependency calls
dependencies
| where success == false
| where timestamp > ago(1h)
| summarize count() by name, resultCode

// User journey (funnels)
customEvents
| where timestamp > ago(7d)
| where name in ('PageView', 'AddToCart', 'Checkout', 'Purchase')
| summarize count() by name

Automated alerts

# Alert when the error rate is > 5%
az monitor metrics alert create \
  --name high-error-rate \
  --resource-group $RG \
  --scopes /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Insights/components/$APPINSIGHTS_NAME \
  --condition "avg requests/failed > 5" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Insights/actionGroups/ops-team

Live Metrics Stream

For real-time debugging:

  1. Azure portal → Application Insights → Live Metrics
  2. Watch requests, dependencies and exceptions live
  3. Apply filters by server, cloud role, etc.

Sampling to reduce costs

// .NET - adaptive sampling (recommended)
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.EnableAdaptiveSampling = true;
    options.ConnectionString = connectionString;
});

// Fixed sampling (5% of the telemetry)
builder.Services.AddApplicationInsightsTelemetryProcessor<AdaptiveSamplingTelemetryProcessor>(p =>
{
    p.SamplingPercentage = 5;
});

Costs

Application Insights charges per GB ingested:

  • First 5 GB/month: free
  • Beyond 5 GB: ~€2/GB

With 10% sampling, an app with 1M requests/day produces roughly 15 GB/month; after the 5 GB free tier that is ~10 GB × €2 ≈ €20/month.

Distributed tracing

For microservices, Application Insights correlates telemetry automatically:

graph LR
    A[API Gateway] -->|operation_Id: abc123| B[Auth Service]
    A -->|operation_Id: abc123| C[Order Service]
    C -->|operation_Id: abc123| D[Payment API]

Query cross-service:

// Full trace of a single operation
union requests, dependencies
| where operation_Id == 'abc123'
| project timestamp, itemType, name, duration, success
| order by timestamp asc

Best practices

  • Do not log PII: Filter sensitive data before sending it
  • Use sampling in production: 10-20% is usually enough
  • Custom dimensions: Add tenant_id, user_role to segment telemetry
  • Dependency tracking: Verify that SQL, Redis and HTTP calls are captured
  • Availability tests: Configure pings every 5 minutes from multiple regions

Azure App Service Deployment Slots: zero-downtime deployments

Summary

Deployment Slots in Azure App Service let you deploy new versions of your application with no downtime. It is the simplest way to implement blue-green deployments in Azure.

What are Deployment Slots?

Deployment Slots are live environments within your App Service where you can deploy different versions of your application. Each slot:

  • Has its own URL
  • Can have independent configuration
  • Allows instant swaps between slots
  • Shares the same App Service plan

Typical use case

# Variables
RG="my-rg"
APP_NAME="my-webapp"
LOCATION="westeurope"

# Create the App Service Plan (Standard at minimum)
az appservice plan create \
  --name ${APP_NAME}-plan \
  --resource-group $RG \
  --location $LOCATION \
  --sku S1

# Create the App Service
az webapp create \
  --name $APP_NAME \
  --resource-group $RG \
  --plan ${APP_NAME}-plan

# Create the staging slot
az webapp deployment slot create \
  --name $APP_NAME \
  --resource-group $RG \
  --slot staging

Deployment workflow

1. Deploy to staging:

# Deploy the code to the staging slot
az webapp deployment source config-zip \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --src app.zip

2. Validate on staging:

Slot URL: https://{app-name}-staging.azurewebsites.net
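
Before swapping, a simple smoke test against the slot URL helps catch obvious failures (a sketch; /health is a hypothetical endpoint that your app would need to expose):

# Fail fast if staging does not answer with a 2xx
curl -sf https://${APP_NAME}-staging.azurewebsites.net/health \
  && echo "staging OK" \
  || echo "staging NOT ready"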

3. Swap to production:

# Direct swap staging -> production
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production

# Swap with preview (recommended)
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production \
  --action preview

# After validating, complete the swap
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production \
  --action swap

Swap process

During the swap, Azure follows this process:

  1. Applies the target slot's settings to the source slot
  2. Waits for every instance to restart and warm up
  3. If all instances are healthy, switches the routing
  4. The target slot (production) ends up running the new app with no downtime

Total time: 1-5 minutes depending on warmup. Production suffers NO downtime.

Sticky vs swappable configuration

Not all configuration is exchanged during a swap:

Sticky (stays with the slot, does not move with the code):

  • App settings marked as "Deployment slot setting"
  • Connection strings marked as "Deployment slot setting"
  • Custom domains
  • Non-public certificates and TLS/SSL settings
  • Scale settings
  • IP restrictions
  • Always On, diagnostic settings, CORS

Swappable (moves with the code):

  • General settings (framework version, 32/64-bit)
  • App settings not marked as slot settings
  • Handler mappings
  • Public certificates
  • WebJobs content
  • Hybrid connections
  • Virtual network integration

Configuring sticky settings:

From the Azure portal:

  1. Go to Configuration → Application settings on the slot
  2. Add or edit the app setting
  3. Check the Deployment slot setting checkbox
  4. Apply

# Create an app setting on the staging slot
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings DATABASE_URL="staging-connection-string"

# To make it sticky from the CLI, pass it with --slot-settings instead of --settings
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --slot-settings DATABASE_URL="staging-connection-string"

Auto-swap for CI/CD

# Configure auto-swap from staging to production
az webapp deployment slot auto-swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --auto-swap-slot production

With auto-swap enabled:

  1. A push to staging triggers an automatic deployment
  2. The slot is warmed up automatically
  3. It is swapped to production with no manual intervention

Customizing the warmup path:

# Configure a custom warmup URL
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_PATH="/health/ready"

# Accept only specific HTTP status codes during warmup
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_STATUSES="200,202"

Best practices

  • Use staging for testing: Always validate on staging before the swap
  • Configure health checks: Azure verifies the slot before swapping
  • Keep parity: Staging should mirror production (same configuration, a comparable test DB)
  • Fast rollback: If something fails, swap back immediately (see the example below)
  • Limit slots: At most 2-3 slots per app (staging, pre-production)
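
If something looks wrong after the swap, rolling back is simply another swap in the opposite direction, since the previous production code is now sitting in the staging slot:

# Roll back: swap the slots again
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production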

Cost savings

Slots share the App Service Plan's resources. You do not pay extra for a staging slot, but you need the Standard tier or higher.

Monitoring the swap

# List the deployment slots
az webapp deployment slot list \
  --resource-group $RG \
  --name $APP_NAME

# Tail the logs during the swap
az webapp log tail \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging

Azure Cost Management: plan your 2025 budget

Summary

January is the month to set up budgets and cost alerts in Azure. With Azure Cost Management you can define limits, receive notifications and avoid surprises on the bill. In this post you will see how to create budgets, configure automatic alerts and use Azure Advisor to optimize spending from the first day of the year.
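
As a starting point, a monthly budget at subscription scope can be created from the CLI (a minimal sketch; adjust the amount and dates to your own plan, and note that alert notifications are easier to wire up from the portal):

# Monthly cost budget of 500 € for the active subscription
az consumption budget create \
  --budget-name monthly-budget-2025 \
  --amount 500 \
  --category cost \
  --time-grain monthly \
  --start-date 2025-01-01 \
  --end-date 2025-12-31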

Develop my first policy for Kubernetes with minikube and gatekeeper

Now that we have our development environment, we can start developing our first policy for Kubernetes with minikube and gatekeeper.

First of all, we need a code editor to write our policy. I recommend Visual Studio Code, but you can use any other editor. There is a Visual Studio Code extension that helps you write policies for gatekeeper; you can install it from the marketplace: Open Policy Agent.

Once you have your editor ready, you can start writing your policy. In this example, we will create a policy that denies the creation of pods with the image nginx:latest.

For that we need two files:

  • constraint.yaml: This file defines the constraint that we want to apply.
  • constraint_template.yaml: This file defines the template that we will use to create the constraint.

Let's start with the constraint_template.yaml file:

constraint_template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenypodswithnginxlatest
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPodsWithNginxLatest
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenypodswithnginxlatest

        violation[{"msg": msg}] {
          input.review.object.spec.containers[_].image == "nginx:latest"
          msg := "Containers cannot use the nginx:latest image"
        }

Now, let's create the constraint.yaml file:

constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPodsWithNginxLatest
metadata:
  name: deny-pods-with-nginx-latest
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    msg: "Containers cannot use the nginx:latest image"

Now, we can apply the files to our cluster:

# Create the constraint template
kubectl apply -f constraint_template.yaml

# Create the constraint
kubectl apply -f constraint.yaml
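
Before testing, it is worth checking that Gatekeeper registered both objects (the template shows up as a CRD-backed resource, and the constraint is listed under its own kind):

# The template should be listed
kubectl get constrainttemplates

# And the constraint created from it
kubectl get k8sdenypodswithnginxlatest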

Now, we can test the constraint. Let's create a pod with the image nginx:latest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
EOF

We should see an error message like this:

Error from server (Forbidden): error when creating "STDIN": admission webhook "validation.gatekeeper.sh" denied the request: [k8sdenypodswithnginxlatest] Containers cannot use the nginx:latest image

Now, let's create a pod with a different image:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25.5
EOF

We should see a message like this:

pod/nginx-pod created

To clean up, delete the pod, the constraint and the constraint template:

# Delete the pod
kubectl delete pod nginx-pod
# Delete the constraint
kubectl delete -f constraint.yaml

# Delete the constraint template
kubectl delete -f constraint_template.yaml

And that's it! We have developed our first policy for Kubernetes with minikube and gatekeeper. Now you can start developing more complex policies and test them in your cluster.

Happy coding!

How to create a local environment to write policies for Kubernetes with minikube and gatekeeper

minikube in wsl2

Enable systemd in WSL2

sudo nano /etc/wsl.conf

Add the following:

[boot]
systemd=true

Restart WSL2 from the command line:

wsl --shutdown
wsl

Install Docker

Install Docker using the repository method.

Minikube

Install minikube

# Download the latest Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Make it executable
chmod +x ./minikube

# Move it to your user's executable PATH
sudo mv ./minikube /usr/local/bin/

# Set the default driver to Docker
minikube config set driver docker

Test minikube

# Enable completion
source <(minikube completion bash)
# Start minikube
minikube start
# Check the status
minikube status
# set context
kubectl config use-context minikube
# get pods
kubectl get pods --all-namespaces

Install OPA Gatekeeper

# Install OPA Gatekeeper
# check version in https://open-policy-agent.github.io/gatekeeper/website/docs/install#deploying-a-release-using-prebuilt-image
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/deploy/gatekeeper.yaml

# wait and check the status
sleep 60
kubectl get pods -n gatekeeper-system

Test constraints

First, we need to create a constraint template and a constraint.

# Create a constraint template
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/templates/k8srequiredlabels_template.yaml

# Create a constraint
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/constraints/k8srequiredlabels_constraint.yaml

Now, we can test the constraint.

# Create a namespace without the required label
kubectl create namespace petete

We should see an error message like this:

Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeeper"}

# Create a namespace with the required label
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: petete
  labels:
    gatekeeper: "true"
EOF
kubectl get namespaces petete

We should see a message like this:

NAME     STATUS   AGE
petete   Active   3s

Conclusion

We have created a local environment to write policies for Kubernetes with minikube and gatekeeper. We have tested the environment with a simple constraint. Now we can write our own policies and test them in our local environment.

Trigger an on-demand Azure Policy compliance evaluation scan

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to trigger a scan with Azure Policy.

What is a scan in Azure Policy

A scan in Azure Policy is a process that evaluates your resources against a set of policies to determine if they are compliant. When you trigger a scan, Azure Policy evaluates your resources and generates a compliance report that shows the results of the evaluation. The compliance report includes information about the policies that were evaluated, the resources that were scanned, and the compliance status of each resource.

You can trigger a scan in Azure Policy using the Azure CLI, PowerShell, or the Azure portal. When you trigger a scan, you can specify the scope of the scan, the policies to evaluate, and other parameters that control the behavior of the scan.

Trigger a scan with the Azure CLI

To trigger a scan with the Azure CLI, use the az policy state trigger-scan command. This command triggers a policy compliance evaluation for a scope.

How to trigger a scan with the Azure CLI for the active subscription:

az policy state trigger-scan 

How to trigger a scan with the Azure CLI for a specific resource group:

az policy state trigger-scan --resource-group myResourceGroup
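
Once the evaluation finishes, you can review the resulting compliance state for the same scope, for example with a summary of the latest results:

az policy state summarize --resource-group myResourceGroup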

Trigger a scan with PowerShell

To trigger a scan with PowerShell, you can use the Start-AzPolicyComplianceScan cmdlet. This cmdlet triggers a policy compliance evaluation for a scope.

How to trigger a scan with PowerShell for the active subscription:

# Run the evaluation synchronously
Start-AzPolicyComplianceScan

# Or run it as a background job
$job = Start-AzPolicyComplianceScan -AsJob

How to trigger a scan with PowerShell for a specific resource group:

Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG'

Conclusion

In this article, we discussed how to trigger a scan with Azure Policy. We covered how to trigger a scan using the Azure CLI and PowerShell. By triggering a scan, you can evaluate your resources against a set of policies to determine if they are compliant. This can help you ensure that your resources are compliant with your organization's standards and best practices.

Custom Azure Policy for Kubernetes

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to create a custom Azure Policy for Kubernetes.

How Azure Policy works in Kubernetes

Azure Policy for Kubernetes is an extension of Azure Policy that allows you to enforce policies on your Kubernetes clusters. You can use Azure Policy to define policies that apply to your Kubernetes resources, such as pods, deployments, and services. These policies can help you ensure that your Kubernetes clusters are compliant with your organization's standards and best practices.

Azure Policy for Kubernetes uses Gatekeeper, an open-source policy controller for Kubernetes, to enforce policies on your clusters. Gatekeeper uses the Open Policy Agent (OPA) policy language to define policies and evaluate them against your Kubernetes resources. You can use Gatekeeper to create custom policies that enforce specific rules and effects on your clusters.

graph TD
    A[Azure Policy] -->|Enforce policies| B["add-on azure-policy(Gatekeeper)"]
    B -->|Evaluate policies| C[Kubernetes resources]

Azure Policy for Kubernetes supports the following cluster environments:

  • Azure Kubernetes Service (AKS), through Azure Policy's Add-on for AKS
  • Azure Arc enabled Kubernetes, through Azure Policy's Extension for Arc

Prepare your environment

Before you can create custom Azure Policy for Kubernetes, you need to set up your environment. You will need an Azure Kubernetes Service (AKS) cluster with the Azure Policy add-on enabled. You will also need the Azure CLI and the Azure Policy extension for Visual Studio Code.

To set up your environment, follow these steps:

  1. Create a resource group

    az group create --name myResourceGroup --location spaincentral
    
  2. Create an Azure Kubernetes Service (AKS) cluster with default settings and one node:

    az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
    
  3. Enable Azure Policies for the cluster:

    az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons azure-policy
    
  4. Check the status of the add-on:

    az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles.azurepolicy.enabled
    
  5. Check the status of gatekeeper:

    # Install kubectl and kubelogin
    az aks install-cli --install-location .local/bin/kubectl --kubelogin-install-location .local/bin/kubelogin
    # Get the credentials for the AKS cluster
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing
    # azure-policy pod is installed in kube-system namespace
    kubectl get pods -n kube-system
    # gatekeeper pod is installed in gatekeeper-system namespace
    kubectl get pods -n gatekeeper-system
    
  6. Install vscode and the Azure Policy extension

    code --install-extension ms-azuretools.vscode-azurepolicy
    

Once you have set up your environment, you can create custom Azure Policy for Kubernetes.

How to create a custom Azure Policy for Kubernetes

To create a custom Azure Policy for Kubernetes, you need to define a policy in the Open Policy Agent (OPA) policy language and apply it to your Kubernetes cluster. You can define policies that enforce specific rules and effects on your Kubernetes resources, such as pods, deployments, and services.

Info

It's recommended to review Constraint Templates in How to use Gatekeeper.

To create a custom Azure Policy for Kubernetes, follow these steps:

  1. Define a constraint template for the policy, I will use an existing constraint template from the Gatekeeper library that requires Ingress resources to be HTTPS only:

    gatekeeper-library/library/general/httpsonly/template.yaml
    apiVersion: templates.gatekeeper.sh/v1
    kind: ConstraintTemplate
    metadata:
      name: k8shttpsonly
      annotations:
        metadata.gatekeeper.sh/title: "HTTPS Only"
        metadata.gatekeeper.sh/version: 1.0.2
        description: >-
          Requires Ingress resources to be HTTPS only.  Ingress resources must
          include the `kubernetes.io/ingress.allow-http` annotation, set to `false`.
          By default a valid TLS {} configuration is required, this can be made
          optional by setting the `tlsOptional` parameter to `true`.

          https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
    spec:
      crd:
        spec:
          names:
            kind: K8sHttpsOnly
          validation:
            # Schema for the `parameters` field
            openAPIV3Schema:
              type: object
              description: >-
                Requires Ingress resources to be HTTPS only.  Ingress resources must
                include the `kubernetes.io/ingress.allow-http` annotation, set to
                `false`. By default a valid TLS {} configuration is required, this
                can be made optional by setting the `tlsOptional` parameter to
                `true`.
              properties:
                tlsOptional:
                  type: boolean
                  description: "When set to `true` the TLS {} is optional, defaults
                    to false."
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8shttpsonly

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not https_complete(ingress)
              not tls_is_optional
              msg := sprintf("Ingress should be https. tls configuration and allow-http=false annotation are required for %v", [ingress.metadata.name])
            }

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not annotation_complete(ingress)
              tls_is_optional
              msg := sprintf("Ingress should be https. The allow-http=false annotation is required for %v", [ingress.metadata.name])
            }

            https_complete(ingress) = true {
              ingress.spec["tls"]
              count(ingress.spec.tls) > 0
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            annotation_complete(ingress) = true {
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            tls_is_optional {
              parameters := object.get(input, "parameters", {})
              object.get(parameters, "tlsOptional", false) == true
            }
    

This constraint requires Ingress resources to be HTTPS only.

  2. Create an Azure Policy for this constraint template

    1. Open the restriction template created earlier in Visual Studio Code.
    2. Click on Azure Policy icon in the Activity Bar.
    3. Click on View > Command Palette.
    4. Search for the command "Azure Policy for Kubernetes: Create Policy Definition from Constraint Template or Mutation" and select base64; this command creates a policy definition from the constraint template.
      Untitled.json
      {
      "properties": {
          "policyType": "Custom",
          "mode": "Microsoft.Kubernetes.Data",
          "displayName": "/* EDIT HERE */",
          "description": "/* EDIT HERE */",
          "policyRule": {
          "if": {
              "field": "type",
              "in": [
              "Microsoft.ContainerService/managedClusters"
              ]
          },
          "then": {
              "effect": "[parameters('effect')]",
              "details": {
              "templateInfo": {
                  "sourceType": "Base64Encoded",
                  "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgI
CAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
              },
              "apiGroups": [
                  "/* EDIT HERE */"
              ],
              "kinds": [
                  "/* EDIT HERE */"
              ],
              "namespaces": "[parameters('namespaces')]",
              "excludedNamespaces": "[parameters('excludedNamespaces')]",
              "labelSelector": "[parameters('labelSelector')]",
              "values": {
                  "tlsOptional": "[parameters('tlsOptional')]"
              }
              }
          }
          },
          "parameters": {
          "effect": {
              "type": "String",
              "metadata": {
              "displayName": "Effect",
              "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
              },
              "allowedValues": [
              "audit",
              "deny",
              "disabled"
              ],
              "defaultValue": "audit"
          },
          "excludedNamespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace exclusions",
              "description": "List of Kubernetes namespaces to exclude from policy evaluation."
              },
              "defaultValue": [
              "kube-system",
              "gatekeeper-system",
              "azure-arc"
              ]
          },
          "namespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace inclusions",
              "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
              },
              "defaultValue": []
          },
          "labelSelector": {
              "type": "Object",
              "metadata": {
              "displayName": "Kubernetes label selector",
              "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
              },
              "defaultValue": {},
              "schema": {
              "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
              "type": "object",
              "properties": {
                  "matchLabels": {
                  "description": "matchLabels is a map of {key,value} pairs.",
                  "type": "object",
                  "additionalProperties": {
                      "type": "string"
                  },
                  "minProperties": 1
                  },
                  "matchExpressions": {
                  "description": "matchExpressions is a list of values, a key, and an operator.",
                  "type": "array",
                  "items": {
                      "type": "object",
                      "properties": {
                      "key": {
                          "description": "key is the label key that the selector applies to.",
                          "type": "string"
                      },
                      "operator": {
                          "description": "operator represents a key's relationship to a set of values.",
                          "type": "string",
                          "enum": [
                          "In",
                          "NotIn",
                          "Exists",
                          "DoesNotExist"
                          ]
                      },
                      "values": {
                          "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                          "type": "array",
                          "items": {
                          "type": "string"
                          }
                      }
                      },
                      "required": [
                      "key",
                      "operator"
                      ],
                      "additionalProperties": false
                  },
                  "minItems": 1
                  }
              },
              "additionalProperties": false
              }
          },
          "tlsOptional": {
              "type": "Boolean",
              "metadata": {
              "displayName": "/* EDIT HERE */",
              "description": "/* EDIT HERE */"
              }
          }
          }
      }
      }
      
    5. Fill in the fields marked "/* EDIT HERE */" in the policy definition JSON file with the appropriate values, such as the display name, description, API groups, and kinds. For example, in this case you must configure apiGroups: ["extensions", "networking.k8s.io"] and kinds: ["Ingress"].
    6. Save the policy definition JSON file.

This is the complete policy:

json title="https-only.json" { "properties": { "policyType": "Custom", "mode": "Microsoft.Kubernetes.Data", "displayName": "Require HTTPS for Ingress resources", "description": "This policy requires Ingress resources to be HTTPS only. Ingress resources must include the `kubernetes.io/ingress.allow-http` annotation, set to `false`. By default a valid TLS configuration is required, this can be made optional by setting the `tlsOptional` parameter to `true`.", "policyRule": { "if": { "field": "type", "in": [ "Microsoft.ContainerService/managedClusters" ] }, "then": { "effect": "[parameters('effect')]", "details": { "templateInfo": { "sourceType": "Base64Encoded", "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3M
gc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgICAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ==" }, "apiGroups": [ "extensions", "networking.k8s.io" ], "kinds": [ "Ingress" ], "namespaces": "[parameters('namespaces')]", "excludedNamespaces": "[parameters('excludedNamespaces')]", "labelSelector": "[parameters('labelSelector')]", "values": { "tlsOptional": "[parameters('tlsOptional')]" } } } }, "parameters": { "effect": { "type": "String", "metadata": { "displayName": "Effect", "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy." }, "allowedValues": [ "audit", "deny", "disabled" ], "defaultValue": "audit" }, "excludedNamespaces": { "type": "Array", "metadata": { "displayName": "Namespace exclusions", "description": "List of Kubernetes namespaces to exclude from policy evaluation." }, "defaultValue": [ "kube-system", "gatekeeper-system", "azure-arc" ] }, "namespaces": { "type": "Array", "metadata": { "displayName": "Namespace inclusions", "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces." }, "defaultValue": [] }, "labelSelector": { "type": "Object", "metadata": { "displayName": "Kubernetes label selector", "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources." }, "defaultValue": {}, "schema": { "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.", "type": "object", "properties": { "matchLabels": { "description": "matchLabels is a map of {key,value} pairs.", "type": "object", "additionalProperties": { "type": "string" }, "minProperties": 1 }, "matchExpressions": { "description": "matchExpressions is a list of values, a key, and an operator.", "type": "array", "items": { "type": "object", "properties": { "key": { "description": "key is the label key that the selector applies to.", "type": "string" }, "operator": { "description": "operator represents a key's relationship to a set of values.", "type": "string", "enum": [ "In", "NotIn", "Exists", "DoesNotExist" ] }, "values": { "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty.", "type": "array", "items": { "type": "string" } } }, "required": [ "key", "operator" ], "additionalProperties": false }, "minItems": 1 } }, "additionalProperties": false } }, "tlsOptional": { "type": "Boolean", "metadata": { "displayName": "TLS Optional", "description": "Set to true to make TLS optional" } } } } }

Now you have a custom Azure Policy for Kubernetes that enforces the HTTPS-only constraint. To apply it to your cluster, create a policy definition from this JSON and assign it at the management group, subscription or resource group where the AKS cluster is located, for example with the Azure CLI commands sketched below.
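
A minimal sketch with the Azure CLI, assuming the policyRule and parameters sections of the JSON above are saved as rules.json and params.json, and that the cluster lives in the resource group my-aks-rg (all names here are placeholders):

# Create the custom policy definition (Microsoft.Kubernetes.Data is the mode used for AKS policies)
az policy definition create \
  --name require-https-ingress \
  --display-name "Require HTTPS for Ingress resources" \
  --mode Microsoft.Kubernetes.Data \
  --rules rules.json \
  --params params.json

# Assign it to the resource group that contains the AKS cluster
az policy assignment create \
  --name require-https-ingress \
  --policy require-https-ingress \
  --resource-group my-aks-rg \
  --params '{ "effect": { "value": "deny" }, "tlsOptional": { "value": false } }'

The assignment only takes effect on clusters that have the Azure Policy add-on for AKS enabled.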

Conclusion

In this article, we discussed how to create a custom Azure Policy for Kubernetes: we defined the rule in the Open Policy Agent (OPA) policy language, wrapped it in a Gatekeeper constraint template, and turned that template into an Azure Policy definition you can apply to your cluster. By following these steps, you can create custom policies that enforce specific rules and effects on your Kubernetes resources.

Do you need to check whether a private endpoint is connected to an external private link service in Azure, or just don't know how to do it?

Check this blog post to learn how to do it: Find cross-tenant private endpoint connections in Azure

This is a copy of the script used in the blog post in case it disappears:

scan-private-endpoints-with-manual-connections.ps1
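# Scans all subscriptions visible to the signed-in account for private endpoints that use manual
# private link service connections and reports whether each target sits in another (external) tenant.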
$ErrorActionPreference = "Stop"

class SubscriptionInformation {
    [string] $SubscriptionID
    [string] $Name
    [string] $TenantID
}

class TenantInformation {
    [string] $TenantID
    [string] $DisplayName
    [string] $DomainName
}

class PrivateEndpointData {
    [string] $ID
    [string] $Name
    [string] $Type
    [string] $Location
    [string] $ResourceGroup
    [string] $SubscriptionName
    [string] $SubscriptionID
    [string] $TenantID
    [string] $TenantDisplayName
    [string] $TenantDomainName
    [string] $TargetResourceId
    [string] $TargetSubscriptionName
    [string] $TargetSubscriptionID
    [string] $TargetTenantID
    [string] $TargetTenantDisplayName
    [string] $TargetTenantDomainName
    [string] $Description
    [string] $Status
    [string] $External
}

$installedModule = Get-Module -Name "Az.ResourceGraph" -ListAvailable
if ($null -eq $installedModule) {
    Install-Module "Az.ResourceGraph" -Scope CurrentUser
}
else {
    Import-Module "Az.ResourceGraph"
}

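# Resource Graph query: list every subscription (ID, name, tenant) visible to the signed-in account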
$kqlQuery = @"
resourcecontainers | where type == 'microsoft.resources/subscriptions'
| project  subscriptionId, name, tenantId
"@

$batchSize = 1000
$skipResult = 0

$subscriptions = @{}

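# Page through the results; Search-AzGraph returns at most $batchSize rows per call and exposes a SkipToken for the next page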
while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {
        $s = [SubscriptionInformation]::new()
        $s.SubscriptionID = $row.subscriptionId
        $s.Name = $row.name
        $s.TenantID = $row.tenantId

        $subscriptions.Add($s.SubscriptionID, $s) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

"Found $($subscriptions.Count) subscriptions"

function Get-SubscriptionInformation($SubscriptionID) {
    if ($subscriptions.ContainsKey($SubscriptionID)) {
        return $subscriptions[$SubscriptionID]
    } 

    Write-Warning "Using fallback subscription information for '$SubscriptionID'"
    $s = [SubscriptionInformation]::new()
    $s.SubscriptionID = $SubscriptionID
    $s.Name = "<unknown>"
    $s.TenantID = [Guid]::Empty.Guid
    return $s
}

$tenantCache = @{}
$subscriptionToTenantCache = @{}

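# Resolve a tenant ID to its display name and default domain via the Microsoft Graph tenantRelationships API; results are cached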
function Get-TenantInformation($TenantID) {
    $domain = $null
    if ($tenantCache.ContainsKey($TenantID)) {
        $domain = $tenantCache[$TenantID]
    } 
    else {
        try {
            $tenantResponse = Invoke-AzRestMethod -Uri "https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='$TenantID')"
            $tenantInformation = ($tenantResponse.Content | ConvertFrom-Json)

            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = $tenantInformation.displayName
            $ti.DomainName = $tenantInformation.defaultDomainName

            $domain = $ti
        }
        catch {
            Write-Warning "Failed to get domain information for '$TenantID'"
        }

        if ([string]::IsNullOrEmpty($domain)) {
            Write-Warning "Using fallback domain information for '$TenantID'"
            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = "<unknown>"
            $ti.DomainName = "<unknown>"

            $domain = $ti
        }

        $tenantCache.Add($TenantID, $domain) | Out-Null
    }

    return $domain
}

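# Find the tenant that owns a subscription: check the cache and the subscription list from Resource Graph first,
# then fall back to parsing the WWW-Authenticate challenge that ARM returns for that subscription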
function Get-TenantFromSubscription($SubscriptionID) {
    $tenant = $null
    if ($subscriptionToTenantCache.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptionToTenantCache[$SubscriptionID]
    }
    elseif ($subscriptions.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptions[$SubscriptionID].TenantID
        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }
    else {
        try {

            $subscriptionResponse = Invoke-AzRestMethod -Path "/subscriptions/$($SubscriptionID)?api-version=2022-12-01"
            $startIndex = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.IndexOf("https://login.windows.net/")
            $tenantID = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.Substring($startIndex + "https://login.windows.net/".Length, 36)

            $tenant = $tenantID
        }
        catch {
            Write-Warning "Failed to get tenant from subscription '$SubscriptionID'"
        }

        if ([string]::IsNullOrEmpty($tenant)) {
            Write-Warning "Using fallback tenant information for '$SubscriptionID'"

            $tenant = [Guid]::Empty.Guid
        }

        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }

    return $tenant
}

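# Resource Graph query: find private endpoints with manual private link service connections and extract
# the target private link service ID, connection state and description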
$kqlQuery = @"
resources
| where type == "microsoft.network/privateendpoints"
| where isnotnull(properties) and properties contains "manualPrivateLinkServiceConnections"
| where array_length(properties.manualPrivateLinkServiceConnections) > 0
| mv-expand properties.manualPrivateLinkServiceConnections
| extend status = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.status
| extend description = coalesce(properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.description, "")
| extend privateLinkServiceId = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceId
| extend privateLinkServiceSubscriptionId = tostring(split(privateLinkServiceId, "/")[2])
| project id, name, location, type, resourceGroup, subscriptionId, tenantId, privateLinkServiceId, privateLinkServiceSubscriptionId, status, description
"@

$batchSize = 1000
$skipResult = 0

$privateEndpoints = New-Object System.Collections.ArrayList

while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {

        $si1 = Get-SubscriptionInformation -SubscriptionID $row.SubscriptionID
        $ti1 = Get-TenantInformation -TenantID $row.TenantID

        $si2 = Get-SubscriptionInformation -SubscriptionID $row.PrivateLinkServiceSubscriptionId
        $tenant2 = Get-TenantFromSubscription -SubscriptionID $si2.SubscriptionID
        $ti2 = Get-TenantInformation -TenantID $tenant2

        $peData = [PrivateEndpointData]::new()
        $peData.ID = $row.ID
        $peData.Name = $row.Name
        $peData.Type = $row.Type
        $peData.Location = $row.Location
        $peData.ResourceGroup = $row.ResourceGroup

        $peData.SubscriptionName = $si1.Name
        $peData.SubscriptionID = $si1.SubscriptionID
        $peData.TenantID = $ti1.TenantID
        $peData.TenantDisplayName = $ti1.DisplayName
        $peData.TenantDomainName = $ti1.DomainName

        $peData.TargetResourceId = $row.PrivateLinkServiceId
        $peData.TargetSubscriptionName = $si2.Name
        $peData.TargetSubscriptionID = $si2.SubscriptionID
        $peData.TargetTenantID = $ti2.TenantID
        $peData.TargetTenantDisplayName = $ti2.DisplayName
        $peData.TargetTenantDomainName = $ti2.DomainName

        $peData.Description = $row.Description
        $peData.Status = $row.Status

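        # Classify the connection: a target in the MSAzureCloud tenant is managed by Microsoft,
        # and a target subscription that is not visible from this tenant is flagged as external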
        if ($ti2.DomainName -eq "MSAzureCloud.onmicrosoft.com") {
            $peData.External = "Managed by Microsoft"
        }
        elseif ($si2.TenantID -eq [Guid]::Empty.Guid) {
            $peData.External = "Yes"
        }
        else {
            $peData.External = "No"
        }

        $privateEndpoints.Add($peData) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

$privateEndpoints | Format-Table
$privateEndpoints | Export-CSV "private-endpoints.csv" -Delimiter ';' -Force

"Found $($privateEndpoints.Count) private endpoints with manual connections"

if ($privateEndpoints.Count -ne 0) {
    Start-Process "private-endpoints.csv"
}
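
To run the script, sign in first with an account that can read the subscriptions you want to scan (this assumes the Az PowerShell module is installed and the script is saved under the file name shown above):

Connect-AzAccount
.\scan-private-endpoints-with-manual-connections.ps1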

Conclusion

Now you know how to check if a private endpoint is connected to an external private link service in Azure.

That's all folks!