
Azure Logic Apps: No-code automation for integrations

Summary

Logic Apps lets you build integration workflows without writing code. Ideal for automating processes that span multiple services: receive an email → save the attachment to Blob → process it with a Function → send a notification.

When should you use Logic Apps?

  • Integrations between SaaS products (Office 365, Salesforce, Dynamics)
  • Automated business processes
  • Event-driven workflows
  • B2B with EDI/AS2/X12

Don't use Logic Apps if:

  • You need complex logic (use Functions)
  • Performance is critical, <100 ms (use Functions)
  • You do large batch processing (use Data Factory)

Create a Logic App

# Variables
RG="my-rg"
LOGIC_APP="email-processor"
LOCATION="westeurope"

# Create the Logic App (Consumption plan)
# Note: az logic workflow create expects a workflow definition file
# (for example, the JSON shown below saved as workflow.json)
az logic workflow create \
  --resource-group $RG \
  --location $LOCATION \
  --name $LOGIC_APP \
  --definition workflow.json

Example: Process emails with attachments

Workflow:
  1. Trigger: when an email arrives in Outlook
  2. Action: if it has a PDF attachment
  3. Action: save it to Blob Storage
  4. Action: call a Function for OCR
  5. Action: save metadata in Cosmos DB
  6. Action: send a notification to Teams

{
  "definition": {
    "$schema": "https://schema.management.azure.com/schemas/2016-06-01/Microsoft.Logic.json",
    "triggers": {
      "When_a_new_email_arrives": {
        "type": "ApiConnection",
        "inputs": {
          "host": {"connection": {"name": "@parameters('$connections')['office365']['connectionId']"}},
          "method": "get",
          "path": "/Mail/OnNewEmail"
        }
      }
    },
    "actions": {
      "Condition_has_PDF": {
        "type": "If",
        "expression": {
          "@contains(triggerBody()?['Attachments'], '.pdf')"
        },
        "actions": {
          "Upload_to_Blob": {
            "type": "ApiConnection",
            "inputs": {
              "host": {"connection": {"name": "@parameters('$connections')['azureblob']['connectionId']"}},
              "method": "post",
              "path": "/datasets/default/files",
              "queries": {
                "folderPath": "/invoices",
                "name": "@triggerBody()?['Attachments'][0]['Name']"
              },
              "body": "@base64ToBinary(triggerBody()?['Attachments'][0]['ContentBytes'])"
            }
          }
        }
      }
    }
  }
}

Most useful connectors

Azure: Service Bus, Event Grid, Storage, Cosmos DB, Key Vault, Functions

Microsoft 365: Outlook, Teams, SharePoint, OneDrive, Planner

Third-party: Salesforce, SAP, Twitter, Slack, Twilio, SendGrid

On-premises: SQL Server, File System, Oracle (requires the on-premises Data Gateway)

Standard vs Consumption

Feature              Consumption      Standard
Pricing              Per execution    Per vCPU/hour
Networking           Public           VNet integration
Stateful/Stateless   Stateful         Both
Local dev            No               Yes (VS Code)
CI/CD                Hard             Easy
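
For the Standard tier in the table above there is a separate command group; a minimal sketch, assuming placeholder names and that a storage account and an App Service-style plan already exist (check az logicapp create --help for your CLI version):

# Create a Standard (single-tenant) Logic App - names are illustrative
az logicapp create \
  --resource-group $RG \
  --name email-processor-std \
  --storage-account mylogicappstorage \
  --plan my-workflow-plan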

Best practices

  • Idempotency: use ClientTrackingId for deduplication
  • Error handling: configure retry policies and runAfter dependencies
  • Monitoring: enable diagnostic logs (see the sketch below)
  • Variables: use variables for reusable values
  • Inline functions: use expressions for simple transformations
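
A minimal sketch for the monitoring item above, assuming the Logic App from this post and a Log Analytics workspace named my-workspace (a hypothetical name); the log category shown is the one used by Consumption logic apps:

# Send Logic App runtime logs and metrics to Log Analytics (names are illustrative)
LOGIC_APP_ID=$(az logic workflow show --resource-group $RG --name $LOGIC_APP --query id -o tsv)
WORKSPACE_ID=$(az monitor log-analytics workspace show --resource-group $RG --workspace-name my-workspace --query id -o tsv)

az monitor diagnostic-settings create \
  --name logicapp-diagnostics \
  --resource $LOGIC_APP_ID \
  --workspace $WORKSPACE_ID \
  --logs '[{"category": "WorkflowRuntime", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'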


Test the proxy configuration in an AKS cluster

Variables

export RESOURCE_GROUP=<your-resource-group>
export AKS_CLUSTER_NAME=<your-aks-cluster-name>
export MC_RESOURCE_GROUP_NAME=<your-mc-resource-group-name>
export VMSS_NAME=<your-vmss-name>
export HTTP_PROXYCONFIGURED="http://<your-http-proxy>:8080/"
export HTTPS_PROXYCONFIGURED="https://<your-https-proxy>:8080/"

Get the VMSS instance IDs

# To get the instance IDs of all the instances in the VMSS, use the following command:
az vmss list-instances --resource-group $MC_RESOURCE_GROUP_NAME --name $VMSS_NAME --output table --query "[].instanceId"

# For one instance, you can use the following command to get the instance ID:
VMSS_INSTANCE_IDS=$(az vmss list-instances --resource-group $MC_RESOURCE_GROUP_NAME --name $VMSS_NAME --output table --query "[].instanceId" | tail -1)

Use an instance ID to test connectivity from the HTTP proxy server to HTTPS

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "curl --proxy $HTTP_PROXYCONFIGURED --head https://mcr.microsoft.com"

Use an instance ID to test connectivity from the HTTPS proxy server to HTTPS

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "curl --proxy $HTTPS_PROXYCONFIGURED --head https://mcr.microsoft.com"

Use an instance ID to test DNS functionality

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "dig mcr.microsoft.com 443"

Check waagent logs

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "cat /var/log/waagent.log"

Check waagent status

az vmss run-command invoke --resource-group $MC_RESOURCE_GROUP_NAME \
    --name $VMSS_NAME \
    --command-id RunShellScript \
    --instance-id $VMSS_INSTANCE_IDS \
    --output json \
    --scripts "systemctl status waagent"

Update the proxy configuration

az aks update --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --http-proxy-config aks-proxy-config.json
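
A minimal sketch of what aks-proxy-config.json might contain, reusing the proxy variables defined above (the noProxy entries are illustrative; trustedCa is an optional base64-encoded CA certificate):

cat > aks-proxy-config.json <<EOF
{
  "httpProxy": "$HTTP_PROXYCONFIGURED",
  "httpsProxy": "$HTTPS_PROXYCONFIGURED",
  "noProxy": ["localhost", "127.0.0.1", "10.0.0.0/8"],
  "trustedCa": ""
}
EOF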

AKS: Autoscaling with KEDA for event-driven workloads

Summary

KEDA (Kubernetes Event-Driven Autoscaling) scales your pods based on external events: Service Bus queues, Prometheus metrics, HTTP requests, etc. Much more flexible than the standard HPA.

What is KEDA?

KEDA extends Kubernetes with dedicated scalers for, among others:

  • Azure Service Bus queues/topics
  • Azure Storage queues
  • Kafka topics
  • Redis lists
  • Prometheus metrics
  • HTTP traffic
  • Cron schedules

Installation on AKS

# Enable the managed KEDA add-on
az aks update \
  --resource-group $RG \
  --name my-aks \
  --enable-keda

# Check the KEDA pods
kubectl get pods -n kube-system | grep keda

Example 1: Scaling with Azure Service Bus

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: servicebus-scaler
spec:
  scaleTargetRef:
    name: message-processor  # Deployment to scale
  minReplicaCount: 0  # Scale to zero when there are no messages
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: myqueue
      namespace: myservicebus
      messageCount: "5"  # 1 pod per 5 messages
    authenticationRef:
      name: servicebus-auth
---
apiVersion: v1
kind: Secret
metadata:
  name: servicebus-connection
stringData:
  connection: "Endpoint=sb://myservicebus.servicebus.windows.net/;..."
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: servicebus-auth
spec:
  secretTargetRef:
  - parameter: connection
    name: servicebus-connection
    key: connection

Example 2: Scaling with Prometheus metrics

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaler
spec:
  scaleTargetRef:
    name: api-server
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus:9090
      metricName: http_requests_per_second
      threshold: '100'
      query: sum(rate(http_requests_total[2m]))

Example 3: Scaling with HTTP traffic

# Install the KEDA HTTP Add-on (it is not part of the managed KEDA add-on).
# A common route is the official Helm chart; chart and namespace names may vary by version.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install http-add-on kedacore/keda-add-ons-http --namespace keda --create-namespace

apiVersion: http.keda.sh/v1alpha1  # The HTTP add-on uses its own API group
kind: HTTPScaledObject
metadata:
  name: http-scaler
spec:
  scaleTargetRef:
    name: web-app
    kind: Deployment
  minReplicaCount: 1          # Field names can differ between add-on releases; check the CRD for your version
  maxReplicaCount: 50
  targetPendingRequests: 100  # Scale out when more than 100 requests are pending

Example 4: Cron-based scaling

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaler
spec:
  scaleTargetRef:
    name: batch-processor
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
  - type: cron
    metadata:
      timezone: Europe/Madrid
      start: 0 8 * * 1-5  # Monday to Friday at 08:00
      end: 0 18 * * 1-5   # Monday to Friday at 18:00
      desiredReplicas: "5"

Monitoring KEDA

# List scaled objects
kubectl get scaledobjects

# Detailed status
kubectl describe scaledobject servicebus-scaler

# KEDA operator logs
kubectl logs -n kube-system deployment/keda-operator

Best practices

  • Scale to zero: only for workloads that tolerate cold starts
  • Min replicas: at least 2 for high availability
  • Cooldown period: avoid flapping with cooldownPeriod: 300
  • Managed Identity: use workload identity for authentication (see the sketch below)
  • Monitoring: integrate with Prometheus for scaling metrics
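
A minimal sketch for the workload-identity item above, assuming an existing cluster named my-aks; the KEDA TriggerAuthentication would then reference the identity instead of a connection string:

# Enable the OIDC issuer and workload identity on the cluster (illustrative)
az aks update \
  --resource-group $RG \
  --name my-aks \
  --enable-oidc-issuer \
  --enable-workload-identity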


Conditional Access: Essential policies for Zero Trust

Summary

Conditional Access is the heart of Zero Trust in Azure. Straight to the point: here are the 5 policies you should implement in your tenant TODAY.

What is Conditional Access?

Conditional Access evaluates signals in real time to decide whether to allow, block, or require MFA for an access attempt:

Signals:
  • User/group
  • Location (IP ranges)
  • Device (managed, compliant)
  • Application
  • Risk (Azure AD Identity Protection)

Decisions:
  • Allow
  • Block
  • Require MFA
  • Require a compliant device
  • Require hybrid Azure AD join

Prerequisites

# Check licenses (P1 minimum)
az ad signed-in-user show --query "assignedLicenses[].skuId"

# Create an exclusion group (break-glass accounts)
az ad group create \
  --display-name "CA-Exclusion-Emergency" \
  --mail-nickname "ca-exclusion"

Break-glass accounts

ALWAYS exclude 2 emergency accounts from every CA policy. If something breaks, you need a way in.

Policy 1: MFA for all admins

# Create this policy from the portal, which is more visual
# Portal → Azure AD → Security → Conditional Access

Configuration:
  • Users: all admin roles (Global Admin, Security Admin, etc.)
  • Cloud apps: all applications
  • Conditions: none
  • Grant: require MFA
  • State: report-only (test first)

Example JSON (via the Graph API):

{
  "displayName": "CA001: Require MFA for administrators",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": {
      "includeRoles": [
        "62e90394-69f5-4237-9190-012177145e10",  // Global Admin
        "194ae4cb-b126-40b2-bd5b-6091b380977d"   // Security Admin
      ],
      "excludeGroups": ["break-glass-group-id"]
    },
    "applications": {
      "includeApplications": ["All"]
    }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"]
  }
}
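
If you prefer the CLI over the portal, a sketch of creating the policy through the Microsoft Graph conditional access endpoint, assuming the JSON above (with the // comments removed) is saved as ca001.json, a hypothetical file name:

# Create the policy via Microsoft Graph (file name is illustrative)
az rest --method POST \
  --url 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies' \
  --body @ca001.json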

Policy 2: Block legacy authentication

Legacy auth (IMAP, POP3, SMTP) doesn't support MFA → attack vector.

{
  "displayName": "CA002: Block legacy authentication",
  "state": "enabled",
  "conditions": {
    "users": {
      "includeUsers": ["All"],
      "excludeGroups": ["break-glass-group-id"]
    },
    "applications": {
      "includeApplications": ["All"]
    },
    "clientAppTypes": [
      "exchangeActiveSync",
      "other"  // Incluye IMAP, POP3, SMTP
    ]
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["block"]
  }
}

Policy 3: Require managed devices for corporate apps

{
  "displayName": "CA003: Require compliant device for corporate apps",
  "state": "enabled",
  "conditions": {
    "users": {
      "includeUsers": ["All"],
      "excludeGroups": ["break-glass-group-id"]
    },
    "applications": {
      "includeApplications": [
        "00000003-0000-0000-c000-000000000000",  // Microsoft Graph
        "Office365"
      ]
    },
    "platforms": {
      "includePlatforms": ["windows", "macOS", "iOS", "android"]
    }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": [
      "compliantDevice",
      "domainJoinedDevice"
    ]
  }
}

Policy 4: Block access from unauthorized countries

# Create a Named Location (creation is a POST to the namedLocations collection)
az rest --method POST \
  --url 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations' \
  --body '{
    "@odata.type": "#microsoft.graph.countryNamedLocation",
    "displayName": "Blocked Countries",
    "countriesAndRegions": ["KP", "IR", "SY"],
    "includeUnknownCountriesAndRegions": false
  }'

Policy:

{
  "displayName": "CA004: Block access from blocked countries",
  "state": "enabled",
  "conditions": {
    "users": {
      "includeUsers": ["All"],
      "excludeGroups": ["break-glass-group-id", "travelers-group-id"]
    },
    "applications": {
      "includeApplications": ["All"]
    },
    "locations": {
      "includeLocations": ["blocked-countries-location-id"]
    }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["block"]
  }
}

Policy 5: MFA for access from outside the corporate network

# Create a Named Location for the corporate IP ranges
az rest --method POST \
  --url 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations' \
  --body '{
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Corporate Network",
    "isTrusted": true,
    "ipRanges": [
      {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"},
      {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "198.51.100.0/24"}
    ]
  }'

Policy:

{
  "displayName": "CA005: Require MFA for external access",
  "state": "enabled",
  "conditions": {
    "users": {
      "includeUsers": ["All"],
      "excludeGroups": ["break-glass-group-id"]
    },
    "applications": {
      "includeApplications": ["All"]
    },
    "locations": {
      "includeLocations": ["Any"],
      "excludeLocations": ["corporate-network-location-id"]
    }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"]
  }
}

Testing with report-only mode

# Review the impact reports
# Portal → Azure AD → Sign-in logs → Conditional Access tab

Analyze:
  • How many users are impacted?
  • Is any critical service being blocked?
  • Are the break-glass accounts still working?

After 7-14 days of monitoring, switch the policy to enabled, as shown below.
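
A sketch of flipping a policy from report-only to enabled via Graph; the lookup by display name and the use of the policy ID are illustrative:

# Look up the policy ID by display name, then enable it
POLICY_ID=$(az rest --method GET \
  --url 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies' \
  --query "value[?displayName=='CA001: Require MFA for administrators'].id" -o tsv)

az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/$POLICY_ID" \
  --body '{"state": "enabled"}'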

Continuous monitoring

Log Analytics query:

SigninLogs
| where TimeGenerated > ago(24h)
| where ConditionalAccessStatus != "notApplied"
| summarize count() by ConditionalAccessStatus, ConditionalAccessPolicies
| order by count_ desc

Alert on failures:

az monitor scheduled-query create \
  --resource-group $RG \
  --name ca-policy-failures \
  --scopes /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.OperationalInsights/workspaces/my-law \
  --condition "count 'SigninLogs | where ConditionalAccessStatus == \"failure\"' > 10" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --action /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Insights/actionGroups/security-team

Advanced policy: risk-based with Identity Protection

Requires Azure AD P2:

{
  "displayName": "CA006: Block high risk sign-ins",
  "state": "enabled",
  "conditions": {
    "users": {
      "includeUsers": ["All"],
      "excludeGroups": ["break-glass-group-id"]
    },
    "applications": {
      "includeApplications": ["All"]
    },
    "signInRiskLevels": ["high"]
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["block"]
  }
}

Best practices

  • Naming convention: CA###: [Action] for [Condition]
  • Report-only first: never enable without testing
  • Documented exclusions: justify every excluded group
  • Quarterly review: policies go stale
  • Split policies by purpose: don't build one mega-policy
  • What If tool: use the simulator before enabling

Common mistakes

  • Blocking without break-glass accounts
  • Not testing in report-only mode
  • Applying to "All apps" without excluding Azure Management
  • Not documenting policies


Azure Virtual Network Manager: A Comprehensive Guide

Azure Virtual Network Manager is a powerful management service that allows you to group, configure, deploy, and manage virtual networks across subscriptions on a global scale. It provides the ability to define network groups for logical segmentation of your virtual networks. You can then establish connectivity and security configurations and apply them across all selected virtual networks in network groups simultaneously.

How Does Azure Virtual Network Manager Work?

The functionality of Azure Virtual Network Manager revolves around a well-defined process:

  1. Scope Definition: During the creation process, you determine the scope of what your Azure Virtual Network Manager will manage. The Network Manager only has delegated access to apply configurations within this defined scope boundary. Although you can directly define a scope on a list of subscriptions, it's recommended to use management groups for scope definition as they provide hierarchical organization to your subscriptions.

  2. Deployment of Configuration Types: After defining the scope, you deploy configuration types including Connectivity and SecurityAdmin rules for your Virtual Network Manager.

  3. Creation of Network Group: Post-deployment, you create a network group which acts as a logical container of networking resources for applying configurations at scale. You can manually select individual virtual networks to be added to your network group (static membership) or use Azure Policy to define conditions for dynamic group membership.

  4. Connectivity and Security Configurations: Next, you create connectivity and/or security configurations to be applied to those network groups based on your topology and security requirements. A connectivity configuration enables you to create a mesh or a hub-and-spoke network topology, while a security configuration lets you define a collection of rules that can be applied globally to one or more network groups.

  5. Deployment of Configurations: Once you've created your desired network groups and configurations, you can deploy the configurations to any region of your choosing.

Azure Virtual Network Manager can be deployed and managed through various platforms such as the Azure portal, Azure CLI, Azure PowerShell, or Terraform.
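
As an illustration of the CLI route, a sketch of creating a network manager instance scoped to a single subscription; the flag names follow the az network manager command group and may differ slightly by CLI version, and the IDs are placeholders:

# Create a Virtual Network Manager instance (illustrative; check 'az network manager create --help')
az network manager create \
  --name my-network-manager \
  --resource-group my-rg \
  --location westeurope \
  --scope-accesses "Connectivity" "SecurityAdmin" \
  --network-manager-scopes subscriptions="/subscriptions/<subscription-id>"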

Key Benefits of Azure Virtual Network Manager

  • Centralized management of connectivity and security policies globally across regions and subscriptions.
  • Direct connectivity between spokes in a hub-and-spoke configuration without the complexity of managing a mesh network.
  • Highly scalable and highly available service with redundancy and replication across the globe.
  • Ability to create network security rules that override network security group rules.
  • Low latency and high bandwidth between resources in different virtual networks using virtual network peering.
  • Roll out network changes through a specific region sequence and frequency of your choosing.

Use Cases for Azure Virtual Network Manager

Azure Virtual Network Manager is a versatile tool that can be used in a variety of scenarios:

  1. Hub-and-Spoke Network Topology: Azure Virtual Network Manager is ideal for managing hub-and-spoke network topologies where you have a central hub virtual network that connects to multiple spoke virtual networks. You can easily create and manage these configurations at scale using Azure Virtual Network Manager.

  2. Global Connectivity and Security Policies: If you have a global presence with virtual networks deployed across multiple regions, Azure Virtual Network Manager allows you to define and apply connectivity and security policies globally, ensuring consistent network configurations across all regions.

  3. Network Segmentation and Isolation: Azure Virtual Network Manager enables you to segment and isolate your virtual networks based on your organizational requirements. You can create network groups and apply security configurations to enforce network isolation and access control.

  4. Centralized Network Management: For organizations with multiple subscriptions and virtual networks, Azure Virtual Network Manager provides a centralized management solution to manage network configurations, connectivity, and security policies across all subscriptions.

  5. Automated Network Configuration Deployment: By leveraging Azure Policy and Azure Virtual Network Manager, you can automate the deployment of network configurations based on predefined conditions, ensuring consistent network configurations and compliance across your Azure environment.

Example connectivity and security configurations enforcing a network security rule

In Azure Virtual Network Manager: (screenshot)

In Azure Virtual Network: (screenshot)

Example connectivity configuration enforcing a hub-and-spoke network topology

In Azure Virtual Network Manager: (screenshot)

In Azure Virtual Network: (screenshot)

Preview Features

At the time of writing, Azure Virtual Network Manager has some features in preview and may not be available in all regions. Some of the preview features include:

  • IP address management: lets you manage IP addresses by creating and assigning IP address pools to your virtual networks.
  • Virtual Network verifier: lets you check whether your network policies allow or disallow traffic between your Azure network resources.
  • Routing configurations: creating routing configurations is also in preview; it is useful for managing traffic between different networks.

Conclusion

Azure Virtual Network Manager is a powerful service that simplifies the management of virtual networks in Azure. By providing a centralized platform to define and apply connectivity and security configurations at scale, Azure Virtual Network Manager streamlines network management tasks and ensures consistent network configurations across your Azure environment.

For up-to-date information on the regions where Azure Virtual Network Manager is available, refer to Products available by region.

Azure Container Registry: Geo-replication and webhook automation

Summary

Azure Container Registry (ACR) is more than a Docker registry. With geo-replication you get minimal latency worldwide, and with webhooks you automate deployments. Here is the advanced setup.

When to use ACR vs. Docker Hub?

Use ACR if:
  • ✅ You need a private registry in Azure
  • ✅ You want native integration with AKS, App Service, Container Apps
  • ✅ Compliance requires data in a specific region
  • ✅ You need geo-replication

Use Docker Hub if:
  • Public open-source images
  • Personal projects without enterprise requirements

Create an ACR with the Premium SKU

# Variables
RG="my-rg"
ACR_NAME="myacr$(date +%s)"
LOCATION="westeurope"

# Create a Premium ACR (required for geo-replication)
az acr create \
  --resource-group $RG \
  --name $ACR_NAME \
  --sku Premium \
  --location $LOCATION \
  --admin-enabled false

Available SKUs:
  • Basic: €4.4/month - no geo-replication, limited webhooks
  • Standard: €22/month - webhooks, better throughput
  • Premium: €44/month - geo-replication, Content Trust, Private Link

Geo-replication

Replicate your registry to multiple regions to:
  • Reduce pull latency
  • Get high availability
  • Meet data residency requirements

# Replicate to East US
az acr replication create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --location eastus

# Replicate to Southeast Asia
az acr replication create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --location southeastasia

# List the replicas
az acr replication list \
  --resource-group $RG \
  --registry $ACR_NAME \
  --output table

Your image myacr.azurecr.io/app:v1 is now available in 3 regions with a single push; you can verify the replicas as shown below.
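
A quick way to confirm a replica is healthy (the region name matches one of the replicas created above):

# Check the provisioning state of a replica
az acr replication show \
  --resource-group $RG \
  --registry $ACR_NAME \
  --name eastus \
  --query provisioningState -o tsv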

Push images

# Log in to ACR
az acr login --name $ACR_NAME

# Tag the image
docker tag my-app:latest ${ACR_NAME}.azurecr.io/my-app:v1.0

# Push
docker push ${ACR_NAME}.azurecr.io/my-app:v1.0

# List repositories
az acr repository list --name $ACR_NAME --output table

# List tags
az acr repository show-tags \
  --name $ACR_NAME \
  --repository my-app \
  --output table

ACR Tasks: Build in Azure

Build images without a local Docker daemon:

# Build from a Dockerfile in a Git repo
az acr build \
  --resource-group $RG \
  --registry $ACR_NAME \
  --image my-app:{{.Run.ID}} \
  --image my-app:latest \
  https://github.com/myorg/myapp.git#main

# Build from a local directory
az acr build \
  --resource-group $RG \
  --registry $ACR_NAME \
  --image my-app:v1.1 \
  .

Webhooks for CI/CD

Trigger an automatic deployment whenever there is a new push:

# Create the webhook
az acr webhook create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --name deployWebhook \
  --actions push \
  --uri https://my-function-app.azurewebsites.net/api/deploy \
  --scope my-app:*
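
To verify the wiring before relying on it, a small sketch using the standard webhook subcommands:

# Send a test ping and inspect the delivery results
az acr webhook ping --registry $ACR_NAME --name deployWebhook
az acr webhook list-events --registry $ACR_NAME --name deployWebhook --output table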

Webhook payload:

{
  "id": "unique-id",
  "timestamp": "2025-01-22T10:00:00Z",
  "action": "push",
  "target": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "size": 1234,
    "digest": "sha256:abc123...",
    "repository": "my-app",
    "tag": "v1.2"
  },
  "request": {
    "id": "req-id",
    "host": "myacr.azurecr.io",
    "method": "PUT",
    "useragent": "docker/20.10.12"
  }
}

Azure Function for auto-deploy

import azure.functions as func
import json
import subprocess

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Parse webhook payload
    webhook_data = req.get_json()

    repository = webhook_data['target']['repository']
    tag = webhook_data['target']['tag']
    image = f"myacr.azurecr.io/{repository}:{tag}"

    # Trigger the deployment (example: kubectl)
    subprocess.run([
        'kubectl', 'set', 'image',
        'deployment/my-app',
        f'app={image}',
        '--record'
    ])

    return func.HttpResponse(f"Deployed {image}", status_code=200)

Security: Managed Identity

# Get the kubelet managed identity of the AKS cluster
AKS_PRINCIPAL_ID=$(az aks show \
  --resource-group $RG \
  --name my-aks \
  --query identityProfile.kubeletidentity.objectId -o tsv)

# Grant the AcrPull role
az role assignment create \
  --assignee $AKS_PRINCIPAL_ID \
  --role AcrPull \
  --scope $(az acr show --resource-group $RG --name $ACR_NAME --query id -o tsv)

Now AKS can pull images without passwords:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        image: myacr.azurecr.io/my-app:v1.0
        # No imagePullSecrets needed!

Content Trust (image signing)

# Enable Content Trust
az acr config content-trust update \
  --resource-group $RG \
  --registry $ACR_NAME \
  --status enabled

# Docker content trust
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://${ACR_NAME}.azurecr.io

# Signed push
docker push ${ACR_NAME}.azurecr.io/my-app:v1.0-signed

Vulnerability scanning

# Enable Microsoft Defender for container registries
az security pricing create \
  --name ContainerRegistry \
  --tier Standard

# View vulnerability findings
az acr repository show \
  --name $ACR_NAME \
  --repository my-app \
  --query "metadata.vulnerabilities"

Retention policy (automatic cleanup)

# Keep untagged manifests for 30 days only
az acr config retention update \
  --registry $ACR_NAME \
  --status enabled \
  --days 30 \
  --type UntaggedManifests

# Manually delete the oldest tags
az acr repository show-tags \
  --name $ACR_NAME \
  --repository my-app \
  --orderby time_asc \
  --output tsv \
  | head -n 10 \
  | xargs -I {} az acr repository delete \
      --name $ACR_NAME \
      --image my-app:{} \
      --yes

Import images from Docker Hub

# Import a public image
az acr import \
  --name $ACR_NAME \
  --source docker.io/library/nginx:latest \
  --image nginx:latest

# Import from another ACR
az acr import \
  --name $ACR_NAME \
  --source otheracr.azurecr.io/app:v1 \
  --image app:v1 \
  --username <user> \
  --password <password>

Best practices

  • Tagging strategy: use semver (v1.2.3) + latest + commit SHA
  • Multi-arch images: build for amd64 and arm64
  • Scan before deploying: integrate vulnerability scanning into CI
  • Periodic cleanup: 30-90 days of retention
  • Private endpoint: don't expose ACR to the Internet (see the sketch below)
  • Geo-replication: at least 2 regions for production
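
A minimal first step for the private-endpoint item above is to disable public network access (Premium SKU); a Private Endpoint plus the privatelink.azurecr.io DNS zone would complete the setup:

# Turn off public access to the registry
az acr update \
  --resource-group $RG \
  --name $ACR_NAME \
  --public-network-enabled false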

Costs

  • Storage: €0.08/GB/month
  • Geo-replication: €44/month per region
  • Build minutes: €0.0008/second

Example: ACR Premium + 50 GB + 2 replicas = ~€140/month


Azure Private DNS Zones: Name resolution inside VNets

Summary

Private DNS zones let you use custom DNS names inside your VNets without exposing anything to the Internet. Essential for private architectures and hybrid cloud.

What is a Private DNS zone?

It's a DNS service that only resolves inside linked VNets. Use cases:

  • Custom names for private VMs (db01.internal.company.com)
  • Private Endpoints for PaaS services (mystorageacct.privatelink.blob.core.windows.net)
  • Integration with on-premises DNS (conditional forwarding)
  • Split-horizon DNS (public vs. private names)

Create a Private DNS zone

# Variables
RG="my-rg"
ZONE_NAME="internal.company.com"
VNET_NAME="my-vnet"

# Create the Private DNS zone
az network private-dns zone create \
  --resource-group $RG \
  --name $ZONE_NAME

# Link it to the VNet (virtual network link)
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --name ${VNET_NAME}-link \
  --virtual-network $VNET_NAME \
  --registration-enabled false

Auto-registration

With --registration-enabled true, Azure automatically creates A/AAAA records when you deploy VMs into the VNet.

Add DNS records

# A record (IPv4)
az network private-dns record-set a add-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name db01 \
  --ipv4-address 10.0.1.10

# CNAME record
az network private-dns record-set cname set-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name www \
  --cname db01.internal.company.com

# TXT record (domain verification)
az network private-dns record-set txt add-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name _verification \
  --value "verification-token-12345"

Auto-registration of VMs

# Create a zone for auto-registration
az network private-dns zone create \
  --resource-group $RG \
  --name auto.internal.com

# Link with auto-registration enabled
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name auto.internal.com \
  --name auto-vnet-link \
  --virtual-network $VNET_NAME \
  --registration-enabled true

# Create a VM - it auto-registers
az vm create \
  --resource-group $RG \
  --name myvm01 \
  --vnet-name $VNET_NAME \
  --subnet default \
  --image Ubuntu2204 \
  --admin-username azureuser

The VM is automatically registered as myvm01.auto.internal.com, pointing to its private IP.

Private Endpoints and DNS

When you create a Private Endpoint for Azure Storage, SQL, etc., you need a Private DNS zone for name resolution:

# Create a Storage Account
STORAGE_ACCOUNT="mystorageacct$(date +%s)"
az storage account create \
  --name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --sku Standard_LRS \
  --public-network-access Disabled

# Create the Private DNS zone for Blob
az network private-dns zone create \
  --resource-group $RG \
  --name privatelink.blob.core.windows.net

# Link it to the VNet
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name privatelink.blob.core.windows.net \
  --name blob-dns-link \
  --virtual-network $VNET_NAME \
  --registration-enabled false

# Create the Private Endpoint
az network private-endpoint create \
  --resource-group $RG \
  --name ${STORAGE_ACCOUNT}-pe \
  --vnet-name $VNET_NAME \
  --subnet default \
  --private-connection-resource-id $(az storage account show --name $STORAGE_ACCOUNT --resource-group $RG --query id -o tsv) \
  --group-id blob \
  --connection-name blob-connection

# Create the DNS record automatically (DNS zone group)
az network private-endpoint dns-zone-group create \
  --resource-group $RG \
  --endpoint-name ${STORAGE_ACCOUNT}-pe \
  --name blob-dns-group \
  --private-dns-zone privatelink.blob.core.windows.net \
  --zone-name blob

Now, from inside the VNet:

nslookup mystorageacct.blob.core.windows.net
# Resolves to the private IP 10.0.1.5

DNS forwarding for hybrid cloud

So that on-premises servers can resolve names in the Private DNS zone:

graph LR
    A[On-Premises DNS] -->|Conditional Forwarding| B[Azure DNS Resolver]
    B --> C[Private DNS Zone]
    C --> D[mystorageacct.privatelink.blob.core.windows.net]

Step 1: Create a DNS Private Resolver

# Create a subnet for the resolver
az network vnet subnet create \
  --resource-group $RG \
  --vnet-name $VNET_NAME \
  --name dns-resolver-inbound \
  --address-prefixes 10.0.255.0/28

# Create the DNS Private Resolver
az dns-resolver create \
  --resource-group $RG \
  --name my-dns-resolver \
  --location westeurope \
  --id /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/$VNET_NAME

# Create the inbound endpoint
az dns-resolver inbound-endpoint create \
  --resource-group $RG \
  --dns-resolver-name my-dns-resolver \
  --name inbound-endpoint \
  --location westeurope \
  --ip-configurations '[{"subnet":{"id":"/subscriptions/{sub-id}/resourceGroups/'$RG'/providers/Microsoft.Network/virtualNetworks/'$VNET_NAME'/subnets/dns-resolver-inbound"},"privateIpAllocationMethod":"Dynamic"}]'

Step 2: Configure on-premises DNS

On your on-premises DNS (BIND, Windows DNS, etc.):

# Conditional Forwarder
Zone: privatelink.blob.core.windows.net
Forwarder: 10.0.255.4  # IP of the inbound endpoint
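
To find the inbound endpoint IP to use as the forwarder target, a sketch with the dns-resolver extension; the JMESPath path is an assumption, so inspect the full output if it differs:

# Get the private IP of the resolver's inbound endpoint
az dns-resolver inbound-endpoint show \
  --resource-group $RG \
  --dns-resolver-name my-dns-resolver \
  --name inbound-endpoint \
  --query "ipConfigurations[0].privateIpAddress" -o tsv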

List records

# List all record sets
az network private-dns record-set list \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --output table

# Show a specific record
az network private-dns record-set a show \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --name db01

Troubleshooting DNS

# From a VM inside the VNet
nslookup db01.internal.company.com

# Check the NIC's DNS settings
az network nic show \
  --resource-group $RG \
  --name myvm-nic \
  --query "dnsSettings"

# Test from the VM with dig
dig db01.internal.company.com

# Flush the DNS cache on a Linux VM
sudo systemd-resolve --flush-caches

Best practices

  • Naming convention: use .internal, .private or .local for private zones
  • One zone per VNet: avoid multiple zones with the same name
  • RBAC: separate DNS permissions from network permissions
  • Monitoring: enable diagnostic logs for auditing
  • Terraform: manage DNS zones as code
  • Private Endpoint DNS: use DNS zone groups for automatic configuration

Limitations

  • Maximum 25,000 records per zone
  • Maximum 1,000 VNet links per zone
  • No DNSSEC support
  • No zone transfers (AXFR/IXFR)

Costs

  • Hosted zone: €0.45/zone/month
  • Queries: first 1 billion free, then €0.36/million
  • VNet links: €0.09/link/month

In practice: 1 zone + 5 VNet links = ~€0.90/month


Application Insights: Full instrumentation in 10 minutes

Summary

Application Insights gives you full observability of your application: traces, metrics, logs and dependencies. Straight to the point: here is the minimal setup for .NET, Python and Node.js.

What is Application Insights?

Application Insights is Azure's native APM (Application Performance Monitoring). It automatically captures:

  • Requests: HTTP requests with duration and status code
  • Dependencies: calls to DBs, external APIs, Redis, etc.
  • Exceptions: full stack traces
  • Custom events: whatever you want to track
  • User telemetry: sessions, page views, user flows

Create the resource

# Variables
RG="my-rg"
LOCATION="westeurope"
APPINSIGHTS_NAME="my-appinsights"

# Create a Log Analytics workspace (required)
az monitor log-analytics workspace create \
  --resource-group $RG \
  --workspace-name my-workspace \
  --location $LOCATION

WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group $RG \
  --workspace-name my-workspace \
  --query id -o tsv)

# Create the Application Insights resource
az monitor app-insights component create \
  --app $APPINSIGHTS_NAME \
  --location $LOCATION \
  --resource-group $RG \
  --workspace $WORKSPACE_ID

Get the connection string

# Connection string (the recommended method)
CONN_STRING=$(az monitor app-insights component show \
  --resource-group $RG \
  --app $APPINSIGHTS_NAME \
  --query connectionString -o tsv)

echo $CONN_STRING
# InstrumentationKey=abc-123;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/
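
If the app runs on App Service, a sketch of wiring the connection string in as an app setting; the web app name my-web-app is hypothetical:

# Make the connection string available to the app (name is illustrative)
az webapp config appsettings set \
  --resource-group $RG \
  --name my-web-app \
  --settings APPLICATIONINSIGHTS_CONNECTION_STRING="$CONN_STRING"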

Instrumentation per language

.NET 6+

# Install the NuGet package
dotnet add package Microsoft.ApplicationInsights.AspNetCore

Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Add Application Insights
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"];
});

var app = builder.Build();

appsettings.json:

{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=abc-123;..."
  }
}

Python (Flask/FastAPI)

# Install the SDK
pip install opencensus-ext-azure
pip install opencensus-ext-flask  # Or opencensus-ext-fastapi

from flask import Flask
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
from opencensus.trace.samplers import ProbabilitySampler
import logging

app = Flask(__name__)

# Middleware for auto-instrumentation (requests are exported as traces)
middleware = FlaskMiddleware(
    app,
    exporter=AzureExporter(
        connection_string='InstrumentationKey=abc-123;...'
    ),
    sampler=ProbabilitySampler(rate=1.0)
)

# Logger
logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=abc-123;...'
))

@app.route('/')
def hello():
    logger.info('Home page accessed')
    return 'Hello World'

Node.js

# Install the SDK
npm install applicationinsights

const appInsights = require('applicationinsights');

appInsights.setup('InstrumentationKey=abc-123;...')
    .setAutoDependencyCorrelation(true)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true)
    .start();

// Your Express code
const express = require('express');
const app = express();

app.get('/', (req, res) => {
    appInsights.defaultClient.trackEvent({name: 'HomePage'});
    res.send('Hello World');
});

Custom telemetry

Custom tracking

// .NET
using Microsoft.ApplicationInsights;

private readonly TelemetryClient _telemetry;

public MyService(TelemetryClient telemetry)
{
    _telemetry = telemetry;
}

public void ProcessOrder(Order order)
{
    // Track an event
    _telemetry.TrackEvent("OrderProcessed", new Dictionary<string, string>
    {
        {"OrderId", order.Id},
        {"Amount", order.Total.ToString()}
    });

    // Track a metric
    _telemetry.TrackMetric("OrderValue", order.Total);

    // Track trace (log)
    _telemetry.TrackTrace($"Processing order {order.Id}", SeverityLevel.Information);
}

# Python
from opencensus.trace import tracer as tracer_module
from opencensus.ext.azure.trace_exporter import AzureExporter

tracer = tracer_module.Tracer(
    exporter=AzureExporter(connection_string='...'),
)

with tracer.span(name='ProcessOrder'):
    # Your logic
    tracer.add_attribute_to_current_span('orderId', order_id)
    tracer.add_attribute_to_current_span('amount', amount)

Useful Log Analytics queries

// Slowest requests (P95)
requests
| where timestamp > ago(1h)
| summarize percentile(duration, 95) by name
| order by percentile_duration_95 desc

// Exceptions by type
exceptions
| where timestamp > ago(24h)
| summarize count() by type, outerMessage
| order by count_ desc

// Failing dependency calls
dependencies
| where success == false
| where timestamp > ago(1h)
| summarize count() by name, resultCode

// User journey (funnels)
customEvents
| where timestamp > ago(7d)
| where name in ('PageView', 'AddToCart', 'Checkout', 'Purchase')
| summarize count() by name

Automated alerts

# Alert when failed requests exceed a threshold (tune the condition to your traffic)
az monitor metrics alert create \
  --name high-error-rate \
  --resource-group $RG \
  --scopes /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Insights/components/$APPINSIGHTS_NAME \
  --condition "avg requests/failed > 5" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Insights/actionGroups/ops-team

Live Metrics Stream

For real-time debugging:

  1. Azure portal → Application Insights → Live Metrics
  2. Watch requests, dependencies and exceptions live
  3. Apply filters by server, cloud role, etc.

Sampling to reduce costs

// .NET - adaptive sampling (recommended)
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.EnableAdaptiveSampling = true;
    options.ConnectionString = connectionString;
});

// Fixed-rate sampling (5% of telemetry) via the telemetry processor chain
// (set EnableAdaptiveSampling = false above when using fixed-rate sampling)
builder.Services.Configure<TelemetryConfiguration>(config =>
{
    var processorChain = config.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
    processorChain.UseSampling(5);  // percentage of telemetry to keep
    processorChain.Build();
});

Costs

Application Insights charges per GB ingested:
  • First 5 GB/month: free
  • Above 5 GB: ~€2/GB

With 10% sampling, an app with 1M requests/day → ~15 GB/month → €20/month

Distributed tracing

For microservices, Application Insights correlates calls automatically:

graph LR
    A[API Gateway] -->|operation_Id: abc123| B[Auth Service]
    A -->|operation_Id: abc123| C[Order Service]
    C -->|operation_Id: abc123| D[Payment API]

Cross-service query:

// Full trace of a single operation
union requests, dependencies
| where operation_Id == 'abc123'
| project timestamp, itemType, name, duration, success
| order by timestamp asc

Best practices

  • Don't log PII: filter sensitive data before sending it
  • Use sampling in production: 10-20% is enough
  • Custom dimensions: add tenant_id, user_role to segment
  • Dependency tracking: verify it captures SQL, Redis, HTTP
  • Availability tests: configure pings every 5 minutes from multiple regions


Terraform backend on Azure Storage: Complete setup

Summary

Keeping Terraform state locally is dangerous and doesn't scale. Straight to the point: Azure Storage with state locking is the standard solution for teams. Here is the full setup in 5 minutes.

Why use a remote backend?

Problems with a local backend:
  • ❌ State is lost if your laptop dies
  • ❌ Impossible to collaborate as a team
  • ❌ No locking → corruption on simultaneous deployments
  • ❌ Secrets in plaintext on local disk

Advantages of Azure Storage:
  • ✅ Centralized, versioned state
  • ✅ Automatic locking via blob lease
  • ✅ Encryption at rest
  • ✅ Access control with RBAC

Backend setup

1. Create the Storage Account

# Variables
RG="terraform-state-rg"
LOCATION="westeurope"
STORAGE_ACCOUNT="tfstate$(date +%s)"  # Unique name
CONTAINER="tfstate"

# Create the resource group
az group create \
  --name $RG \
  --location $LOCATION

# Create the storage account
az storage account create \
  --name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --location $LOCATION \
  --sku Standard_LRS \
  --encryption-services blob \
  --min-tls-version TLS1_2

# Create the container
az storage container create \
  --name $CONTAINER \
  --account-name $STORAGE_ACCOUNT \
  --auth-mode login

Naming convention

Storage account names only allow lowercase letters and digits, with a maximum of 24 characters. Use a unique suffix to avoid collisions.

2. Configure the backend in Terraform

backend.tf:

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstate1234567890"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

3. Initialize

# Log in to Azure
az login

# Initialize the backend
terraform init

# Migrate existing state (if applicable)
terraform init -migrate-state

State locking

Azure uses blob leases for automatic locking:

# Check whether a lock is active
az storage blob show \
  --container-name $CONTAINER \
  --name prod.terraform.tfstate \
  --account-name $STORAGE_ACCOUNT \
  --query "properties.lease.status"

If someone is running terraform apply, you'll see:

"locked"

Multi-environment with workspaces

# Create one workspace per environment
terraform workspace new dev
terraform workspace new staging  
terraform workspace new prod

# List workspaces
terraform workspace list

# Switch between environments
terraform workspace select prod

Each workspace gets its own state blob; the azurerm backend appends env:<workspace> to the configured key for non-default workspaces:

prod.terraform.tfstate
prod.terraform.tfstateenv:dev
prod.terraform.tfstateenv:staging

Security: Managed Identity

Avoid using access keys in pipelines:

# Create a managed identity
az identity create \
  --resource-group $RG \
  --name terraform-identity

# Assign the Storage Blob Data Contributor role
IDENTITY_ID=$(az identity show \
  --resource-group $RG \
  --name terraform-identity \
  --query principalId -o tsv)

az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee $IDENTITY_ID \
  --scope /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT

Backend config with managed identity:

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstate1234567890"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
    use_msi              = true
    subscription_id      = "00000000-0000-0000-0000-000000000000"
    tenant_id            = "00000000-0000-0000-0000-000000000000"
  }
}

State versioning

# Enable blob versioning on the storage account
az storage account blob-service-properties update \
  --account-name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --enable-versioning true

# List previous versions
az storage blob list \
  --container-name $CONTAINER \
  --account-name $STORAGE_ACCOUNT \
  --include v \
  --query "[?name=='prod.terraform.tfstate'].{Version:versionId, LastModified:properties.lastModified}"

CI/CD pipeline with Azure DevOps

azure-pipelines.yml:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: terraform-variables

stages:
  - stage: Plan
    jobs:
      - job: TerraformPlan
        steps:
          - task: AzureCLI@2
            displayName: 'Terraform Init & Plan'
            inputs:
              azureSubscription: 'Azure Service Connection'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                terraform init
                terraform plan -out=tfplan

          - publish: '$(System.DefaultWorkingDirectory)/tfplan'
            artifact: tfplan

  - stage: Apply
    dependsOn: Plan
    condition: succeeded()
    jobs:
      - deployment: TerraformApply
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: tfplan

                - task: AzureCLI@2
                  displayName: 'Terraform Apply'
                  inputs:
                    azureSubscription: 'Azure Service Connection'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      terraform init
                      terraform apply tfplan

Troubleshooting

Error: "Failed to lock state"

# Force unlock (only if you're sure)
terraform force-unlock <LOCK_ID>

# Or break the lease manually
az storage blob lease break \
  --container-name $CONTAINER \
  --blob-name prod.terraform.tfstate \
  --account-name $STORAGE_ACCOUNT

Error: "storage account not found"

# Verify you can read the storage account
az storage account show \
  --name $STORAGE_ACCOUNT \
  --resource-group $RG

Best practices

  • One state file per project: don't mix different infrastructures
  • Soft delete: enable soft delete on the storage account (see the sketch below)
  • Network security: restrict access to specific VNets
  • Audit logs: enable diagnostic settings for compliance
  • External backup: export critical state files to another storage account
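
A minimal sketch for the soft-delete item above (the retention period is illustrative):

# Enable soft delete for state blobs (7-day retention as an example)
az storage account blob-service-properties update \
  --account-name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --enable-delete-retention true \
  --delete-retention-days 7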

Never commit state files

Add *.tfstate* to .gitignore. The state file contains secrets in plaintext.
