Azure Services

Azure Virtual Network Manager: Illustrative relationship between components

graph TD
    subgraph "Azure Scope"
        MG["Management Group / Subscription"]
    end
    subgraph "Azure Virtual Network Manager (AVNM)"
        AVNM_Instance["AVNM Instance"] -- Manages --> MG
        AVNM_Instance -- Contains --> NG1["Network Group A (e.g., Production)"]
        AVNM_Instance -- Contains --> NG2["Network Group B (e.g., Test)"]
    end
    subgraph "Virtual Networks (VNets)"
        VNet1["VNet 1"]
        VNet2["VNet 2"]
        VNet3["VNet 3"]
        VNet4["VNet 4"]
    end
    subgraph "Membership"
        Static["Static Membership"] -- Manually Adds --> NG1
        Static -- Manually Adds --> NG2
        VNet1 -- Member Of --> Static
        VNet4 -- Member Of --> Static
        Dynamic["Dynamic Membership (Azure Policy)"] -- Automatically Adds --> NG1
        Dynamic -- Automatically Adds --> NG2
        Policy["Azure Policy Definition (e.g., 'tag=prod')"] -- Defines --> Dynamic
        VNet2 -- Meets Policy --> Dynamic
        VNet3 -- Meets Policy --> Dynamic
    end
    subgraph "Configurations"
        ConnConfig["Connectivity Config (Mesh / Hub-Spoke)"] -- Targets --> NG1
        SecConfig["Security Admin Config (Rules)"] -- Targets --> NG1
        SecConfig2["Security Admin Config (Rules)"] -- Targets --> NG2
    end
    subgraph "Deployment & Enforcement"
        Deploy["Deployment"] -- Applies --> ConnConfig
        Deploy -- Applies --> SecConfig
        Deploy -- Applies --> SecConfig2
        NG1 -- Receives --> ConnConfig
        NG1 -- Receives --> SecConfig
        NG2 -- Receives --> SecConfig2
        VNet1 -- Enforced --> NG1
        VNet2 -- Enforced --> NG1
        VNet3 -- Enforced --> NG1
        VNet4 -- Enforced --> NG2
    end
    %% Removed 'fill' for better dark mode compatibility, kept colored strokes
    classDef avnm stroke:#f9f,stroke-width:2px;
    classDef ng stroke:#ccf,stroke-width:1px;
    classDef vnet stroke:#cfc,stroke-width:1px;
    classDef config stroke:#ffc,stroke-width:1px;
    classDef policy stroke:#fcc,stroke-width:1px;
    class AVNM_Instance avnm;
    class NG1,NG2 ng;
    class VNet1,VNet2,VNet3,VNet4 vnet;
    class ConnConfig,SecConfig,SecConfig2 config;
    class Policy,Dynamic,Static policy;

Diagram Explanation

  1. Azure Scope: Azure Virtual Network Manager (AVNM) operates within a defined scope, which can be a Management Group or a Subscription. This determines which VNets AVNM can "see" and manage.
  2. AVNM Instance: This is the main Azure Virtual Network Manager resource. Network groups and configurations are created and managed from here.
  3. Network Groups:
    • These are logical containers for your Virtual Networks (VNets).
    • They allow you to group VNets with common characteristics (environment, region, etc.).
    • A VNet can belong to multiple network groups.
  4. Membership: How VNets are added to Network Groups:
    • Static Membership: You add VNets manually, selecting them one by one.
    • Dynamic Membership: Uses Azure Policy to automatically add VNets that meet certain criteria (such as tags, names, or locations). VNets matching the policy are dynamically added to (and removed from) the group.
  5. Virtual Networks (VNets): These are the Azure virtual networks that are being managed.
  6. Configurations: AVNM allows you to apply two main types of configurations to Network Groups:
    • Connectivity Config: Defines how VNets connect within a group (or between groups). You can create topologies like Mesh (all connected to each other) or Hub-and-Spoke (a central VNet connected to several "spoke" VNets).
    • Security Admin Config: Allows you to define high-level security rules that apply to the VNets in a group. These rules can override Network Security Group (NSG) rules, enabling centralized and mandatory security policies.
  7. Deployment & Enforcement:

    • The created configurations (connectivity and security) must be Deployed.
    • During deployment, AVNM translates these configurations and applies them to the VNets that are members of the target network groups in the selected regions.
    • Once deployed, the VNets within the groups receive and apply (Enforced) these configurations, establishing the defined connections and security rules.
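For readers who prefer the CLI, here is a minimal sketch of the pieces above, in the order the diagram draws them. All names are hypothetical, and the flags follow the az network manager CLI extension, so they may differ slightly by CLI version:

# AVNM instance scoped to a subscription
az network manager create \
  --name avnm-demo \
  --resource-group my-rg \
  --location westeurope \
  --scope-accesses "Connectivity" "SecurityAdmin" \
  --network-manager-scopes subscriptions="/subscriptions/{sub}"

# Network Group A plus one static member (VNet 1)
az network manager group create \
  --name network-group-a \
  --network-manager-name avnm-demo \
  --resource-group my-rg

az network manager group static-member create \
  --name vnet1-member \
  --network-group-name network-group-a \
  --network-manager-name avnm-demo \
  --resource-group my-rg \
  --resource-id "/subscriptions/{sub}/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/vnet1"

Connectivity and security admin configurations follow the same pattern (az network manager connect-config create and az network manager security-admin-config create), each finished with a deployment per region.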

    And maybe this post will be published in the official documentation of Azure Virtual Network Manager, who knows? 😉

Azure Service Bus: Reliable messaging between services

Summary

Service Bus is Azure's enterprise messaging service. Queues for point-to-point communication, Topics for pub/sub. Straight to the point: a complete setup with dead-letter queues and sessions.

What is Azure Service Bus?

Service Bus is a managed message broker for decoupling applications. It guarantees message delivery with:

  • At-least-once delivery: Each message arrives at least once
  • FIFO ordering: With sessions
  • Transactions: Atomic operations
  • Dead-letter queues: Handling of problematic messages

Use cases:

  • Decoupling microservices
  • Order processing (e-commerce)
  • Event-driven architectures
  • Integration with on-premises systems

Create a namespace and queue

# Variables
RG="messaging-rg"
LOCATION="westeurope"
NAMESPACE="sb-prod-$(date +%s)"
QUEUE_NAME="orders"

# Create the resource group
az group create \
  --name $RG \
  --location $LOCATION

# Create the Service Bus namespace (Standard tier)
az servicebus namespace create \
  --name $NAMESPACE \
  --resource-group $RG \
  --location $LOCATION \
  --sku Standard

# Create a queue with dead-lettering
az servicebus queue create \
  --name $QUEUE_NAME \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --enable-dead-lettering-on-message-expiration true \
  --default-message-time-to-live PT1H \
  --max-delivery-count 10

Available SKUs:

  • Basic: €0.05/million ops - No topics, sessions, or transactions
  • Standard: €10/month + ops - Topics, sessions, auto-forward
  • Premium: €550/month - Dedicated resources, VNet integration

Queues vs Topics

Queues (point-to-point)

Sender → [Queue] → Receiver (only one)

  • A single consumer processes each message
  • FIFO if you use sessions
  • Ideal for task queues

# Create a simple queue
az servicebus queue create \
  --name tasks \
  --namespace-name $NAMESPACE \
  --resource-group $RG

Topics (pub/sub)

Sender → [Topic] → Subscription 1 → Receiver A
                 → Subscription 2 → Receiver B
                 → Subscription 3 → Receiver C

  • Multiple subscriptions
  • Each subscription receives a copy of the message
  • Filters for routing (see the rule example below)

# Create a topic
az servicebus topic create \
  --name events \
  --namespace-name $NAMESPACE \
  --resource-group $RG

# Create the subscriptions (filter rules come below)
az servicebus topic subscription create \
  --name mobile-orders \
  --namespace-name $NAMESPACE \
  --topic-name events \
  --resource-group $RG \
  --enable-session true

az servicebus topic subscription create \
  --name web-orders \
  --namespace-name $NAMESPACE \
  --topic-name events \
  --resource-group $RG
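
The subscriptions above receive every message published to the topic. To route selectively, attach a SQL filter rule to a subscription. A minimal sketch, assuming messages carry a custom channel application property (a hypothetical name):

# Route only messages with channel = 'mobile' to mobile-orders
az servicebus topic subscription rule create \
  --name mobile-filter \
  --namespace-name $NAMESPACE \
  --topic-name events \
  --subscription-name mobile-orders \
  --resource-group $RG \
  --filter-sql-expression "channel = 'mobile'"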

Dead-Letter Queue (DLQ)

Messages go to the DLQ when:

  • The TTL (Time-to-Live) expires
  • MaxDeliveryCount is exceeded (default: 10 retries)
  • Subscription filters don't match
  • The application marks the message as dead-lettered

# Enable DLQ on an existing queue
az servicebus queue update \
  --name $QUEUE_NAME \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --enable-dead-lettering-on-message-expiration true

# DLQ for a subscription
az servicebus topic subscription update \
  --name mobile-orders \
  --namespace-name $NAMESPACE \
  --topic-name events \
  --resource-group $RG \
  --enable-dead-lettering-on-message-expiration true

Processing DLQ messages

DLQ paths:

Queue: myqueue/$deadletterqueue
Topic: mytopic/subscriptions/mysub/$deadletterqueue

C# code to read the DLQ:

using Azure.Messaging.ServiceBus;

string connectionString = "<connection-string>";
string queueName = "orders";

await using var client = new ServiceBusClient(connectionString);
var receiver = client.CreateReceiver(queueName, new ServiceBusReceiverOptions
{
    SubQueue = SubQueue.DeadLetter
});

var messages = await receiver.ReceiveMessagesAsync(maxMessages: 10);
foreach (var msg in messages)
{
    Console.WriteLine($"DLQ Reason: {msg.DeadLetterReason}");
    Console.WriteLine($"DLQ Description: {msg.DeadLetterErrorDescription}");
    Console.WriteLine($"Body: {msg.Body}");

    // Process and complete
    await receiver.CompleteMessageAsync(msg);
}

Message Sessions (FIFO)

Sessions guarantee FIFO ordering for related messages:

# Create a session-enabled queue
az servicebus queue create \
  --name ordered-tasks \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --enable-session true

C# code with sessions:

// Send messages with a SessionId
var sender = client.CreateSender("ordered-tasks");
await sender.SendMessageAsync(new ServiceBusMessage("Order 1")
{
    SessionId = "session-customer-123"
});

// Receive by session
var sessionReceiver = await client.AcceptNextSessionAsync("ordered-tasks");
var messages = await sessionReceiver.ReceiveMessagesAsync(10);

Auto-forwarding

Chains queues/topics automatically. Note that the forwarding target must already exist in the namespace:

# Queue A → Queue B (auto-forward)
az servicebus queue create \
  --name process-later \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --forward-to completed-tasks

# Topic subscription → Queue
az servicebus topic subscription create \
  --name archive \
  --namespace-name $NAMESPACE \
  --topic-name events \
  --resource-group $RG \
  --forward-to archive-queue

Use case: forwarding dead-lettered messages to another queue for reprocessing.
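
For that exact scenario Service Bus has a dedicated flag, so you don't have to drain the DLQ by hand. A minimal sketch, assuming a dlq-reprocess queue already exists (a hypothetical name):

# Send dead-lettered messages from the orders queue to a reprocessing queue
az servicebus queue update \
  --name $QUEUE_NAME \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --forward-dead-lettered-messages-to dlq-reprocess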

Scheduled delivery

// Send a message to be processed in 1 hour
var message = new ServiceBusMessage("Reminder");
message.ScheduledEnqueueTime = DateTimeOffset.UtcNow.AddHours(1);

await sender.SendMessageAsync(message);

Useful for:

  • Reminders
  • Retries with exponential backoff
  • Deferred batch processing

Security: Managed Identity

# Create a managed identity
az identity create \
  --name app-identity \
  --resource-group $RG

IDENTITY_ID=$(az identity show \
  --name app-identity \
  --resource-group $RG \
  --query principalId -o tsv)

# Assign the Azure Service Bus Data Sender role
az role assignment create \
  --role "Azure Service Bus Data Sender" \
  --assignee $IDENTITY_ID \
  --scope /subscriptions/{sub}/resourceGroups/$RG/providers/Microsoft.ServiceBus/namespaces/$NAMESPACE

# Assign the Azure Service Bus Data Receiver role
az role assignment create \
  --role "Azure Service Bus Data Receiver" \
  --assignee $IDENTITY_ID \
  --scope /subscriptions/{sub}/resourceGroups/$RG/providers/Microsoft.ServiceBus/namespaces/$NAMESPACE

Code without connection strings:

using Azure.Identity;

var credential = new DefaultAzureCredential();
var client = new ServiceBusClient(
    "<namespace>.servicebus.windows.net",
    credential
);

Monitoring

# Check the queue's message count
az servicebus queue show \
  --name $QUEUE_NAME \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --query "messageCount"

# Check the message count in the DLQ
az servicebus queue show \
  --name $QUEUE_NAME \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --query "deadLetterMessageCount"

# Metrics via Azure Monitor
az monitor metrics list \
  --resource /subscriptions/{sub}/resourceGroups/$RG/providers/Microsoft.ServiceBus/namespaces/$NAMESPACE \
  --metric "ActiveMessages" \
  --interval PT1H

Useful queries:

// DLQ messages per queue
AzureMetrics
| where ResourceProvider == "MICROSOFT.SERVICEBUS"
| where MetricName == "DeadletteredMessages"
| summarize DeadLetterCount = max(Maximum) by Resource
| order by DeadLetterCount desc

// Processing latency
ServiceBusLogs
| where OperationName == "CompleteMessage"
| extend Duration = DurationMs
| summarize AvgDuration = avg(Duration), P95 = percentile(Duration, 95)

Best practices

  • MaxDeliveryCount: Tune it to your retry logic (default: 10); see the sketch after this list
  • Realistic TTL: Don't use an infinite TTL; it causes buildup
  • Process the DLQ: Monitor it and alert when DLQ > 0
  • Sessions only if you need them: They add overhead
  • Managed Identity: Never put connection strings in code
  • Prefetch: Enable prefetch on receivers for throughput
  • Batching: Send messages in batches (up to 100)
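
The first two knobs are plain queue settings. A minimal sketch of tightening them on the existing queue (the values are illustrative):

# Lower the retry budget and cap message lifetime at one day
az servicebus queue update \
  --name $QUEUE_NAME \
  --namespace-name $NAMESPACE \
  --resource-group $RG \
  --max-delivery-count 5 \
  --default-message-time-to-live P1D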

Quota exceeded

If you see QuotaExceeded, check the DLQ. Unprocessed messages pile up and block new sends.

Common troubleshooting

# Error: Queue not found
# Fix: Verify you're using the right namespace
az servicebus queue list \
  --namespace-name $NAMESPACE \
  --resource-group $RG

# Error: Max message size exceeded (256 KB Standard, 1 MB Premium)
# Fix: Reduce the payload or use the Premium tier

# Messages not arriving?
# 1. Verify a receiver is active
# 2. Check the DLQ
# 3. Confirm the TTL hasn't expired


Azure Kubernetes Service: A production-ready cluster in 30 minutes

Summary

AKS removes the complexity of managing the Kubernetes control plane. Straight to the point: here's the minimal production setup with autoscaling, availability zones, and proper networking.

What is AKS?

Azure Kubernetes Service (AKS) is managed Kubernetes where Azure handles:

  • The control plane (API server, etcd, scheduler) - No cost
  • Updates and patches - Automated
  • High availability - 99.95% SLA with the uptime SLA

You only pay for and manage the worker nodes.

Create a production-ready cluster

Basic setup with best practices

# Variables
RG="aks-prod-rg"
LOCATION="westeurope"
CLUSTER_NAME="aks-prod-cluster"
NODE_COUNT=2

# Create the resource group
az group create \
  --name $RG \
  --location $LOCATION

# Create AKS with best practices
az aks create \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --node-count $NODE_COUNT \
  --node-vm-size Standard_DS2_v2 \
  --vm-set-type VirtualMachineScaleSets \
  --load-balancer-sku standard \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5 \
  --network-plugin azure \
  --generate-ssh-keys \
  --zones 1 2 3

Availability Zones

--zones 1 2 3 spreads the nodes across 3 AZs for high availability. This can NOT be changed after the cluster is created.

Get credentials

# Configure kubectl
az aks get-credentials \
  --resource-group $RG \
  --name $CLUSTER_NAME

# Verify the connection
kubectl get nodes
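
To confirm the zone spread from the previous section, read each node's zone label (topology.kubernetes.io/zone is the standard label AKS sets on zoned nodes):

# List nodes with their availability zone
kubectl get nodes \
  -o custom-columns='NAME:.metadata.name,ZONE:.metadata.labels.topology\.kubernetes\.io/zone'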

Cluster Autoscaler

The Cluster Autoscaler scales nodes automatically based on pending pods.

How it works:

  1. A pod can't be scheduled (not enough capacity)
  2. The autoscaler adds a new node
  3. The pod gets scheduled on the new node
  4. When nodes are underutilized, the autoscaler removes them

Configure the autoscaler profile

# Aggressive autoscaler (scales up fast, releases fast)
az aks update \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --cluster-autoscaler-profile \
    scan-interval=30s \
    scale-down-delay-after-add=0m \
    scale-down-unneeded-time=3m \
    scale-down-unready-time=3m

# Conservative autoscaler (for bursty workloads)
az aks update \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --cluster-autoscaler-profile \
    scan-interval=20s \
    scale-down-delay-after-add=10m \
    scale-down-unneeded-time=5m \
    scale-down-unready-time=5m

Key parameters:

  • scan-interval: Evaluation frequency (default: 10s)
  • scale-down-delay-after-add: Wait time after adding a node
  • scale-down-unneeded-time: Time before removing an underutilized node

Update autoscaling limits

# Change a node pool's min/max
az aks nodepool update \
  --resource-group $RG \
  --cluster-name $CLUSTER_NAME \
  --name nodepool1 \
  --update-cluster-autoscaler \
  --min-count 2 \
  --max-count 10

Horizontal Pod Autoscaler (HPA)

Scales pods based on CPU/memory or custom metrics:

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

kubectl apply -f hpa.yaml

# Check the HPA status
kubectl get hpa

Vertical Pod Autoscaler (VPA)

Automatically adjusts pod CPU/memory requests:

# Enable VPA on the cluster
az aks update \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --enable-vpa

# vpa.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"  # Auto | Off | Initial

VPA modes:

  • Off: Recommendations only; no changes applied
  • Auto: Updates resources during pod restarts
  • Initial: Sets resources only at pod creation
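
Even in Off mode the recommendations are visible, which is a safe way to evaluate VPA before letting it act. A quick check against the my-app-vpa object above:

# Inspect the current recommendations
kubectl describe vpa my-app-vpa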

HPA + VPA

Do NOT use HPA and VPA together on the same metrics (CPU/memory); they conflict. You can combine HPA on custom metrics with VPA on CPU/memory.

Separate node pools

# Node pool for batch workloads (spot instances)
az aks nodepool add \
  --resource-group $RG \
  --cluster-name $CLUSTER_NAME \
  --name spotpool \
  --node-count 1 \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1 \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 5 \
  --node-taints kubernetes.azure.com/scalesetpriority=spot:NoSchedule

# Node pool for system components (guaranteed capacity)
az aks nodepool add \
  --resource-group $RG \
  --cluster-name $CLUSTER_NAME \
  --name systempool \
  --node-count 2 \
  --node-vm-size Standard_DS3_v2 \
  --mode System \
  --zones 1 2 3
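
Because the spot pool is tainted, workloads have to opt in explicitly. A minimal sketch of a batch Deployment that tolerates the taint and pins itself to spot nodes (batch-worker and its image are hypothetical):

# Schedule a batch workload onto the spot pool
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      # Tolerate the taint set on the spot pool above
      tolerations:
      - key: kubernetes.azure.com/scalesetpriority
        operator: Equal
        value: spot
        effect: NoSchedule
      # AKS labels spot nodes with the same key/value
      nodeSelector:
        kubernetes.azure.com/scalesetpriority: spot
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo working; sleep 3600"]
EOF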

Networking: Azure CNI vs Kubenet

Azure CNI (recomendado para producción): - Pods tienen IPs de la VNet - Integración directa con servicios Azure - Soporte para Network Policies

Kubenet (más simple, menos IPs): - Pods usan IPs privadas (NAT) - Menos consumo de IPs de subnet - No soporta Windows node pools

# Create a cluster with Azure CNI Overlay (more IP-efficient)
az aks create \
  --resource-group $RG \
  --name aks-cni-overlay \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 10.244.0.0/16

Cluster upgrades

# List available versions
az aks get-upgrades \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --output table

# Upgrade to a specific version
az aks upgrade \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --kubernetes-version 1.28.3

# Enable auto-upgrade
az aks update \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --auto-upgrade-channel stable

Auto-upgrade channels:

  • none: Manual
  • patch: Auto-upgrade to patch releases (1.27.3 → 1.27.5)
  • stable: Stable N-1 version
  • rapid: Latest available version
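
If you enable an auto-upgrade channel, pair it with a planned maintenance window so upgrades land when you expect them. A sketch (the day and hour are illustrative):

# Allow maintenance on Mondays starting at 01:00
az aks maintenanceconfiguration add \
  --resource-group $RG \
  --cluster-name $CLUSTER_NAME \
  --name default \
  --weekday Monday \
  --start-hour 1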

Pod Disruption Budgets (PDB)

Guarantees availability during upgrades:

# pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app

kubectl apply -f pdb.yaml

# Verify the PDB
kubectl get pdb

Monitoring with Container Insights

# Enable Container Insights
az aks enable-addons \
  --resource-group $RG \
  --name $CLUSTER_NAME \
  --addons monitoring

# Open the Kubernetes dashboard (az aks browse; deprecated on newer clusters)
az aks browse \
  --resource-group $RG \
  --name $CLUSTER_NAME

Useful Log Analytics queries:

// Pods with the most restarts
KubePodInventory
| where ClusterName == "aks-prod-cluster"
| summarize RestartCount = sum(PodRestartCount) by Name
| order by RestartCount desc
| take 10

// Nodes with high CPU
Perf
| where ObjectName == "K8SNode"
| where CounterName == "cpuUsageNanoCores"
| summarize AvgCPU = avg(CounterValue) by Computer
| order by AvgCPU desc

Best practices

  • Availability Zones: Always use 3 AZs in production
  • Separate system node pool: Isolate workloads from system components
  • Resource limits: Define requests and limits on every pod
  • PodDisruptionBudgets: Avoid downtime during upgrades
  • Azure CNI: Use Azure CNI for full integration
  • Autoscaling: Combine the cluster autoscaler with HPA
  • No B-series VMs: Use the D/E/F series for serious workloads
  • Premium Disks: For databases and I/O-intensive workloads

Costs

  • Control plane: Free (or €0.10/hour with the uptime SLA)
  • Nodes: You pay for standard VMs (~€50-150/month per DS2_v2 node)
  • Use Spot VMs for batch workloads (70% discount)

Common troubleshooting

# View cluster events
kubectl get events --sort-by='.lastTimestamp'

# Pods that won't start
kubectl describe pod <pod-name>

# Logs from a pod
kubectl logs <pod-name> --previous

# Run a command inside a pod
kubectl exec -it <pod-name> -- /bin/bash

# View resource usage
kubectl top nodes
kubectl top pods


Azure Backup: Backup policies for VMs

Summary

Azure Backup protects your VMs with incremental snapshots. Here's the setup for a recovery vault and retention policies.

Introduction

[Detailed technical content to be developed]

Basic configuration

# Example commands
RG="my-rg"
LOCATION="westeurope"

# Azure CLI commands
az group create --name $RG --location $LOCATION
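
While the detailed content is pending, here's a hedged sketch of the core flow the summary describes (the vault and VM names are hypothetical; DefaultPolicy is the policy new vaults ship with):

# Create a Recovery Services vault
az backup vault create \
  --name my-backup-vault \
  --resource-group $RG \
  --location $LOCATION

# Enable backup for a VM using the built-in default policy
az backup protection enable-for-vm \
  --resource-group $RG \
  --vault-name my-backup-vault \
  --vm my-vm \
  --policy-name DefaultPolicy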

Use cases

  • Case 1: [Description]
  • Case 2: [Description]
  • Case 3: [Description]

Best practices

  • Practice 1: Description
  • Practice 2: Description
  • Practice 3: Description

Monitoring and troubleshooting

# Diagnostic commands
az monitor metrics list --resource ...


Cosmos DB: Consistency levels explained

Summary

Cosmos DB offers 5 consistency levels. Here's when to use each: Strong, Bounded Staleness, Session, Consistent Prefix, Eventual.

Introduction

[Detailed technical content to be developed]

Basic configuration

# Example commands
RG="my-rg"
LOCATION="westeurope"

# Azure CLI commands
az group create --name $RG --location $LOCATION
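
While the detailed content is pending, note that the default consistency level is a single flag at account creation. A hedged sketch (account names are hypothetical):

# Account with Session consistency (the default)
az cosmosdb create \
  --name my-cosmos-account \
  --resource-group $RG \
  --default-consistency-level Session

# Account with Bounded Staleness (lag capped at 100k operations or 300 seconds)
az cosmosdb create \
  --name my-cosmos-bounded \
  --resource-group $RG \
  --default-consistency-level BoundedStaleness \
  --max-staleness-prefix 100000 \
  --max-interval 300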

Use cases

  • Case 1: [Description]
  • Case 2: [Description]
  • Case 3: [Description]

Best practices

  • Practice 1: Description
  • Practice 2: Description
  • Practice 3: Description

Monitoring and troubleshooting

# Diagnostic commands
az monitor metrics list --resource ...


Azure Front Door with WAF: Global protection for web applications

Summary

Front Door is a CDN + global load balancer + WAF. Here's how to configure OWASP Top 10 rules.

Introduction

[Detailed technical content to be developed]

Basic configuration

# Example commands
RG="my-rg"
LOCATION="westeurope"

# Azure CLI commands
az group create --name $RG --location $LOCATION
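
While the detailed content is pending, the WAF half of the summary boils down to a policy plus a managed rule set. A hedged sketch (the policy name is hypothetical; WAF policy names allow letters and numbers only, and the rule-set version may differ by region and CLI version):

# Create the WAF policy (Front Door Premium SKU)
az network front-door waf-policy create \
  --name myWafPolicy \
  --resource-group $RG \
  --sku Premium_AzureFrontDoor \
  --mode Prevention

# Attach the Microsoft-managed default rule set (OWASP-based)
az network front-door waf-policy managed-rules add \
  --policy-name myWafPolicy \
  --resource-group $RG \
  --type Microsoft_DefaultRuleSet \
  --version 2.1 \
  --action Block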

Use cases

  • Case 1: [Description]
  • Case 2: [Description]
  • Case 3: [Description]

Best practices

  • Practice 1: Description
  • Practice 2: Description
  • Practice 3: Description

Monitoring and troubleshooting

# Diagnostic commands
az monitor metrics list --resource ...
