Azure Container Registry: Geo-replication and webhook automation

Summary

Azure Container Registry (ACR) is more than a Docker registry. Geo-replication gives you minimal latency worldwide, and webhooks let you automate deployments. Here is the advanced setup.

When to use ACR vs Docker Hub?

Use ACR if:

  • ✅ You need a private registry in Azure
  • ✅ You want native integration with AKS, App Service, Container Apps
  • ✅ Compliance requires your data to stay in a specific region
  • ✅ You need geo-replication

Use Docker Hub if:

  • Your images are public open source
  • Personal projects with no enterprise requirements

Creating an ACR with the Premium SKU

# Variables
RG="my-rg"
ACR_NAME="myacr$(date +%s)"
LOCATION="westeurope"

# Create a Premium ACR (required for geo-replication)
az acr create \
  --resource-group $RG \
  --name $ACR_NAME \
  --sku Premium \
  --location $LOCATION \
  --admin-enabled false

Available SKUs:

  • Basic: €4.4/month - no geo-replication, limited webhooks
  • Standard: €22/month - webhooks, better throughput
  • Premium: €44/month - geo-replication, Content Trust, Private Link

Geo-replication

Replicate your registry to multiple regions to:

  • Reduce pull latency
  • Gain high availability
  • Meet data residency requirements

# Replicate to East US
az acr replication create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --location eastus

# Replicate to Southeast Asia
az acr replication create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --location southeastasia

# List replicas
az acr replication list \
  --resource-group $RG \
  --registry $ACR_NAME \
  --output table

Your image myacr.azurecr.io/app:v1 is now available in all three regions with a single push.

Pushing images

# Log in to ACR
az acr login --name $ACR_NAME

# Tag the image
docker tag my-app:latest ${ACR_NAME}.azurecr.io/my-app:v1.0

# Push
docker push ${ACR_NAME}.azurecr.io/my-app:v1.0

# List images
az acr repository list --name $ACR_NAME --output table

# List tags
az acr repository show-tags \
  --name $ACR_NAME \
  --repository my-app \
  --output table

ACR Tasks: Building in Azure

Build images without a local Docker daemon:

# Build from a Dockerfile in a Git repo
az acr build \
  --resource-group $RG \
  --registry $ACR_NAME \
  --image my-app:{{.Run.ID}} \
  --image my-app:latest \
  https://github.com/myorg/myapp.git#main

# Build from a local directory
az acr build \
  --resource-group $RG \
  --registry $ACR_NAME \
  --image my-app:v1.1 \
  .

Webhooks for CI/CD

Trigger an automatic deployment on every new push:

# Create a webhook
az acr webhook create \
  --resource-group $RG \
  --registry $ACR_NAME \
  --name deployWebhook \
  --actions push \
  --uri https://my-function-app.azurewebsites.net/api/deploy \
  --scope my-app:*

Webhook payload:

{
  "id": "unique-id",
  "timestamp": "2025-01-22T10:00:00Z",
  "action": "push",
  "target": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "size": 1234,
    "digest": "sha256:abc123...",
    "repository": "my-app",
    "tag": "v1.2"
  },
  "request": {
    "id": "req-id",
    "host": "myacr.azurecr.io",
    "method": "PUT",
    "useragent": "docker/20.10.12"
  }
}

Azure Function for auto-deploy

import azure.functions as func
import subprocess

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Parse the webhook payload
    try:
        webhook_data = req.get_json()
        repository = webhook_data['target']['repository']
        tag = webhook_data['target']['tag']
    except (ValueError, KeyError):
        return func.HttpResponse("Invalid payload", status_code=400)

    image = f"myacr.azurecr.io/{repository}:{tag}"

    # Trigger the deployment (example: kubectl)
    result = subprocess.run(
        ['kubectl', 'set', 'image', 'deployment/my-app', f'app={image}'],
        capture_output=True
    )
    if result.returncode != 0:
        return func.HttpResponse(result.stderr.decode(), status_code=500)

    return func.HttpResponse(f"Deployed {image}", status_code=200)

Security: Managed Identity

# Get the kubelet managed identity of the AKS cluster
AKS_PRINCIPAL_ID=$(az aks show \
  --resource-group $RG \
  --name my-aks \
  --query identityProfile.kubeletidentity.objectId -o tsv)

# Grant the AcrPull role
az role assignment create \
  --assignee $AKS_PRINCIPAL_ID \
  --role AcrPull \
  --scope $(az acr show --resource-group $RG --name $ACR_NAME --query id -o tsv)

AKS can now pull images without passwords:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        image: myacr.azurecr.io/my-app:v1.0
        # No imagePullSecrets needed!

Content Trust (image signing)

# Enable Content Trust
az acr config content-trust update \
  --resource-group $RG \
  --registry $ACR_NAME \
  --status enabled

# Enable Docker Content Trust on the client
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://${ACR_NAME}.azurecr.io

# Signed push
docker push ${ACR_NAME}.azurecr.io/my-app:v1.0-signed

Vulnerability scanning

# Enable Microsoft Defender for container registries
az security pricing create \
  --name ContainerRegistry \
  --tier Standard

# View vulnerabilities (scan findings also surface in the Defender for Cloud portal)
az acr repository show \
  --name $ACR_NAME \
  --repository my-app \
  --query "metadata.vulnerabilities"

Retention policy (automatic cleanup)

# Keep untagged manifests for only 30 days
az acr config retention update \
  --registry $ACR_NAME \
  --status enabled \
  --days 30 \
  --type UntaggedManifests

# Delete the 10 oldest tags manually
az acr repository show-tags \
  --name $ACR_NAME \
  --repository my-app \
  --orderby time_asc \
  --output tsv \
  | head -n 10 \
  | xargs -I {} az acr repository delete \
      --name $ACR_NAME \
      --image my-app:{} \
      --yes

Importing images from Docker Hub

# Import a public image
az acr import \
  --name $ACR_NAME \
  --source docker.io/library/nginx:latest \
  --image nginx:latest

# Import from another ACR
az acr import \
  --name $ACR_NAME \
  --source otheracr.azurecr.io/app:v1 \
  --image app:v1 \
  --username <user> \
  --password <password>

Best practices

  • Tagging strategy: use semver (v1.2.3) + latest + commit SHA
  • Multi-arch images: build for amd64 and arm64
  • Scan before deploying: integrate vulnerability scanning into CI
  • Periodic cleanup: 30-90 day retention
  • Private endpoint: don't expose ACR to the Internet
  • Geo-replication: at least 2 regions for production
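
As a sketch of the tagging strategy above, a small helper (hypothetical, not part of any Azure tooling) could derive the full tag set from a semver version and a commit SHA:

```python
def build_tags(version: str, commit_sha: str) -> list[str]:
    """Return the tag set for one release: semver tag, 'latest', short SHA."""
    return [f"v{version}", "latest", commit_sha[:7]]

# Each tag is then pushed as <registry>/my-app:<tag>
print(build_tags("1.2.3", "abc1234def5678"))  # ['v1.2.3', 'latest', 'abc1234']
```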

Costs

  • Storage: €0.08/GB/month
  • Geo-replication: €44/month per region
  • Build minutes: €0.0008/second

Example: Premium ACR + 50 GB + 2 replicas ≈ €140/month
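
That estimate follows from the list prices above; a quick sketch to reproduce it (this post's figures, not an official calculator):

```python
def acr_monthly_cost(storage_gb: float, replica_regions: int,
                     premium_base: float = 44.0,
                     storage_per_gb: float = 0.08,
                     replica_price: float = 44.0) -> float:
    """Rough monthly ACR estimate in EUR: Premium base + storage + replicas."""
    return premium_base + storage_gb * storage_per_gb + replica_regions * replica_price

print(round(acr_monthly_cost(50, 2)))  # 44 + 4 + 88 = 136, i.e. ~€140/month
```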


Azure Private DNS Zones: Name resolution in VNets

Summary

Private DNS Zones let you use custom DNS names inside your VNets without exposing anything to the Internet. Essential for private architectures and hybrid cloud.

What is a Private DNS Zone?

It is a DNS service that resolves only inside linked VNets. Use cases:

  • Custom names for private VMs (db01.internal.company.com)
  • Private Endpoints for PaaS services (mystorageacct.privatelink.blob.core.windows.net)
  • Integration with on-premises DNS (conditional forwarding)
  • Split-horizon DNS (public vs. private name)

Creating a Private DNS Zone

# Variables
RG="my-rg"
ZONE_NAME="internal.company.com"
VNET_NAME="my-vnet"

# Create the Private DNS Zone
az network private-dns zone create \
  --resource-group $RG \
  --name $ZONE_NAME

# Link it to a VNet (Virtual Network Link)
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --name ${VNET_NAME}-link \
  --virtual-network $VNET_NAME \
  --registration-enabled false

Auto-registration

With --registration-enabled true, Azure automatically creates A/AAAA records when you deploy VMs into the VNet.

Adding DNS records

# A record (IPv4)
az network private-dns record-set a add-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name db01 \
  --ipv4-address 10.0.1.10

# CNAME record
az network private-dns record-set cname set-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name www \
  --cname db01.internal.company.com

# TXT record (domain verification)
az network private-dns record-set txt add-record \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --record-set-name _verification \
  --value "verification-token-12345"

Auto-registration of VMs

# Create a zone for auto-registration
az network private-dns zone create \
  --resource-group $RG \
  --name auto.internal.com

# Link with auto-registration enabled
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name auto.internal.com \
  --name auto-vnet-link \
  --virtual-network $VNET_NAME \
  --registration-enabled true

# Create a VM - it registers itself
az vm create \
  --resource-group $RG \
  --name myvm01 \
  --vnet-name $VNET_NAME \
  --subnet default \
  --image Ubuntu2204 \
  --admin-username azureuser

The VM is automatically registered as myvm01.auto.internal.com, pointing to its private IP.

Private Endpoints with DNS

When you create a Private Endpoint for Azure Storage, SQL, etc., you need a Private DNS Zone for name resolution:

# Create a Storage Account
STORAGE_ACCOUNT="mystorageacct$(date +%s)"
az storage account create \
  --name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --sku Standard_LRS \
  --public-network-access Disabled

# Create the Private DNS Zone for Blob
az network private-dns zone create \
  --resource-group $RG \
  --name privatelink.blob.core.windows.net

# Link it to the VNet
az network private-dns link vnet create \
  --resource-group $RG \
  --zone-name privatelink.blob.core.windows.net \
  --name blob-dns-link \
  --virtual-network $VNET_NAME \
  --registration-enabled false

# Create the Private Endpoint
az network private-endpoint create \
  --resource-group $RG \
  --name ${STORAGE_ACCOUNT}-pe \
  --vnet-name $VNET_NAME \
  --subnet default \
  --private-connection-resource-id $(az storage account show --name $STORAGE_ACCOUNT --resource-group $RG --query id -o tsv) \
  --group-id blob \
  --connection-name blob-connection

# Create the DNS record automatically (DNS zone group)
az network private-endpoint dns-zone-group create \
  --resource-group $RG \
  --endpoint-name ${STORAGE_ACCOUNT}-pe \
  --name blob-dns-group \
  --private-dns-zone privatelink.blob.core.windows.net \
  --zone-name blob

Now, from inside the VNet:

nslookup mystorageacct.blob.core.windows.net
# Resolves to the private IP 10.0.1.5

DNS forwarding for hybrid cloud

For on-premises hosts to resolve Private DNS Zone names:

graph LR
    A[On-Premises DNS] -->|Conditional Forwarding| B[Azure DNS Resolver]
    B --> C[Private DNS Zone]
    C --> D[mystorageacct.privatelink.blob.core.windows.net]

Step 1: Create a DNS Private Resolver

# Create a subnet for the resolver
az network vnet subnet create \
  --resource-group $RG \
  --vnet-name $VNET_NAME \
  --name dns-resolver-inbound \
  --address-prefixes 10.0.255.0/28

# Create the DNS Private Resolver
az dns-resolver create \
  --resource-group $RG \
  --name my-dns-resolver \
  --location westeurope \
  --id /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/$VNET_NAME

# Create the inbound endpoint
az dns-resolver inbound-endpoint create \
  --resource-group $RG \
  --dns-resolver-name my-dns-resolver \
  --name inbound-endpoint \
  --location westeurope \
  --ip-configurations '[{"subnet":{"id":"/subscriptions/{sub-id}/resourceGroups/'$RG'/providers/Microsoft.Network/virtualNetworks/'$VNET_NAME'/subnets/dns-resolver-inbound"},"privateIpAllocationMethod":"Dynamic"}]'

Step 2: Configure on-premises DNS

On your on-premises DNS (BIND, Windows DNS, etc.):

# Conditional Forwarder
Zone: privatelink.blob.core.windows.net
Forwarder: 10.0.255.4  # IP of the inbound endpoint
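
On BIND, for example, that conditional forwarder could be declared like this (a sketch; adapt it to your named.conf layout):

```
// named.conf - forward the privatelink zone to the Azure inbound endpoint
zone "privatelink.blob.core.windows.net" {
    type forward;
    forward only;
    forwarders { 10.0.255.4; };
};
```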

Listing records

# List all record sets
az network private-dns record-set list \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --output table

# Show a specific record
az network private-dns record-set a show \
  --resource-group $RG \
  --zone-name $ZONE_NAME \
  --name db01

Troubleshooting DNS

# From a VM inside the VNet
nslookup db01.internal.company.com

# Check the NIC's DNS configuration
az network nic show \
  --resource-group $RG \
  --name myvm-nic \
  --query "dnsSettings"

# Test from the VM with dig
dig db01.internal.company.com

# Flush the DNS cache on a Linux VM (resolvectl flush-caches on newer systems)
sudo systemd-resolve --flush-caches

Best practices

  • Naming convention: use .internal, .private or .local for private zones
  • One zone per VNet: avoid multiple zones with the same name
  • RBAC: separate DNS permissions from network permissions
  • Monitoring: enable diagnostic logs for auditing
  • Terraform: manage DNS zones as code
  • Private Endpoint DNS: use DNS zone groups for auto-configuration

Limitations

  • Maximum 25,000 records per zone
  • Maximum 1,000 VNet links per zone
  • No DNSSEC support
  • No zone transfers (AXFR/IXFR)

Costs

  • Hosted zone: €0.45/zone/month
  • Queries: first 1B free, then €0.36/million
  • VNet links: €0.09/link/month

In practice: 1 zone + 5 VNet links ≈ €0.90/month
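
Reproducing that figure from the list prices (a rough sketch, this post's figures only):

```python
def private_dns_cost(zones: int, vnet_links: int,
                     zone_price: float = 0.45,
                     link_price: float = 0.09) -> float:
    """Monthly Private DNS estimate in EUR: zones + VNet links."""
    return zones * zone_price + vnet_links * link_price

print(round(private_dns_cost(1, 5), 2))  # 0.45 + 5 * 0.09 ≈ €0.90/month
```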


Application Insights: Full instrumentation in 10 minutes

Summary

Application Insights gives you complete observability of your application: traces, metrics, logs, and dependencies. Here is the minimal setup for .NET, Python, and Node.js.

What is Application Insights?

Application Insights is Azure's native APM (Application Performance Monitoring). It automatically captures:

  • Requests: HTTP requests with duration and status code
  • Dependencies: calls to DBs, external APIs, Redis, etc.
  • Exceptions: full stack traces
  • Custom events: whatever you want to track
  • User telemetry: sessions, page views, user flows

Creating the resource

# Variables
RG="my-rg"
LOCATION="westeurope"
APPINSIGHTS_NAME="my-appinsights"

# Create a Log Analytics workspace (required)
az monitor log-analytics workspace create \
  --resource-group $RG \
  --workspace-name my-workspace \
  --location $LOCATION

WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group $RG \
  --workspace-name my-workspace \
  --query id -o tsv)

# Create Application Insights
az monitor app-insights component create \
  --app $APPINSIGHTS_NAME \
  --location $LOCATION \
  --resource-group $RG \
  --workspace $WORKSPACE_ID

Getting the connection string

# Connection string (recommended over the bare instrumentation key)
CONN_STRING=$(az monitor app-insights component show \
  --resource-group $RG \
  --app $APPINSIGHTS_NAME \
  --query connectionString -o tsv)

echo $CONN_STRING
# InstrumentationKey=abc-123;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/

Instrumentation per language

.NET 6+

# Install the NuGet package
dotnet add package Microsoft.ApplicationInsights.AspNetCore

Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Add Application Insights
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"];
});

var app = builder.Build();

appsettings.json:

{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=abc-123;..."
  }
}

Python (Flask/FastAPI)

# Install the SDK
pip install opencensus-ext-azure
pip install opencensus-ext-flask  # or opencensus-ext-fastapi

Note: the OpenCensus SDKs are in maintenance mode; for new projects Microsoft recommends the azure-monitor-opentelemetry distro.

from flask import Flask
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
import logging

app = Flask(__name__)

# Middleware for auto-instrumentation (FlaskMiddleware expects a trace
# exporter such as AzureExporter, not a log handler)
middleware = FlaskMiddleware(
    app,
    exporter=AzureExporter(
        connection_string='InstrumentationKey=abc-123;...'
    )
)

# Logger
logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=abc-123;...'
))

@app.route('/')
def hello():
    logger.info('Home page accessed')
    return 'Hello World'

Node.js

# Install the SDK
npm install applicationinsights

const appInsights = require('applicationinsights');

appInsights.setup('InstrumentationKey=abc-123;...')
    .setAutoDependencyCorrelation(true)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true)
    .start();

// Your Express code
const express = require('express');
const app = express();

app.get('/', (req, res) => {
    appInsights.defaultClient.trackEvent({name: 'HomePage'});
    res.send('Hello World');
});

Custom telemetry

Custom tracking

// .NET
using Microsoft.ApplicationInsights;

private readonly TelemetryClient _telemetry;

public MyService(TelemetryClient telemetry)
{
    _telemetry = telemetry;
}

public void ProcessOrder(Order order)
{
    // Track an event
    _telemetry.TrackEvent("OrderProcessed", new Dictionary<string, string>
    {
        {"OrderId", order.Id},
        {"Amount", order.Total.ToString()}
    });

    // Track a metric
    _telemetry.TrackMetric("OrderValue", order.Total);

    // Track a trace (log)
    _telemetry.TrackTrace($"Processing order {order.Id}", SeverityLevel.Information);
}

# Python
from opencensus.trace import tracer as tracer_module
from opencensus.ext.azure.trace_exporter import AzureExporter

tracer = tracer_module.Tracer(
    exporter=AzureExporter(connection_string='...'),
)

with tracer.span(name='ProcessOrder'):
    # Your logic
    tracer.add_attribute_to_current_span('orderId', order_id)
    tracer.add_attribute_to_current_span('amount', amount)

Useful Log Analytics queries

// Slowest requests (P95)
requests
| where timestamp > ago(1h)
| summarize percentile(duration, 95) by name
| order by percentile_duration_95 desc

// Exceptions by type
exceptions
| where timestamp > ago(24h)
| summarize count() by type, outerMessage
| order by count_ desc

// Failing dependency calls
dependencies
| where success == false
| where timestamp > ago(1h)
| summarize count() by name, resultCode

// User journey (funnels)
customEvents
| where timestamp > ago(7d)
| where name in ('PageView', 'AddToCart', 'Checkout', 'Purchase')
| summarize count() by name

Automated alerts

# Alert when failed requests average above 5 per window (a count, not a percentage)
az monitor metrics alert create \
  --name high-error-rate \
  --resource-group $RG \
  --scopes /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Insights/components/$APPINSIGHTS_NAME \
  --condition "avg requests/failed > 5" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Insights/actionGroups/ops-team

Live Metrics Stream

For real-time debugging:

  1. Azure Portal → Application Insights → Live Metrics
  2. Watch requests, dependencies, and exceptions live
  3. Filter by server, cloud role, etc.

Sampling to reduce costs

// .NET - adaptive sampling (recommended)
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.EnableAdaptiveSampling = true;
    options.ConnectionString = connectionString;
});

// Fixed-rate sampling (keep ~5% of telemetry): disable adaptive sampling
// above, then configure the processor chain instead
builder.Services.Configure<TelemetryConfiguration>(config =>
{
    config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
        .UseSampling(5).Build();
});

Costs

Application Insights charges per GB ingested:

  • First 5 GB/month: free
  • Beyond that: ~€2/GB

With 10% sampling, an app with 1M requests/day → ~15 GB/month → €20/month
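
That number follows from the pricing rule above; a minimal sketch, assuming the rates listed here:

```python
def app_insights_cost(ingested_gb: float, free_gb: float = 5.0,
                      eur_per_gb: float = 2.0) -> float:
    """Monthly ingestion cost in EUR: the first free_gb are free."""
    return max(0.0, ingested_gb - free_gb) * eur_per_gb

print(app_insights_cost(15))  # (15 - 5) * 2 = 20.0, i.e. €20/month
```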

Distributed tracing

For microservices, Application Insights correlates operations automatically:

graph LR
    A[API Gateway] -->|operation_Id: abc123| B[Auth Service]
    A -->|operation_Id: abc123| C[Order Service]
    C -->|operation_Id: abc123| D[Payment API]

Cross-service query:

// Full trace of one operation
union requests, dependencies
| where operation_Id == 'abc123'
| project timestamp, itemType, name, duration, success
| order by timestamp asc

Best practices

  • Don't log PII: filter sensitive data before sending
  • Use sampling in production: 10-20% is enough
  • Custom dimensions: add tenant_id, user_role to segment data
  • Dependency tracking: verify it captures SQL, Redis, HTTP
  • Availability tests: configure pings every 5 minutes from multiple regions


Terraform backend on Azure Storage: Complete setup

Summary

Storing Terraform state locally is risky and doesn't scale. Azure Storage with state locking is the standard solution for teams. Here is the complete setup in 5 minutes.

Why use a remote backend?

Problems with a local backend:

  • ❌ State is lost if your laptop dies
  • ❌ Team collaboration is impossible
  • ❌ No locking → corrupted state on simultaneous runs
  • ❌ Secrets in plaintext on local disk

Advantages of Azure Storage:

  • ✅ Centralized, versioned state
  • ✅ Automatic locking via blob lease
  • ✅ Encryption at rest
  • ✅ Access control with RBAC

Backend setup

1. Create the Storage Account

# Variables
RG="terraform-state-rg"
LOCATION="westeurope"
STORAGE_ACCOUNT="tfstate$(date +%s)"  # Unique name
CONTAINER="tfstate"

# Create the resource group
az group create \
  --name $RG \
  --location $LOCATION

# Create the storage account
az storage account create \
  --name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --location $LOCATION \
  --sku Standard_LRS \
  --encryption-services blob \
  --min-tls-version TLS1_2

# Create the container
az storage container create \
  --name $CONTAINER \
  --account-name $STORAGE_ACCOUNT \
  --auth-mode login

Naming convention

Storage account names accept only lowercase letters and digits, with a maximum of 24 characters. Use a unique suffix to avoid collisions.
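
Before running az storage account create, you can sanity-check a candidate name against those rules locally (a helper sketch, not an Azure API call):

```python
import re

def valid_storage_account_name(name: str) -> bool:
    """Azure storage account names: 3-24 chars, lowercase letters and digits only."""
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(valid_storage_account_name("tfstate1234567890"))  # True
print(valid_storage_account_name("TF-State"))           # False: uppercase and hyphen
```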

2. Configure the backend in Terraform

backend.tf:

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstate1234567890"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

3. Initialize

# Log in to Azure
az login

# Initialize the backend
terraform init

# Migrate existing state (if applicable)
terraform init -migrate-state

State locking

Azure uses blob leases for automatic locking:

# Check whether a lock is active
az storage blob show \
  --container-name $CONTAINER \
  --name prod.terraform.tfstate \
  --account-name $STORAGE_ACCOUNT \
  --query "properties.lease.status"

If someone is running terraform apply, you will see:

"locked"

Multi-environment with workspaces

# Create one workspace per environment
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod

# List workspaces
terraform workspace list

# Switch between environments
terraform workspace select prod

Each workspace gets its own state blob. With the azurerm backend, the workspace name is appended to the configured key (the default workspace uses the key as-is):

prod.terraform.tfstate
prod.terraform.tfstateenv:dev
prod.terraform.tfstateenv:staging

Security: Managed Identity

Avoid using access keys in pipelines:

# Create a managed identity
az identity create \
  --resource-group $RG \
  --name terraform-identity

# Assign the Storage Blob Data Contributor role
IDENTITY_ID=$(az identity show \
  --resource-group $RG \
  --name terraform-identity \
  --query principalId -o tsv)

az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee $IDENTITY_ID \
  --scope /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT

Backend config with managed identity:

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstate1234567890"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
    use_msi              = true
    subscription_id      = "00000000-0000-0000-0000-000000000000"
    tenant_id            = "00000000-0000-0000-0000-000000000000"
  }
}

State versioning

# Enable blob versioning (an account-level setting, not per container)
az storage account blob-service-properties update \
  --account-name $STORAGE_ACCOUNT \
  --resource-group $RG \
  --enable-versioning true

# List previous versions
az storage blob list \
  --container-name $CONTAINER \
  --account-name $STORAGE_ACCOUNT \
  --include v \
  --query "[?name=='prod.terraform.tfstate'].{Version:versionId, LastModified:properties.lastModified}"

CI/CD pipeline with Azure DevOps

azure-pipelines.yml:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: terraform-variables

stages:
  - stage: Plan
    jobs:
      - job: TerraformPlan
        steps:
          - task: AzureCLI@2
            displayName: 'Terraform Init & Plan'
            inputs:
              azureSubscription: 'Azure Service Connection'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                terraform init
                terraform plan -out=tfplan

          - publish: '$(System.DefaultWorkingDirectory)/tfplan'
            artifact: tfplan

  - stage: Apply
    dependsOn: Plan
    condition: succeeded()
    jobs:
      - deployment: TerraformApply
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: tfplan

                - task: AzureCLI@2
                  displayName: 'Terraform Apply'
                  inputs:
                    azureSubscription: 'Azure Service Connection'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      terraform init
                      terraform apply tfplan

Troubleshooting

Error: "Failed to lock state"

# Force unlock (only if you are sure)
terraform force-unlock <LOCK_ID>

# Or break the lease manually
az storage blob lease break \
  --container-name $CONTAINER \
  --blob-name prod.terraform.tfstate \
  --account-name $STORAGE_ACCOUNT

Error: "storage account not found"

# Verify permissions and that the account exists
az storage account show \
  --name $STORAGE_ACCOUNT \
  --resource-group $RG

Best practices

  • One state file per project: don't mix different infrastructures
  • Soft delete: enable soft delete on the storage account
  • Network security: restrict access to specific VNets
  • Audit logs: enable diagnostic settings for compliance
  • External backup: export critical states to another storage account

Never commit state files

Add *.tfstate* to .gitignore. The state contains secrets in plaintext.
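
A typical .gitignore fragment for a Terraform repository (a common pattern; adjust to your layout):

```
# Terraform state and local working files
*.tfstate
*.tfstate.*
.terraform/
crash.log
```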

Referencias

Azure Bastion: SSH and RDP without exposing public IPs

Summary

Azure Bastion lets you connect to your VMs without assigning them a public IP. It works as a managed jump server that you access from the Azure portal. Ideal for meeting strict security policies.

What is Azure Bastion?

Azure Bastion is a PaaS service you deploy into your VNet that provides:

  • Secure RDP/SSH connectivity over SSL (port 443)
  • No public IP needed on the VMs
  • No agents or client software
  • Protection against port scanning and zero-day exploits

Architecture

graph LR
    A[Usuario] -->|HTTPS 443| B[Azure Bastion]
    B -->|Private IP| C[VM Linux SSH]
    B -->|Private IP| D[VM Windows RDP]
    B -.dedicated subnet.- E[AzureBastionSubnet]

Basic deployment

# Variables
RG="my-rg"
LOCATION="westeurope"
VNET_NAME="my-vnet"
BASTION_NAME="my-bastion"

# Create the dedicated Bastion subnet (the name is mandatory)
az network vnet subnet create \
  --resource-group $RG \
  --vnet-name $VNET_NAME \
  --name AzureBastionSubnet \
  --address-prefixes 10.0.255.0/26

# Create a public IP for Bastion
az network public-ip create \
  --resource-group $RG \
  --name ${BASTION_NAME}-pip \
  --sku Standard \
  --location $LOCATION

# Create Azure Bastion
az network bastion create \
  --resource-group $RG \
  --name $BASTION_NAME \
  --public-ip-address ${BASTION_NAME}-pip \
  --vnet-name $VNET_NAME \
  --location $LOCATION

Subnet requirements

  • The subnet MUST be named AzureBastionSubnet
  • Minimum /26 (64 IPs)
  • It cannot have a restrictive NSG

Connecting to VMs

From the Portal

  1. Go to the VM → Connect → Bastion
  2. Enter credentials
  3. Connect in the browser

From the CLI (requires Standard SKU)

# Connect to a Linux VM
az network bastion ssh \
  --resource-group $RG \
  --name $BASTION_NAME \
  --target-resource-id /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Compute/virtualMachines/my-linux-vm \
  --auth-type password \
  --username azureuser

# Connect to a Windows VM (requires the native client)
az network bastion rdp \
  --resource-group $RG \
  --name $BASTION_NAME \
  --target-resource-id /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Compute/virtualMachines/my-windows-vm

Available SKUs

SKU        Features                                        Approx. price
Basic      RDP/SSH from the portal                         ~€140/month
Standard   + CLI access, file copy, IP-based connection    ~€140/month + instances
Premium    + Kerberos auth, session recording              In preview

Standard SKU: Advanced features

# Upgrade to the Standard SKU and enable features
az network bastion update \
  --resource-group $RG \
  --name $BASTION_NAME \
  --sku Standard \
  --enable-tunneling true \
  --enable-ip-connect true

# Scale instances (2-50) for high availability
az network bastion update \
  --resource-group $RG \
  --name $BASTION_NAME \
  --scale-units 3

New capabilities with Standard:

  • Native client support: connect with your local SSH/RDP client via az network bastion
  • IP-based connection: connect to any IP inside the VNet
  • File transfer: upload/download files through a tunnel
  • Tunneling: create SSH tunnels for port forwarding
  • Shareable link: generate URLs for temporary access (requires Premium)

Copying files (Standard SKU with tunneling)

# Create an SSH tunnel for file transfer
az network bastion tunnel \
  --resource-group $RG \
  --name $BASTION_NAME \
  --target-resource-id /subscriptions/{sub}/resourceGroups/$RG/providers/Microsoft.Compute/virtualMachines/my-vm \
  --resource-port 22 \
  --port 2222

# In another terminal: upload a file
scp -P 2222 local-file.txt azureuser@localhost:/home/azureuser/

# Download a file
scp -P 2222 azureuser@localhost:/home/azureuser/remote-file.txt ./

Tunnel for RDP:

# Create an RDP tunnel (port 3389)
az network bastion tunnel \
  --resource-group $RG \
  --name $BASTION_NAME \
  --target-resource-id /subscriptions/{sub}/resourceGroups/$RG/providers/Microsoft.Compute/virtualMachines/my-windows-vm \
  --resource-port 3389 \
  --port 13389

# Connect your local RDP client to localhost:13389
mstsc /v:localhost:13389

Monitoring

# View session metrics
az monitor metrics list \
  --resource /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Network/bastionHosts/$BASTION_NAME \
  --metric "Sessions"

# Diagnostic logs
az monitor diagnostic-settings create \
  --resource /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.Network/bastionHosts/$BASTION_NAME \
  --name bastion-logs \
  --workspace /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.OperationalInsights/workspaces/my-law \
  --logs '[{"category": "BastionAuditLogs", "enabled": true}]'

Best practices

  • NSGs on VM subnets: allow traffic only from AzureBastionSubnet
  • JIT access: combine with Microsoft Defender for Cloud JIT
  • Session recording: enable on the Premium SKU for compliance
  • Firewall rules: Bastion needs outbound Internet access (Azure services)
  • Disaster recovery: deploy Bastion in multiple regions

Network restrictions (NSG)

The AzureBastionSubnet requires specific NSG rules:

# Create an NSG for AzureBastionSubnet
az network nsg create \
  --resource-group $RG \
  --name ${BASTION_NAME}-nsg

# Inbound: HTTPS from the Internet
az network nsg rule create \
  --resource-group $RG \
  --nsg-name ${BASTION_NAME}-nsg \
  --name AllowHttpsInbound \
  --priority 100 \
  --source-address-prefixes Internet \
  --destination-port-ranges 443 \
  --protocol Tcp \
  --access Allow \
  --direction Inbound

# Inbound: GatewayManager
az network nsg rule create \
  --resource-group $RG \
  --nsg-name ${BASTION_NAME}-nsg \
  --name AllowGatewayManager \
  --priority 110 \
  --source-address-prefixes GatewayManager \
  --destination-port-ranges 443 \
  --protocol Tcp \
  --access Allow \
  --direction Inbound

# Inbound: Bastion internal communication
az network nsg rule create \
  --resource-group $RG \
  --nsg-name ${BASTION_NAME}-nsg \
  --name AllowBastionHostCommunication \
  --priority 120 \
  --source-address-prefixes VirtualNetwork \
  --destination-port-ranges 8080 5701 \
  --protocol Tcp \
  --access Allow \
  --direction Inbound

# Outbound: SSH/RDP to the VMs
az network nsg rule create \
  --resource-group $RG \
  --nsg-name ${BASTION_NAME}-nsg \
  --name AllowSshRdpOutbound \
  --priority 100 \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 22 3389 \
  --protocol Tcp \
  --access Allow \
  --direction Outbound

# Outbound: Azure Cloud (Azure services)
az network nsg rule create \
  --resource-group $RG \
  --nsg-name ${BASTION_NAME}-nsg \
  --name AllowAzureCloudOutbound \
  --priority 110 \
  --destination-address-prefixes AzureCloud \
  --destination-port-ranges 443 \
  --protocol Tcp \
  --access Allow \
  --direction Outbound

# Associate the NSG with the subnet
az network vnet subnet update \
  --resource-group $RG \
  --vnet-name $VNET_NAME \
  --name AzureBastionSubnet \
  --network-security-group ${BASTION_NAME}-nsg

Costs

Example in West Europe:

  • Basic: ~€140/month (fixed)
  • Standard: ~€140/month + ~€9 per additional instance/month
  • Outbound traffic: standard Azure rates

Savings

If you only need occasional access, consider tearing Bastion down and recreating it on demand. You only pay for the hours it is running.
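There is no stop/start operation for Bastion, so "off" means deleting the host and recreating it later (the VNet, subnet and public IP can stay). A hedged sketch: the `az` calls are commented out because they need a live subscription, and the ~€0.19/hour figure is just the ~€140/month price prorated:

```shell
# Tear down Bastion outside working hours (the VNet, subnet and public IP remain):
# az network bastion delete --resource-group $RG --name $BASTION_NAME

# Recreate it when needed, reusing the existing public IP and VNet:
# az network bastion create \
#   --resource-group $RG \
#   --name $BASTION_NAME \
#   --public-ip-address ${BASTION_NAME}-pip \
#   --vnet-name $VNET_NAME \
#   --location westeurope

# Back-of-envelope savings at ~€0.19/hour:
FULL_HOURS=$((24 * 30))      # running 24/7 for a month
PARTIAL_HOURS=$((10 * 22))   # running 10 h on 22 weekdays
echo "Hours saved per month: $((FULL_HOURS - PARTIAL_HOURS))"
```

Recreating takes a few minutes, so this only pays off for genuinely occasional access.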

Alternatives

  • Azure VPN Gateway: for permanent access from on-premises
  • Azure Virtual WAN: for complex hub-spoke topologies
  • Just-in-Time Access: for temporary port exposure


Automatic secret rotation in Azure Key Vault

Summary

Rotating secrets manually is error-prone and tedious. Azure Key Vault supports automated rotation via Event Grid + Azure Functions. Here is how to implement it step by step.

Why rotate secrets?

  • Compliance: PCI-DSS and SOC2 require periodic rotation
  • Security: limits the exposure window if there is a leak
  • Best practice: NIST recommends rotating every 90 days

Keys vs Secrets

  • Cryptographic keys: have native rotation via a rotation policy
  • Secrets (passwords, API keys): require Event Grid + a Function App

This article covers secrets. For keys, see Configure key rotation.

Automatic rotation architecture

Azure Key Vault uses Event Grid to send a notification when a secret is close to expiring:

graph LR
    A[Key Vault Secret] -->|30 days before expiry| B[Event Grid]
    B -->|SecretNearExpiry event| C[Function App]
    C -->|Generates new secret| D[External Service/SQL]
    C -->|Updates secret| A

Process:

  1. Key Vault publishes a SecretNearExpiry event 30 days before expiration
  2. Event Grid calls the Function App via HTTP POST
  3. The function generates a new secret and updates the service
  4. The function updates Key Vault with a new version of the secret

Implementation: rotating a SQL Server password

1. Create the secret with an expiration date

# Variables
RG="my-rg"
KV_NAME="my-keyvault"
SECRET_NAME="sql-admin-password"
SQL_SERVER="my-sql-server"

# Create the secret with a 90-day expiration
EXPIRY_DATE=$(date -u -d "+90 days" +'%Y-%m-%dT%H:%M:%SZ')

az keyvault secret set \
  --vault-name $KV_NAME \
  --name $SECRET_NAME \
  --value "InitialP@ssw0rd!" \
  --expires $EXPIRY_DATE

2. Deploy the rotation Function App

Use Microsoft's official template:

# Deploy the ARM template with a preconfigured Function App
az deployment group create \
  --resource-group $RG \
  --template-uri https://raw.githubusercontent.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp/main/ARM-Templates/Function/azuredeploy.json \
  --parameters \
    sqlServerName=$SQL_SERVER \
    keyVaultName=$KV_NAME \
    functionAppName="${KV_NAME}-rotation-func" \
    secretName=$SECRET_NAME \
    repoUrl="https://github.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp.git"

This template deploys:

  • A Function App with a managed identity
  • An Event Grid subscription to SecretNearExpiry
  • A Key Vault access policy for the function
  • Preconfigured rotation code

3. The function code (included in the template)

The C# function included in the template handles:

  • Receiving the SecretNearExpiry event from Event Grid
  • Extracting the secret name and version
  • Generating a new random password
  • Updating SQL Server with the new password
  • Creating a new version of the secret in Key Vault

// Simplified code (the template includes the full implementation)
[FunctionName("AKVSQLRotation")]
public static void Run([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log)
{
    var secretName = eventGridEvent.Subject;
    var keyVaultName = ExtractVaultName(eventGridEvent.Topic);

    // Rotate the password
    SecretRotator.RotateSecret(log, secretName, keyVaultName);
}

Implementation: rotating Storage Account keys

For services with two credential sets (primary/secondary keys):

# Deploy the template for Storage Account rotation
az deployment group create \
  --resource-group $RG \
  --template-uri https://raw.githubusercontent.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell/master/ARM-Templates/Function/azuredeploy.json \
  --parameters \
    storageAccountName=$STORAGE_ACCOUNT \
    keyVaultName=$KV_NAME \
    functionAppName="${KV_NAME}-storage-rotation"

# Create the secret with rotation metadata
EXPIRY_DATE=$(date -u -d "+60 days" +'%Y-%m-%dT%H:%M:%SZ')
STORAGE_KEY=$(az storage account keys list -n $STORAGE_ACCOUNT --query "[0].value" -o tsv)

az keyvault secret set \
  --vault-name $KV_NAME \
  --name storageKey \
  --value "$STORAGE_KEY" \
  --tags CredentialId=key1 ProviderAddress="/subscriptions/{sub}/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT" ValidityPeriodDays=60 \
  --expires $EXPIRY_DATE

Dual-key strategy:

  1. Key1 is stored in Key Vault
  2. The SecretNearExpiry event triggers the rotation
  3. The function regenerates Key2 on the Storage Account
  4. It updates the secret in Key Vault with Key2
  5. The next rotation alternates back to Key1
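The alternation above boils down to flipping between key1 and key2 on each run. A minimal sketch of that logic in shell (`next_key` and `rotate_storage_key` are hypothetical helper names; the `az` calls are commented out because they need a live subscription):

```shell
# Given the CredentialId tag of the current secret, return the key to regenerate next
next_key() {
  if [ "$1" = "key1" ]; then echo "key2"; else echo "key1"; fi
}

rotate_storage_key() {
  local current=$1 target
  target=$(next_key "$current")
  # Regenerate the target key and store it as the new secret version, e.g.:
  # az storage account keys renew -n "$STORAGE_ACCOUNT" -g "$RG" --key "$target"
  # NEW_KEY=$(az storage account keys list -n "$STORAGE_ACCOUNT" \
  #   --query "[?keyName=='$target'].value" -o tsv)
  # az keyvault secret set --vault-name "$KV_NAME" --name storageKey \
  #   --value "$NEW_KEY" --tags CredentialId=$target ValidityPeriodDays=60
  echo "$target"
}

rotate_storage_key key1   # prints key2
```

Because the regenerated key is always the one *not* currently stored, clients reading the secret from Key Vault keep working throughout the rotation.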

Monitoring rotations

# List the versions of a secret
az keyvault secret list-versions \
  --vault-name $KV_NAME \
  --name $SECRET_NAME \
  --query "[].{Version:id, Created:attributes.created, Expires:attributes.expires}"

# Show the last update
az keyvault secret show \
  --vault-name $KV_NAME \
  --name $SECRET_NAME \
  --query "attributes.{Updated:updated, Expires:expires, Enabled:enabled}"

# View rotation logs in the Function App
az monitor app-insights query \
  --app ${KV_NAME}-rotation-func \
  --analytics-query "traces | where message contains 'Rotation' | top 10 by timestamp desc"

Email notifications

# Action group for alerts
az monitor action-group create \
  --resource-group $RG \
  --name secret-rotation-alerts \
  --short-name SecRot \
  --email-receiver EmailAdmin admin@company.com

# Alert rule
az monitor metrics alert create \
  --resource-group $RG \
  --name secret-rotation-failed \
  --scopes /subscriptions/{sub-id}/resourceGroups/$RG/providers/Microsoft.KeyVault/vaults/$KV_NAME \
  --condition "count SecretRotationFailed > 0" \
  --action secret-rotation-alerts

Best practices

  • Overlap period: keep the previous version valid for 7-30 days
  • Testing: rotate first in a dev/staging environment
  • Documentation: record which services use each secret
  • Backup: export critical secrets to encrypted offline storage
  • Notifications: configure alerts for failed rotations

Hardcoded secrets

Rotation is pointless if secrets are hardcoded in code or config files. Use Key Vault references instead (@Microsoft.KeyVault(SecretUri=...) in App Settings).
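For example, an App Service setting can point at the vault instead of holding the value. A minimal sketch; the vault, secret, and app names are placeholders, and the app's managed identity needs get permission on secrets:

```shell
# Build a Key Vault reference string for an App Service app setting
SECRET_URI="https://my-keyvault.vault.azure.net/secrets/sql-admin-password"
KV_REF="@Microsoft.KeyVault(SecretUri=${SECRET_URI})"
echo "$KV_REF"

# Apply it as an app setting (requires a live subscription):
# az webapp config appsettings set \
#   --resource-group my-rg \
#   --name my-webapp \
#   --settings DB_PASSWORD="$KV_REF"
```

When the secret rotates, App Service resolves the reference to the latest version, so the app picks up new values without a redeploy.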

Official templates

Microsoft provides complete ARM templates for different scenarios:

  • SQL password rotation
  • Storage Account key rotation
  • Adaptable to other services (Cosmos DB, Redis, external APIs)


Azure App Service Deployment Slots: zero-downtime deployments

Summary

Deployment Slots in Azure App Service let you deploy new versions of your application with no downtime. They are the simplest way to implement blue-green deployments in Azure.

What are Deployment Slots?

Deployment Slots are live environments inside your App Service where you can deploy different versions of your application. Each slot:

  • Has its own URL
  • Can have independent configuration
  • Allows instant swaps between slots
  • Shares the same App Service plan

Typical use case

# Variables
RG="my-rg"
APP_NAME="my-webapp"
LOCATION="westeurope"

# Create the App Service Plan (Standard tier minimum)
az appservice plan create \
  --name ${APP_NAME}-plan \
  --resource-group $RG \
  --location $LOCATION \
  --sku S1

# Create the App Service
az webapp create \
  --name $APP_NAME \
  --resource-group $RG \
  --plan ${APP_NAME}-plan

# Create the staging slot
az webapp deployment slot create \
  --name $APP_NAME \
  --resource-group $RG \
  --slot staging

Deployment workflow

1. Deploy to staging:

# Deploy the code to the staging slot
az webapp deployment source config-zip \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --src app.zip

2. Validate on staging:

Slot URL: https://{app-name}-staging.azurewebsites.net
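Before swapping, it helps to script a quick smoke test against the staging URL. A minimal sketch; the `/health` path, retry count, and the `wait_for_200` helper name are assumptions to adapt to your app:

```shell
# Poll a URL until it answers HTTP 200, or give up after N tries
wait_for_200() {
  local url=$1 tries=${2:-10} code
  for _ in $(seq 1 "$tries"); do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    [ "$code" = "200" ] && return 0
    sleep 2
  done
  return 1
}

# Gate the swap on the staging slot being healthy:
# wait_for_200 "https://${APP_NAME}-staging.azurewebsites.net/health" 20 || exit 1
```

In a pipeline, a non-zero exit here stops the job before the swap step runs.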

3. Swap to production:

# Direct swap staging -> production
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production

# Swap with preview (recommended)
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production \
  --action preview

# After validating, complete the swap
az webapp deployment slot swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --target-slot production \
  --action swap

Swap process

During a swap, Azure follows this process:

  1. Applies the target slot's settings to the source slot
  2. Waits for all instances to restart and warm up
  3. If all instances are healthy, switches the routing
  4. The target slot (production) ends up running the new app with no downtime

Total time: 1-5 minutes depending on warmup. Production suffers NO downtime.

Sticky vs swappable configuration

Not all configuration is exchanged during a swap:

Sticky (does not move with the code):

  • App settings marked as "Deployment slot setting"
  • Connection strings marked as "Deployment slot setting"
  • Custom domains
  • Non-public certificates and TLS/SSL settings
  • Scale settings
  • IP restrictions
  • Always On, diagnostic settings, CORS

Swappable (moves with the code):

  • General settings (framework version, 32/64-bit)
  • Unmarked app settings
  • Handler mappings
  • Public certificates
  • WebJobs content
  • Hybrid connections
  • Virtual network integration

Configure sticky settings:

From the Azure portal:

  1. Go to Configuration → Application settings of the slot
  2. Add/edit the app setting
  3. Check the Deployment slot setting checkbox
  4. Apply

# Create an app setting on the staging slot
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings DATABASE_URL="staging-connection-string"

# To make it sticky (slot-specific) from the CLI, pass it via --slot-settings instead:
# az webapp config appsettings set ... --slot-settings DATABASE_URL="staging-connection-string"

Auto-swap for CI/CD

# Configure auto-swap from staging to production
az webapp deployment slot auto-swap \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --auto-swap-slot production

With auto-swap enabled:

  1. Push to staging → automatic deployment
  2. Automatic warmup of the slot
  3. Swap to production without manual intervention

Customize the warmup path:

# Configure a custom warmup URL
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_PATH="/health/ready"

# Accept only specific HTTP status codes
az webapp config appsettings set \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_STATUSES="200,202"

Best practices

  • Use staging for testing: always validate on staging before the swap
  • Configure health checks: Azure verifies the slot before swapping
  • Keep parity: staging should mirror production (same configuration, a comparable test DB)
  • Fast rollback: if something fails, swap back immediately
  • Limit slots: at most 2-3 slots per app (staging, pre-production)

Cost savings

Slots share the App Service Plan's resources. A staging slot costs nothing extra, but you need the Standard tier or higher.

Swap monitoring

# List deployment slots and their states
az webapp deployment slot list \
  --resource-group $RG \
  --name $APP_NAME

# Tail logs during the swap
az webapp log tail \
  --resource-group $RG \
  --name $APP_NAME \
  --slot staging


Azure Cost Management: plan your 2025 budget

Summary

January is the month to set up budgets and cost alerts in Azure. With Azure Cost Management you can set limits, receive notifications, and avoid surprises on the bill. In this post you will see how to create budgets, configure automatic alerts, and use Azure Advisor to optimize spending from the first day of the year.

GTD in Outlook with Microsoft To Do

In this post, we will see how to use GTD in Outlook.

First of all, let's define what GTD is. GTD stands for Getting Things Done, a productivity method created by David Allen. The GTD method is based on the idea that you should get things out of your head and into a trusted system, so you can focus on what you need to do.

Important

The GTD method is not about doing more things faster. It's about doing the right things at the right time. This method needs to be aligned with your purpose, objectives, and goals, so you can focus on what really matters to you.

The GTD workflow

The GTD workflow consists of five steps:

  1. Capture: Collect everything that has your attention.
  2. Clarify: Process what you've captured.
  3. Organize: Put everything in its place.
  4. Reflect: Review your system regularly.
  5. Engage: Choose what to do and do it.

The detailed flowchart of the GTD method is shown below:

graph TD;
    subgraph Capture    
    subgraph for_each_item_not_in_Inbox
    AA[Collect everything that has your attention in the Inbox];
    end    
    A[Inbox]
    AA-->A
    end
    subgraph Clarify        
    subgraph for_each_item_in_Inbox
    BB[  What is this?
            What should I do?
            Is this really worth doing?
            How does it fit in with all the other things I have to do?
            Is there any action that needs to be done about it or because of it?]
    end
    end
    A --> BB
    subgraph Organize
    BB --> B{Is it actionable?}
    B -- no --> C{Is it reference material?}
    C -- yes --> D[Archive]
    C -- no --> E{Is it trash?}
    E -- yes --> FF[Trash it]
    E -- no --> GG[Incubate it]
    GG --> HH[Review it weekly]
    B -- yes --> F{Can it be done in one step?}
    F -- no --> G[Project]
    G --> HHH[Define a next action at least]
    HHH --> H[Project Actions]    
    end
    subgraph Engage
    H --> I
    F -- yes --> I{Does it take more than 2 minutes?}
    I -- no --> J[DO IT]
    I -- yes --> K{Is it my responsibility?}
    K -- no --> L[Delegate]
    L --> M[Waiting For]    
    K -- yes --> N{Does it have a specific date?}
    N -- yes --> O[Add to Calendar]
    N -- no --> P[Next Actions]
    O --> Q[Schedule. Do it at a specific time]
    P --> R[Do it as soon as possible]
    end
graph TD;
subgraph Reflect
    S[**Daily Review**
            - Review your tasks daily
            - Calendar
            - Next  actions
            - Urgent tasks]
    T[**Weekly Review**
            - Review your projects weekly, make sure you're making progress
            - Next actions
            - Waiting for
            - Someday/Maybe
            - Calendar]
    U[**Monthly Review**
            - Focus on your goals
            - Reference archive
            - Someday/Maybe
            - Completed projects]
    V[**Review your purpose annually**
            - Goals and purposes
            - Big projects
            - Historical archive]
    end

It's important to note that the GTD method is not a one-size-fits-all solution. You can adapt it to your needs and preferences. The key is to find a system that works for you and stick to it.

And now, let's see how to use GTD in Outlook and Microsoft To Do.

How to use GTD in Outlook with Microsoft To Do

When it comes to implementing the GTD method in Outlook, the key is to use the right tools and techniques. Microsoft To Do is a great tool for managing your tasks and projects, and it integrates seamlessly with Outlook.

You can use Outlook to implement the GTD method by following these steps:

  1. Capture:
    • Emails: Use the Inbox to collect everything that has your attention.
    • Other things: Use the Microsoft To Do Tasks default list to capture tasks and projects.
  2. Clarify: Process what you've captured by asking yourself the following questions:
    • What is this?
    • What should I do?
    • Is this really worth doing?
    • How does it fit in with all the other things I have to do?
    • Is there any action that needs to be done about it or because of it?
  3. Organize: Put everything in its place by following these steps:
    • Inbox:
      • Move emails to the appropriate folder or delete them.
      • Categories: Use categories to organize your emails by context, and folders to organize them by project or client.
      • Use search folders to find emails quickly by category; you can clear categories after processing.
      • Flag emails to add them to To Do.
      • Create rules to automate repetitive tasks when clarifying a type of email that always results in the same action.
    • Tasks: Organize your tasks and projects in Microsoft To Do.
      • Lists: Create lists for different types of tasks, one per context, or use #tags for that in a single list. For example:
        • With lists: Agendas, Anywhere, Calls, Computer, Errands, Home, Office, Waiting For, Someday/Maybe.
        • With tags, one list with: #Agendas, #Anywhere, #Calls, #Computer, #Errands, #Home, #Office, #WaitingFor, #SomedayMaybe.
      • Use the #nextaction tag to identify the next task to do.
      • Use the #urgent tag to identify urgent tasks.
    • Projects:
      • Group lists: Group lists by project category or client.
      • One list per project: Create a list for each project and add tasks to it.
      • Use the #nextaction tag to identify the next task in each project.
    • Reference material:
      • Store reference material in folders, ideally in OneDrive or SharePoint.
      • Use a folder structure to organize your reference material.
      • Use search folders to find it quickly.
      • Use tags to identify the context of the reference material. You can use FileMeta to add tags to non-taggable files in Windows.
  4. Reflect: Review your system regularly to make sure it's up to date.
    • Daily Review
    • Weekly Review
    • Monthly Review
    • Annual Review
  5. Engage: Choose what to do and do it.
    • Use the My Day bar to see your tasks and events at a glance, or type #nextaction in the search bar to see all your next actions.

These are just some ideas to get you started. You can adapt the GTD method to your needs and preferences. The key is to find a system that works for you and stick to it.

My example of GTD in Outlook with Microsoft To Do:

Outlook:

(screenshot)

To Do:

(screenshot)

I'm using a mix of lists and tags with the same names to organize my tasks and projects. I have lists for different types of tasks, such as Agendas, Anywhere, Calls, Computer, Errands, Home, Office, Waiting For, and Someday/Maybe. I also use tags to identify the next action, urgent tasks, and the context of tasks within projects.

In the case of emails, I use categories to organize them by context and folders to organize them by project or client. I also use search folders to find emails quickly by category and to filter by unread. The reason is that I can clear categories after processing, and in the majority of cases I only need a quick review of the emails without converting them into tasks.

By following these steps, you can implement the GTD method in Outlook and Microsoft To Do and improve your productivity and focus.

Good luck! 🍀


How to sign Git commits in Visual Studio Code in Windows Subsystem for Linux (WSL)

In this post, we will see how to sign Git commits in Visual Studio Code.

Prerequisites

  • Visual Studio Code
  • Git
  • gpg
  • gpg-agent
  • gpgconf
  • pinentry-gtk-2
  • Windows Subsystem for Linux (WSL) with Ubuntu 20.04

Steps

1. Install GPG

First, you need to install GPG and its agents. You can do this by running the following command:

sudo apt install gpg gpg-agent gpgconf pinentry-gtk2 -y

2. Generate a GPG key

To generate a GPG key, run the following command:

gpg --full-generate-key

You will be asked to enter your name, email, and passphrase. After that, the key will be generated.

3. List your GPG keys

To list your GPG keys, run the following command:

gpg --list-secret-keys --keyid-format LONG

You will see a list of your GPG keys. Copy the key ID of the key you want to use.

4. Configure Git to use your GPG key

To configure Git to use your GPG key, run the following command:

git config --global user.signingkey YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

5. Configure Git to sign commits by default

To configure Git to sign commits by default, run the following command:

git config --global commit.gpgsign true
git config --global gpg.program "$(which gpg)"

6. Export the GPG key

To export the GPG key, run the following command:

gpg --armor --export YOUR_KEY_ID

Replace YOUR_KEY_ID with the key ID you copied in the previous step.

7. Import into GitHub

Go to your GitHub account, open the GPG keys section in settings, create a new GPG key, and paste the exported key.

Configure Visual Studio Code to use GPG

1. Configure gpg-agent

To configure gpg-agent, run the following command:

echo "default-cache-ttl 600" >> ~/.gnupg/gpg-agent.conf
echo "pinentry-program /usr/bin/pinentry-gtk-2" >> ~/.gnupg/gpg-agent.conf
echo "allow-preset-passphrase" >> ~/.gnupg/gpg-agent.conf

2. Restart the gpg-agent

To restart the gpg-agent, run the following command:

gpgconf --kill gpg-agent
gpgconf --launch gpg-agent

3. Sign a commit

To sign a commit, run the following command:

git commit -S -m "Your commit message"

4. Verify the signature

To verify the signature of a commit, run the following command:

git verify-commit HEAD
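Besides `git verify-commit`, you can check signature status across recent history with the `%G?` pretty-format placeholder (G = good signature, B = bad, N = no signature). A small sketch; `show_sig_status` is a hypothetical helper name:

```shell
# Show signature status, abbreviated hash and subject for recent commits
show_sig_status() {
  git log --pretty="format:%G? %h %s" -"${1:-5}"
}

# Example: show_sig_status 5
```

This is a quick way to spot unsigned commits that slipped in before `commit.gpgsign` was enabled.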

5. Configure Visual Studio Code to use GPG

To configure Visual Studio Code to use GPG, open the settings by pressing Ctrl + , and search for git.enableCommitSigning. Set the value to true.

6. Sign a commit

Make a commit in Visual Studio Code, and you will see a prompt asking you to enter your GPG passphrase. Enter it, and the commit will be signed.

That's it! Now you know how to sign Git commits in Visual Studio Code.

Some tips

For all repositories

  • Establish your email in git configuration:
git config --global user.email "petete@something.es"
  • Establish your name in git configuration:
git config --global user.name "Petete"
  • Establish your GPG key in git configuration:
git config --global user.signingkey YOUR_KEY_ID
  • Establish your GPG program in git configuration:
git config --global gpg.program "$(which gpg)"

For a specific repository

  • Establish your email in git configuration:
git config user.email "petete@something.es"
  • Establish your name in git configuration:
git config user.name "Petete"
  • Establish your GPG key in git configuration:
git config user.signingkey YOUR_KEY_ID
  • Establish your GPG program in git configuration:
git config gpg.program "$(which gpg)"

Conclusion

In this post, we saw how to sign Git commits in Visual Studio Code. This is useful if you want to verify the authenticity of your commits. I hope you found this post helpful. If you have any questions or comments, please let me know. Thank you for reading!