
2024/04

Azure Network, Hub-and-Spoke Topology

Hub and Spoke is a network topology where a central Hub is connected to multiple Spokes. The Hub acts as a central point of connectivity and control, while the Spokes are isolated networks that connect to the Hub. This topology is common in Azure to simplify the connectivity and management of virtual networks.

graph TD
    HUB(("Central Hub"))
    SPOKE1[Spoke1]
    SPOKE2[Spoke2]
    SPOKE3[Spoke3]
    SPOKEN[Spoke...]
    HUB --- SPOKE1
    HUB --- SPOKE2
    HUB --- SPOKE3
    HUB --- SPOKEN

Key Features of the Hub and Spoke Topology

  1. Centralized Connectivity: The Hub centralizes the connectivity between the Spoke networks. This simplifies the administration and maintenance of the network.

  2. Traffic Control: The Hub acts as a traffic control point between the Spoke networks. This allows for centralized application of security and routing policies.

  3. Scalability: The Hub and Spoke topology is highly scalable and can grow to meet the organization's connectivity needs.

  4. Resilience: When the Hub is deployed with redundant components (for example, zone-redundant gateways or firewalls), the topology provides redundancy and resilience in case of network failures.

How to Use the Hub and Spoke Topology in Azure

To implement the Hub and Spoke topology in Azure, follow these steps:

# Step 1: Create a virtual network for the Hub
az network vnet create --name HubVnet --resource-group MyResourceGroup --location eastus --address-prefix 10.0.0.0/16

# Step 2: Create virtual networks for the Spokes
az network vnet create --name Spoke1Vnet --resource-group MyResourceGroup --location eastus --address-prefix 10.1.0.0/16
az network vnet create --name Spoke2Vnet --resource-group MyResourceGroup --location eastus --address-prefix 10.2.0.0/16
az network vnet create --name Spoke3Vnet --resource-group MyResourceGroup --location eastus --address-prefix 10.3.0.0/16

# Step 3: Connect the Spokes to the Hub (peering must be created in both directions)
az network vnet peering create --name Spoke1ToHub --resource-group MyResourceGroup --vnet-name Spoke1Vnet --remote-vnet HubVnet --allow-vnet-access
az network vnet peering create --name Spoke2ToHub --resource-group MyResourceGroup --vnet-name Spoke2Vnet --remote-vnet HubVnet --allow-vnet-access
az network vnet peering create --name Spoke3ToHub --resource-group MyResourceGroup --vnet-name Spoke3Vnet --remote-vnet HubVnet --allow-vnet-access
az network vnet peering create --name HubToSpoke1 --resource-group MyResourceGroup --vnet-name HubVnet --remote-vnet Spoke1Vnet --allow-vnet-access
az network vnet peering create --name HubToSpoke2 --resource-group MyResourceGroup --vnet-name HubVnet --remote-vnet Spoke2Vnet --allow-vnet-access
az network vnet peering create --name HubToSpoke3 --resource-group MyResourceGroup --vnet-name HubVnet --remote-vnet Spoke3Vnet --allow-vnet-access

# Step 4: Enable gateway transit on the Hub side of each peering (requires a VPN or ExpressRoute gateway in the Hub)
az network vnet peering update --name HubToSpoke1 --resource-group MyResourceGroup --vnet-name HubVnet --set allowGatewayTransit=true
az network vnet peering update --name HubToSpoke2 --resource-group MyResourceGroup --vnet-name HubVnet --set allowGatewayTransit=true
az network vnet peering update --name HubToSpoke3 --resource-group MyResourceGroup --vnet-name HubVnet --set allowGatewayTransit=true

# Step 5: Configure the Spokes to use the Hub's gateway
az network vnet peering update --name Spoke1ToHub --resource-group MyResourceGroup --vnet-name Spoke1Vnet --set useRemoteGateways=true
az network vnet peering update --name Spoke2ToHub --resource-group MyResourceGroup --vnet-name Spoke2Vnet --set useRemoteGateways=true
az network vnet peering update --name Spoke3ToHub --resource-group MyResourceGroup --vnet-name Spoke3Vnet --set useRemoteGateways=true

Variant of the Hub and Spoke Topology

A common variant is Hub and Spoke with peering between Spokes, which allows direct connectivity between Spoke networks without going through the Hub. This can be useful in scenarios that require direct Spoke-to-Spoke traffic, such as data replication or application-to-application communication.

graph TD
    HUB(("Central Hub"))
    SPOKE1[Spoke1]
    SPOKE2[Spoke2]
    SPOKE3[Spoke3]
    SPOKEN[Spoke...]
    HUB --- SPOKE1
    HUB --- SPOKE2
    HUB --- SPOKE3
    HUB --- SPOKEN
    SPOKE1 -.- SPOKE2

In this case, the Spoke networks are connected to each other directly through virtual network peering, for example:

# Connect Spoke1 to Spoke2
az network vnet peering create --name Spoke1ToSpoke2 --resource-group MyResourceGroup --vnet-name Spoke1Vnet --remote-vnet Spoke2Vnet --allow-vnet-access
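
VNet peering is non-transitive and each link is directional, so the reverse peering is also required before traffic can flow. A sketch mirroring the command above:

# Connect Spoke2 back to Spoke1 (peering must exist in both directions)
az network vnet peering create --name Spoke2ToSpoke1 --resource-group MyResourceGroup --vnet-name Spoke2Vnet --remote-vnet Spoke1Vnet --allow-vnet-access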

Scalability and Performance

The Hub and Spoke topology in Azure scales well: a single Hub virtual network can be peered with up to 500 Spoke virtual networks (the per-VNet peering limit), and additional Hubs can be added for larger environments. Because peered virtual networks exchange traffic directly over the Microsoft backbone, connectivity between the Spokes and the Hub is efficient and low-latency.

Security and Compliance

The Hub and Spoke topology in Azure provides centralized control over network security and compliance. Security and routing policies can be applied centrally at the Hub, ensuring consistency and compliance with the organization's network policies.
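
As an illustrative sketch of centralized traffic control (the firewall/NVA address 10.0.0.4 and the subnet name WorkloadSubnet are assumptions, not part of the steps above), a route table can force Spoke traffic through an appliance in the Hub:

# Create a route table and send all outbound traffic through a firewall/NVA in the Hub
az network route-table create --name HubEgressRouteTable --resource-group MyResourceGroup --location eastus
az network route-table route create --name DefaultViaHub --route-table-name HubEgressRouteTable --resource-group MyResourceGroup --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4

# Associate the route table with a Spoke subnet (assumed subnet name)
az network vnet subnet update --name WorkloadSubnet --vnet-name Spoke1Vnet --resource-group MyResourceGroup --route-table HubEgressRouteTable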

Monitoring and Logging

Use Network Watcher to monitor and diagnose network problems in the Hub and Spoke topology. Network Watcher provides the following tools:

  • Monitoring
    • Topology view shows you the resources in your virtual network and the relationships between them.
    • Connection monitor allows you to monitor connectivity and latency between endpoints within and outside of Azure.
  • Network diagnostic tools
    • IP flow verify helps you detect traffic filtering issues at the virtual machine level.
    • NSG diagnostics helps you detect traffic filtering issues at the virtual machine, virtual machine scale set, or application gateway level.
    • Next hop helps you verify traffic routes and detect routing issues.
    • Connection troubleshoot enables a one-time check of connectivity and latency between a virtual machine and the Bastion host, application gateway, or another virtual machine.
    • Packet capture allows you to capture traffic from your virtual machine.
    • VPN troubleshoot runs multiple diagnostic checks on your gateways and VPN connections to help debug issues.
  • Traffic
    • Virtual network flow logs, recently released, allow you to monitor network traffic in Azure virtual networks.
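
For example (a sketch with a hypothetical VM name and addresses), IP flow verify can be run from the CLI to check whether an NSG blocks a specific flow:

# Check whether inbound TCP traffic to a Spoke VM on port 3389 would be allowed or denied
az network watcher test-ip-flow --resource-group MyResourceGroup --vm Spoke1Vm --direction Inbound --protocol TCP --local 10.1.0.4:3389 --remote 203.0.113.10:55000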

Use Cases and Examples

The Hub and Spoke topology is ideal for organizations that require centralized connectivity and traffic control between multiple virtual networks in Azure. For example, an organization with multiple branches or departments can use the Hub and Spoke topology to securely and efficiently connect their virtual networks in the cloud.

Best Practices and Tips

When implementing the Hub and Spoke topology in Azure, it is recommended to follow these best practices:

  • Security: Apply consistent security policies at the Hub and Spokes to ensure network protection.
  • Resilience: Configure redundancy and resilience in the topology to ensure network availability in case of failures.
  • Monitoring: Use monitoring tools like Azure Monitor to monitor network traffic and detect potential performance issues.

Conclusion

The Hub and Spoke topology is an effective way to simplify the connectivity and management of virtual networks in Azure. It provides centralized control over network connectivity and traffic, making it easier to implement security and routing policies consistently across the network. By following the recommended best practices and tips, organizations can make the most of the Hub and Spoke topology to meet their cloud connectivity needs.

Azure Role-Based Access Control (RBAC)

Azure Role-Based Access Control (RBAC) is a system that provides fine-grained access management of resources in Azure. This allows administrators to grant only the amount of access that users need to perform their jobs.

Overview

In Azure RBAC, you can assign roles to user accounts, groups, service principals, and managed identities at different scopes. The scope could be a management group, subscription, resource group, or a single resource.

Here are some key terms you should know:

  • Role: A collection of permissions. For example, the "Virtual Machine Contributor" role allows the user to create and manage virtual machines.
  • Scope: The set of resources that the access applies to.
  • Assignment: The act of granting a role to a security principal at a particular scope.
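
Putting the three terms together, a role assignment can be created with the Azure CLI; a minimal sketch using placeholder identities and subscription IDs:

# Assign the built-in Reader role at subscription scope
az role assignment create --assignee "user@example.com" --role "Reader" --scope "/subscriptions/{subscriptionId}"

# Assign the Virtual Machine Contributor role at a narrower, resource group scope
az role assignment create --assignee "user@example.com" --role "Virtual Machine Contributor" --scope "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup"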

Built-in Roles

Azure provides several built-in roles that you can assign to users, groups, service principals, and managed identities. Here are a few examples:

  • Owner: Has full access to all resources including the right to delegate access to others.
  • Contributor: Can create and manage all types of Azure resources but can’t grant access to others.
  • Reader: Can view existing Azure resources.
For example, here is the definition of the built-in Contributor role:

{
  "Name": "Contributor",
  "Id": "b24988ac-6180-42a0-ab88-20f7382dd24c",
  "IsCustom": false,
  "Description": "Lets you manage everything except access to resources.",
  "Actions": [
    "*"
  ],
  "NotActions": [
    "Microsoft.Authorization/*/Delete",
    "Microsoft.Authorization/*/Write",
    "Microsoft.Authorization/elevateAccess/Action"
  ],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/"
  ]
}

Custom Roles

If the built-in roles don't meet your specific needs, you can create your own custom roles. Just like built-in roles, you can assign permissions to custom roles and then assign those roles to users.

{
  "Name": "Custom Role",
  "Id": "00000000-0000-0000-0000-000000000000",
  "IsCustom": true,
  "Description": "Custom role description",
  "Actions": [
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscriptionId}"
  ]
}
Custom roles have the same structure as built-in roles:

  • Name: The name of the custom role.
  • Id: A unique identifier for the custom role.
  • IsCustom: Indicates whether the role is custom or built-in.
  • Description: A description of the custom role.
  • Actions: The list of actions that the role can perform.
  • NotActions: The list of actions that the role cannot perform.
  • DataActions: The list of data actions that the role can perform.
  • NotDataActions: The list of data actions that the role cannot perform.
  • AssignableScopes: The list of scopes where the role can be assigned.

You can check how to create a custom role here, and don't forget to review the limitations here.
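
As a minimal sketch (assuming the JSON above is saved as custom-role.json and the placeholder values are replaced), the custom role can be created and assigned with the Azure CLI:

# Create the custom role definition from the JSON file
az role definition create --role-definition custom-role.json

# Assign the custom role to a user at subscription scope
az role assignment create --assignee "user@example.com" --role "Custom Role" --scope "/subscriptions/{subscriptionId}"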

Recommendations

Here are some best practices for managing access with Azure RBAC:

  • Use the principle of least privilege: Only grant the permissions that users need to do their jobs.
  • Use built-in roles when possible: Built-in roles are already defined and tested by Microsoft. Only create custom roles when necessary.
  • Regularly review role assignments: Make sure that users have the appropriate level of access for their job. Remove any unnecessary role assignments.

Conclusion

Azure RBAC is a powerful tool for managing access to your Azure resources. By understanding its core concepts and how to apply them, you can ensure that users have the appropriate level of access for their job.

Kusto Query Language (KQL) for Azure Resource Graph

Azure Resource Graph is a powerful service provided by Microsoft to query data across all your Azure resources. It uses the Kusto Query Language (KQL), a read-only query language similar to SQL, designed to query vast amounts of data in Azure services.

Important

Only a subset of KQL is supported in Azure Resource Graph. For more information, see the Azure Resource Graph Supported KQL Language Elements.

What is KQL?

KQL stands for Kusto Query Language. A KQL query is a read-only request to process data and return results. The syntax is easy to read and author, making it ideal for data exploration and ad hoc data mining tasks.

Using KQL with Azure Resource Graph

Azure Resource Graph allows you to use KQL to create complex queries that fetch information from your Azure resources. You can filter, sort, aggregate, and join data from different resources using KQL.

Here's an example of how you might use KQL to query Azure Resource Graph:

Resources
| where type =~ 'microsoft.web/sites'
| project name, location, resourceGroup

This query retrieves all Azure Web Apps (websites) and projects their name, location, and resourceGroup.

Key Characteristics of KQL

  1. Case Sensitivity: Unlike SQL, KQL is case-sensitive. So 'Name' and 'name' would be considered different identifiers.
  2. Schema-Free: Kusto (Azure Data Explorer) doesn't require a fixed schema, allowing storage of diverse types of data.
  3. Extensibility: While KQL has a wide array of functions, you can also create custom functions as per your needs.

Common Operators in KQL

  • | : This operator creates a pipeline where the output of one command becomes the input of another.
  • where : Filters rows based on specified conditions.
  • summarize : Groups rows and calculates aggregate expressions.
  • join : Combines rows from two tables based on a common column.
  • project : Selects specific columns from the input.
  • extend : Adds new columns to the input.
  • order by : Sorts rows based on specified columns.

KQL Query Examples

1. List all Azure resources in a subscription

Resources

2. List all Azure resources in a resource group

Resources
| where resourceGroup == 'myResourceGroup'

3. List all Azure resources of a specific type

Resources
| where type =~ 'Microsoft.Compute/virtualMachines'
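
These queries can be run directly with the Azure CLI once the resource-graph extension is added, as shown below; the last example also combines where, summarize, and order by:

# One-time setup: add the Resource Graph extension
az extension add --name resource-graph

# Count virtual machines per location, most populated locations first
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | summarize vmCount = count() by location | order by vmCount desc"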

Pagination in Azure Resource Graph

KQL itself does not provide an offset operator; you can restrict the number of rows returned with take (or its alias limit), while skipping rows for pagination is handled by the query API through the skip and skipToken options.

Resources
| take 10

If you exceed payload limits, you can paginate Azure Resource Graph query results with PowerShell:

$kqlQuery = "policyResources | where type =~'Microsoft.Authorization/PolicySetDefinitions' or type =~'Microsoft.Authorization/PolicyDefinitions' | project definitionId = tolower(id), category = tostring(properties.metadata.category), definitionType = iff(type =~ 'Microsoft.Authorization/PolicysetDefinitions', 'initiative', 'policy'),PolicyDefinition=properties"

$batchSize = 5
$skipResult = 0

$kqlResult = @()

while ($true) {

  if ($skipResult -gt 0) {
    $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken
  }
  else {
    $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize
  }

  $kqlResult += $graphResult.data

  if ($graphResult.data.Count -lt $batchSize) {
    break;
  }
  $skipResult += $batchSize
}
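
A similar approach is available in the Azure CLI, which exposes --first and --skip-token on az graph query (a sketch; the token value comes from the previous response):

# First page of results
az graph query -q "Resources | project name, type" --first 100

# Next page, using the skip token returned by the previous call
az graph query -q "Resources | project name, type" --first 100 --skip-token "<token-from-previous-response>"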

Best Practices for Writing KQL Queries

  1. Use project to Select Columns: Only select the columns you need to reduce the amount of data returned.
  2. Use where to Filter Rows: Apply filters to reduce the number of rows processed.
  3. Use summarize to Aggregate Data: Aggregate data to reduce the number of rows returned.
  4. Use join to Combine Data: Combine data from different tables using the join operator.
  5. Use order by to Sort Data: Sort data based on specific columns to make it easier to read.

Limitations of KQL

  1. No DDL Operations: KQL doesn't support Data Definition Language (DDL) operations like creating tables or indexes.
  2. No DML Operations: KQL doesn't support Data Manipulation Language (DML) operations like inserting, updating, or deleting data.
  3. Limited Data Types: KQL has a limited set of data types compared to SQL.
  4. No Transactions: KQL doesn't support transactions, so you can't group multiple operations into a single transaction.

Conclusion

KQL is a potent tool for querying large datasets in Azure. Its SQL-like syntax makes it accessible for anyone familiar with SQL, and its rich set of features makes it a flexible solution for a variety of data processing needs. Practice writing KQL queries to uncover valuable insights from your Azure resources!

Azure role assignment conditions

First of all, let's understand what ABAC (Attribute-Based Access Control) is and how it can be used in Azure.

What is ABAC?

Attribute-Based Access Control (ABAC) is an access control model that uses attributes to determine access rights. In ABAC, access decisions are based on the attributes of the user, the resource, and the environment. This allows for fine-grained access control based on a wide range of attributes, such as user roles, resource types, and time of day.

ABAC is a flexible and scalable access control model that can be used to enforce complex access policies. It allows organizations to define access control rules based on a wide range of attributes and to adapt those rules as their needs change.

In Azure, ABAC is built on top of Azure RBAC.

What are Azure role assignment conditions?

Azure role assignment conditions allow you to define conditions that must be met for a role assignment to be effective.
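
For example (a sketch adapted from the common storage-container scenario, with placeholder IDs), a condition can be attached when creating the role assignment with the Azure CLI using --condition and --condition-version:

# Grant Storage Blob Data Reader, but only for blobs in the container named 'logs'
az role assignment create --role "Storage Blob Data Reader" --assignee-object-id "<principal-object-id>" --assignee-principal-type User --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'logs'))" --condition-version "2.0"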

How to configure Azure role assignment conditions?

To configure Azure role assignment conditions, configure the role assignment as usual, and then click on the "Conditions" tab. Here you can define the conditions that must be met for the role assignment to be effective.


Options for configuring Conditions:

  • Allow user to only assign selected roles to selected principals (fewer privileges)
  • Allow user to assign all roles except privileged administrator roles (Owner, User Access Administrator, Role Based Access Control Administrator) (Recommended)
  • Allow user to assign all roles (highly privileged)

The first one is the most restrictive, for example, allowing the user to only assign selected roles to selected principals. This is useful when you want to limit the privileges of a user to only a subset of roles and principals.

These are the options available for "Allow user to only assign selected roles to selected principals (fewer privileges)":


  • Constrain roles:
    • Allow user to only assign roles you select
  • Constrain roles and principal types:
    • Allow user to only assign roles you select
    • Allow user to only assign these roles to principal types you select (users, groups, or service principals)
  • Constrain roles and principals:
    • Allow user to only assign roles you select
    • Allow user to only assign these roles to principals you select
  • Allow all except specific roles:
    • Allow user to assign all roles except the roles you select

Conclusion

Azure role assignment conditions provide a flexible and powerful way to control access to Azure resources. By defining conditions that must be met for a role assignment to be effective, you can enforce fine-grained access control policies that meet the specific needs of your organization. This allows you to limit the privileges of users, assign roles to specific principals, and control access to sensitive resources. Azure role assignment conditions are a valuable tool for organizations that need to enforce strict access control policies and protect their critical resources.

Using Enterprise Azure Policy as Code (EPAC)

In this blog post, we will show how to use Enterprise Azure Policy as Code (EPAC) to manage your Azure environment.

Use case

  • Determine the desired state strategy:
    • We have some existing Azure Policies that we want to manage as code.
    • For simplicity, we will assume that a single centralized team manages the policies.
    • We will use a Git repository to store the policies and a CI/CD process to deploy them.
    • We don't have any excluded resources in the environment.
  • How to handle Defender for Cloud Policy Assignments:
    • We will use Defender for Cloud to manage the Policy Assignments for Defender Plans when a plan is enabled.
    • EPAC will manage Defender for Cloud Security Policy Assignments at the management group level. This is the default behavior.
  • Design your CI/CD process:
    • We will use GitHub Flow.

Management Groups for Enterprise Scale Landing Zone

This is the common structure for the Management Groups in the Enterprise Scale Landing Zone, now the Azure Landing Zones (ALZ) Accelerator:

    graph TD
        A[Root Management Group] --> B[Intermediary-Management-Group]
        B --> C[Decommissioned]
        B --> D[Landing Zones]
        B --> E[Platform]
        B --> F[Sandboxes]
        D --> G[Corp]
        D --> H[Online]
        E --> I[Connectivity]
        E --> J[Identity]
        E --> K[Management]

For this use case, we will duplicate the Management Group hierarchy for the EPAC dev and prod environments, and keep the existing (old) hierarchy under manage-azure-policy:

  graph TD
      A[Root Management Group] --> B[epac-dev]
      B --> C[dev-decommissioned]
      B --> D[dev-landingzones]
      B --> E[dev-platform]
      B --> F[dev-sandbox]
      D --> G[dev-corp]
      D --> H[dev-online]
      E --> I[dev-connectivity]
      E --> J[dev-identity]
      E --> K[dev-management]
      A[Root Management Group] --> L[epac-prod]
      L --> M[prod-decommissioned]
      L --> N[prod-landingzones]
      L --> O[prod-platform]
      L --> P[prod-sandbox]
      N --> Q[prod-corp]
      N --> R[prod-online]
      O --> S[prod-connectivity]
      O --> T[prod-identity]
      O --> U[prod-management]
      A[Root Management Group] --> V[manage-azure-policy]

      classDef dev fill:#f90,stroke:#333,stroke-width:2px;
      classDef prod fill:#f9f,stroke:#333,stroke-width:2px;
      class A,B,C,D,E,F,G,H,I,J,K dev;
      class L,M,N,O,P,Q,R,S,T,U prod;

Note

You could also use two different tenants for the different environments, but that is not the case in this scenario.

You can create this Management Group hierarchy using the Azure CLI with the following commands:

az account management-group create --name "MyManagementGroup"
az account management-group move --name "ChildGroup" --new-parent "NewParentGroup"

For the use case, we will use the following commands:

az account management-group create --name "epac-dev"
az account management-group create --name "dev-decommissioned" --parent "epac-dev"
az account management-group create --name "dev-landingzones" --parent "epac-dev"
az account management-group create --name "dev-platform" --parent "epac-dev"
az account management-group create --name "dev-sandbox" --parent "dev-landingzones"
az account management-group create --name "dev-corp" --parent "dev-landingzones"
az account management-group create --name "dev-online" --parent "dev-landingzones"
az account management-group create --name "dev-connectivity" --parent "dev-platform"
az account management-group create --name "dev-identity" --parent "dev-platform"
az account management-group create --name "dev-management" --parent "dev-platform"
az account management-group create --name "epac-prod"
az account management-group create --name "prod-decommissioned" --parent "epac-prod"
az account management-group create --name "prod-landingzones" --parent "epac-prod"
az account management-group create --name "prod-platform" --parent "epac-prod"
az account management-group create --name "prod-sandbox" --parent "prod-landingzones"
az account management-group create --name "prod-corp" --parent "prod-landingzones"
az account management-group create --name "prod-online" --parent "prod-landingzones"
az account management-group create --name "prod-connectivity" --parent "prod-platform"
az account management-group create --name "prod-identity" --parent "prod-platform"
az account management-group create --name "prod-management" --parent "prod-platform"

Installation

To install EPAC, follow these steps:

    Install-Module Az -Scope CurrentUser
    Install-Module EnterprisePolicyAsCode -Scope CurrentUser

Create an empty repository in GitHub and clone it

    git clone https://github.com/user/demo-EPAC.git

Create a feature branch for the first commit (feature/firstcommit)

    git checkout -b feature/firstcommit

Note

From this moment on, we will execute all commands within the repository directory.

Create Definitions

New-EPACDefinitionFolder -DefinitionsRootFolder Definitions

This command creates a folder structure for the definitions. The Definitions folder structure is as follows:

  • Define the Azure environment(s) in file global-settings.jsonc
  • Create custom Policies (optional) in folder policyDefinitions
  • Create custom Policy Sets (optional) in folder policySetDefinitions
  • Define the Policy Assignments in folder policyAssignments
  • Define the Policy Exemptions (optional) in folder policyExemptions
  • Define Documentation in folder policyDocumentations

Configure global-settings.jsonc

global-settings.jsonc is the file where you define the Azure environment(s) that you want to manage with EPAC. The file should be located in the Definitions folder. Here is an example of the content of the file:

{
    "$schema": "https://raw.githubusercontent.com/Azure/enterprise-azure-policy-as-code/main/Schemas/global-settings-schema.json",
    "pacOwnerId": "ff2ce5e1-da8a-4cfb-883b-aee9fbfb85d6",
    "pacEnvironments": [
        {
            "pacSelector": "epac-dev",
            "cloud": "AzureCloud",
            "tenantId": "e18e4e7e-d0cc-40af-9907-84923ca55499",
            "deploymentRootScope": "/providers/Microsoft.Management/managementGroups/epac-dev",
            "desiredState": {
                "strategy": "full",
                "keepDfcSecurityAssignments": false
            },
            "managedIdentityLocation": "france"
        },
        {
            "pacSelector": "tenant",
            "cloud": "AzureCloud",
            "tenantId": "e18e4e7e-d0cc-40af-9907-84923ca55499",
            "deploymentRootScope": "/providers/Microsoft.Management/managementGroups/epac-prod",
            "desiredState": {
                "strategy": "full",
                "keepDfcSecurityAssignments": false
            },
            "managedIdentityLocation": "france",
            "globalNotScopes": [
                "/providers/Microsoft.Management/managementGroups/mg-Epac-Dev",
                "/providers/Microsoft.Management/managementGroups/manage-azure-policy"
            ]
        },
        {
            "pacSelector": "manage-azure-policy",
            "cloud": "AzureCloud",
            "tenantId": "e18e4e7e-d0cc-40af-9907-84923ca55499",
            "deploymentRootScope": "/providers/Microsoft.Management/managementGroups/manage-azure-policy",
            "desiredState": {
                "strategy": "full",
                "keepDfcSecurityAssignments": false
            },
            "managedIdentityLocation": "france"
        }
    ]

}

Info

The pacOwnerId helps to identify who or what owns an Assignment or Policy definition deployment and needs to be unique to your EPAC environment. The pacOwnerId is used to identify policy resources that are deployed by your EPAC repository, by another EPAC instance, by a legacy setup, or by another solution entirely.

You can generate a new id with New-Guid

Extracting existing Policy Resources

Export-AzPolicyResources

This command extracts all existing Policy Resources in the Azure environment(s) defined in the global-settings.jsonc file. The extracted resources are saved in the Output/Definitions folder.

You need to review the extracted resources and move them to the correct folder in the Definitions folder.

Syncing ALZ Definitions

Sync-ALZPolicies -DefinitionsRootFolder .\Definitions -CloudEnvironment AzureCloud # Also accepts AzureUSGovernment or AzureChinaCloud

You can sync the ALZ Definitions manually or use a GitHub Actions workflow by creating .github/workflows/alz-sync.yaml in your repository with the following content:

name: Sync ALZ Policy Objects

env:
  REVIEWER: anwather # Change this to your GitHub username
  DefinitionsRootFolder: Definitions # Change this to the folder where your policy definitions are stored

on:
  workflow_dispatch

jobs:
    sync:
        runs-on: ubuntu-latest
        steps:
        - name: Checkout
          uses: actions/checkout@v4
        - shell: pwsh
          name: Install Required Modules
          run: |
            Install-Module EnterprisePolicyAsCode -Force
            Sync-ALZPolicies -DefinitionsRootFolder $env:DefinitionsRootFolder
            $branchName = "caf-sync-$(Get-Date -Format yyyy-MM-dd-HH-mm)"
            git config user.name "GitHub Actions Bot"
            git config user.email "<>"
            git checkout -b $branchName
            git add .
            git commit -m "Updated ALZ policy objects"
            git push --set-upstream origin $branchName
            gh pr create -B main -H $branchName --title "Verify Synced Policies - $branchName" -b "Checkout this PR branch and validate changes before merging." --reviewer $env:REVIEWER
        env:
            GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

CI/CD with GitHub Flow

We will use GitHub Flow to manage the CI/CD process. We will create GitHub Actions workflows to deploy the policies to the Azure environment(s) defined in the global-settings.jsonc file.

Github Flow

We can open a second terminal, clone the EPAC repository one folder level above our repository, and execute the following command. This creates the GitHub Actions workflows in the .github/workflows folder of our repository:

git clone https://github.com/Azure/enterprise-azure-policy-as-code
cd enterprise-azure-policy-as-code
New-PipelinesFromStarterKit -StarterKitFolder .\StarterKit -PipelinesFolder ..\global-azure-2024-demo-EPAC\.github\workflows -PipelineType GitHubActions -BranchingFlow github -ScriptType Module 

Now, we need to create some environments with secrets in the repository to use in the GitHub Actions workflows. We need to create the following environments:

Environment | Purpose | App Registration (SPN)
EPAC-DEV | Plan and deploy to epac-dev | ci-cd-epac-dev-owner
TENANT-PLAN | Build deployment plan for tenant | ci-cd-root-policy-reader
TENANT-DEPLOY-POLICY | Deploy Policy resources for tenant | ci-cd-root-policy-contributor
TENANT-DEPLOY-ROLES | Deploy Roles for tenant | ci-cd-root-user-assignments
TENANT-REMEDIATE-POLICY | Remediate Policy resources for tenant | ci-cd-root-policy-contributor

You also need to configure a federated identity credential on each app registration.

First Commit

Now we can commit the changes to the repository and make a pull request to the main branch.
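
Before opening the pull request, the deployment can be validated locally. A hedged sketch using the EnterprisePolicyAsCode module cmdlets (cmdlet names as documented by EPAC; default folder locations assumed):

# Build the deployment plan for the dev environment defined in global-settings.jsonc
Build-DeploymentPlans -PacEnvironmentSelector "epac-dev"

# Deploy the policy resources and then the related role assignments from that plan
Deploy-PolicyPlan -PacEnvironmentSelector "epac-dev"
Deploy-RolesPlan -PacEnvironmentSelector "epac-dev"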

Renaming of the Microsoft Defender for Cloud service tiers

It is not new, but I would like to recall that Microsoft has renamed the Microsoft Defender for Cloud service tiers. The following table shows the previous and new names of the Microsoft Defender for Cloud service tiers:

Previous service tier 2 name | New service tier 2 name | Service tier 4 name (no change)
Advanced Data Security | Microsoft Defender for Cloud | Defender for SQL
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for container registries
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for DNS
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for Key Vault
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for Kubernetes
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for MySQL
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for PostgreSQL
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for Resource Manager
Advanced Threat Protection | Microsoft Defender for Cloud | Defender for Storage
Azure Defender | Microsoft Defender for Cloud | Defender External Attack Surface Management
Azure Defender | Microsoft Defender for Cloud | Defender for Azure Cosmos DB
Azure Defender | Microsoft Defender for Cloud | Defender for Containers
Azure Defender | Microsoft Defender for Cloud | Defender for MariaDB
Security Center | Microsoft Defender for Cloud | Defender for App Service
Security Center | Microsoft Defender for Cloud | Defender for Servers
Security Center | Microsoft Defender for Cloud | Defender Cloud Security Posture Management

Azure Policy useful queries

Policy assignments and information about each of its respective definitions

// Policy assignments and information about each of its respective definitions
// Gets policy assignments in your environment with the respective assignment name,definition associated, category of definition (if applicable), as well as whether the definition type is an initiative or a single policy.

policyResources
| where type =~'Microsoft.Authorization/PolicyAssignments'
| project policyAssignmentId = tolower(tostring(id)), policyAssignmentDisplayName = tostring(properties.displayName), policyAssignmentDefinitionId = tolower(properties.policyDefinitionId)
| join kind=leftouter(
 policyResources
 | where type =~'Microsoft.Authorization/PolicySetDefinitions' or type =~'Microsoft.Authorization/PolicyDefinitions'
 | project definitionId = tolower(id), category = tostring(properties.metadata.category), definitionType = iff(type =~ 'Microsoft.Authorization/PolicysetDefinitions', 'initiative', 'policy')
) on $left.policyAssignmentDefinitionId == $right.definitionId

List SubscriptionId and SubscriptionName

ResourceContainers
| where type =~ 'microsoft.resources/subscriptions'
| project subscriptionId, subscriptionName=name

List ManagementGroupId and ManagementGroupName

ResourceContainers
| where type =~ 'microsoft.management/managementgroups'
| project mgname = name, displayName = properties.displayName

Policy assignments and information about each of its respective definitions displaying the scope of the assignment, the subscription display name, the management group id, the resource group name, the definition type, the assignment name, the category of the definition, and the policy assignment ID.

policyResources
| where type =~'Microsoft.Authorization/PolicyAssignments'
| project policyAssignmentId = tolower(tostring(id)), policyAssignmentDisplayName = tostring(properties.displayName), policyAssignmentDefinitionId = tolower(properties.policyDefinitionId), subscriptionId = tostring(subscriptionId),resourceGroup=tostring(resourceGroup), AssignmentDefinition=properties
| join kind=leftouter(
    policyResources
    | where type =~'Microsoft.Authorization/PolicySetDefinitions' or type =~'Microsoft.Authorization/PolicyDefinitions'
    | project definitionId = tolower(id), category = tostring(properties.metadata.category), definitionType = iff(type =~ 'Microsoft.Authorization/PolicysetDefinitions', 'initiative', 'policy'),PolicyDefinition=properties
) on $left.policyAssignmentDefinitionId == $right.definitionId
| extend scope = iff(policyAssignmentId contains '/subscriptions/', 'Subscription', iff(policyAssignmentId contains '/providers/microsoft.management/managementgroups', 'Management Group', 'Resource Group'))
| join kind=leftouter (ResourceContainers
| where type =~ 'microsoft.resources/subscriptions'
| project subscriptionId, subscriptionName=name) on $left.subscriptionId == $right.subscriptionId
| extend SubscriptionDisplayName = iff(isnotempty(subscriptionId), subscriptionName, '')
| extend ManagementGroupName = iff(policyAssignmentId contains '/providers/microsoft.management/', split(policyAssignmentId, '/')[4],'')
| extend resourceGroupDisplayName = iff(isnotempty(resourceGroup), resourceGroup, '')
| project ManagementGroupName,SubscriptionDisplayName,resourceGroupDisplayName, scope,definitionType,policyAssignmentDisplayName, category,policyAssignmentId, AssignmentDefinition, PolicyDefinition
  • Pending improvement: add the Management Group display name to the output.

How to use Azure ARC-enabled servers with managed identity to access an Azure Storage Account

In this demo we will show how to use Azure ARC-enabled servers with a managed identity to access an Azure Storage Account.

Prerequisites

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.

Required permissions

You'll need the following Azure built-in roles for different aspects of managing connected machines:

  • To onboard machines, you must have the Azure Connected Machine Onboarding or Contributor role for the resource group where you're managing the servers.
  • To read, modify, and delete a machine, you must have the Azure Connected Machine Resource Administrator role for the resource group.
  • To select a resource group from the drop-down list when using the Generate script method, you'll also need the Reader role for that resource group (or another role that includes Reader access).

Register Azure resource providers

To use Azure Arc-enabled servers with managed identity, you need to register the following resource providers:

az account set --subscription "{Your Subscription Name}"
az provider register --namespace 'Microsoft.HybridCompute'
az provider register --namespace 'Microsoft.GuestConfiguration'
az provider register --namespace 'Microsoft.HybridConnectivity'
az provider register --namespace 'Microsoft.AzureArcData'

Info

Microsoft.AzureArcData is only required if you plan to Arc-enable SQL Servers, and Microsoft.Compute is required for Azure Update Manager and automatic extension upgrades.

Networking requirements

The Azure Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. In this demo, we use Azure Private Link.

Azure ARC-enabled server

We follow Use Azure Private Link to securely connect networks to Azure Arc-enabled servers to achieve this.

Some tips:

  • If you have any issue registering the VM: generate a script to register a machine with Azure Arc following the instructions here

  • If you have an error that says "Path C:\ProgramData\AzureConnectedMachineAgent\Log\himds.log is busy. Retrying..." you can use the following command to resolve it, if you know what you are doing (it uninstalls every installed product whose name starts with "Azure"):

 (get-wmiobject -class win32_product | where {$_.name -like "Azure *"}).uninstall() 
  • Review the hosts file (C:\Windows\System32\drivers\etc\hosts on Windows) and add the following entries:

$Env:PEname = "myprivatelink"
$Env:resourceGroup = "myResourceGroup"
$file = "C:\Windows\System32\drivers\etc\hosts"

$gisfqdn = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query '[0].privateDnsZoneConfigs[0].recordSets[0].fqdn' -o json).replace('.privatelink','').replace("`"","")
$gisIP = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[0].recordSets[0].ipAddresses[0] -o json).replace("`"","")
$hisfqdn = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[0].recordSets[1].fqdn -o json).replace('.privatelink','').replace("`"","")
$hisIP = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[0].recordSets[1].ipAddresses[0] -o json).replace('.privatelink','').replace("`"","")
$agentfqdn = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[1].recordSets[0].fqdn -o json).replace('.privatelink','').replace("`"","")
$agentIp = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[1].recordSets[0].ipAddresses[0] -o json).replace('.privatelink','').replace("`"","")
$gasfqdn = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[1].recordSets[1].fqdn -o json).replace('.privatelink','').replace("`"","")
$gasIp = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[1].recordSets[1].ipAddresses[0] -o json).replace('.privatelink','').replace("`"","")
$dpfqdn = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[2].recordSets[0].fqdn -o json).replace('.privatelink','').replace("`"","")
$dpIp = (az network private-endpoint dns-zone-group list --endpoint-name $Env:PEname --resource-group $Env:resourceGroup -o json --query [0].privateDnsZoneConfigs[2].recordSets[0].ipAddresses[0] -o json).replace('.privatelink','').replace("`"","")

$hostfile += "$gisIP $gisfqdn"
$hostfile += "$hisIP $hisfqdn"
$hostfile += "$agentIP $agentfqdn"
$hostfile += "$gasIP $gasfqdn"
$hostfile += "$dpIP $dpfqdn"

Storage Account configuration

Create a Storage Account with static website enabled

$resourceGroup = "myResourceGroup"
$location = "eastus"
$storageAccount = "mystorageaccount"
$indexDocument = "index.html"
az group create --name $resourceGroup --location $location
az storage account create --name $storageAccount --resource-group $resourceGroup --location $location --sku Standard_LRS
az storage blob service-properties update --account-name $storageAccount --static-website --index-document $indexDocument

Add private endpoints to the storage account for blob and static website

$resourceGroup = "myResourceGroup"
$storageAccount = "mystorageaccount"
$privateEndpointName = "myprivatelink"
$location = "eastus"
$vnetName = "myVnet"
$subnetName = "mySubnet"
$subscriptionId = "{subscription-id}"
az network private-endpoint create --name "$privateEndpointName-blob" --resource-group $resourceGroup --vnet-name $vnetName --subnet $subnetName --private-connection-resource-id "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Storage/storageAccounts/$storageAccount" --group-id blob --connection-name "$privateEndpointName-blob" --location $location
az network private-endpoint create --name "$privateEndpointName-web" --resource-group $resourceGroup --vnet-name $vnetName --subnet $subnetName --private-connection-resource-id "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Storage/storageAccounts/$storageAccount" --group-id web --connection-name "$privateEndpointName-web" --location $location

Disable public access to the storage account except for your ip

$resourceGroup = "myResourceGroup"
$storageAccount = "mystorageaccount"
$ipAddress = "myIpAddress"
az storage account update --name $storageAccount --resource-group $resourceGroup --bypass "AzureServices,Logging,Metrics" --default-action Deny
az storage account network-rule add --account-name $storageAccount --resource-group $resourceGroup --ip-address $ipAddress

Assign the Storage Blob Data Contributor role to the managed identity of the Azure ARC-enabled server

$resourceGroup = "myResourceGroup"
$storageAccount = "mystorageaccount"
$serverName = "myserver"
$managedIdentity = az resource show --resource-group $resourceGroup --name $serverName --resource-type "Microsoft.HybridCompute/machines" --query "identity.principalId" --output tsv
az role assignment create --role "Storage Blob Data Contributor" --assignee-object-id $managedIdentity --scope "/subscriptions/{subscription-id}/resourceGroups/$resourceGroup/providers/Microsoft.Storage/storageAccounts/$storageAccount"

Download azcopy, install it and copy something to $web in the storage account

Download azcopy in the vm

Invoke-WebRequest -Uri "https://aka.ms/downloadazcopy-v10-windows" -OutFile AzCopy.zip

Expand-Archive AzCopy.zip -DestinationPath $env:ProgramFiles

$env:Path += ";$env:ProgramFiles\azcopy"

Copy something to $web in the storage account

$storageAccount = "mystorageaccount"
$source = "C:\Users\Public\Documents\myFile.txt"
$destination = "https://$storageAccount.blob.core.windows.net/\$web/myFile.txt"
azcopy login --identity
azcopy copy $source $destination

Now you can check the file in the static website of the storage account.
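
To verify from the command line (a sketch; the request must come from a network location allowed by the storage account firewall), the static website endpoint can be read from the account properties:

# Get the static website endpoint (it ends with a trailing slash) and request the uploaded file
$webEndpoint = az storage account show --name $storageAccount --resource-group $resourceGroup --query "primaryEndpoints.web" --output tsv
Invoke-WebRequest -Uri "$($webEndpoint)myFile.txt"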

Azure ARC

Azure ARC is a service that extends Azure management capabilities to any infrastructure. It allows you to manage resources running on-premises, at the edge, or in multi-cloud environments using the same Azure management tools, security, and compliance policies that you use in Azure. Azure ARC enables you to manage and govern your resources consistently across all environments, providing a unified control plane for your hybrid cloud infrastructure. Let's explore how Azure ARC works and how you can leverage it to manage your resources effectively.

Azure ARC Overview

Azure ARC is a service that extends Azure management capabilities to any infrastructure. It allows you to manage resources running outside of Azure using the same Azure management tools, security, and compliance policies that you use in Azure. Azure ARC provides a unified control plane for managing resources across on-premises, multi-cloud, and edge environments, enabling you to govern your resources consistently.

Azure ARC enables you to:

  • Manage resources: Azure ARC allows you to manage resources running on-premises, at the edge, or in multi-cloud environments using Azure management tools like Azure Policy, Azure Monitor, and Microsoft Defender for Cloud.
  • Governance: Azure ARC provides a unified control plane for managing and governing resources across all environments, enabling you to enforce security and compliance policies consistently.
  • Security: Azure ARC extends Azure security capabilities to resources running outside of Azure, enabling you to protect your resources with Azure security features like Microsoft Defender for Cloud.
  • Compliance: Azure ARC enables you to enforce compliance policies across all environments, ensuring that your resources meet regulatory requirements and organizational standards.

Azure ARC Components

Azure ARC consists of the following components:

  • Azure ARC-enabled servers: Azure ARC-enabled servers allow you to manage and govern servers running on-premises or at the edge using Azure management tools. You can connect your servers to Azure ARC to manage them using Azure Policy, Azure Monitor, and Microsoft Defender for Cloud.
  • Azure ARC-enabled Kubernetes clusters: Azure ARC-enabled Kubernetes clusters allow you to manage and govern Kubernetes clusters running on-premises or in other clouds using Azure management tools. You can connect your Kubernetes clusters to Azure ARC to manage them using Azure Policy, Azure Monitor, and Microsoft Defender for Cloud.
  • Azure ARC-enabled data services: Azure ARC-enabled data services allow you to manage and govern data services running on-premises or in other clouds using Azure management tools. You can connect your data services to Azure ARC to manage them using Azure Policy, Azure Monitor, and Microsoft Defender for Cloud.
  • SQL Server enabled by Azure Arc: SQL Server enabled by Azure Arc allows you to run SQL Server on any infrastructure using Azure management tools. You can connect your SQL Server instances to Azure ARC to manage them using Azure Policy, Azure Monitor, and Microsoft Defender for Cloud.
  • Azure Arc-enabled private clouds: Azure Arc resource bridge hosts other components such as custom locations, cluster extensions, and other Azure Arc agents in order to deliver the level of functionality with the private cloud infrastructures it supports.

Azure ARC Use Cases

Azure ARC can be used in a variety of scenarios to manage and govern resources across on-premises, multi-cloud, and edge environments. Some common use cases for Azure ARC include:

  • Hybrid cloud management: Azure ARC enables you to manage resources consistently across on-premises, multi-cloud, and edge environments using the same Azure management tools and policies.
  • Security and compliance: Azure ARC allows you to enforce security and compliance policies consistently across all environments, ensuring that your resources meet regulatory requirements and organizational standards.
  • Resource governance: Azure ARC provides a unified control plane for managing and governing resources across all environments, enabling you to enforce policies and monitor resource health and performance.
  • Application modernization: Azure ARC enables you to manage and govern Kubernetes clusters and data services running on-premises or in other clouds, allowing you to modernize your applications and infrastructure.

Getting Started with Azure ARC

To get started with Azure ARC, you need to:

  1. Connect your resources: Connect your servers, Kubernetes clusters, or data services to Azure ARC using the Azure ARC agent.
  2. Manage your resources: Use Azure management tools like Azure Policy, Azure Monitor, and Microsoft Defender for Cloud to manage and govern your resources consistently across all environments.
  3. Enforce security and compliance: Use Azure security features like Microsoft Defender for Cloud to protect your resources and enforce security and compliance policies.

By leveraging Azure ARC, you can manage and govern your resources consistently across on-premises, multi-cloud, and edge environments, providing a unified control plane for your hybrid cloud infrastructure. Azure ARC enables you to enforce security and compliance policies consistently, ensuring that your resources meet regulatory requirements and organizational standards.
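
Step 1 is typically performed with the Connected Machine agent on the server itself; a hedged sketch with placeholder values (the portal's Generate script option produces the complete, ready-to-run command):

# Connect the local machine to Azure Arc (interactive sign-in; all values are placeholders)
azcmagent connect --resource-group "myResourceGroup" --tenant-id "<tenant-id>" --subscription-id "<subscription-id>" --location "eastus" --cloud "AzureCloud"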

Conclusion

Azure ARC is a powerful service that extends Azure management capabilities to any infrastructure, enabling you to manage and govern resources consistently across on-premises, multi-cloud, and edge environments. By leveraging Azure ARC, you can enforce security and compliance policies consistently, ensuring that your resources meet regulatory requirements and organizational standards. Azure ARC provides a unified control plane for managing and governing resources, enabling you to manage your hybrid cloud infrastructure effectively.

For more information on Azure ARC, visit the Azure ARC documentation.

Microsoft Azure Certifications

Microsoft offers a wide range of certifications for IT professionals who want to demonstrate their expertise in Microsoft technologies. These certifications cover a variety of topics, including Azure, Office 365, Windows Server, and more.

Microsoft divides these certifications into different categories, such as:

  • Infrastructure
  • Data and AI
  • Digital app and innovation
  • Modern work
  • Business applications
  • Security

Inside of each category, you can find different certification levels:

  • Fundamentals: This level is designed for individuals who are new to the technology and want to demonstrate their knowledge of the basics.
  • Role-based: This level is designed for individuals who want to demonstrate their expertise in a specific role, such as Azure Administrator or Data Engineer.
  • Specialty: This level is designed for individuals who want to demonstrate their expertise in a specific skill, such as Azure Virtual Desktop or Azure SAP.

In the case of role-based certifications, Microsoft offers different levels of certification, such as:

  • Associate: This level is designed for individuals who have some experience in the technology and want to demonstrate their expertise in a specific role.
  • Expert: This level is designed for individuals who have extensive experience in the technology and want to demonstrate their expertise in a specific role.

It is always a good idea to start with the Fundamentals certifications, and then move on to the role-based certifications that are relevant to your career goals.

In the majority of cases, you need an Associate certification before you can earn the related Expert certification.

Azure Certifications

Here's a table summarizing the Azure certifications and their descriptions:

Certification | Exam required | Description | URL
Azure Administrator Associate | AZ-104 | The Azure Administrator certification is designed for individuals who want to demonstrate their expertise in managing Azure resources. This certification is ideal for IT professionals who are responsible for implementing, monitoring, and maintaining Azure solutions. | https://learn.microsoft.com/en-us/certifications/azure-administrator
Azure Developer Associate | AZ-204 | The Azure Developer certification is designed for individuals who want to demonstrate their expertise in developing applications on Azure. This certification is ideal for software developers who want to build and deploy cloud-based applications using Azure services. | https://learn.microsoft.com/en-us/certifications/azure-developer
Azure Data Engineer Associate | DP-203 | The Azure Data Engineer certification is designed for individuals who want to demonstrate their expertise in designing and implementing data solutions on Azure. This certification is ideal for data professionals who are responsible for building and maintaining data pipelines and data warehouses on Azure. | https://learn.microsoft.com/en-us/certifications/azure-data-engineer
Azure Database Administrator Associate | DP-300 | The Azure Database Administrator certification is designed for individuals who want to demonstrate their expertise in managing Azure databases. This certification is ideal for database administrators who are responsible for designing, implementing, and maintaining databases on Azure. | https://learn.microsoft.com/en-us/certifications/azure-database-administrator
DevOps Engineer Expert | AZ-400 | The Azure DevOps Engineer certification is designed for individuals who want to demonstrate their expertise in implementing DevOps practices on Azure. This certification is ideal for IT professionals who are responsible for building, testing, and deploying applications using Azure DevOps. | https://learn.microsoft.com/en-us/certifications/devops-engineer
Azure Security Engineer Associate | AZ-500 | The Azure Security Engineer certification is designed for individuals who want to demonstrate their expertise in securing Azure resources. This certification is ideal for IT professionals who are responsible for implementing security controls and monitoring security events on Azure. | https://learn.microsoft.com/en-us/certifications/azure-security-engineer
Azure Network Engineer Associate | AZ-700 | The Azure Network Engineer certification is designed for individuals who want to demonstrate their expertise in designing and implementing network solutions on Azure. This certification is ideal for network engineers who are responsible for building and maintaining network infrastructure on Azure. | https://learn.microsoft.com/en-us/certifications/azure-network-engineer
Windows Server Hybrid Administrator Associate | AZ-800, AZ-801 | The Windows Server Hybrid Administrator certification is designed for individuals who want to demonstrate their expertise in managing Windows Server resources on Azure. This certification is ideal for IT professionals who are responsible for implementing, monitoring, and maintaining Windows Server solutions on Azure. | https://learn.microsoft.com/en-us/certifications/windows-server-hybrid-administrator
Fabric Analytics Engineer Associate | DP-600 | The Fabric Analytics Engineer certification is designed for individuals who want to demonstrate their expertise in designing and implementing analytics solutions on Azure. This certification is ideal for data professionals who are responsible for building and maintaining analytics solutions on Azure. | https://learn.microsoft.com/en-us/certifications/fabric-analytics-engineer
Azure AI Engineer Associate | AI-102 | The Azure AI Engineer certification is designed for individuals who want to demonstrate their expertise in designing and implementing AI solutions on Azure. This certification is ideal for data scientists and AI developers who want to build and deploy AI models using Azure services. | https://learn.microsoft.com/en-us/certifications/azure-ai-engineer
Azure Data Scientist Associate | DP-100 | The Azure Data Scientist certification is designed for individuals who want to demonstrate their expertise in designing and implementing data science solutions on Azure. This certification is ideal for data scientists who are responsible for building and maintaining data science solutions on Azure. | https://learn.microsoft.com/en-us/certifications/azure-data-scientist
Azure Enterprise Data Analyst Associate | DP-500 | The Azure Enterprise Data Analyst certification is designed for individuals who want to demonstrate their expertise in designing and implementing data analysis solutions on Azure. This certification is ideal for data analysts who are responsible for building and maintaining data analysis solutions on Azure. | https://learn.microsoft.com/en-us/certifications/azure-enterprise-data-analyst
Azure Solutions Architect Expert | AZ-305 | The Azure Solutions Architect certification is designed for individuals who want to demonstrate their expertise in designing and implementing solutions on Azure. This certification is ideal for IT professionals who are responsible for designing and implementing cloud-based solutions using Azure services. | https://learn.microsoft.com/en-us/certifications/azure-solutions-architect
Azure for SAP Workloads Specialty | AZ-120 | The Azure for SAP Workloads certification is designed for individuals who want to demonstrate their expertise in deploying and managing SAP workloads on Azure. This certification is ideal for IT professionals who are responsible for implementing and maintaining SAP solutions on Azure. | https://learn.microsoft.com/en-us/certifications/azure-for-sap-workloads
Azure Virtual Desktop Specialty | AZ-140 | The Azure Virtual Desktop certification is designed for individuals who want to demonstrate their expertise in deploying and managing virtual desktop solutions on Azure. This certification is ideal for IT professionals who are responsible for implementing and maintaining virtual desktop solutions on Azure. | https://learn.microsoft.com/en-us/certifications/azure-virtual-desktop
Azure Cosmos DB Developer Specialty | DP-420 | The Azure Cosmos DB Developer certification is designed for individuals who want to demonstrate their expertise in developing applications that use Azure Cosmos DB. This certification is ideal for software developers who want to build and deploy applications that use Azure Cosmos DB. | https://learn.microsoft.com/en-us/certifications/azure-cosmos-db-developer
Azure Fundamentals | AZ-900 | The Azure Fundamentals certification is designed for individuals who are new to Azure and want to demonstrate their knowledge of the platform. This certification is a great starting point for anyone who wants to learn more about Azure and how it can help them build and deploy applications in the cloud. | https://learn.microsoft.com/en-us/certifications/azure-fundamentals
Azure AI Fundamentals | AI-900 | The Azure AI Fundamentals certification is designed for individuals who want to demonstrate their knowledge of AI concepts and how they can be applied to Azure services. This certification is ideal for anyone who wants to learn more about AI and how it can be used to build intelligent applications. | https://learn.microsoft.com/en-us/certifications/azure-ai-fundamentals
Azure Data Fundamentals | DP-900 | The Azure Data Fundamentals certification is designed for individuals who want to demonstrate their knowledge of data concepts and how they can be applied to Azure services. This certification is ideal for anyone who wants to learn more about data and how it can be used to build data-driven applications. | https://learn.microsoft.com/en-us/certifications/azure-data-fundamentals

You can find more information about Microsoft certifications on the Microsoft Certification Poster and on the Microsoft Learn website.