Custom Azure Policy for Kubernetes

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to create a custom Azure Policy for Kubernetes.

How Azure Policy works in Kubernetes

Azure Policy for Kubernetes is an extension of Azure Policy that allows you to enforce policies on your Kubernetes clusters. You can use Azure Policy to define policies that apply to your Kubernetes resources, such as pods, deployments, and services. These policies can help you ensure that your Kubernetes clusters are compliant with your organization's standards and best practices.

Azure Policy for Kubernetes uses Gatekeeper, an open-source policy controller for Kubernetes, to enforce policies on your clusters. Gatekeeper uses Rego, the Open Policy Agent (OPA) policy language, to define policies and evaluate them against your Kubernetes resources. You can use Gatekeeper to create custom policies that enforce specific rules and effects on your clusters.

graph TD
    A[Azure Policy] -->|Enforce policies| B["add-on azure-policy(Gatekeeper)"]
    B -->|Evaluate policies| C[Kubernetes resources]

Azure Policy for Kubernetes supports the following cluster environments:

  • Azure Kubernetes Service (AKS), through Azure Policy's Add-on for AKS
  • Azure Arc enabled Kubernetes, through Azure Policy's Extension for Arc

Prepare your environment

Before you can create custom Azure Policy for Kubernetes, you need to set up your environment. You will need an Azure Kubernetes Service (AKS) cluster with the Azure Policy add-on enabled. You will also need the Azure CLI and the Azure Policy extension for Visual Studio Code.

To set up your environment, follow these steps:

  1. Create a resource group

    az group create --name myResourceGroup --location spaincentral
    
  2. Create an Azure Kubernetes Service (AKS) cluster with default settings and one node:

    az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
    
  3. Enable Azure Policies for the cluster:

    az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons azure-policy
    
  4. Check the status of the add-on:

    az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles.azurepolicy.enabled
    
  5. Check the status of Gatekeeper:

    # Install kubectl and kubelogin
    az aks install-cli --install-location .local/bin/kubectl --kubelogin-install-location .local/bin/kubelogin
    # Get the credentials for the AKS cluster
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing
    # azure-policy pod is installed in kube-system namespace
    kubectl get pods -n kube-system
    # gatekeeper pod is installed in gatekeeper-system namespace
    kubectl get pods -n gatekeeper-system
    
  6. Install Visual Studio Code and the Azure Policy extension:

    code --install-extension ms-azuretools.vscode-azurepolicy
    

Once you have set up your environment, you can create custom Azure Policies for Kubernetes.

How to create a custom Azure Policy for Kubernetes

To create a custom Azure Policy for Kubernetes, you need to define a policy in the Open Policy Agent (OPA) policy language and apply it to your Kubernetes cluster. You can define policies that enforce specific rules and effects on your Kubernetes resources, such as pods, deployments, and services.

Info

It's recommended to review Constraint Templates in How to use Gatekeeper

To create a custom Azure Policy for Kubernetes, follow these steps:

  1. Define a constraint template for the policy. I will use an existing constraint template from the Gatekeeper library that requires Ingress resources to be HTTPS only:

    gatekeeper-library/library/general/httpsonly/template.yaml
    apiVersion: templates.gatekeeper.sh/v1
    kind: ConstraintTemplate
    metadata:
      name: k8shttpsonly
      annotations:
        metadata.gatekeeper.sh/title: "HTTPS Only"
        metadata.gatekeeper.sh/version: 1.0.2
        description: >-
          Requires Ingress resources to be HTTPS only.  Ingress resources must
          include the `kubernetes.io/ingress.allow-http` annotation, set to `false`.
          By default a valid TLS {} configuration is required, this can be made
          optional by setting the `tlsOptional` parameter to `true`.

          https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
    spec:
      crd:
        spec:
          names:
            kind: K8sHttpsOnly
          validation:
            # Schema for the `parameters` field
            openAPIV3Schema:
              type: object
              description: >-
                Requires Ingress resources to be HTTPS only.  Ingress resources must
                include the `kubernetes.io/ingress.allow-http` annotation, set to
                `false`. By default a valid TLS {} configuration is required, this
                can be made optional by setting the `tlsOptional` parameter to
                `true`.
              properties:
                tlsOptional:
                  type: boolean
                  description: "When set to `true` the TLS {} is optional, defaults
                  to false."
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8shttpsonly

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not https_complete(ingress)
              not tls_is_optional
              msg := sprintf("Ingress should be https. tls configuration and allow-http=false annotation are required for %v", [ingress.metadata.name])
            }

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not annotation_complete(ingress)
              tls_is_optional
              msg := sprintf("Ingress should be https. The allow-http=false annotation is required for %v", [ingress.metadata.name])
            }

            https_complete(ingress) = true {
              ingress.spec["tls"]
              count(ingress.spec.tls) > 0
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            annotation_complete(ingress) = true {
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            tls_is_optional {
              parameters := object.get(input, "parameters", {})
              object.get(parameters, "tlsOptional", false) == true
            }

This constraint requires Ingress resources to be HTTPS only.
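When the corresponding Azure Policy is assigned, the add-on generates a constraint from this template for you. Against a plain Gatekeeper installation, however, you would pair the template with a constraint resource yourself, which is handy for testing the template locally before wrapping it in a policy. A minimal sketch (the constraint name is illustrative):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sHttpsOnly
metadata:
  name: ingress-https-only
spec:
  match:
    kinds:
      - apiGroups: ["extensions", "networking.k8s.io"]
        kinds: ["Ingress"]
  parameters:
    tlsOptional: false
```

With this applied, any Ingress admitted to the cluster is evaluated against the Rego rules in the template.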

  2. Create an Azure Policy for this constraint template:

    1. Open the constraint template created earlier in Visual Studio Code.
    2. Click on the Azure Policy icon in the Activity Bar.
    3. Click on View > Command Palette.
    4. Search for the command "Azure Policy for Kubernetes: Create Policy Definition from Constraint Template or Mutation" and choose the base64 option; this command creates a policy definition from the constraint template.
      Untitled.json
      {
      "properties": {
          "policyType": "Custom",
          "mode": "Microsoft.Kubernetes.Data",
          "displayName": "/* EDIT HERE */",
          "description": "/* EDIT HERE */",
          "policyRule": {
          "if": {
              "field": "type",
              "in": [
              "Microsoft.ContainerService/managedClusters"
              ]
          },
          "then": {
              "effect": "[parameters('effect')]",
              "details": {
              "templateInfo": {
                  "sourceType": "Base64Encoded",
                  "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3
Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgICAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
              },
              "apiGroups": [
                  "/* EDIT HERE */"
              ],
              "kinds": [
                  "/* EDIT HERE */"
              ],
              "namespaces": "[parameters('namespaces')]",
              "excludedNamespaces": "[parameters('excludedNamespaces')]",
              "labelSelector": "[parameters('labelSelector')]",
              "values": {
                  "tlsOptional": "[parameters('tlsOptional')]"
              }
              }
          }
          },
          "parameters": {
          "effect": {
              "type": "String",
              "metadata": {
              "displayName": "Effect",
              "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
              },
              "allowedValues": [
              "audit",
              "deny",
              "disabled"
              ],
              "defaultValue": "audit"
          },
          "excludedNamespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace exclusions",
              "description": "List of Kubernetes namespaces to exclude from policy evaluation."
              },
              "defaultValue": [
              "kube-system",
              "gatekeeper-system",
              "azure-arc"
              ]
          },
          "namespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace inclusions",
              "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
              },
              "defaultValue": []
          },
          "labelSelector": {
              "type": "Object",
              "metadata": {
              "displayName": "Kubernetes label selector",
              "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
              },
              "defaultValue": {},
              "schema": {
              "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
              "type": "object",
              "properties": {
                  "matchLabels": {
                  "description": "matchLabels is a map of {key,value} pairs.",
                  "type": "object",
                  "additionalProperties": {
                      "type": "string"
                  },
                  "minProperties": 1
                  },
                  "matchExpressions": {
                  "description": "matchExpressions is a list of values, a key, and an operator.",
                  "type": "array",
                  "items": {
                      "type": "object",
                      "properties": {
                      "key": {
                          "description": "key is the label key that the selector applies to.",
                          "type": "string"
                      },
                      "operator": {
                          "description": "operator represents a key's relationship to a set of values.",
                          "type": "string",
                          "enum": [
                          "In",
                          "NotIn",
                          "Exists",
                          "DoesNotExist"
                          ]
                      },
                      "values": {
                          "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                          "type": "array",
                          "items": {
                          "type": "string"
                          }
                      }
                      },
                      "required": [
                      "key",
                      "operator"
                      ],
                      "additionalProperties": false
                  },
                  "minItems": 1
                  }
              },
              "additionalProperties": false
              }
          },
          "tlsOptional": {
              "type": "Boolean",
              "metadata": {
              "displayName": "/* EDIT HERE */",
              "description": "/* EDIT HERE */"
              }
          }
          }
      }
      }
      
    5. Fill in the fields marked "/* EDIT HERE */" in the policy definition JSON file with the appropriate values, such as the display name, description, API groups, and kinds. In this case, set apiGroups to ["extensions", "networking.k8s.io"] and kinds to ["Ingress"].
    6. Save the policy definition JSON file.
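The extension performs the Base64 encoding of the constraint template that ends up in the templateInfo.content field, but you can reproduce (or audit) it from a shell. A sketch using GNU coreutils base64, with a stand-in file where you would use your real template.yaml:

```shell
# Write a stand-in template (substitute your real template.yaml).
printf 'kind: ConstraintTemplate\n' > template.yaml
# Encode on a single line, as templateInfo.content expects (-w 0 disables line wrapping).
base64 -w 0 template.yaml > template.b64
# Round-trip check: decoding must return the original YAML byte for byte.
base64 -d template.b64 | diff - template.yaml && echo "round-trip ok"
```

The resulting single-line string is exactly what goes into the `content` field of `templateInfo`.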

This is the complete policy:

json title="https-only.json" { "properties": { "policyType": "Custom", "mode": "Microsoft.Kubernetes.Data", "displayName": "Require HTTPS for Ingress resources", "description": "This policy requires Ingress resources to be HTTPS only. Ingress resources must include the `kubernetes.io/ingress.allow-http` annotation, set to `false`. By default a valid TLS configuration is required, this can be made optional by setting the `tlsOptional` parameter to `true`.", "policyRule": { "if": { "field": "type", "in": [ "Microsoft.ContainerService/managedClusters" ] }, "then": { "effect": "[parameters('effect')]", "details": { "templateInfo": { "sourceType": "Base64Encoded", "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQg
VExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2Nv
bXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgICAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ==" }, "apiGroups": [ "extensions", "networking.k8s.io" ], "kinds": [ "Ingress" ], "namespaces": "[parameters('namespaces')]", "excludedNamespaces": "[parameters('excludedNamespaces')]", "labelSelector": "[parameters('labelSelector')]", "values": { "tlsOptional": "[parameters('tlsOptional')]" } } } }, "parameters": { "effect": { "type": "String", "metadata": { "displayName": "Effect", "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy." }, "allowedValues": [ "audit", "deny", "disabled" ], "defaultValue": "audit" }, "excludedNamespaces": { "type": "Array", "metadata": { "displayName": "Namespace exclusions", "description": "List of Kubernetes namespaces to exclude from policy evaluation." }, "defaultValue": [ "kube-system", "gatekeeper-system", "azure-arc" ] }, "namespaces": { "type": "Array", "metadata": { "displayName": "Namespace inclusions", "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces." }, "defaultValue": [] }, "labelSelector": { "type": "Object", "metadata": { "displayName": "Kubernetes label selector", "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources." }, "defaultValue": {}, "schema": { "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. 
An empty label selector matches all resources.", "type": "object", "properties": { "matchLabels": { "description": "matchLabels is a map of {key,value} pairs.", "type": "object", "additionalProperties": { "type": "string" }, "minProperties": 1 }, "matchExpressions": { "description": "matchExpressions is a list of values, a key, and an operator.", "type": "array", "items": { "type": "object", "properties": { "key": { "description": "key is the label key that the selector applies to.", "type": "string" }, "operator": { "description": "operator represents a key's relationship to a set of values.", "type": "string", "enum": [ "In", "NotIn", "Exists", "DoesNotExist" ] }, "values": { "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.", "type": "array", "items": { "type": "string" } } }, "required": [ "key", "operator" ], "additionalProperties": false }, "minItems": 1 } }, "additionalProperties": false } }, "tlsOptional": { "type": "Boolean", "metadata": { "displayName": "TLS Optional", "description": "Set to true to make TLS optional" } } } } }

Now you have created a custom Azure Policy for Kubernetes that enforces the HTTPS-only constraint. To apply it to your cluster and ensure that all Ingress resources are HTTPS only, create a policy definition from this JSON and assign it to the management group, subscription, or resource group where the AKS cluster is located.
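The definition and assignment can also be created with the Azure CLI. A sketch, assuming the completed definition is saved as https-only.json; the definition and assignment names are illustrative. Note that az policy definition create takes the policyRule and parameters sections separately (not the full "properties" wrapper), so extract those objects into rules.json and params.json first:

```shell
# Create the custom policy definition from the extracted sections.
az policy definition create \
  --name k8s-ingress-https-only \
  --display-name "Require HTTPS for Ingress resources" \
  --mode Microsoft.Kubernetes.Data \
  --rules rules.json \
  --params params.json

# Assign it to the resource group that holds the AKS cluster.
az policy assignment create \
  --name k8s-ingress-https-only \
  --policy k8s-ingress-https-only \
  --resource-group myResourceGroup
```

Use --scope on the assignment instead of --resource-group to assign at a subscription or management group.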

Conclusion

In this article, we discussed how to create a custom Azure Policy for Kubernetes. We defined a policy in Rego, the Open Policy Agent (OPA) policy language, packaged it as a Gatekeeper constraint template, and generated an Azure Policy definition from that template. By following these steps, you can create custom policies that enforce specific rules and effects on your Kubernetes resources.

References

Do you need to check if a private endpoint is connected to an external private link service in Azure or just don't know how to do it?

Check this blog post to learn how to do it: Find cross-tenant private endpoint connections in Azure

This is a copy of the script used in the blog post in case it disappears:

scan-private-endpoints-with-manual-connections.ps1
$ErrorActionPreference = "Stop"

class SubscriptionInformation {
    [string] $SubscriptionID
    [string] $Name
    [string] $TenantID
}

class TenantInformation {
    [string] $TenantID
    [string] $DisplayName
    [string] $DomainName
}

class PrivateEndpointData {
    [string] $ID
    [string] $Name
    [string] $Type
    [string] $Location
    [string] $ResourceGroup
    [string] $SubscriptionName
    [string] $SubscriptionID
    [string] $TenantID
    [string] $TenantDisplayName
    [string] $TenantDomainName
    [string] $TargetResourceId
    [string] $TargetSubscriptionName
    [string] $TargetSubscriptionID
    [string] $TargetTenantID
    [string] $TargetTenantDisplayName
    [string] $TargetTenantDomainName
    [string] $Description
    [string] $Status
    [string] $External
}

$installedModule = Get-Module -Name "Az.ResourceGraph" -ListAvailable
if ($null -eq $installedModule) {
    Install-Module "Az.ResourceGraph" -Scope CurrentUser
}
else {
    Import-Module "Az.ResourceGraph"
}

$kqlQuery = @"
resourcecontainers | where type == 'microsoft.resources/subscriptions'
| project  subscriptionId, name, tenantId
"@

$batchSize = 1000
$skipResult = 0

$subscriptions = @{}

while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {
        $s = [SubscriptionInformation]::new()
        $s.SubscriptionID = $row.subscriptionId
        $s.Name = $row.name
        $s.TenantID = $row.tenantId

        $subscriptions.Add($s.SubscriptionID, $s) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

"Found $($subscriptions.Count) subscriptions"

function Get-SubscriptionInformation($SubscriptionID) {
    if ($subscriptions.ContainsKey($SubscriptionID)) {
        return $subscriptions[$SubscriptionID]
    } 

    Write-Warning "Using fallback subscription information for '$SubscriptionID'"
    $s = [SubscriptionInformation]::new()
    $s.SubscriptionID = $SubscriptionID
    $s.Name = "<unknown>"
    $s.TenantID = [Guid]::Empty.Guid
    return $s
}

$tenantCache = @{}
$subscriptionToTenantCache = @{}

function Get-TenantInformation($TenantID) {
    $domain = $null
    if ($tenantCache.ContainsKey($TenantID)) {
        $domain = $tenantCache[$TenantID]
    } 
    else {
        try {
            $tenantResponse = Invoke-AzRestMethod -Uri "https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='$TenantID')"
            $tenantInformation = ($tenantResponse.Content | ConvertFrom-Json)

            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = $tenantInformation.displayName
            $ti.DomainName = $tenantInformation.defaultDomainName

            $domain = $ti
        }
        catch {
            Write-Warning "Failed to get domain information for '$TenantID'"
        }

        if ([string]::IsNullOrEmpty($domain)) {
            Write-Warning "Using fallback domain information for '$TenantID'"
            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = "<unknown>"
            $ti.DomainName = "<unknown>"

            $domain = $ti
        }

        $tenantCache.Add($TenantID, $domain) | Out-Null
    }

    return $domain
}

function Get-TenantFromSubscription($SubscriptionID) {
    $tenant = $null
    if ($subscriptionToTenantCache.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptionToTenantCache[$SubscriptionID]
    }
    elseif ($subscriptions.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptions[$SubscriptionID].TenantID
        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }
    else {
        try {

            $subscriptionResponse = Invoke-AzRestMethod -Path "/subscriptions/$($SubscriptionID)?api-version=2022-12-01"
            $startIndex = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.IndexOf("https://login.windows.net/")
            $tenantID = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.Substring($startIndex + "https://login.windows.net/".Length, 36)

            $tenant = $tenantID
        }
        catch {
            Write-Warning "Failed to get tenant from subscription '$SubscriptionID'"
        }

        if ([string]::IsNullOrEmpty($tenant)) {
            Write-Warning "Using fallback tenant information for '$SubscriptionID'"

            $tenant = [Guid]::Empty.Guid
        }

        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }

    return $tenant
}

$kqlQuery = @"
resources
| where type == "microsoft.network/privateendpoints"
| where isnotnull(properties) and properties contains "manualPrivateLinkServiceConnections"
| where array_length(properties.manualPrivateLinkServiceConnections) > 0
| mv-expand properties.manualPrivateLinkServiceConnections
| extend status = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.status
| extend description = coalesce(properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.description, "")
| extend privateLinkServiceId = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceId
| extend privateLinkServiceSubscriptionId = tostring(split(privateLinkServiceId, "/")[2])
| project id, name, location, type, resourceGroup, subscriptionId, tenantId, privateLinkServiceId, privateLinkServiceSubscriptionId, status, description
"@

$batchSize = 1000
$skipResult = 0

$privateEndpoints = New-Object System.Collections.ArrayList

while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {

        $si1 = Get-SubscriptionInformation -SubscriptionID $row.SubscriptionID
        $ti1 = Get-TenantInformation -TenantID $row.TenantID

        $si2 = Get-SubscriptionInformation -SubscriptionID $row.PrivateLinkServiceSubscriptionId
        $tenant2 = Get-TenantFromSubscription -SubscriptionID $si2.SubscriptionID
        $ti2 = Get-TenantInformation -TenantID $tenant2

        $peData = [PrivateEndpointData]::new()
        $peData.ID = $row.ID
        $peData.Name = $row.Name
        $peData.Type = $row.Type
        $peData.Location = $row.Location
        $peData.ResourceGroup = $row.ResourceGroup

        $peData.SubscriptionName = $si1.Name
        $peData.SubscriptionID = $si1.SubscriptionID
        $peData.TenantID = $ti1.TenantID
        $peData.TenantDisplayName = $ti1.DisplayName
        $peData.TenantDomainName = $ti1.DomainName

        $peData.TargetResourceId = $row.PrivateLinkServiceId
        $peData.TargetSubscriptionName = $si2.Name
        $peData.TargetSubscriptionID = $si2.SubscriptionID
        $peData.TargetTenantID = $ti2.TenantID
        $peData.TargetTenantDisplayName = $ti2.DisplayName
        $peData.TargetTenantDomainName = $ti2.DomainName

        $peData.Description = $row.Description
        $peData.Status = $row.Status

        if ($ti2.DomainName -eq "MSAzureCloud.onmicrosoft.com") {
            $peData.External = "Managed by Microsoft"
        }
        elseif ($si2.TenantID -eq [Guid]::Empty.Guid) {
            $peData.External = "Yes"
        }
        else {
            $peData.External = "No"
        }

        $privateEndpoints.Add($peData) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

$privateEndpoints | Format-Table
$privateEndpoints | Export-CSV "private-endpoints.csv" -Delimiter ';' -Force

"Found $($privateEndpoints.Count) private endpoints with manual connections"

if ($privateEndpoints.Count -ne 0) {
    Start-Process "private-endpoints.csv"
}

Conclusion

Now you know how to check if a private endpoint is connected to an external private link service in Azure.

That's all folks!

How to check if a role permission is good or bad in Azure RBAC

Do you need to check whether a role permission is valid, or just find out what actions a provider exposes in Azure RBAC?

Get-AzProviderOperation * is your friend and you can always export everything to csv:

Get-AzProviderOperation | Select-Object Operation, OperationName, ProviderNamespace, ResourceName, Description, IsDataAction | Export-Csv AzProviderOperation.csv

This command will give you a list of all the operations that you can perform on Azure resources, including the operation name, provider namespace, resource name, description, and whether it is a data action or not. You can use this information to check if a role permission is good or bad, or to find out what actions a provider has in Azure RBAC.

Script to check if a role permission is good or bad on tf files

You can use the following script to check if a role permission is good or bad on tf files:

<#
.SYNOPSIS
Script to check if a role permission is good or bad in Azure RBAC using Terraform files.

.DESCRIPTION
This script downloads Azure provider operations to a CSV file, reads the CSV file, extracts text from Terraform (.tf and .tfvars) files, and compares the extracted text with the CSV data to find mismatches.

.PARAMETER csvFilePath
The path to the CSV file where Azure provider operations will be downloaded.

.PARAMETER tfFolderPath
The path to the folder containing Terraform (.tf and .tfvars) files.

.PARAMETER DebugMode
Switch to enable debug mode for detailed output.

.EXAMPLE
.\Check-RBAC.ps1 -csvFilePath ".\petete.csv" -tfFolderPath ".\"

.EXAMPLE
.\Check-RBAC.ps1 -csvFilePath ".\petete.csv" -tfFolderPath ".\" -DebugMode

.NOTES
For more information, refer to the following resources:
- Azure RBAC Documentation: https://docs.microsoft.com/en-us/azure/role-based-access-control/
- Get-AzProviderOperation Cmdlet: https://docs.microsoft.com/en-us/powershell/module/az.resources/get-azprovideroperation
- Export-Csv Cmdlet: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/export-csv
#>

param(
    [string]$csvFilePath = ".\petete.csv",
    [string]$tfFolderPath = ".\",
    [switch]$DebugMode
)

# Download petete.csv using Get-AzProviderOperation
function Download-CSV {
    param(
        [string]$filename,
        [switch]$DebugMode
    )
    if ($DebugMode) { Write-Host "Downloading petete.csv using Get-AzProviderOperation" }
    Get-AzProviderOperation | Select-Object Operation, OperationName, ProviderNamespace, ResourceName, Description, IsDataAction | Export-Csv -Path $filename -NoTypeInformation
    if ($DebugMode) { Write-Host "CSV file downloaded: $filename" }
}

# Function to read the CSV file
function Read-CSV {
    param(
        [string]$filename,
        [switch]$DebugMode
    )
    if ($DebugMode) { Write-Host "Reading CSV file: $filename" }
    $csv = Import-Csv -Path $filename
    $csvData = $csv | ForEach-Object {
        [PSCustomObject]@{
            Provider = $_.Operation.Split('/')[0].Trim()
            Operation = $_.Operation
            OperationName = $_.OperationName
            ProviderNamespace = $_.ProviderNamespace
            ResourceName = $_.ResourceName
            Description = $_.Description
            IsDataAction = $_.IsDataAction
        }
    }
    if ($DebugMode) { Write-Host "Data read from CSV:"; $csvData | Format-Table -AutoSize }
    return $csvData
}

# Function to extract text from the Terraform files
function Extract-Text-From-TF {
    param(
        [string]$folderPath,
        [switch]$DebugMode
    )
    if ($DebugMode) { Write-Host "Reading TF and TFVARS files in folder: $folderPath" }
    $tfTexts = @()
    # -Filter accepts a single pattern only, so use -Include with a wildcard path instead
    $files = Get-ChildItem -Path (Join-Path $folderPath '*') -Include *.tf, *.tfvars
    foreach ($file in $files) {
        $content = Get-Content -Path $file.FullName
        $tfTexts += $content | Select-String -Pattern '"Microsoft\.[^"]*"' -AllMatches | ForEach-Object { $_.Matches.Value.Trim('"').Trim() }
        $tfTexts += $content | Select-String -Pattern '"\*"' -AllMatches | ForEach-Object { $_.Matches.Value.Trim('"').Trim() }
        $tfTexts += $content | Select-String -Pattern '^\s*\*/' -AllMatches | ForEach-Object { $_.Matches.Value.Trim() }
    }
    if ($DebugMode) { Write-Host "Texts extracted from TF and TFVARS files:"; $tfTexts | Format-Table -AutoSize }
    return $tfTexts
}

# Function to compare extracted text with CSV data
function Compare-Text-With-CSV {
    param(
        [array]$csvData,
        [array]$tfTexts,
        [switch]$DebugMode
    )
    $mismatches = @()
    foreach ($tfText in $tfTexts) {
        if ($tfText -eq "*" -or $tfText -match '^\*/') {
            continue
        }
        $tfTextPattern = $tfText -replace '\*', '.*'
        if (-not ($csvData | Where-Object { $_.Operation -match "^$tfTextPattern$" })) {
            $mismatches += $tfText
        }
    }
    if ($DebugMode) { Write-Host "Mismatches found:"; $mismatches | Format-Table -AutoSize }
    return $mismatches
}

# Main script execution
Download-CSV -filename $csvFilePath -DebugMode:$DebugMode
$csvData = Read-CSV -filename $csvFilePath -DebugMode:$DebugMode
$tfTexts = Extract-Text-From-TF -folderPath $tfFolderPath -DebugMode:$DebugMode
$mismatches = Compare-Text-With-CSV -csvData $csvData -tfTexts $tfTexts -DebugMode:$DebugMode

if ($mismatches.Count -eq 0) {
    Write-Host "All extracted texts match the CSV data."
} else {
    Write-Host "Mismatches found:"
    $mismatches | Format-Table -AutoSize
}

This script downloads Azure provider operations to a CSV file, reads the CSV file, extracts text from Terraform files, and compares the extracted text with the CSV data to find mismatches. You can use this script to check if a role permission is good or bad in Azure RBAC using Terraform files.
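
The core of `Compare-Text-With-CSV` — treating `*` in an action string as a wildcard and testing it against the exported operation list — can be sketched in Python (the operation list below is a small hypothetical sample, not a full export):

```python
import re

# Small hypothetical sample of operations, as exported by Get-AzProviderOperation.
operations = [
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Storage/storageAccounts/listKeys/action",
    "Microsoft.Compute/virtualMachines/read",
]

def is_valid_action(action: str) -> bool:
    """Return True if the action matches at least one known operation,
    treating '*' as a wildcard the way the script above does."""
    pattern = re.escape(action).replace(r"\*", ".*")
    return any(re.fullmatch(pattern, op) for op in operations)

print(is_valid_action("Microsoft.Storage/storageAccounts/*"))  # True
print(is_valid_action("Microsoft.Storage/blobs/read"))         # False
```

This mirrors the `$tfTextPattern` construction in the PowerShell script, with the addition of escaping literal dots so `Microsoft.Storage` cannot accidentally match `MicrosoftXStorage`.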

I hope this post has given you a good introduction to how you can check if a role permission is good or bad in Azure RBAC and how you can use Terraform files to automate this process.

Happy coding! 🚀

terraform import block

Sometimes you need to import existing infrastructure into Terraform. This is useful when you have existing resources that you want to manage with Terraform, or when you want to migrate from another tool to Terraform.

Other times, you may need to import resources that were created outside of Terraform, such as manually created resources or resources created by another tool. For example:

"Error: unexpected status 409 (409 Conflict) with error: RoleDefinitionWithSameNameExists: A custom role with the same name already exists in this directory. Use a different name"

In my case, I had to import a custom role that was created outside of Terraform. Here's how I did it:

  1. Create a new Terraform configuration file for the resource you want to import. In my case, I created a new file called custom_role.tf with the following content:
resource "azurerm_role_definition" "custom_role" {
  name        = "CustomRole"
  scope       = "/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"
  permissions {
    actions     = [
      "Microsoft.Storage/storageAccounts/listKeys/action",
      "Microsoft.Storage/storageAccounts/read"
    ]

    data_actions = []

    not_data_actions = []
  }
  assignable_scopes = [
    "/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"
  ]
}
  1. Add an import block to the configuration file with the target resource address and the ID of the existing resource. In my case, I added the following block to the custom_role.tf file:
import {
  to = azurerm_role_definition.custom_role
  id = "/providers/Microsoft.Authorization/roleDefinitions/11111111-1111-1111-1111-111111111111|/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"

}
  1. Run the `terraform plan` command to see the changes that Terraform will make to the resource. In my case, the output looked like this:
...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
...
  1. Run the terraform apply command to import the resource into Terraform. In my case, the output looked like this after a long 9 minutes:
...
Apply complete! Resources: 1 imported, 0 added, 1 changed, 0 destroyed.
  1. Verify that the resource was imported successfully by running the terraform show command:
terraform show

You can use the terraform import command to import existing infrastructure into Terraform too but I prefer to use the import block because it's more readable and easier to manage.

With terraform import the command would look like this:

terraform import azurerm_role_definition.custom_role "/providers/Microsoft.Authorization/roleDefinitions/11111111-1111-1111-1111-111111111111|/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000"

Conclusion

That's it! You've successfully imported an existing resource into Terraform. Now you can manage it with Terraform just like any other resource.

Happy coding! 🚀

Microsoft Entra ID Protection

Microsoft Entra offers a comprehensive set of security features to protect your organization's data and resources. One of these features is ID Protection, which helps you secure your users' identities and prevent unauthorized access to your organization's data. Here are some key benefits of using ID Protection in Microsoft Entra:

  • Multi-factor authentication (MFA): ID Protection enables you to enforce multi-factor authentication for all users in your organization. This adds an extra layer of security to your users' accounts and helps prevent unauthorized access.

  • Conditional access policies: With ID Protection, you can create conditional access policies that define the conditions under which users can access your organization's resources. For example, you can require users to use multi-factor authentication when accessing sensitive data or restrict access to certain applications based on the user's location.

  • Risk-based policies: ID Protection uses advanced machine learning algorithms to detect suspicious activities and risky sign-in attempts. You can create risk-based policies that automatically block or allow access based on the risk level associated with the sign-in attempt.

  • Identity protection reports: ID Protection provides detailed reports and insights into your organization's identity security posture. You can use these reports to identify security risks, monitor user activity, and take proactive measures to protect your organization's data.

By using ID Protection in Microsoft Entra, you can enhance the security of your organization's data and resources and protect your users' identities from cyber threats. If you want to learn more about ID Protection and other security features in Microsoft Entra, contact us today!

I hope this helps!

Microsoft Entra Attribute Duplicate Attribute Resiliency

The Duplicate Attribute Resiliency feature is also being rolled out as the default behavior of Microsoft Entra ID. It reduces the number of synchronization errors seen by Microsoft Entra Connect (as well as other sync clients) by making Microsoft Entra ID more resilient in the way it handles duplicated ProxyAddresses and UserPrincipalName attributes present in on-premises AD environments. This feature does not fix the duplication errors, so the data still needs to be fixed, but it allows provisioning of new objects which are otherwise blocked from being provisioned due to duplicated values in Microsoft Entra ID.

If this feature is enabled for your tenant, you will no longer see the InvalidSoftMatch synchronization errors that previously occurred during provisioning of new objects.

Behavior with Duplicate Attribute Resiliency

graph TD
    A[Start] --> B[Provision or Update Object]
    B --> C{Duplicate Attribute?}
    C -- Yes --> D[Quarantine Duplicate Attribute]
    D --> E{Is Attribute Required?}
    E -- Yes --> F[Assign Placeholder Value]
    F --> G[Send Error Report Email]
    E -- No --> H[Proceed with Object Creation/Update]
    H --> G
    G --> I[Export Succeeds]
    I --> J[Sync Client Does Not Log Error]
    J --> K[Sync Client Does Not Retry Operation]
    K --> L[Background Timer Task Every Hour]
    L --> M[Check for Resolved Conflicts]
    M --> N[Remove Attributes from Quarantine]
    C -- No --> H

Differences between B2B Direct Connect and B2B Collaboration in Microsoft Entra

Microsoft Entra offers two ways to collaborate with external users: B2B Direct Connect and B2B Collaboration. Both features allow organizations to share resources with external users while maintaining control over access and security. However, they differ in functionality, access, and integration. Here is a comparison between B2B Direct Connect and B2B Collaboration:

| Feature | B2B Direct Connect | B2B Collaboration |
| --- | --- | --- |
| Definition | Mutual trust relationship between two Microsoft Entra organizations | Invite external users to access resources using their own credentials |
| Functionality | Seamless collaboration using origin credentials and shared channels in Teams | External users receive an invitation and access resources after authentication |
| Applications | Shared channels in Microsoft Teams | Wide range of applications and services within the Microsoft ecosystem |
| Access | Single sign-on (SSO) with origin credentials | Authentication each time resources are accessed, unless direct federation is set up |
| Integration | Deep and continuous integration between two organizations | Flexible way to invite and manage external users |

I hope this helps!

Microsoft Defender for Storage

Microsoft Defender for Storage is part of the Microsoft Defender for Cloud suite of security solutions.

Introduction

Microsoft Defender for Storage is a cloud-native security solution that provides advanced threat protection for your Azure Storage accounts.

Microsoft Defender for Storage provides comprehensive security by analyzing the data plane and control plane telemetry generated by Azure Blob Storage, Azure Files, and Azure Data Lake Storage services. It uses advanced threat detection capabilities powered by Microsoft Threat Intelligence, Microsoft Defender Antivirus, and Sensitive Data Discovery to help you discover and mitigate potential threats.

Defender for Storage includes:

  • Activity Monitoring
  • Sensitive data threat detection (new plan only)
  • Malware Scanning (new plan only)

How it works

Microsoft Defender for Storage uses advanced threat detection capabilities powered by Microsoft Threat Intelligence, Microsoft Defender Antivirus, and Sensitive Data Discovery to help you discover and mitigate potential threats.

Activity Monitoring

Activity Monitoring provides insights into the operations performed on your storage accounts. It helps you understand the access patterns and operations performed on your storage accounts, and provides insights into the data plane and control plane activities.

Sensitive data threat detection

Sensitive data threat detection helps you discover and protect sensitive data stored in your storage accounts. It uses advanced machine learning models to detect sensitive data patterns and provides recommendations to help you protect your sensitive data.

Malware Scanning

Malware Scanning helps you detect and mitigate malware threats in your storage accounts. It uses advanced threat detection capabilities powered by Microsoft Defender Antivirus to scan your storage accounts for malware threats and provides recommendations to help you mitigate these threats.

Pricing

The pricing for Microsoft Defender for Storage is as follows:

| Resource Type | Resource | Price |
| --- | --- | --- |
| Storage | Microsoft Defender for Storage | €9 per storage account/month |
| Storage | Malware Scanning (add-on to Defender for Storage) | €0.135/GB of data scanned |

For more information about pricing, see the Microsoft Defender for Cloud pricing.
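
As a rough illustration of how the two line items combine, using the list prices above and a purely hypothetical environment:

```python
# Hypothetical environment: 10 storage accounts, 500 GB scanned per month.
accounts = 10
gb_scanned = 500

per_account = 9.0   # € per storage account/month (Defender for Storage)
per_gb = 0.135      # € per GB scanned (Malware Scanning add-on)

monthly_cost = accounts * per_account + gb_scanned * per_gb
print(f"Estimated monthly cost: €{monthly_cost:.2f}")  # €157.50
```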

Conclusion

Microsoft Defender for Storage is a cloud-native security solution that provides advanced threat protection for your Azure Storage accounts. It uses advanced threat detection capabilities powered by Microsoft Threat Intelligence, Microsoft Defender Antivirus, and Sensitive Data Discovery to help you discover and mitigate potential threats.

For more information about Microsoft Defender for Storage, see the Overview of Microsoft Defender for Storage

Security Score System

The Common Vulnerability Scoring System (CVSS) is a framework for scoring the severity of security vulnerabilities. It provides a standardized method for assessing the impact of vulnerabilities and helps organizations prioritize their response to security threats. In this article, we will discuss the CVSS and how it can be used to calculate the severity of security vulnerabilities.

What is CVSS?

The Common Vulnerability Scoring System (CVSS) is an open framework for scoring the severity of security vulnerabilities. It was developed by the Forum of Incident Response and Security Teams (FIRST) to provide a standardized method for assessing the impact of vulnerabilities. CVSS assigns a numerical score to vulnerabilities based on their characteristics, such as the impact on confidentiality, integrity, and availability, and the complexity of the attack vector.

CVSS is widely used by security researchers, vendors, and organizations to prioritize their response to security threats. It helps organizations understand the severity of vulnerabilities and allocate resources to address the most critical issues first.

How is CVSS calculated?

In CVSS Version 4.0, vulnerabilities are scored on a scale of 0.0 to 10.0, with 10.0 being the most severe. The CVSS score is calculated based on several metric groups:

  • Base metric group: The Base metric group represents the intrinsic characteristics of a vulnerability that are constant over time and across user environments. It is composed of the Exploitability metrics and the Impact metrics, the latter split between the vulnerable system and subsequent systems.

  • Threat metric group: The Threat metric group reflects the characteristics of a vulnerability related to threat that may change over time but not necessarily across user environments.

  • Environmental metric group: The Environmental metric group represents the characteristics of a vulnerability that are relevant and unique to a particular user's environment.

  • Supplemental metric group: The Supplemental metric group includes metrics that provide context as well as describe and measure additional extrinsic attributes of a vulnerability.
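
Once a numeric score is produced from these groups, it is mapped to a qualitative severity rating using the standard CVSS rating scale (None 0.0, Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0); a minimal sketch:

```python
def severity(score: float) -> str:
    """Map a CVSS score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

print(severity(9.8))  # Critical
print(severity(5.3))  # Medium
```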

CVSS Version 4.0 Metrics

Base Metrics

The Base metric group includes the following metrics:

  • Exploitability Metrics: These metrics describe the characteristics of the vulnerability that affect how easy it is to exploit. They include the Attack Vector (AV), Attack Complexity (AC), Privileges Required (PR), and User Interaction (UI).
  • Vulnerable System Impact Metrics: These metrics describe the impact on the vulnerable system itself if the vulnerability is exploited. They include the Confidentiality (VC), Integrity (VI), and Availability (VA) impacts.
  • Subsequent System Impact Metrics: These metrics describe the impact on systems beyond the vulnerable system if the vulnerability is exploited. They include the Confidentiality (SC), Integrity (SI), and Availability (SA) impacts.

Exploitability Metrics

  • Attack Vector (AV): This metric describes the context in which the vulnerability can be exploited. It can be Network (N), Adjacent (A), Local (L), or Physical (P).
  • Attack Complexity (AC): This metric describes the complexity of the attack required to exploit the vulnerability. It can be either Low (L) or High (H).
  • Privileges Required (PR): This metric describes the level of privileges required to exploit the vulnerability. It can be None (N), Low (L), or High (H).
  • User Interaction (UI): This metric describes whether user interaction is required to exploit the vulnerability. In CVSS 4.0 it can be None (N), Passive (P), or Active (A).

Vulnerable System Impact Metrics

  • Confidentiality Impact (VC): This metric measures the impact on the confidentiality of the information managed by the vulnerable system due to a successful exploit of the vulnerability. It can be either Low (L), High (H), or None (N).

  • Integrity Impact (VI): This metric measures the impact on the integrity of the information managed by the vulnerable system due to a successful exploit of the vulnerability. It can be either Low (L), High (H), or None (N).

  • Availability Impact (VA): This metric measures the impact on the availability of the services of the vulnerable system due to a successful exploit of the vulnerability. It can be either Low (L), High (H), or None (N).

Subsequent System Impact Metrics

  • Confidentiality Impact (SC): This metric measures the impact on the confidentiality of the information managed by the Subsequent System due to a successful exploit of the vulnerability. It can be either Low (L), High (H), or None (N).

  • Integrity Impact (SI): This metric measures the impact on the integrity of the information managed by the Subsequent System due to a successful exploit of the vulnerability. It can be either Low (L), High (H), or None (N).

  • Availability Impact (SA): This metric measures the impact on the availability of the Subsequent System due to a successful exploit of the vulnerability. It can be either Low (L), High (H), or None (N).


markmap

markmap is a visualisation tool that allows you to create mindmaps from markdown files. It turns the headings and bullet lists of a Markdown file into an interactive mindmap.

Installation in mkdocs

To install markmap in mkdocs, you need to install the plugin using pip:

pip install mkdocs-markmap

Then, you need to add the following lines to your mkdocs.yml file:

plugins:
  - markmap

Usage

To use markmap, you need to add the following code block to your markdown file:

```markmap  
# Root

## Branch 1

* Branchlet 1a
* Branchlet 1b

## Branch 2

* Branchlet 2a
* Branchlet 2b
```

And this will generate the following mindmap:

(screenshot of the rendered mindmap)

That's the theory, at least; in my mkdocs it does not work as expected, and the block above renders as plain Markdown:

# Root

## Branch 1

* Branchlet 1a
* Branchlet 1b

## Branch 2

* Branchlet 2a
* Branchlet 2b

Visual Studio Code Extension

There is also a Visual Studio Code extension that allows you to create mindmaps from markdown files. You can install it from the Visual Studio Code marketplace.

    Name: Markdown Preview Markmap Support
    Id: phoihos.markdown-markmap
    Description: Visualize Markdown as Mindmap (A.K.A Markmap) to VS Code's built-in markdown preview
    Version: 1.4.6
    Publisher: phoihos
    VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=phoihos.markdown-markmap

Conclusion

I don't like this plugin too much because it does not work as expected in my mkdocs, but it's still a useful tool for documentation.

References