Azure Services

Develop my first policy for Kubernetes with minikube and gatekeeper

Now that we have our development environment, we can start developing our first policy for Kubernetes with minikube and gatekeeper.

First of all, we need a code editor to write our policy. I recommend Visual Studio Code, but you can use any other editor. There is a plugin for Visual Studio Code that helps you write policies for gatekeeper. You can install it from the marketplace: Open Policy Agent.

Once you have your editor ready, you can start writing your policy. In this example, we will create a policy that denies the creation of pods with the image nginx:latest.

For that we need two files:

  • constraint.yaml: This file defines the constraint that we want to apply.
  • constraint_template.yaml: This file defines the template that we will use to create the constraint.

Let's start with the constraint_template.yaml file:

constraint_template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenypodswithnginxlatest
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPodsWithNginxLatest
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenypodswithnginxlatest

        violation[{"msg": msg}] {
          input.review.object.spec.containers[_].image == "nginx:latest"
          msg := "Containers cannot use the nginx:latest image"
        }
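Before applying anything to the cluster, you can unit-test the Rego locally with the OPA CLI. This is a minimal sketch, assuming opa is installed and you have saved the rego block above into a file named policy.rego (both file names are placeholders of my choosing):

# Fake admission review input for a pod that uses nginx:latest
cat <<EOF > input.json
{"review": {"object": {"spec": {"containers": [{"image": "nginx:latest"}]}}}}
EOF

# Evaluating the violation rule should return the denial message
opa eval --format pretty --data policy.rego --input input.json "data.k8sdenypodswithnginxlatest.violation"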

Now, let's create the constraint.yaml file:

constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPodsWithNginxLatest
metadata:
  name: deny-pods-with-nginx-latest
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]

Now, we can apply the files to our cluster:

# Create the constraint template
kubectl apply -f constraint_template.yaml

# Create the constraint
kubectl apply -f constraint.yaml
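Optionally, verify that both objects were registered before testing; the second command uses the resource kind that the template created:

# List installed constraint templates
kubectl get constrainttemplates
# List constraints of our new kind
kubectl get k8sdenypodswithnginxlatest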

Now, we can test the constraint. Let's create a pod with the image nginx:latest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
EOF

We should see an error message like this:

Error from server (Forbidden): error when creating "STDIN": admission webhook "validation.gatekeeper.sh" denied the request: [k8sdenypodswithnginxlatest] Containers cannot use the nginx:latest image

Now, let's create a pod with a different image:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25.5
EOF

We should see a message like this:

pod/nginx-pod created

To clean up, you can delete the pod, the constraint, and the constraint template:

# Delete the pod
kubectl delete pod nginx-pod
# Delete the constraint
kubectl delete -f constraint.yaml

# Delete the constraint template
kubectl delete -f constraint_template.yaml

And that's it! We have developed our first policy for Kubernetes with minikube and gatekeeper. Now you can start developing more complex policies and test them in your cluster.

Happy coding!

How to create a local environment to write policies for Kubernetes with minikube and gatekeeper

minikube in WSL2

Enable systemd in WSL2

sudo nano /etc/wsl.conf

Add the following:

[boot]
systemd=true

Restart WSL2 from the command line:

wsl --shutdown
wsl
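After the restart, you can confirm that systemd is actually running as PID 1 (a quick sanity check):

# Should print "systemd"
ps -p 1 -o comm=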

Install Docker

Install Docker by following the official instructions for installing it from the repository.
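If you only need Docker for a local lab, Docker's convenience script is a quick alternative to the repository steps (for production machines, prefer the repository method):

# Download and run Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER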

Minikube

Install minikube

# Download the latest Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Make it executable
chmod +x ./minikube

# Move it to your user's executable PATH
sudo mv ./minikube /usr/local/bin/

# Set the default driver to docker
minikube config set driver docker

Test minikube

# Enable completion
source <(minikube completion bash)
# Start minikube
minikube start
# Check the status
minikube status
# set context
kubectl config use-context minikube
# get pods
kubectl get pods --all-namespaces

Install OPA Gatekeeper

# Install OPA Gatekeeper
# check version in https://open-policy-agent.github.io/gatekeeper/website/docs/install#deploying-a-release-using-prebuilt-image
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/deploy/gatekeeper.yaml

# wait and check the status
sleep 60
kubectl get pods -n gatekeeper-system

Test constraints

First, we need to create a constraint template and a constraint.

# Create a constraint template
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/templates/k8srequiredlabels_template.yaml

# Create a constraint
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/demo/basic/constraints/k8srequiredlabels_constraint.yaml

Now, we can test the constraint.

# Create a namespace without the required label
kubectl create namespace petete

We should see an error message like this:

Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeeper"}

# Create a namespace with the required label
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: petete
  labels:
    gatekeeper: "true"
EOF
kubectl get namespaces petete

We should see a message like this:

NAME     STATUS   AGE
petete   Active   3s

Conclusion

We have created a local environment to write policies for Kubernetes with minikube and gatekeeper. We have tested the environment with a simple constraint. Now we can write our own policies and test them in our local environment.

Trigger an on-demand Azure Policy compliance evaluation scan

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to trigger a scan with Azure Policy.

What is a scan in Azure Policy

A scan in Azure Policy is a process that evaluates your resources against a set of policies to determine if they are compliant. When you trigger a scan, Azure Policy evaluates your resources and generates a compliance report that shows the results of the evaluation. The compliance report includes information about the policies that were evaluated, the resources that were scanned, and the compliance status of each resource.

You can trigger a scan in Azure Policy using the Azure CLI, PowerShell, or the Azure portal. When you trigger a scan, you can specify the scope of the scan, the policies to evaluate, and other parameters that control the behavior of the scan.

Trigger a scan with the Azure CLI

To trigger a scan with the Azure CLI, you can use the az policy state trigger-scan command. This command triggers a policy compliance evaluation for a scope.

How to trigger a scan with the Azure CLI for the active subscription:

az policy state trigger-scan 

How to trigger a scan with the Azure CLI for a specific resource group:

az policy state trigger-scan --resource-group myResourceGroup
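After the evaluation completes, you can query the results. A small sketch with the Azure CLI; the resource group name is a placeholder:

# Summarize compliance state for the resource group
az policy state summarize --resource-group myResourceGroup

# Or list the individual non-compliant resources
az policy state list --resource-group myResourceGroup --filter "complianceState eq 'NonCompliant'"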

Trigger a scan with PowerShell

To trigger a scan with PowerShell, you can use the Start-AzPolicyComplianceScan cmdlet. This cmdlet triggers a policy compliance evaluation for a scope.

How to trigger a scan with PowerShell for the active subscription:

# Trigger the evaluation and wait for it to complete
Start-AzPolicyComplianceScan
# Or run it in the background as a job
$job = Start-AzPolicyComplianceScan -AsJob

How to trigger a scan with PowerShell for a specific resource group:

Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG'
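When you use -AsJob, the cmdlet returns immediately and the evaluation continues in the background; a short sketch of waiting on that job:

# Block until the evaluation finishes, then inspect the job state
$job | Wait-Job
$job.State   # Should report "Completed"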

Conclusion

In this article, we discussed how to trigger a scan with Azure Policy. We covered how to trigger a scan using the Azure CLI and PowerShell. By triggering a scan, you can evaluate your resources against a set of policies to determine if they are compliant. This can help you ensure that your resources are compliant with your organization's standards and best practices.

Custom Azure Policy for Kubernetes

Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce different rules and effects over your resources. These policies can help you stay compliant with your corporate standards and service-level agreements. In this article, we will discuss how to create a custom Azure Policy for Kubernetes.

How Azure Policy works in Kubernetes

Azure Policy for Kubernetes is an extension of Azure Policy that allows you to enforce policies on your Kubernetes clusters. You can use Azure Policy to define policies that apply to your Kubernetes resources, such as pods, deployments, and services. These policies can help you ensure that your Kubernetes clusters are compliant with your organization's standards and best practices.

Azure Policy for Kubernetes uses Gatekeeper, an open-source policy controller for Kubernetes, to enforce policies on your clusters. Gatekeeper uses the Open Policy Agent (OPA) policy language to define policies and evaluate them against your Kubernetes resources. You can use Gatekeeper to create custom policies that enforce specific rules and effects on your clusters.

graph TD
    A[Azure Policy] -->|Enforce policies| B["add-on azure-policy(Gatekeeper)"]
    B -->|Evaluate policies| C[Kubernetes resources]

Azure Policy for Kubernetes supports the following cluster environments:

  • Azure Kubernetes Service (AKS), through Azure Policy's Add-on for AKS
  • Azure Arc enabled Kubernetes, through Azure Policy's Extension for Arc

Prepare your environment

Before you can create custom Azure Policy for Kubernetes, you need to set up your environment. You will need an Azure Kubernetes Service (AKS) cluster with the Azure Policy add-on enabled. You will also need the Azure CLI and the Azure Policy extension for Visual Studio Code.

To set up your environment, follow these steps:

  1. Create a resource group

    az group create --name myResourceGroup --location spaincentral
    
  2. Create an Azure Kubernetes Service (AKS) cluster with default settings and one node:

    az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
    
  3. Enable Azure Policies for the cluster:

    az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons azure-policy
    
  4. Check the status of the add-on:

    az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles.azurepolicy.enabled
    
  5. Check the status of gatekeeper:

    # Install kubectl and kubelogin
    az aks install-cli --install-location .local/bin/kubectl --kubelogin-install-location .local/bin/kubelogin
    # Get the credentials for the AKS cluster
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing
    # azure-policy pod is installed in kube-system namespace
    kubectl get pods -n kube-system
    # gatekeeper pod is installed in gatekeeper-system namespace
    kubectl get pods -n gatekeeper-system
    
  6. Install vscode and the Azure Policy extension

    code --install-extension ms-azuretools.vscode-azurepolicy
    

Once you have set up your environment, you can create custom Azure Policy for Kubernetes.

How to create a custom Azure Policy for Kubernetes

To create a custom Azure Policy for Kubernetes, you need to define a policy in the Open Policy Agent (OPA) policy language and apply it to your Kubernetes cluster. You can define policies that enforce specific rules and effects on your Kubernetes resources, such as pods, deployments, and services.

Info

It's recommended to review the Constraint Templates section in How to use Gatekeeper.

To create a custom Azure Policy for Kubernetes, follow these steps:

  1. Define a constraint template for the policy. I will use an existing constraint template from the Gatekeeper library that requires Ingress resources to be HTTPS only:

    gatekeeper-library/library/general/httpsonly/template.yaml
    apiVersion: templates.gatekeeper.sh/v1
    kind: ConstraintTemplate
    metadata:
      name: k8shttpsonly
      annotations:
        metadata.gatekeeper.sh/title: "HTTPS Only"
        metadata.gatekeeper.sh/version: 1.0.2
        description: >-
          Requires Ingress resources to be HTTPS only.  Ingress resources must
          include the `kubernetes.io/ingress.allow-http` annotation, set to `false`.
          By default a valid TLS {} configuration is required, this can be made
          optional by setting the `tlsOptional` parameter to `true`.

          https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
    spec:
      crd:
        spec:
          names:
            kind: K8sHttpsOnly
          validation:
            # Schema for the `parameters` field
            openAPIV3Schema:
              type: object
              description: >-
                Requires Ingress resources to be HTTPS only.  Ingress resources must
                include the `kubernetes.io/ingress.allow-http` annotation, set to
                `false`. By default a valid TLS {} configuration is required, this
                can be made optional by setting the `tlsOptional` parameter to
                `true`.
              properties:
                tlsOptional:
                  type: boolean
                  description: "When set to `true` the TLS {} is optional, defaults
                  to false."
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8shttpsonly

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not https_complete(ingress)
              not tls_is_optional
              msg := sprintf("Ingress should be https. tls configuration and allow-http=false annotation are required for %v", [ingress.metadata.name])
            }

            violation[{"msg": msg}] {
              input.review.object.kind == "Ingress"
              regex.match("^(extensions|networking.k8s.io)/", input.review.object.apiVersion)
              ingress := input.review.object
              not annotation_complete(ingress)
              tls_is_optional
              msg := sprintf("Ingress should be https. The allow-http=false annotation is required for %v", [ingress.metadata.name])
            }

            https_complete(ingress) = true {
              ingress.spec["tls"]
              count(ingress.spec.tls) > 0
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            annotation_complete(ingress) = true {
              ingress.metadata.annotations["kubernetes.io/ingress.allow-http"] == "false"
            }

            tls_is_optional {
              parameters := object.get(input, "parameters", {})
              object.get(parameters, "tlsOptional", false) == true
            }

This constraint requires Ingress resources to be HTTPS only.

  2. Create an Azure Policy for this constraint template

    1. Open the restriction template created earlier in Visual Studio Code.
    2. Click on Azure Policy icon in the Activity Bar.
    3. Click on View > Command Palette.
    4. Search for the command "Azure Policy for Kubernetes: Create Policy Definition from Constraint Template or Mutation" and choose Base64Encoded; this command will create a policy definition from the constraint template.
      Untitled.json
      {
      "properties": {
          "policyType": "Custom",
          "mode": "Microsoft.Kubernetes.Data",
          "displayName": "/* EDIT HERE */",
          "description": "/* EDIT HERE */",
          "policyRule": {
          "if": {
              "field": "type",
              "in": [
              "Microsoft.ContainerService/managedClusters"
              ]
          },
          "then": {
              "effect": "[parameters('effect')]",
              "details": {
              "templateInfo": {
                  "sourceType": "Base64Encoded",
                  "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgI
CAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
              },
              "apiGroups": [
                  "/* EDIT HERE */"
              ],
              "kinds": [
                  "/* EDIT HERE */"
              ],
              "namespaces": "[parameters('namespaces')]",
              "excludedNamespaces": "[parameters('excludedNamespaces')]",
              "labelSelector": "[parameters('labelSelector')]",
              "values": {
                  "tlsOptional": "[parameters('tlsOptional')]"
              }
              }
          }
          },
          "parameters": {
          "effect": {
              "type": "String",
              "metadata": {
              "displayName": "Effect",
              "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
              },
              "allowedValues": [
              "audit",
              "deny",
              "disabled"
              ],
              "defaultValue": "audit"
          },
          "excludedNamespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace exclusions",
              "description": "List of Kubernetes namespaces to exclude from policy evaluation."
              },
              "defaultValue": [
              "kube-system",
              "gatekeeper-system",
              "azure-arc"
              ]
          },
          "namespaces": {
              "type": "Array",
              "metadata": {
              "displayName": "Namespace inclusions",
              "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
              },
              "defaultValue": []
          },
          "labelSelector": {
              "type": "Object",
              "metadata": {
              "displayName": "Kubernetes label selector",
              "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
              },
              "defaultValue": {},
              "schema": {
              "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
              "type": "object",
              "properties": {
                  "matchLabels": {
                  "description": "matchLabels is a map of {key,value} pairs.",
                  "type": "object",
                  "additionalProperties": {
                      "type": "string"
                  },
                  "minProperties": 1
                  },
                  "matchExpressions": {
                  "description": "matchExpressions is a list of values, a key, and an operator.",
                  "type": "array",
                  "items": {
                      "type": "object",
                      "properties": {
                      "key": {
                          "description": "key is the label key that the selector applies to.",
                          "type": "string"
                      },
                      "operator": {
                          "description": "operator represents a key's relationship to a set of values.",
                          "type": "string",
                          "enum": [
                          "In",
                          "NotIn",
                          "Exists",
                          "DoesNotExist"
                          ]
                      },
                      "values": {
                          "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                          "type": "array",
                          "items": {
                          "type": "string"
                          }
                      }
                      },
                      "required": [
                      "key",
                      "operator"
                      ],
                      "additionalProperties": false
                  },
                  "minItems": 1
                  }
              },
              "additionalProperties": false
              }
          },
          "tlsOptional": {
              "type": "Boolean",
              "metadata": {
              "displayName": "/* EDIT HERE */",
              "description": "/* EDIT HERE */"
              }
          }
          }
      }
      }
      
    5. Fill in the fields marked with "/* EDIT HERE */" in the policy definition JSON file with the appropriate values, such as the display name, description, API groups, and kinds. For example, in this case you must configure apiGroups: ["extensions", "networking.k8s.io"] and kinds: ["Ingress"].
    6. Save the policy definition JSON file.

This is the complete policy:

https-only.json
{
  "properties": {
    "policyType": "Custom",
    "mode": "Microsoft.Kubernetes.Data",
    "displayName": "Require HTTPS for Ingress resources",
    "description": "This policy requires Ingress resources to be HTTPS only. Ingress resources must include the `kubernetes.io/ingress.allow-http` annotation, set to `false`. By default a valid TLS configuration is required, this can be made optional by setting the `tlsOptional` parameter to `true`.",
    "policyRule": {
      "if": {
        "field": "type",
        "in": [
          "Microsoft.ContainerService/managedClusters"
        ]
      },
      "then": {
        "effect": "[parameters('effect')]",
        "details": {
          "templateInfo": {
            "sourceType": "Base64Encoded",
            "content": "YXBpVmVyc2lvbjogdGVtcGxhdGVzLmdhdGVrZWVwZXIuc2gvdjEKa2luZDogQ29uc3RyYWludFRlbXBsYXRlCm1ldGFkYXRhOgogIG5hbWU6IGs4c2h0dHBzb25seQogIGFubm90YXRpb25zOgogICAgbWV0YWRhdGEuZ2F0ZWtlZXBlci5zaC90aXRsZTogIkhUVFBTIE9ubHkiCiAgICBtZXRhZGF0YS5nYXRla2VlcGVyLnNoL3ZlcnNpb246IDEuMC4yCiAgICBkZXNjcmlwdGlvbjogPi0KICAgICAgUmVxdWlyZXMgSW5ncmVzcyByZXNvdXJjZXMgdG8gYmUgSFRUUFMgb25seS4gIEluZ3Jlc3MgcmVzb3VyY2VzIG11c3QKICAgICAgaW5jbHVkZSB0aGUgYGt1YmVybmV0ZXMuaW8vaW5ncmVzcy5hbGxvdy1odHRwYCBhbm5vdGF0aW9uLCBzZXQgdG8gYGZhbHNlYC4KICAgICAgQnkgZGVmYXVsdCBhIHZhbGlkIFRMUyB7fSBjb25maWd1cmF0aW9uIGlzIHJlcXVpcmVkLCB0aGlzIGNhbiBiZSBtYWRlCiAgICAgIG9wdGlvbmFsIGJ5IHNldHRpbmcgdGhlIGB0bHNPcHRpb25hbGAgcGFyYW1ldGVyIHRvIGB0cnVlYC4KCiAgICAgIGh0dHBzOi8va3ViZXJuZXRlcy5pby9kb2NzL2NvbmNlcHRzL3NlcnZpY2VzLW5ldHdvcmtpbmcvaW5ncmVzcy8jdGxzCnNwZWM6CiAgY3JkOgogICAgc3BlYzoKICAgICAgbmFtZXM6CiAgICAgICAga2luZDogSzhzSHR0cHNPbmx5CiAgICAgIHZhbGlkYXRpb246CiAgICAgICAgIyBTY2hlbWEgZm9yIHRoZSBgcGFyYW1ldGVyc2AgZmllbGQKICAgICAgICBvcGVuQVBJVjNTY2hlbWE6CiAgICAgICAgICB0eXBlOiBvYmplY3QKICAgICAgICAgIGRlc2NyaXB0aW9uOiA+LQogICAgICAgICAgICBSZXF1aXJlcyBJbmdyZXNzIHJlc291cmNlcyB0byBiZSBIVFRQUyBvbmx5LiAgSW5ncmVzcyByZXNvdXJjZXMgbXVzdAogICAgICAgICAgICBpbmNsdWRlIHRoZSBga3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHBgIGFubm90YXRpb24sIHNldCB0bwogICAgICAgICAgICBgZmFsc2VgLiBCeSBkZWZhdWx0IGEgdmFsaWQgVExTIHt9IGNvbmZpZ3VyYXRpb24gaXMgcmVxdWlyZWQsIHRoaXMKICAgICAgICAgICAgY2FuIGJlIG1hZGUgb3B0aW9uYWwgYnkgc2V0dGluZyB0aGUgYHRsc09wdGlvbmFsYCBwYXJhbWV0ZXIgdG8KICAgICAgICAgICAgYHRydWVgLgogICAgICAgICAgcHJvcGVydGllczoKICAgICAgICAgICAgdGxzT3B0aW9uYWw6CiAgICAgICAgICAgICAgdHlwZTogYm9vbGVhbgogICAgICAgICAgICAgIGRlc2NyaXB0aW9uOiAiV2hlbiBzZXQgdG8gYHRydWVgIHRoZSBUTFMge30gaXMgb3B0aW9uYWwsIGRlZmF1bHRzCiAgICAgICAgICAgICAgdG8gZmFsc2UuIgogIHRhcmdldHM6CiAgICAtIHRhcmdldDogYWRtaXNzaW9uLms4cy5nYXRla2VlcGVyLnNoCiAgICAgIHJlZ286IHwKICAgICAgICBwYWNrYWdlIGs4c2h0dHBzb25seQoKICAgICAgICB2aW9sYXRpb25beyJtc2ciOiBtc2d9XSB7CiAgICAgICAgICBpbnB1dC5yZXZpZXcub2JqZWN0LmtpbmQgPT0gIkluZ3Jlc3MiCiAgICAgICAgICByZWdleC5tYXRjaCgiXihleHRlbnNpb25zfG5ldHdvcmtpbmcuazhzLmlvKS8iLCBpbnB1dC5yZXZpZXcub2JqZWN0LmFwaVZlcnNpb24pCiAgICAgICAgICBpbmdyZXNzIDo9IGlucHV0LnJldmlldy5vYmplY3QKICAgICAgICAgIG5vdCBodHRwc19jb21wbGV0ZShpbmdyZXNzKQogICAgICAgICAgbm90IHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiB0bHMgY29uZmlndXJhdGlvbiBhbmQgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGFyZSByZXF1aXJlZCBmb3IgJXYiLCBbaW5ncmVzcy5tZXRhZGF0YS5uYW1lXSkKICAgICAgICB9CgogICAgICAgIHZpb2xhdGlvblt7Im1zZyI6IG1zZ31dIHsKICAgICAgICAgIGlucHV0LnJldmlldy5vYmplY3Qua2luZCA9PSAiSW5ncmVzcyIKICAgICAgICAgIHJlZ2V4Lm1hdGNoKCJeKGV4dGVuc2lvbnN8bmV0d29ya2luZy5rOHMuaW8pLyIsIGlucHV0LnJldmlldy5vYmplY3QuYXBpVmVyc2lvbikKICAgICAgICAgIGluZ3Jlc3MgOj0gaW5wdXQucmV2aWV3Lm9iamVjdAogICAgICAgICAgbm90IGFubm90YXRpb25fY29tcGxldGUoaW5ncmVzcykKICAgICAgICAgIHRsc19pc19vcHRpb25hbAogICAgICAgICAgbXNnIDo9IHNwcmludGYoIkluZ3Jlc3Mgc2hvdWxkIGJlIGh0dHBzLiBUaGUgYWxsb3ctaHR0cD1mYWxzZSBhbm5vdGF0aW9uIGlzIHJlcXVpcmVkIGZvciAldiIsIFtpbmdyZXNzLm1ldGFkYXRhLm5hbWVdKQogICAgICAgIH0KCiAgICAgICAgaHR0cHNfY29tcGxldGUoaW5ncmVzcykgPSB0cnVlIHsKICAgICAgICAgIGluZ3Jlc3Muc3BlY1sidGxzIl0KICAgICAgICAgIGNvdW50KGluZ3Jlc3Muc3BlYy50bHMpID4gMAogICAgICAgICAgaW5ncmVzcy5tZXRhZGF0YS5hbm5vdGF0aW9uc1sia3ViZXJuZXRlcy5pby9pbmdyZXNzLmFsbG93LWh0dHAiXSA9PSAiZmFsc2UiCiAgICAgICAgfQoKICAgICAgICBhbm5vdGF0aW9uX2NvbXBsZXRlKGluZ3Jlc3MpID0gdHJ1ZSB7CiAgICAgICAgICBpbmdyZXNzLm1ldGFkYXRhLmFubm90YXRpb25zWyJrdWJlcm5ldGVzLmlvL2luZ3Jlc3MuYWxsb3ctaHR0cCJdID09ICJmYWxzZSIKICAgICAgICB9CgogICAgICAgIHRsc19pc19vcHRpb25hbCB7CiAgICAgICAgICBwYXJhbWV0ZXJzIDo9IG9iamVjdC5nZXQoaW5wdXQsICJwYXJhbWV0ZXJzIiwge30pCiAgICAgICAgICBvYmplY3QuZ2V0KHBhcmFtZXRlcnMsICJ0bHNPcHRpb25hbCIsIGZhbHNlKSA9PSB0cnVlCiAgICAgICAgfQ=="
          },
          "apiGroups": [
            "extensions",
            "networking.k8s.io"
          ],
          "kinds": [
            "Ingress"
          ],
          "namespaces": "[parameters('namespaces')]",
          "excludedNamespaces": "[parameters('excludedNamespaces')]",
          "labelSelector": "[parameters('labelSelector')]",
          "values": {
            "tlsOptional": "[parameters('tlsOptional')]"
          }
        }
      }
    },
    "parameters": {
      "effect": {
        "type": "String",
        "metadata": {
          "displayName": "Effect",
          "description": "'audit' allows a non-compliant resource to be created or updated, but flags it as non-compliant. 'deny' blocks the non-compliant resource creation or update. 'disabled' turns off the policy."
        },
        "allowedValues": [
          "audit",
          "deny",
          "disabled"
        ],
        "defaultValue": "audit"
      },
      "excludedNamespaces": {
        "type": "Array",
        "metadata": {
          "displayName": "Namespace exclusions",
          "description": "List of Kubernetes namespaces to exclude from policy evaluation."
        },
        "defaultValue": [
          "kube-system",
          "gatekeeper-system",
          "azure-arc"
        ]
      },
      "namespaces": {
        "type": "Array",
        "metadata": {
          "displayName": "Namespace inclusions",
          "description": "List of Kubernetes namespaces to only include in policy evaluation. An empty list means the policy is applied to all resources in all namespaces."
        },
        "defaultValue": []
      },
      "labelSelector": {
        "type": "Object",
        "metadata": {
          "displayName": "Kubernetes label selector",
          "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
        },
        "defaultValue": {},
        "schema": {
          "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
          "type": "object",
          "properties": {
            "matchLabels": {
              "description": "matchLabels is a map of {key,value} pairs.",
              "type": "object",
              "additionalProperties": {
                "type": "string"
              },
              "minProperties": 1
            },
            "matchExpressions": {
              "description": "matchExpressions is a list of values, a key, and an operator.",
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "key": {
                    "description": "key is the label key that the selector applies to.",
                    "type": "string"
                  },
                  "operator": {
                    "description": "operator represents a key's relationship to a set of values.",
                    "type": "string",
                    "enum": [
                      "In",
                      "NotIn",
                      "Exists",
                      "DoesNotExist"
                    ]
                  },
                  "values": {
                    "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
                    "type": "array",
                    "items": {
                      "type": "string"
                    }
                  }
                },
                "required": [
                  "key",
                  "operator"
                ],
                "additionalProperties": false
              },
              "minItems": 1
            }
          },
          "additionalProperties": false
        }
      },
      "tlsOptional": {
        "type": "Boolean",
        "metadata": {
          "displayName": "TLS Optional",
          "description": "Set to true to make TLS optional"
        }
      }
    }
  }
}

Now you have created a custom Azure Policy for Kubernetes that enforces the HTTPS-only constraint on your Kubernetes cluster. You can apply it by creating a policy definition and assigning it to the management group, subscription, or resource group where the AKS cluster is located.
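A hedged sketch of those two steps with the Azure CLI, assuming you split the policyRule and parameters sections of https-only.json into policyRule.json and parameters.json (the file and definition names are my own). Note that tlsOptional has no default value, so the assignment must supply it:

# Create the custom policy definition
az policy definition create \
  --name 'k8s-https-only' \
  --display-name 'Require HTTPS for Ingress resources' \
  --mode Microsoft.Kubernetes.Data \
  --rules policyRule.json \
  --params parameters.json

# Assign it at the resource group that contains the AKS cluster
az policy assignment create \
  --name 'k8s-https-only' \
  --policy 'k8s-https-only' \
  --resource-group myResourceGroup \
  --params '{ "tlsOptional": { "value": false } }'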

Conclusion

In this article, we discussed how to create a custom Azure Policy for Kubernetes. We showed you how to define a policy in the Open Policy Agent (OPA) policy language and apply it to your Kubernetes cluster. We also showed you how to create a constraint template for the policy and create an Azure Policy for the constraint template. By following these steps, you can create custom policies that enforce specific rules and effects on your Kubernetes resources.

Do you need to check if a private endpoint is connected to an external private link service in Azure or just don't know how to do it?

Check this blog post to learn how to do it: Find cross-tenant private endpoint connections in Azure

This is a copy of the script used in the blog post in case it disappears:

scan-private-endpoints-with-manual-connections.ps1
$ErrorActionPreference = "Stop"

class SubscriptionInformation {
    [string] $SubscriptionID
    [string] $Name
    [string] $TenantID
}

class TenantInformation {
    [string] $TenantID
    [string] $DisplayName
    [string] $DomainName
}

class PrivateEndpointData {
    [string] $ID
    [string] $Name
    [string] $Type
    [string] $Location
    [string] $ResourceGroup
    [string] $SubscriptionName
    [string] $SubscriptionID
    [string] $TenantID
    [string] $TenantDisplayName
    [string] $TenantDomainName
    [string] $TargetResourceId
    [string] $TargetSubscriptionName
    [string] $TargetSubscriptionID
    [string] $TargetTenantID
    [string] $TargetTenantDisplayName
    [string] $TargetTenantDomainName
    [string] $Description
    [string] $Status
    [string] $External
}

$installedModule = Get-Module -Name "Az.ResourceGraph" -ListAvailable
if ($null -eq $installedModule) {
    Install-Module "Az.ResourceGraph" -Scope CurrentUser
}
else {
    Import-Module "Az.ResourceGraph"
}

$kqlQuery = @"
resourcecontainers | where type == 'microsoft.resources/subscriptions'
| project  subscriptionId, name, tenantId
"@

$batchSize = 1000
$skipResult = 0

$subscriptions = @{}

while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {
        $s = [SubscriptionInformation]::new()
        $s.SubscriptionID = $row.subscriptionId
        $s.Name = $row.name
        $s.TenantID = $row.tenantId

        $subscriptions.Add($s.SubscriptionID, $s) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

"Found $($subscriptions.Count) subscriptions"

function Get-SubscriptionInformation($SubscriptionID) {
    if ($subscriptions.ContainsKey($SubscriptionID)) {
        return $subscriptions[$SubscriptionID]
    } 

    Write-Warning "Using fallback subscription information for '$SubscriptionID'"
    $s = [SubscriptionInformation]::new()
    $s.SubscriptionID = $SubscriptionID
    $s.Name = "<unknown>"
    $s.TenantID = [Guid]::Empty.Guid
    return $s
}

$tenantCache = @{}
$subscriptionToTenantCache = @{}

function Get-TenantInformation($TenantID) {
    $domain = $null
    if ($tenantCache.ContainsKey($TenantID)) {
        $domain = $tenantCache[$TenantID]
    } 
    else {
        try {
            $tenantResponse = Invoke-AzRestMethod -Uri "https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='$TenantID')"
            $tenantInformation = ($tenantResponse.Content | ConvertFrom-Json)

            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = $tenantInformation.displayName
            $ti.DomainName = $tenantInformation.defaultDomainName

            $domain = $ti
        }
        catch {
            Write-Warning "Failed to get domain information for '$TenantID'"
        }

        if ([string]::IsNullOrEmpty($domain)) {
            Write-Warning "Using fallback domain information for '$TenantID'"
            $ti = [TenantInformation]::new()
            $ti.TenantID = $TenantID
            $ti.DisplayName = "<unknown>"
            $ti.DomainName = "<unknown>"

            $domain = $ti
        }

        $tenantCache.Add($TenantID, $domain) | Out-Null
    }

    return $domain
}

function Get-TenantFromSubscription($SubscriptionID) {
    $tenant = $null
    if ($subscriptionToTenantCache.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptionToTenantCache[$SubscriptionID]
    }
    elseif ($subscriptions.ContainsKey($SubscriptionID)) {
        $tenant = $subscriptions[$SubscriptionID].TenantID
        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }
    else {
        try {

            $subscriptionResponse = Invoke-AzRestMethod -Path "/subscriptions/$($SubscriptionID)?api-version=2022-12-01"
            $startIndex = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.IndexOf("https://login.windows.net/")
            $tenantID = $subscriptionResponse.Headers.WwwAuthenticate.Parameter.Substring($startIndex + "https://login.windows.net/".Length, 36)

            $tenant = $tenantID
        }
        catch {
            Write-Warning "Failed to get tenant from subscription '$SubscriptionID'"
        }

        if ([string]::IsNullOrEmpty($tenant)) {
            Write-Warning "Using fallback tenant information for '$SubscriptionID'"

            $tenant = [Guid]::Empty.Guid
        }

        $subscriptionToTenantCache.Add($SubscriptionID, $tenant) | Out-Null
    }

    return $tenant
}

$kqlQuery = @"
resources
| where type == "microsoft.network/privateendpoints"
| where isnotnull(properties) and properties contains "manualPrivateLinkServiceConnections"
| where array_length(properties.manualPrivateLinkServiceConnections) > 0
| mv-expand properties.manualPrivateLinkServiceConnections
| extend status = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.status
| extend description = coalesce(properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceConnectionState.description, "")
| extend privateLinkServiceId = properties_manualPrivateLinkServiceConnections.properties.privateLinkServiceId
| extend privateLinkServiceSubscriptionId = tostring(split(privateLinkServiceId, "/")[2])
| project id, name, location, type, resourceGroup, subscriptionId, tenantId, privateLinkServiceId, privateLinkServiceSubscriptionId, status, description
"@

$batchSize = 1000
$skipResult = 0

$privateEndpoints = New-Object System.Collections.ArrayList

while ($true) {

    if ($skipResult -gt 0) {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken -UseTenantScope
    }
    else {
        $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -UseTenantScope
    }

    foreach ($row in $graphResult.data) {

        $si1 = Get-SubscriptionInformation -SubscriptionID $row.SubscriptionID
        $ti1 = Get-TenantInformation -TenantID $row.TenantID

        $si2 = Get-SubscriptionInformation -SubscriptionID $row.PrivateLinkServiceSubscriptionId
        $tenant2 = Get-TenantFromSubscription -SubscriptionID $si2.SubscriptionID
        $ti2 = Get-TenantInformation -TenantID $tenant2

        $peData = [PrivateEndpointData]::new()
        $peData.ID = $row.ID
        $peData.Name = $row.Name
        $peData.Type = $row.Type
        $peData.Location = $row.Location
        $peData.ResourceGroup = $row.ResourceGroup

        $peData.SubscriptionName = $si1.Name
        $peData.SubscriptionID = $si1.SubscriptionID
        $peData.TenantID = $ti1.TenantID
        $peData.TenantDisplayName = $ti1.DisplayName
        $peData.TenantDomainName = $ti1.DomainName

        $peData.TargetResourceId = $row.PrivateLinkServiceId
        $peData.TargetSubscriptionName = $si2.Name
        $peData.TargetSubscriptionID = $si2.SubscriptionID
        $peData.TargetTenantID = $ti2.TenantID
        $peData.TargetTenantDisplayName = $ti2.DisplayName
        $peData.TargetTenantDomainName = $ti2.DomainName

        $peData.Description = $row.Description
        $peData.Status = $row.Status

        if ($ti2.DomainName -eq "MSAzureCloud.onmicrosoft.com") {
            $peData.External = "Managed by Microsoft"
        }
        elseif ($si2.TenantID -eq [Guid]::Empty.Guid) {
            $peData.External = "Yes"
        }
        else {
            $peData.External = "No"
        }

        $privateEndpoints.Add($peData) | Out-Null
    }

    if ($graphResult.data.Count -lt $batchSize) {
        break;
    }
    $skipResult += $batchSize
}

$privateEndpoints | Format-Table
$privateEndpoints | Export-CSV "private-endpoints.csv" -Delimiter ';' -Force

"Found $($privateEndpoints.Count) private endpoints with manual connections"

if ($privateEndpoints.Count -ne 0) {
    Start-Process "private-endpoints.csv"
}
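To run the script, sign in first with an account that can query Azure Resource Graph across the tenant (a minimal sketch; the file name matches the label above):

# Sign in and run the scan
Connect-AzAccount
.\scan-private-endpoints-with-manual-connections.ps1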

Conclusion

Now you know how to check if a private endpoint is connected to an external private link service in Azure.

That's all folks!

How to check if a role permission is good or bad in Azure RBAC

Do you need to check if a role permission is good or bad or just don't know what actions a provider has in Azure RBAC?

Get-AzProviderOperation * is your friend, and you can always export everything to CSV:

Get-AzProviderOperation | Select-Object Operation, OperationName, ProviderNamespace, ResourceName, Description, IsDataAction | Export-Csv AzProviderOperation.csv

This command will give you a list of all the operations that you can perform on Azure resources, including the operation name, provider namespace, resource name, description, and whether it is a data action or not. You can use this information to check if a role permission is good or bad, or to find out what actions a provider has in Azure RBAC.
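For example, to inspect a single provider's operations before judging a role definition (Microsoft.Storage here is just an illustration):

Get-AzProviderOperation -OperationSearchString "Microsoft.Storage/*" |
    Select-Object Operation, OperationName, Description |
    Format-Table -AutoSize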

Script to check if a role permission is good or bad in .tf files

You can use the following script to check if a role permission is good or bad in your Terraform (.tf and .tfvars) files:

<#
.SYNOPSIS
Script to check if a role permission is good or bad in Azure RBAC using Terraform files.

.DESCRIPTION
This script downloads Azure provider operations to a CSV file, reads the CSV file, extracts text from Terraform (.tf and .tfvars) files, and compares the extracted text with the CSV data to find mismatches.

.PARAMETER csvFilePath
The path to the CSV file where Azure provider operations will be downloaded.

.PARAMETER tfFolderPath
The path to the folder containing Terraform (.tf and .tfvars) files.

.PARAMETER DebugMode
Switch to enable debug mode for detailed output.

.EXAMPLE
.\Check-RBAC.ps1 -csvFilePath ".\petete.csv" -tfFolderPath ".\"

.EXAMPLE
.\Check-RBAC.ps1 -csvFilePath ".\petete.csv" -tfFolderPath ".\" -DebugMode

.NOTES
For more information, refer to the following resources:
- Azure RBAC Documentation: https://docs.microsoft.com/en-us/azure/role-based-access-control/
- Get-AzProviderOperation Cmdlet: https://docs.microsoft.com/en-us/powershell/module/az.resources/get-azprovideroperation
- Export-Csv Cmdlet: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/export-csv
#>

param(
    [string]$csvFilePath = ".\petete.csv",
    [string]$tfFolderPath = ".\",
    [switch]$DebugMode
)

# Download petete.csv using Get-AzProviderOperation
function Download-CSV {
    param(
        [string]$filename,
        [switch]$DebugMode
    )
    if ($DebugMode) { Write-Host "Downloading petete.csv using Get-AzProviderOperation" }
    Get-AzProviderOperation | select Operation, OperationName, ProviderNamespace, ResourceName, Description, IsDataAction | Export-Csv -Path $filename -NoTypeInformation
    if ($DebugMode) { Write-Host "CSV file downloaded: $filename" }
}

# Function to read the CSV file
function Read-CSV {
    param(
        [string]$filename,
        [switch]$DebugMode
    )
    if ($DebugMode) { Write-Host "Reading CSV file: $filename" }
    $csv = Import-Csv -Path $filename
    $csvData = $csv | ForEach-Object {
        [PSCustomObject]@{
            Provider = $_.Operation.Split('/')[0].Trim()
            Operation = $_.Operation
            OperationName = $_.OperationName
            ProviderNamespace = $_.ProviderNamespace
            ResourceName = $_.ResourceName
            Description = $_.Description
            IsDataAction = $_.IsDataAction
        }
    }
    if ($DebugMode) { Write-Host "Data read from CSV:"; $csvData | Format-Table -AutoSize }
    return $csvData
}

# Function to extract text from the Terraform files
function Extract-Text-From-TF {
    param(
        [string]$folderPath,
        [switch]$DebugMode
    )
    if ($DebugMode) { Write-Host "Reading TF and TFVARS files in folder: $folderPath" }
    $tfTexts = @()
    $files = Get-ChildItem -Path (Join-Path $folderPath '*') -Include *.tf, *.tfvars
    foreach ($file in $files) {
        $content = Get-Content -Path $file.FullName
        $tfTexts += $content | Select-String -Pattern '"Microsoft\.[^"]*"' -AllMatches | ForEach-Object { $_.Matches.Value.Trim('"').Trim() }
        $tfTexts += $content | Select-String -Pattern '"\*"' -AllMatches | ForEach-Object { $_.Matches.Value.Trim('"').Trim() }
        $tfTexts += $content | Select-String -Pattern '^\s*\*/' -AllMatches | ForEach-Object { $_.Matches.Value.Trim() }
    }
    if ($DebugMode) { Write-Host "Texts extracted from TF and TFVARS files:"; $tfTexts | Format-Table -AutoSize }
    return $tfTexts
}

# Function to compare extracted text with CSV data
function Compare-Text-With-CSV {
    param(
        [array]$csvData,
        [array]$tfTexts,
        [switch]$DebugMode
    )
    $mismatches = @()
    foreach ($tfText in $tfTexts) {
        if ($tfText -eq "*" -or $tfText -match '^\*/') {
            continue
        }
        $tfTextPattern = $tfText -replace '\*', '.*'
        if (-not ($csvData | Where-Object { $_.Operation -match "^$tfTextPattern$" })) {
            $mismatches += $tfText
        }
    }
    if ($DebugMode) { Write-Host "Mismatches found:"; $mismatches | Format-Table -AutoSize }
    return $mismatches
}

# Main script execution
Download-CSV -filename $csvFilePath -DebugMode:$DebugMode
$csvData = Read-CSV -filename $csvFilePath -DebugMode:$DebugMode
$tfTexts = Extract-Text-From-TF -folderPath $tfFolderPath -DebugMode:$DebugMode
$mismatches = Compare-Text-With-CSV -csvData $csvData -tfTexts $tfTexts -DebugMode:$DebugMode

if ($mismatches.Count -eq 0) {
    Write-Host "All extracted texts match the CSV data."
} else {
    Write-Host "Mismatches found:"
    $mismatches | Format-Table -AutoSize
}

This script downloads Azure provider operations to a CSV file, reads the CSV file, extracts text from Terraform files, and compares the extracted text with the CSV data to find mismatches. You can use this script to check if a role permission is good or bad in Azure RBAC using Terraform files.

I hope this post has given you a good introduction to how you can check if a role permission is good or bad in Azure RBAC and how you can use Terraform files to automate this process.

Happy coding! 🚀

Data threat modeling in Azure storage accounts

Info

I apologize in advance if this is a crazy idea and there are any mistakes! I am just trying to learn and share knowledge.

Azure Storage Account is a service that provides scalable, secure, and reliable storage for data. It is used to store data such as blobs, files, tables, and queues. However, it is important to ensure that the data stored in Azure Storage Account is secure and protected from security threats. In this article, we will discuss how to perform data threat modeling in Azure storage accounts.

What is data threat modeling?

Data threat modeling is a process of identifying and analyzing potential threats to data security. It helps organizations understand the risks to their data and develop strategies to mitigate those risks. Data threat modeling involves the following steps:

  1. Identify assets: Identify the data assets stored in Azure storage accounts, such as blobs, files, tables, and queues.
  2. Identify threats: Identify potential threats to the data assets, such as unauthorized access, data breaches, data leakage, malware, phishing attacks, insider threats, and data loss.
  3. Assess risks: Assess the risks associated with each threat, such as the likelihood of the threat occurring and the impact of the threat on the data assets.
  4. Develop mitigation strategies: Develop strategies to mitigate the risks, such as implementing security controls, access controls, encryption, monitoring, and auditing.

By performing data threat modeling, organizations can identify and address security vulnerabilities in Azure storage accounts and protect their data from security threats.

Identify assets in Azure storage accounts

Azure storage accounts can store various types of data assets, including:

  • Blobs: Binary large objects (blobs) are used to store unstructured data, such as images, videos, and documents.
  • Files: Azure file shares are used to store files that are accessed over SMB or NFS, such as configuration files, log files, and shared documents.
  • Tables: Tables are used to store structured data in a tabular format, such as customer information, product information, and transaction data.
  • Queues: Queues are used to store messages for communication between applications, such as task messages, notification messages, and status messages.
  • Disks: Disks are used to store virtual machine disks, such as operating system disks and data disks.

Identifying the data assets stored in Azure storage accounts is the first step in data threat modeling. It helps organizations understand the types of data stored in Azure storage accounts and the potential risks to those data assets.

Identify threats to data in Azure storage accounts

There are several threats to data stored in Azure storage accounts, including:

  • Unauthorized access: Unauthorized users gaining access to Azure storage accounts and stealing data.
  • Data breaches: Data breaches can expose sensitive data stored in Azure storage accounts.
  • Data leakage: Data leakage can occur due to misconfigured access controls or insecure data transfer protocols.
  • Data loss: Data loss can occur due to accidental deletion, corruption, or hardware failure.
  • Ransomware: Ransomware can encrypt data stored in Azure storage accounts and demand a ransom for decryption.
  • DDoS attacks: DDoS attacks can disrupt access to data stored in Azure storage accounts.
  • Phishing attacks: Phishing attacks can trick users into providing their login credentials, which can be used to access and steal data.
  • Malware: Malware can be used to steal data from Azure storage accounts and transfer it to external servers.
  • Insider threats: Employees or contractors with access to sensitive data may intentionally or unintentionally exfiltrate data.
  • Data exfiltration: Unauthorized transfer of data from Azure storage accounts to external servers.

For example, the flow of data exfiltration in Azure storage accounts can be summarized as follows:

sequenceDiagram
    participant User
    participant Azure Storage Account
    participant External Server

    User->>Azure Storage Account: Upload data
    Azure Storage Account->>External Server: Unauthorized transfer of data

In this flow, the user uploads data to the Azure Storage Account, and the data is then transferred to an external server without authorization. This unauthorized transfer of data is known as data exfiltration.

Assess risks to data in Azure storage accounts

Assessing the risks associated with threats to data in Azure storage accounts is an important step in data threat modeling. Risks can be assessed based on the likelihood of the threat occurring and the impact of the threat on the data assets. Risks can be categorized as low, medium, or high based on the likelihood and impact of the threat.

For example, the risk of unauthorized access to Azure storage accounts may be categorized as high if the likelihood of unauthorized access is high and the impact of unauthorized access on the data assets is high. Similarly, the risk of data leakage may be categorized as medium if the likelihood of data leakage is medium and the impact of data leakage on the data assets is medium.

By assessing risks to data in Azure storage accounts, organizations can prioritize security measures and develop strategies to mitigate the risks.

For example, the risk of data exfiltration in Azure storage accounts can be assessed as follows:

pie
    title Data Exfiltration Risk Assessment
    "Unauthorized Access" : 30
    "Data Breaches" : 20
    "Data Leakage" : 15
    "Malware" : 10
    "Phishing Attacks" : 10
    "Insider Threats" : 15

Develop mitigation strategies for data in Azure storage accounts

Developing mitigation strategies is an essential step in data threat modeling. Mitigation strategies help organizations protect their data assets from security threats and reduce the risks associated with those threats. Mitigation strategies could include the following (a CLI sketch applying a few of them follows the list):

  1. Implement access controls: Implement access controls to restrict access to Azure storage accounts based on user roles and permissions.
  2. Encrypt data: Encrypt data stored in Azure storage accounts to protect it from unauthorized access.
  3. Monitor and audit access: Monitor and audit access to Azure storage accounts to detect unauthorized access and data exfiltration.
  4. Implement security controls: Implement security controls, such as firewalls, network security groups, and intrusion detection systems, to protect data in Azure storage accounts.
  5. Use secure transfer protocols: Use secure transfer protocols, such as HTTPS, to transfer data to and from Azure storage accounts.
  6. Implement multi-factor authentication: Implement multi-factor authentication to protect user accounts from unauthorized access.
  7. Train employees: Train employees on data security best practices to prevent data exfiltration and other security threats.
  8. Backup data: Backup data stored in Azure storage accounts to prevent data loss due to accidental deletion or corruption.
  9. Update software: Keep software and applications up to date to protect data stored in Azure storage accounts from security vulnerabilities.
  10. Implement data loss prevention (DLP) policies: Implement DLP policies to prevent data leakage and unauthorized transfer of data from Azure storage accounts.
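Several of these controls can be applied directly on a storage account. Below is a hedged Azure CLI sketch covering strategies 1 and 5 (access restrictions and secure transfer); the account and resource group names are placeholders:

# Enforce HTTPS-only transfers with a modern TLS floor (strategy 5)
az storage account update \
  --name mystorageaccount \
  --resource-group myResourceGroup \
  --https-only true \
  --min-tls-version TLS1_2

# Tighten access: block anonymous blob access and deny network traffic by default (strategy 1)
az storage account update \
  --name mystorageaccount \
  --resource-group myResourceGroup \
  --allow-blob-public-access false \
  --default-action Deny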

As it is not an easy task, Microsoft provides us with tools for this. If you work with a security framework, you can always use the MCSB (Microsoft cloud security benchmark), a set of guidelines and best practices for securing Azure services, including Azure storage accounts. The MCSB provides recommendations for securing Azure storage accounts, such as enabling encryption, implementing access controls, monitoring access, and auditing activities:

| Control Domain | ASB Control ID | ASB Control Title | Responsibility | Feature Name |
|---|---|---|---|---|
| Asset Management | AM-2 | Use only approved services | Customer | Azure Policy Support |
| Backup and recovery | BR-1 | Ensure regular automated backups | Customer | Azure Backup |
| Backup and recovery | BR-1 | Ensure regular automated backups | Customer | Service Native Backup Capability |
| Data Protection | DP-1 | Discover, classify, and label sensitive data | Customer | Sensitive Data Discovery and Classification |
| Data Protection | DP-2 | Monitor anomalies and threats targeting sensitive data | Customer | Data Leakage/Loss Prevention |
| Data Protection | DP-3 | Encrypt sensitive data in transit | Microsoft | Data in Transit Encryption |
| Data Protection | DP-4 | Enable data at rest encryption by default | Microsoft | Data at Rest Encryption Using Platform Keys |
| Data Protection | DP-5 | Use customer-managed key option in data at rest encryption when required | Customer | Data at Rest Encryption Using CMK |
| Data Protection | DP-6 | Use a secure key management process | Customer | Key Management in Azure Key Vault |
| Identity Management | IM-1 | Use centralized identity and authentication system | Microsoft | Azure AD Authentication Required for Data Plane Access |
| Identity Management | IM-1 | Use centralized identity and authentication system | Customer | Local Authentication Methods for Data Plane Access |
| Identity Management | IM-3 | Manage application identities securely and automatically | Customer | Managed Identities |
| Identity Management | IM-3 | Manage application identities securely and automatically | Customer | Service Principals |
| Identity Management | IM-7 | Restrict resource access based on conditions | Customer | Conditional Access for Data Plane |
| Identity Management | IM-8 | Restrict the exposure of credential and secrets | Customer | Service Credential and Secrets Support Integration and Storage in Azure Key Vault |
| Logging and threat detection | LT-1 | Enable threat detection capabilities | Customer | Microsoft Defender for Service / Product Offering |
| Logging and threat detection | LT-4 | Enable network logging for security investigation | Customer | Azure Resource Logs |
| Network Security | NS-2 | Secure cloud services with network controls | Customer | Disable Public Network Access |
| Network Security | NS-2 | Secure cloud services with network controls | Customer | Azure Private Link |
| Privileged Access | PA-7 | Follow just enough administration (least privilege) principle | Customer | Azure RBAC for Data Plane |
| Privileged Access | PA-8 | Choose approval process for third-party support | Customer | Customer Lockbox |
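
Many of these controls can be enforced at scale with Azure Policy. As a hedged sketch, the following assigns the built-in definition for secure transfer at the default (subscription) scope; the definition is looked up by display name so no GUID needs to be hard-coded:

# Find the built-in policy definition by its display name
defName=$(az policy definition list --query "[?displayName=='Secure transfer to storage accounts should be enabled'].name" -o tsv)

# Assign it at the default (subscription) scope
az policy assignment create --name require-secure-transfer --policy "$defName"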

Parts of the MCSB can be complemented with the Azure Well-Architected Framework, which provides guidance on best practices for designing and implementing secure, scalable, and reliable cloud solutions. The Well-Architected Framework includes security best practices for Azure storage accounts, such as security controls, access controls, encryption, monitoring, and auditing:

  1. Enable Microsoft Defender for Storage for all your storage accounts: Microsoft Defender for Storage provides advanced threat protection for Azure storage accounts. It helps detect and respond to security threats in real time.
  2. Turn on soft delete for blob data: Soft delete helps protect your blob data from accidental deletion. It allows you to recover deleted data within a specified retention period.
  3. Use Microsoft Entra ID to authorize access to blob data: Microsoft Entra ID provides fine-grained access control for Azure storage accounts. It allows you to define and enforce access policies based on user roles and permissions.
  4. Consider the principle of least privilege: When assigning permissions to a Microsoft Entra security principal through Azure RBAC, follow the principle of least privilege. Only grant the minimum permissions required to perform the necessary tasks.
  5. Use managed identities to access blob and queue data: Managed identities provide a secure way to access Azure storage accounts without storing credentials in your code.
  6. Use blob versioning or immutable blobs: Blob versioning and immutable blobs help protect your business-critical data from accidental deletion or modification.
  7. Restrict default internet access for storage accounts: Limit default internet access to Azure storage accounts to prevent unauthorized access.
  8. Enable firewall rules: Use firewall rules to restrict network access to Azure storage accounts. Only allow trusted IP addresses to access the storage account.
  9. Limit network access to specific networks: Limit network access to specific networks or IP ranges to prevent unauthorized access.
  10. Allow trusted Microsoft services to access the storage account: Allow only trusted Microsoft services to access the storage account to prevent unauthorized access.
  11. Enable the Secure transfer required option: Enable the Secure transfer required option on all your storage accounts to enforce secure connections.
  12. Limit shared access signature (SAS) tokens to HTTPS connections only: Limiting SAS tokens to HTTPS connections prevents the token from being intercepted in transit.
  13. Avoid using Shared Key authorization: Avoid using Shared Key authorization to access storage accounts. Use Microsoft Entra ID or SAS tokens instead (see the sketch after this list).
  14. Regenerate your account keys periodically: Regenerate your account keys periodically to prevent unauthorized access.
  15. Create a revocation plan for SAS tokens: Create a revocation plan and have it in place for any SAS tokens that you issue to clients. This will help you revoke access to the storage account if necessary.
  16. Use near-term expiration times on SAS tokens: Use near-term expiration times on an ad hoc SAS, service SAS, or account SAS to limit the window in which a leaked token can be abused.
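
Some of these recommendations can also be applied from the Azure CLI. A minimal sketch with hypothetical names, covering recommendations 12, 13, and 16 (the expiry uses GNU date syntax):

# Disallow Shared Key authorization so the data plane requires Microsoft Entra ID (recommendation 13)
az storage account update --name mystorageaccount --resource-group myResourceGroup --allow-shared-key-access false

# Generate a short-lived, HTTPS-only user delegation SAS for a single blob (recommendations 12 and 16)
az storage blob generate-sas --account-name mystorageaccount --container-name mycontainer --name report.csv --permissions r --expiry $(date -u -d "+1 hour" '+%Y-%m-%dT%H:%MZ') --https-only --auth-mode login --as-user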

Mixed strategies for data protection in Azure storage accounts

Diagram of the mixed strategies for data protection in Azure storage accounts:

graph LR
    A[Asset Management] -->B(AM-2)
    B --> C[Use only approved services]
    C --> D[Azure Policy] 
    E[Backup and recovery] -->F(BR-1)
    F --> G[Ensure regular automated backups]
    G --> H[Azure Backup]
    G --> I[Service Native Backup Capability]
    I --> I1["Azure Storage Account Configuration"]
    I1 --> I11["Turn on soft delete for blob data"]
    I1 --> I12["Use blob versioning or immutable blobs"]
graph LR
    J[Data Protection] -->K(DP-1)
    K --> L[Discover, classify, and label sensitive data]    
    L --> M[Sensitive Data Discovery and Classification]
    M --> M1["Microsoft Purview"]
    J --> N(DP-2)
    N --> O[Monitor anomalies and threats targeting sensitive data]
    O --> P[Data Leakage/Loss Prevention]
    P --> P1["Microsoft Defender for Storage"]
    J --> Q(DP-3)
    Q --> R[Encrypt sensitive data in transit]
    R --> S[Data in Transit Encryption]
    S --> S1["Azure Storage Account Configuration"]
    S1 --> S2["Enforce minimum TLS version"]
    J --> T(DP-4)
    T --> U[Enable data at rest encryption by default]
    U --> V[Data at Rest Encryption Using Platform Keys]
    V --> WW["Azure Storage Account Configuration"]    
    J --> W(DP-5)
    W --> X[Use customer-managed key option in data at rest encryption when required]
    X --> Y[Data at Rest Encryption Using CMK]
    Y --> WW["Azure Storage Account Configuration"]    
    J --> Z(DP-6)
    Z --> AA[Use a secure key management process]
    AA --> AB[Key Management in Azure Key Vault]
    AB --> AC["DEPRECATED"]
graph LR  
    AC[Identity Management] -->AD(IM-1)
    AD --> AE[Use centralized identity and authentication system]
    AE --> AE1["Microsoft Entra ID"]
    AE --> AF[Microsoft Entra ID Authentication Required for Data Plane Access]
    AF --> AF1["Azure Storage Account Configuration"]
    AF1 --> AF2["Disable Allow Shared Key authorization"]
    AD --> AG[Local Authentication Methods for Data Plane Access]
    AG --> AG1["Azure Storage Account Configuration"]
    AG1 --> AG2["Don't use SFTP if you don't need it"]
    AC --> AH(IM-3)
    AH --> AI[Manage application identities securely and automatically]
    AI --> AJ[Managed Identities]
    AI --> AK[Service Principals]
    AK --> AK1["Rotate or regenerate service principal credentials"]
    AC --> AL(IM-7)
    AL --> AM[Restrict resource access based on conditions]
    AM --> AN[Microsoft Entra Conditional Access for Data Plane]
    AC --> AO(IM-8)
    AO --> AP[Restrict the exposure of credential and secrets]    
    AP --> AQ[Service Credential and Secrets Support Integration and Storage in Azure Key Vault]    
    AQ --> AK1
    click AK1 "https://github.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell" "Open this in a new tab" _blank

graph LR
AR[Logging and threat detection] -->AS(LT-1)
    AS --> AT[Enable threat detection capabilities]
    AT --> AU[Microsoft Defender for Service / Product Offering]
    AU --> AU1["Microsoft Defender for Storage"]
    AR --> AV(LT-4)
    AV --> AW[Enable network logging for security investigation]
    AW --> AX[Azure Resource Logs]
    AX --> AX1["Azure Monitor"]
    AX --> AX2["Azure Activity Log"]
    click AU1 "https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-storage-introduction" "Open this in a new tab" _blank
graph LR
    AY[Network Security] -->AZ(NS-2)
    AZ --> BA[Secure cloud services with network controls]
    BA --> BB["Azure Storage Account Configuration"]
    BB --> BB1[Disable Public Network Access]
    BB --> BB2[Allow trusted Microsoft services to access the storage account]
    BA --> BC[Azure Private Link]
    BC --> BC1["Azure Private Endpoint"]
    BA --> BD[Azure Network]
    BD --> BD1["Azure Service Endpoint"]
    BA --> BE["Network Security Perimeter"]

graph LR
    BD[Privileged Access] -->BE(PA-7)
    BE --> BF["Follow just enough administration (least privilege) principle"]
    BF --> BG[Azure RBAC for Data Plane]
    BG --> BG1["Azure RBAC"]
    BG1 --> BG2["Azure RBAC Roles"]
    BD --> BH(PA-8)
    BH --> BI[Choose approval process for third-party support]
    BI --> BJ[Customer Lockbox]
    click BG2 "https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles/storage" "Open this in a new tab" _blank

Example of mixed strategies for data protection in Azure storage accounts

The following example illustrates how to implement mixed strategies for data protection in Azure storage accounts:
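
A minimal sketch with the Azure CLI, assuming a hypothetical account mystorageaccount in myResourceGroup and an existing VNet MyVNet with subnet MySubnet, combining network, identity, recovery, and threat-detection controls from the diagrams above:

# Network (NS-2): deny public access and expose the blob endpoint through a private endpoint
az storage account update --name mystorageaccount --resource-group myResourceGroup --public-network-access Disabled
storageId=$(az storage account show --name mystorageaccount --resource-group myResourceGroup --query id -o tsv)
az network private-endpoint create --name pe-storage --resource-group myResourceGroup --vnet-name MyVNet --subnet MySubnet --private-connection-resource-id "$storageId" --group-id blob --connection-name pe-storage-conn

# Identity (IM-1): require Microsoft Entra ID on the data plane
az storage account update --name mystorageaccount --resource-group myResourceGroup --allow-shared-key-access false

# Backup and recovery (BR-1): turn on blob versioning
az storage account blob-service-properties update --account-name mystorageaccount --resource-group myResourceGroup --enable-versioning true

# Logging and threat detection (LT-1): enable Microsoft Defender for Storage at subscription level
az security pricing create --name StorageAccounts --tier standard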

Conclusion

In conclusion, data threat modeling is an important process for identifying and addressing security vulnerabilities in Azure storage accounts. By identifying assets, threats, and risks, and by developing mitigation strategies, organizations can reduce the likelihood of a breach and protect the integrity of their data assets. Following the best practices and security measures described above helps organizations prevent and detect data threats in Azure storage accounts.

Tagging best practices in Azure

In this post, I will show you some best practices for tagging resources in Azure.

What are tags?

Tags are key-value pairs that you can assign to your Azure resources to organize and manage them more effectively. Tags allow you to categorize resources in different ways, such as by environment, owner, or cost center, and to apply policies and automation based on these categories.

If you don't know anything about tags, you can read the official documentation to learn more about them.

Why use tags?

There are several reasons to use tags:

  • Organization: Tags allow you to organize your resources in a way that makes sense for your organization. You can use tags to group resources by environment, project, or department, making it easier to manage and monitor them.

  • Cost management: Tags allow you to track and manage costs more effectively. You can use tags to identify resources that are part of a specific project or department, and to allocate costs accordingly.

  • Automation: Tags allow you to automate tasks based on resource categories. You can use tags to apply policies, trigger alerts, or enforce naming conventions, making it easier to manage your resources at scale.

Best practices for tagging resources in Azure

Here are some best practices for tagging resources in Azure:

  • Use consistent naming conventions: Define a set of standard tags that you will use across all your resources. This will make it easier to search for and manage resources, and to apply policies and automation consistently.

  • Apply tags at resource creation: Apply tags to resources when you create them, rather than adding them later. This will ensure that all resources are tagged correctly from the start, and will help you avoid missing or incorrect tags.

  • Use tags to track costs: Use tags to track costs by project, department, or environment. This will help you allocate costs more effectively, and will make it easier to identify resources that are not being used or are costing more than expected.

  • Define tags by hierarchy: Define tags in a hierarchy that makes sense for your organization, from more general tags at the subscription level to more specific ones at the resource group level.

  • Use inherited tags: Use inherited tags to apply tags to resources automatically based on their parent resources. This will help you ensure that all resources are tagged consistently, and will reduce the risk of missing or incorrect tags. Azure Policy definitions exist to enforce inherited tags; you can find them in Assign policy definitions for tag compliance (see the sketch after this list).

  • Don't use tags for policy filtering: If you use Azure Policy, it's highly recommended not to filter on tags in policy rules that relate to security settings; when you filter on tags, resources without the tag show up as compliant. Use Azure Policy exemptions or exclusions instead.

  • Don't use tags to replace naming conventions: Tags are not a replacement for naming conventions. Use tags to categorize resources, and use naming conventions to identify resources uniquely.

  • Use tags for automation: Use tags to trigger automation tasks, such as scaling, backup, or monitoring. You can use tags to define policies that enforce specific actions based on resource categories.

  • Don't go crazy adding tags: Don't add too many tags to your resources. Keep it simple and use tags that are meaningful and useful; too many tags become difficult to manage. You can begin with a small set of tags and expand as needed, for example: Minimum Suggested Tags

  • Not all Azure services support tags: Keep in mind that not all Azure services support tags. You can check Tag support for Azure resources to see which services do.
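
A minimal sketch of these practices with the Azure CLI, assuming hypothetical names; note that the built-in "inherit a tag" definition uses the modify effect, so its assignment needs a managed identity:

# Apply tags at creation time, at the resource group level
az group create --name rg-demo --location westeurope --tags Environment=Dev CostCenter=CC1234 Owner=alice@contoso.com

# Look up the built-in inherit-tag definition by display name
defName=$(az policy definition list --query "[?displayName=='Inherit a tag from the resource group if missing'].name" -o tsv)

# Assign it so child resources inherit CostCenter automatically
az policy assignment create --name inherit-costcenter --policy "$defName" --scope $(az group show --name rg-demo --query id -o tsv) --params '{"tagName": {"value": "CostCenter"}}' --mi-system-assigned --location westeurope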

Conclusion

By using tags, you can organize and manage your resources more effectively, track and manage costs more efficiently, and automate tasks based on resource categories. I hope this post has given you a good introduction to tagging best practices in Azure and how you can use tags to optimize your cloud environment.

Restrict managed disks from being imported or exported

In this post, I will show you how to restrict managed disks from being imported or exported in Azure.

What are managed disks?

Azure Managed Disks are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. Managed Disks are designed for high availability and durability, and they provide a simple and scalable way to manage your storage.

If you don't know anything about Azure Managed Disks, grab a cup of coffee (it will take you a while) and read the official documentation to learn more about them.

Why restrict managed disks from being imported or exported?

There are several reasons to restrict managed disks from being imported or exported:

  • Security: By restricting managed disks from being imported or exported, you can reduce the risk of unauthorized access to your data.
  • Compliance: By restricting managed disks from being imported or exported, you can help ensure that your organization complies with data protection regulations.

How to restrict managed disks from being imported or exported

At deployment time

An example with the Azure CLI:

Create a managed disk with public network access disabled

## Create a managed disk with public network access disabled
az disk create --resource-group myResourceGroup --name myDisk --size-gb 128 --location eastus --sku Standard_LRS --no-wait --public-network-access Disabled

Create a managed disk with public network access disabled and private endpoint enabled

Follow Azure CLI - Restrict import/export access for managed disks with Private Links
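
A minimal sketch of that flow, with hypothetical resource names: a disk access resource is created first, and the disk is then bound to it so that import and export only work over Private Link:

# Create the disk access resource that the private endpoint will target
az disk-access create --name myDiskAccess --resource-group myResourceGroup --location eastus

# Create a disk that only allows import/export through that disk access
az disk create --resource-group myResourceGroup --name myDisk --size-gb 128 --location eastus --sku Standard_LRS --network-access-policy AllowPrivate --disk-access myDiskAccess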

At Scale

If you want to restrict managed disks from being imported or exported, you can use Azure Policy to enforce this restriction. Azure Policy is a service in Azure that you can use to create, assign, and manage policies that enforce rules and effects over your resources. By using Azure Policy, you can ensure that your resources comply with your organization's standards and service-level agreements.

To restrict managed disks from being imported or exported using Azure Policy, you can use or create a policy definition that specifies the conditions under which managed disks can be imported or exported. You can then assign this policy definition to a scope, such as a management group, subscription, or resource group, to enforce the restriction across your resources.

In this case, there is a built-in policy definition that restricts managed disks from being imported or exported: Configure disk access resources with private endpoints.
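
Assigning that built-in definition from the CLI might look like the following sketch. It is a DeployIfNotExists policy, so the assignment needs a managed identity, and you should inspect the definition for its required parameters before assigning it:

# Find the built-in definition and inspect its parameters
defName=$(az policy definition list --query "[?displayName=='Configure disk access resources with private endpoints'].name" -o tsv)
az policy definition show --name "$defName" --query parameters

# Assign it with a system-assigned identity, supplying the parameters listed by the previous command
az policy assignment create --name disk-access-private-endpoints --policy "$defName" --mi-system-assigned --location eastus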

Conclusion

In this post, I showed you how to restrict managed disks from being imported or exported in Azure. By restricting managed disks from being imported or exported, you can reduce the risk of unauthorized access to your data and help ensure that your organization complies with data protection regulations.

Curiously, restricting managed disks from being imported or exported is not a compliance check in the Microsoft cloud security benchmark, but it is a good practice to follow.

Securely connect Power BI to data sources with a VNet data gateway

In this post, I will show you how to securely connect Power BI to your Azure data services using a Virtual Network (VNet) data gateway.

What is a Virtual Network (VNet) data gateway?

The virtual network (VNet) data gateway helps you connect from Microsoft Cloud services to your Azure data services within a VNet without the need for an on-premises data gateway. The VNet data gateway securely communicates with the data source, executes queries, and transmits results back to the service.

The Role of a VNet Data Gateway

A VNet data gateway acts as a bridge that allows for the secure flow of data between the Power Platform and external data sources that reside within a virtual network. This includes services such as SQL databases, file storage solutions, and other cloud-based resources. The gateway ensures that data can be transferred securely and reliably, without exposing the network to potential threats or breaches.

How It Works

graph LR
    User([User]) -->|Semantic Models| SM[Semantic Models]
    User -->|"Dataflows (Gen2)"| DF["Dataflows (Gen2)"]
    User -->|Paginated Reports| PR[Paginated Reports]
    SM --> PPVS[Power Platform VNET Service]
    DF --> PPVS
    PR --> PPVS
    PPVS --> MCVG["Managed Container<br>for VNet Gateway"]
    MCVG -->|Interfaces with| SQLDB[(SQL Databases)]
    MCVG -->|Interfaces with| FS[(File Storage)]
    MCVG -->|Interfaces with| CS[(Cloud Services)]
    MCVG -.->|Secured by| SEC{{Security Features}}
    subgraph VNET_123
        SQLDB
        FS
        CS
        SEC
    end
    classDef filled fill:#f96,stroke:#333,stroke-width:2px;
    classDef user fill:#bbf,stroke:#f66,stroke-width:2px,stroke-dasharray: 5, 5;
    class User user
    class SM,DF,PR,PPVS,MCVG,SQLDB,FS,CS,SEC filled

The process begins with a user leveraging Power Platform services like Semantic Models, Dataflows (Gen2), and Paginated Reports. These services are designed to handle various data-related tasks, from analysis to visualization. They connect to the Power Platform VNET Service, which is the heart of the operation, orchestrating the flow of data through the managed container for the VNet gateway.

This managed container is a secure environment specifically designed for the VNet gateway’s operations. It’s isolated from the public internet, ensuring that the data remains protected within the confines of the virtual network. Within this secure environment, the VNet gateway interfaces with the necessary external resources, such as SQL databases and cloud storage, all while maintaining strict security protocols symbolized by the padlock icon in our diagram.

If you need to connect to services in other VNets, you can use VNet peering to connect them, and you can reach on-premises resources using ExpressRoute or a VPN solution, as sketched below.
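
For example, peering the gateway's VNet with another VNet might look like this (a sketch with hypothetical names; a peering must also be created in the opposite direction):

# Peer MyVNet with RemoteVNet so the gateway can reach services in the second VNet
az network vnet peering create --name MyVNet-to-RemoteVNet --resource-group MyResourceGroup --vnet-name MyVNet --remote-vnet RemoteVNet --allow-vnet-access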

The Benefits

By utilizing a VNet data gateway, organizations can enjoy several benefits:

  • Enhanced Security: The gateway provides a secure path for data, safeguarding sensitive information and complying with organizational security policies.
  • Network Isolation: The managed container and the virtual network setup ensure that the data does not traverse public internet spaces, reducing exposure to vulnerabilities.
  • Seamless Integration: The gateway facilitates smooth integration between Power Platform services and external data sources, enabling efficient data processing and analysis.

Getting Started

To set up a VNet data gateway, follow these steps:

Register Microsoft.PowerPlatform as a resource provider

Before you can create a VNet data gateway, you need to register the Microsoft.PowerPlatform resource provider. This can be done using the Azure portal or the Azure CLI.

az provider register --namespace Microsoft.PowerPlatform
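
Registration can take a few minutes; you can check its state with:

az provider show --namespace Microsoft.PowerPlatform --query registrationState -o tsv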

Associate the subnet to Microsoft Power Platform

Create a VNet in your Azure subscription and a subnet where the VNet data gateway will be deployed. Next, delegate the subnet to the Microsoft.PowerPlatform/vnetaccesslinks service.

Note

  • This subnet can't be shared with other services.
  • Five IP addresses are reserved in the subnet for basic functionality. Reserve additional IP addresses for the VNet data gateway, and add more for any future gateways.
  • You need a role with the Microsoft.Network/virtualNetworks/subnets/join/action permission

This can be done using the Azure portal or the Azure CLI.

# Create a VNet with address prefix 10.0.0.0/24
az network vnet create --name MyVNet --resource-group MyResourceGroup --location eastus --address-prefixes 10.0.0.0/24

# Create a Network Security Group
az network nsg create --name MyNsg --resource-group MyResourceGroup --location eastus

# Create a subnet delegated to Microsoft.PowerPlatform/vnetaccesslinks and associate the NSG
az network vnet subnet create --name MySubnet --vnet-name MyVNet --resource-group MyResourceGroup --address-prefixes 10.0.0.0/27 --network-security-group MyNsg --delegations Microsoft.PowerPlatform/vnetaccesslinks
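
To verify that the delegation was applied, query the subnet; it should return Microsoft.PowerPlatform/vnetaccesslinks:

az network vnet subnet show --name MySubnet --vnet-name MyVNet --resource-group MyResourceGroup --query "delegations[].serviceName" -o tsv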

Create a VNet data gateway

Note

A Microsoft Power Platform user with the Microsoft.Network/virtualNetworks/subnets/join/action permission on the VNet is required. The Network Contributor role is not necessary.

  1. Sign in to the Power BI homepage.
  2. In the top navigation bar, select the settings gear icon on the right.
  3. From the drop down, select the Manage connections and gateways page, in Resources and extensions.
  4. Select Create a virtual network data gateway.
  5. Select the license capacity, subscription, resource group, VNet, and subnet. Only subnets that are delegated to Microsoft Power Platform are displayed in the drop-down list. VNet data gateways require a Power BI Premium capacity license (A4 SKU or higher, or any P SKU) or a Fabric license (any SKU).
  6. By default, a unique name is provided for the data gateway, but you can optionally update it.
  7. Select Save. The VNet data gateway is now displayed in your Virtual network data gateways tab. A VNet data gateway is a managed gateway that can be used to control access to this resource for Power Platform users.

Conclusion

The VNet data gateway is a powerful tool that enables secure data transfer between the Power Platform and external data sources residing within a virtual network. By leveraging this gateway, organizations can ensure that their data remains protected and compliant with security standards, all while facilitating seamless integration and analysis of data. If you are looking to enhance the security and reliability of your data connections, consider implementing a VNet data gateway in your environment.