

Repository Strategy: How to test Branching Strategy in local repository

If you don't want to test on GitHub, GitLab, or Azure DevOps, you can test on your local machine.

Step 1: Create a new local bare repository

To create a new local bare repository, open a terminal window and run the following commands:

mkdir localrepo
cd localrepo
git init --bare my-repo.git

These commands create a new directory called localrepo and initialize a new bare repository called my-repo.git inside it.
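A bare repository holds only Git's internal data (HEAD, config, objects, refs) and no working tree. A quick way to confirm this, sketched in a throwaway directory:

```shell
# Create a bare repository in a temporary directory and inspect its contents.
tmp=$(mktemp -d)
cd "$tmp"
git init --bare my-repo.git
ls my-repo.git    # HEAD, config, objects, refs, ... but no working files
```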

Step 2: Create a local repository

To create a new local repository, open a terminal window and run the following commands:

mkdir my-repo
cd my-repo
git init

These commands create a new directory called my-repo and initialize a new repository inside it.

Step 3: Add the remote repository

To add the remote repository to your local repository, run the following command:

git remote add origin ../my-repo.git

In my case, I have used an absolute path, c:\users\myuser\localrepo\my-repo.git:

git remote add origin c:\users\myuser\localrepo\my-repo.git

This command adds the remote repository as the origin remote.
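You can verify the wiring at any time with git remote -v. A self-contained sketch of the whole layout (paths are illustrative):

```shell
# Recreate the layout in a temporary directory and check the remote wiring.
tmp=$(mktemp -d)
cd "$tmp"
git init --bare my-repo.git           # the local "remote"
mkdir my-repo
cd my-repo
git init
git remote add origin ../my-repo.git
git remote -v                         # origin  ../my-repo.git (fetch) and (push)
```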

Step 4: Create a new file, make the first commit, and push

To create a new file in your local repository, run the following command:

echo "Hello, World!" > hello.txt

This command creates a new file called hello.txt with the content Hello, World!.

To make the first commit to your local repository, run the following commands:

git add hello.txt
git commit -m "Initial commit"

These commands stage the hello.txt file and commit it to the repository with the message Initial commit.

To push the changes to the remote repository, run the following command:

git push -u origin master

This command pushes the changes to the master branch of the remote repository. Note that recent versions of Git may name the initial branch main instead of master; push the branch name that git branch shows, or rename it first with git branch -M master.

Step 5: Create a new branch and push it to the remote repository

To create a new branch in your local repository, run the following command:

git checkout -b feature-branch

This command creates a new branch called feature-branch and switches to it. To push the new branch to the remote repository, run:

git push -u origin feature-branch
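Putting Steps 1 through 5 together, the whole exercise can be run end to end in a disposable sandbox (the user identity and branch names below are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init --bare my-repo.git                     # Step 1: the local "remote"
git init my-repo                                # Step 2: the working repository
cd my-repo
git remote add origin ../my-repo.git            # Step 3: wire them together

echo "Hello, World!" > hello.txt                # Step 4: first commit and push
git add hello.txt
git -c user.name=dev -c user.email=dev@example.com commit -m "Initial commit"
git push -u origin HEAD                         # pushes main or master, whichever git created

git checkout -b feature-branch                  # Step 5: branch and push it
git -c user.name=dev -c user.email=dev@example.com commit --allow-empty -m "Feature work"
git push -u origin feature-branch               # the branch now exists on the "remote"
```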

Conclusion

By following these steps, you can test your branching strategy in a local repository before pushing changes to a remote repository. This allows you to experiment with different branching strategies and workflows without affecting your production codebase.

Configure your Teams meetings options to improve your collaboration experience

Microsoft Teams is a powerful collaboration tool that allows you to communicate and work with your team members from anywhere in the world. One of the key features of Teams is the ability to schedule and host meetings with your colleagues, clients, or partners.

In this blog post, we will show you how to configure your Teams meetings options to improve your collaboration experience and make your meetings more productive.

What are Teams meetings options?

Teams meetings options are settings that allow you to control various aspects of your meetings, such as who can bypass the lobby, who can present, and who can record the meeting. By configuring these options, you can ensure that your meetings run smoothly and securely.

How to configure your Teams meetings options

To configure your Teams meetings options, follow these steps:

  1. Open the Teams app on your computer or mobile device.
  2. Click on the calendar icon in the left-hand menu to view your calendar.
  3. Click on the "Meetings" tab at the top of the screen to view your upcoming meetings.
  4. Click on the meeting you want to configure to open the meeting details.
  5. Click on the "Meeting options" link at the bottom of the meeting details window.
  6. Configure the meeting options according to your preferences. You can control who can bypass the lobby, who can present, who can record the meeting, and other settings.

Key settings to configure

Here are some key settings that you may want to configure in your Teams meetings options:

  • Who can bypass the lobby: By default, all participants are sent to the lobby when they join a meeting. You can configure this setting to allow participants to bypass the lobby and join the meeting directly.

  • Who can present: By default, only the meeting organizer can present in a meeting. You can configure this setting to allow other participants to present as well.

  • Who can record the meeting: By default, only the meeting organizer can record a meeting. You can configure this setting to allow other participants to record the meeting as well.

  • Automatically admit people: By default, participants need to be admitted to the meeting by the organizer or a presenter. You can configure this setting to automatically admit people to the meeting.

  • Allow attendees to unmute: By default, attendees are muted when they join a meeting. You can configure this setting to allow attendees to unmute themselves.

Conclusion

Configuring your Teams meetings options is a great way to improve your collaboration experience and make your meetings more productive. By configuring these settings, you can ensure that your meetings run smoothly and securely, allowing you to focus on the task at hand.

Reduce your attack surface in Snowflake when using from Azure

When it comes to data security, reducing your attack surface is a crucial step. This post will guide you on how to minimize your attack surface when using Snowflake with Azure or Power BI.

What is Snowflake?

Snowflake is a cloud-based data warehousing platform that allows you to store and analyze large amounts of data. It is known for its scalability, performance, and ease of use. Snowflake is popular among organizations that need to process large volumes of data quickly and efficiently.

What is an Attack Surface?

An attack surface refers to the number of possible ways an attacker can get into a system and potentially extract data. The larger the attack surface, the more opportunities there are for attackers. Therefore, reducing the attack surface is a key aspect of securing your systems.

How to Reduce Your Attack Surface in Snowflake:

  1. Use Azure Private Link: Azure Private Link provides private connectivity from a virtual network to Snowflake, isolating your traffic from the public internet. It significantly reduces the attack surface by ensuring that traffic between Azure and Snowflake doesn't traverse the public internet.

  2. Implement Network Policies: Snowflake allows you to define network policies that restrict access based on IP addresses. By limiting access to trusted IP ranges, you can reduce the potential points of entry for an attacker.

  3. Enable Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide at least two forms of identification before accessing the Snowflake account. This makes it harder for attackers to gain unauthorized access, even if they have your password.

In this blog post, we will show you how to reduce your attack surface when using Snowflake from Azure or Power BI by using Azure Private Link.

Default Snowflake architecture

By default, Snowflake is accessible from the public internet: anyone with the right credentials can reach your Snowflake account from anywhere in the world. You can limit access to specific IP addresses, but your account endpoint is still exposed to potential attackers.

The architecture of using Azure Private Link with Snowflake is as follows:

    graph TD
    A[Azure Virtual Network] -->|Subnet| B[Private Endpoint]
    B -->|Private Connection| C[Snowflake Private Link Service]
    C --> D[Snowflake Account]

Requirements

Before you can use Azure Private Link with Snowflake, you need to have the following requirements in place:

  • A Snowflake account with ACCOUNTADMIN privileges
  • Business Critical or higher Snowflake edition
  • An Azure subscription with a Resource Group and privileges to create:
    • Virtual Network
    • Subnet
    • Private Endpoint

Step-by-Step Guide

Note

Replace the placeholders with your actual values; the commands below are an orientation guide.

Step 1: Retrieve Details of your Snowflake Account

USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();
Step 2: Create a Virtual Network

A virtual network is a private network that allows you to securely connect your resources in Azure. To create a virtual network with the Azure CLI, run the following command:

az network vnet create \
  --name myVnet \
  --resource-group myResourceGroup \
  --address-prefix 10.0.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefix 10.0.0.0/24

The address prefixes above are example values; adjust them to your network design.

Step 3: Create a Private Endpoint

The first step is to create a private endpoint in Azure. A private endpoint is a network interface that connects your virtual network to the Snowflake service. This allows you to access Snowflake using a private IP address, rather than a public one.

To create a private endpoint with the Azure CLI, run the following command, using the privatelink-pls-id value returned in Step 1 as the private connection resource ID:

az network private-endpoint create \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --subnet mySubnet \
  --private-connection-resource-id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/privateLinkServices/<Snowflake-service-name> \
  --connection-name mySnowflakeConnection

Check the status of the private endpoint:

az network private-endpoint show \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup
Step 4: Authorize the Private Endpoint

The next step is to authorize the private endpoint to access the Snowflake service.

Retrieve the Resource ID of the private endpoint:

az network private-endpoint show \
  --name mySnowflakeEndpoint \
  --resource-group myResourceGroup \
  --query id -o tsv

Create a temporary access token that Snowflake can use to authorize the private endpoint:

az account get-access-token --subscription <subscription-id>

Authorize the private endpoint in Snowflake:

USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$AUTHORIZE_PRIVATELINK('<resource-id>', '<access-token>');

Step 5: Block Public Access

To further reduce your attack surface, you can block public access to your Snowflake account. This ensures that all traffic to and from Snowflake goes through the private endpoint, reducing the risk of unauthorized access.

To block public access to your Snowflake account, you need to use a network policy. Follow these steps:

USE ROLE ACCOUNTADMIN;
CREATE NETWORK RULE allow_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('192.168.1.0/24');

CREATE NETWORK RULE block_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('0.0.0.0/0');

CREATE NETWORK POLICY public_network_policy
  ALLOWED_NETWORK_RULE_LIST = ('allow_access_rule')
  BLOCKED_NETWORK_RULE_LIST = ('block_access_rule');

ALTER ACCOUNT SET NETWORK_POLICY = public_network_policy;

Creating a policy is not enough: it only takes effect once it is activated on the account, as shown above. It's highly recommended to follow the best practices for network policies in Snowflake. You can find more information here: https://docs.snowflake.com/en/user-guide/network-policies#best-practices

Step 6: Test the Connection

To test the connection between your virtual network and Snowflake, you can use the SnowSQL client from a machine inside the virtual network. For private connectivity, use the privatelink account hostname returned by SYSTEM$GET_PRIVATELINK_CONFIG() as the account name:

snowsql -a <account_name> -u <username> -r <role> -w <warehouse> -d <database> -s <schema>

Internal Stages with Azure Blob Private Endpoints

If you are using Azure Blob Storage as an internal stage in Snowflake, you can also use Azure Private Link to secure the connection between Snowflake and Azure Blob Storage.

It's recommended to use Azure Blob Storage with private endpoints to keep your data secure and further reduce your attack surface. See the documentation Azure Private Endpoints for Internal Stages to learn how to configure Azure Blob Storage with private endpoints in Snowflake.

Conclusion

Reducing your attack surface is a critical aspect of securing your systems. By using Azure Private Link with Snowflake, you can significantly reduce the risk of unauthorized access and data breaches. Follow the steps outlined in this blog post to set up Azure Private Link with Snowflake and start securing your data today.

Nested Structures with Optional Attributes and Dynamic Blocks

In this post, I will explain how to define nested structures with optional attributes and how to use dynamic blocks in Terraform.

Nested Structures

Terraform allows you to define nested structures to represent complex data types in your configuration. Nested structures are useful when you need to group related attributes together or define a data structure that contains multiple fields.

For example, you might define a nested structure to represent a virtual machine with multiple attributes, such as size, image, and network configuration:

variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = string
    })
  })
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the vm variable defines a nested structure with three attributes: size, image, and network. The network attribute is itself a nested structure with two attributes: subnet and security_group.

Optional Attributes

If you have an attribute that is optional, you can define it as an optional attribute in your configuration. Optional attributes are useful when you want to provide a default value for an attribute but allow users to override it if needed.

For example, you might define the security_group attribute in the network structure as optional:

variable "vm" {
  type = object({
    size     = string
    image    = string
    network  = object({
      subnet = string
      security_group = optional(string)
    })
  })
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = var.vm.size
  image    = var.vm.image
  subnet   = var.vm.network.subnet
  security_group = var.vm.network.security_group
}

In this example, the security_group attribute in the network structure is defined as optional, defaulting to null. This allows users to provide a custom security group if needed, or simply omit it.

If optional() doesn't work for your case, Terraform also allows you to declare a variable with type any. This is not recommended in general, because it disables type checking and makes your configuration less maintainable, but when any and null values are unavoidable you can minimize the risk of errors by providing a good description of the variable and its expected values:

variable "vm" {
  description = <<DESCRIPTION
  Virtual machine configuration.
  The following attributes are required:
  - size: The size of the virtual machine.
  - image: The image for the virtual machine.
  - network: The network configuration for the virtual machine.
  The network configuration should have the following attributes:
  - subnet: The subnet for the virtual machine.
  - security_group: The security group for the virtual machine.
  DESCRIPTION
  type    = any
  default = null
}

With any, Terraform performs no validation of the structure, so the description is the only contract with the caller.

Dynamic Blocks

Terraform allows you to use dynamic blocks to define multiple instances of a block within a resource or module. Dynamic blocks are useful when you need to create multiple instances of a block based on a list or map of values.

For example, you might use a dynamic block to define multiple network interfaces for a virtual machine:

variable "network_interfaces" {
  type = list(object({
    name    = string
    subnet  = string
    security_group = string
  }))
}

resource "azurerm_virtual_machine" "example" {
  name     = "example-vm"
  size     = "Standard_DS1_v2"
  image    = "UbuntuServer"

  dynamic "network_interface" {
    for_each = var.network_interfaces
    content {
      name            = network_interface.value.name
      subnet          = network_interface.value.subnet
      security_group  = network_interface.value.security_group
    }
  }
}

In this example, the network_interfaces variable defines a list of objects representing network interfaces with three attributes: name, subnet, and security_group. The dynamic block iterates over the list of network interfaces and creates a network interface block for each object in the list.
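For instance, a caller might supply the list in a .tfvars file like this (a sketch with illustrative values); each object in the list produces one network_interface block:

```hcl
network_interfaces = [
  {
    name           = "nic-frontend"
    subnet         = "subnet-frontend"
    security_group = "nsg-frontend"
  },
  {
    name           = "nic-backend"
    subnet         = "subnet-backend"
    security_group = "nsg-backend"
  }
]
```

Adding a third interface is then a data change in the .tfvars file, with no edits to the resource definition itself.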

Conclusion

In this post, I explained how to define nested structures with optional attributes and use dynamic blocks in Terraform. Nested structures are useful for representing complex data types, while optional attributes allow you to provide default values for attributes that can be overridden by users. Dynamic blocks are useful for creating multiple instances of a block based on a list or map of values. By combining these features, you can create flexible and reusable configurations in Terraform.

Azure Container Registry: Artifact Cache

Azure Container Registry is a managed, private Docker registry service provided by Microsoft. It allows you to build, store, and manage container images and artifacts in a secure environment.

What is Artifact Caching?

Artifact Cache is a feature in Azure Container Registry that allows users to cache container images in a private container registry. It is available in Basic, Standard, and Premium service tiers.

Benefits of Artifact Cache

  • Reliable pull operations: Faster pulls of container images are achievable by caching the container images in ACR.
  • Private networks: Cached registries are available on private networks.
  • Ensuring upstream content is delivered: Artifact Cache allows users to pull images from the local ACR instead of the upstream registry.

Limitations

The cache is only populated after at least one pull of the image has completed; the first pull is still served from the upstream registry.

How to Use Artifact Cache in Azure Container Registry without credentials?

Let's take a look at how you can implement artifact caching in Azure Container Registry.

Step 1: Create a Cache Rule

The first step is to create a cache rule in your Azure Container Registry. This rule specifies the source image that should be cached and the target image that will be stored in the cache.

az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu

Check the cache rule:

az acr cache show -r MyRegistry -n MyRule

Step 2: Pull the Image

Next, pull the image through your registry. The first pull fetches the image from the upstream registry and stores it in the cache for future use. Note that the repository name matches the target (-t) of the cache rule:

docker pull myregistry.azurecr.io/ubuntu:latest

Step 3: Clean up the resources

Finally, you can clean up the resources by deleting the cache rule.

az acr cache delete -r MyRegistry -n MyRule

If you need to check other rules, you can use the following command:

az acr cache list -r MyRegistry

Conclusion

Azure Container Registry's Artifact Cache feature provides a convenient way to cache container images in a private registry, improving pull performance and reducing network traffic. By following the steps outlined in this article, you can easily set up and use artifact caching in your Azure Container Registry.

If you need to use the cache with authentication, see the article Enable Artifact Cache with authentication.

For more detailed information, please visit the official tutorial on the Microsoft Azure website.

Repository Strategy: Branching Strategy

Once we have decided that we will use branches, it is time to define a strategy, so that all developers can work in a coordinated way.

Some of the best known strategies are:

  • Gitflow: A branching model designed around the project release, with strict rules that assign very specific roles to different branches.
  • Feature Branch: Each feature is developed in a separate branch and merged back when it is ready.
  • Trunk-Based Development: All developers work on a single branch (the trunk), hiding unfinished work behind feature flags.
  • GitHub Flow: A lightweight workflow where short-lived feature branches are merged into the main branch, which is always deployable.
  • GitLab Flow: Extends the feature-branch approach with environment or release branches (for example, pre-production and production).
  • Microsoft Flow: Microsoft's trunk-based workflow (also known as Release Flow), where releases are cut from the trunk into release branches.

Gitflow

Gitflow is a branching model designed around the project release. It is a strict branching model that assigns very specific roles to different branches.

The Gitflow strategy is based on the following branches:

  • main: It is the main branch of the project. It contains the code that is in production.
  • develop: It is the branch where the code is integrated before being released to production.

The Gitflow strategy is based on the following types of branches:

  • feature: It is the branch where the code for a new feature is developed.
  • release: It is the branch where the code is prepared for release.
  • hotfix: It is the branch where the code is developed to fix a bug in production.

The Gitflow strategy is based on the following rules:

  • Feature branches are created from the develop branch.
  • Feature branches are merged into the develop branch.
  • Release branches are created from the develop branch.
  • Release branches are merged into the main and develop branches.
  • Hotfix branches are created from the main branch.
  • Hotfix branches are merged into the main and develop branches.

The Gitflow strategy is based on the following workflow:

  1. Developers create a feature branch from the develop branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the develop branch.
  4. Developers create a release branch from the develop branch.
  5. Developers prepare the code for release in the release branch.
  6. Developers merge the release branch into the main and develop branches.
  7. Developers create a hotfix branch from the main branch.
  8. Developers fix the bug in the hotfix branch.
  9. Developers merge the hotfix branch into the main and develop branches.
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
branch develop
checkout develop
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout develop
merge feature_branch
commit
branch release_branch
checkout release_branch
commit
commit
checkout develop
merge release_branch
checkout main
merge release_branch
commit
branch hotfix_branch
checkout hotfix_branch
commit
commit
checkout develop
merge hotfix_branch
checkout main
merge hotfix_branch
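The Gitflow workflow above can be exercised locally with plain git in a throwaway repository (a sketch; branch names are illustrative, and git init -b requires Git 2.28 or later):

```shell
set -e
cd "$(mktemp -d)"
git init -b main flow-demo && cd flow-demo
# Wrapper so commits work without global git config.
g() { git -c user.name=dev -c user.email=dev@example.com "$@"; }

g commit --allow-empty -m "initial"               # main
git checkout -b develop                            # long-lived integration branch

git checkout -b feature/login develop              # feature branch from develop
g commit --allow-empty -m "feature work"
git checkout develop && g merge --no-ff feature/login -m "merge feature"

git checkout -b release/1.0 develop                # release branch from develop
g commit --allow-empty -m "release prep"
git checkout main    && g merge --no-ff release/1.0 -m "release 1.0"
git checkout develop && g merge --no-ff release/1.0 -m "back-merge release"

git checkout -b hotfix/1.0.1 main                  # hotfix branch from main
g commit --allow-empty -m "hotfix"
git checkout main    && g merge --no-ff hotfix/1.0.1 -m "hotfix 1.0.1"
git checkout develop && g merge --no-ff hotfix/1.0.1 -m "back-merge hotfix"
```

Running git log --graph --oneline --all afterwards draws the same topology as the diagram above.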

The Gitflow strategy has the following advantages:

  • Isolation: Each feature is developed in a separate branch, isolating it from other features.
  • Collaboration: Developers can work on different features at the same time without affecting each other.
  • Stability: The main branch contains the code that is in production, ensuring that it is stable and reliable.

The Gitflow strategy has the following disadvantages:

  • Complexity: The Gitflow strategy is complex and can be difficult to understand for new developers.
  • Overhead: The Gitflow strategy requires developers to create and manage multiple branches, which can be time-consuming.

Feature Branch Workflow

Feature Branch is a strategy where each feature is developed in a separate branch. This strategy is simple and easy to understand.

The Feature Branch strategy is based on the following rules:

  • Developers create a feature branch from the main branch.
  • Developers develop the code for the new feature in the feature branch.
  • Developers merge the feature branch into the main branch.

The Feature Branch strategy is based on the following workflow:

  1. Developers create a feature branch from the main branch.
  2. Developers develop the code for the new feature in the feature branch.
  3. Developers merge the feature branch into the main branch.
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch

The Feature Branch strategy has the following advantages:

  • Simplicity: The Feature Branch strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in feature branches are visible to other developers, making it easier to review and merge changes.

The Feature Branch strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same feature can lead to conflicts and merge issues.
  • Complexity: Managing multiple feature branches can be challenging, especially in large projects.

Trunk-Based Development

Trunk-Based Development is a strategy where all developers work on a single branch (the trunk), hiding unfinished features behind feature flags.

The Trunk-Based Development strategy is based on the following rules:

  • Developers work on a single branch.
  • Developers create feature flags to hide unfinished features.
  • Developers merge the code into the main branch when it is ready.

The Trunk-Based Development strategy is based on the following workflow:

  1. Developers work on a single branch.
  2. Developers create feature flags to hide unfinished features.
  3. Developers merge the code into the main branch when it is ready.
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit tag:"v1.0.0"
commit
commit
commit tag:"v2.0.0"
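The feature-flag idea in step 2 can be sketched in a few lines of shell (the flag and function names are illustrative): the code for the unfinished feature is merged into the trunk, but stays inactive until the flag is switched on.

```shell
# Feature-flag sketch: both code paths live on the trunk; the flag selects one.
checkout_flow() {
  if [ "${FEATURE_NEW_CHECKOUT:-false}" = "true" ]; then
    echo "new checkout flow"       # unfinished feature, hidden by default
  else
    echo "legacy checkout flow"    # current behavior
  fi
}

checkout_flow                 # prints: legacy checkout flow
FEATURE_NEW_CHECKOUT=true
checkout_flow                 # prints: new checkout flow
```

In a real system the flag would come from configuration or a flag service rather than an environment variable, but the shape is the same.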

The Trunk-Based Development strategy has the following advantages:

  • Simplicity: The Trunk-Based Development strategy is simple and easy to understand.
  • Flexibility: Developers can work on different features at the same time without affecting each other.
  • Visibility: Changes made in the main branch are visible to other developers, making it easier to review and merge changes.

The Trunk-Based Development strategy has the following disadvantages:

  • Conflicts: Multiple developers working on the same codebase can lead to conflicts and merge issues.
  • Complexity: Managing feature flags can be challenging, especially in large projects.

GitHub Flow

GitHub Flow is a lightweight strategy where the main branch is always deployable and new work happens in short-lived feature branches. This strategy is simple and easy to understand.

The GitHub Flow strategy is based on the following rules:

  • The main branch is always deployable.
  • Developers create feature branches to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.

The GitHub Flow strategy is based on the following workflow:

  1. Developers start from the main branch.
  2. Developers create a feature branch from the main branch.
  3. Developers develop the code for the new feature in the feature branch.
  4. Developers merge the feature branch into the main branch.
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch

GitLab Flow

GitLab Flow is a strategy that combines feature branches with environment or release branches. It is simple to understand and is often used with release branches.

The GitLab Flow strategy is based on the following rules:

  • The main branch is the single source of truth.
  • Developers create feature branches to work on new features.
  • Developers merge the feature branches into the main branch when they are ready.
  • Developers create a pre-production branch to make bug fixes before merging changes back to the main branch.
  • Developers merge the pre-production branch into the main branch before going to production.
  • Developers can add as many pre-production branches as needed.
  • Developers can maintain different versions of the production branch.

The GitLab Flow strategy is based on the following workflow:

  1. Developers start from the main branch.
  2. Developers create a feature branch from the main branch.
  3. Developers develop the code for the new feature in the feature branch.
  4. Developers merge the feature branch into the main branch.
  5. Developers create a pre-production branch from the main branch.
  6. Developers make bug fixes in the pre-production branch.
  7. Developers merge the pre-production branch back into the main branch.
  8. Developers create a production branch from the main branch.
  9. Developers merge the main branch into the production branch when releasing to production.
gitGraph:
options
{
    "nodeSpacing": 150,
    "nodeRadius": 10
}
end
commit
checkout main
commit
commit
branch feature_branch
checkout feature_branch
commit
commit
checkout main
merge feature_branch
commit
branch pre-production
checkout pre-production
commit
commit
checkout main
merge pre-production
commit
branch production
checkout production
commit
commit
checkout main
merge production