Introduction to AKS - Part 3

This is the third part of the blog series introducing Azure Kubernetes Service (AKS).

In this part, the topics are:

  • AKS support for security topics

Public endpoints & NSG

When it comes to security, AKS offers numerous options for securing the cluster and for controlling access and traffic. AKS is a PaaS offering and therefore automatically has a public endpoint. Access to it can be restricted, for example with authorized IP ranges, outside of which no access is possible. The cluster can also be made private and removed from the internet entirely. In this case, communication with the API server is only possible from the configured VNet, using Azure Private Link, and the traffic then remains on the Azure backbone network.[1]
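A hedged sketch of both options via the Azure CLI (the IP range is a placeholder; variable names follow the snippets further below):

# Restrict the public API endpoint to an authorized IP range
az aks update --resource-group $rgName --name $aksClusterName --api-server-authorized-ip-ranges 203.0.113.0/24

# Alternatively, create a fully private cluster, reachable only from the VNet
az aks create --resource-group $rgName --name $aksClusterName --enable-private-cluster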

Restriction via Private Link also works for other Azure services that are often integrated with the cluster, such as storage accounts or an Azure Container Registry.

Network Security Groups (NSGs) are also used to filter traffic between the nodes, with AKS taking over the administration of the rules. This works for both network types. If, for example, an external Kubernetes service (type LoadBalancer) is created, AKS adapts the load balancer configuration as well as the NSG configuration.[2]
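For illustration, a minimal sketch of exposing a hypothetical deployment as an external service; AKS then provisions the load balancer and adjusts the NSG rules accordingly:

# Expose a (hypothetical) deployment "my-app" via an external load balancer on port 80
kubectl expose deployment my-app --type=LoadBalancer --port=80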

Microsoft Defender for Cloud & Key Vault

With Microsoft Defender for Cloud, formerly known as Azure Security Center and Azure Defender, additional protection can be activated for the cluster itself or for other services used alongside it, for example the Azure Container Registry. Defender supports this by, among other things, continuously scanning the images with an engine from the IT security vendor Qualys, which can detect, for example, missing security patches in the OS layer.[3]

Once the chargeable Defender plan has been activated for AKS, it can detect a smuggled-in crypto-mining container or access from unusual or disguised IP addresses. The findings can then be viewed in the Defender portal (Security Center).[4]
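As a hedged sketch, activation via the CLI could look like this (assuming the plan names in use at the time of writing, KubernetesService and ContainerRegistry):

# Enable the chargeable Defender plans for AKS and the container registry
az security pricing create --name KubernetesService --tier Standard
az security pricing create --name ContainerRegistry --tier Standard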

Azure Key Vault is suitable for the secure storage of passwords, connection strings, or certificates. It can be integrated directly into the cluster via an extension: additional pods then run on the nodes, and the Key Vault resources can be made available to application pods as volumes via the Secrets Store CSI driver.
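A minimal sketch for activating the extension via the CLI (add-on name as documented for AKS):

# Enable the Key Vault Secrets Store CSI driver add-on
az aks enable-addons --resource-group $rgName --name $aksClusterName --addons azure-keyvault-secrets-provider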

Azure Policies & Network Policies

Azure Policies are a powerful tool for monitoring compliance and adherence to security guidelines. They can also be used in a cluster by activating a cluster add-on. Many AKS-specific policy definitions are available by default, and there are also initiatives in which policies are bundled into a security baseline standard. Custom policies can be created as well. Depending on the configuration, a policy can report ("audit") a misconfiguration or, among other effects, prevent resource creation.[5]
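A minimal sketch for activating the add-on via the CLI:

# Enable the Azure Policy add-on for the cluster
az aks enable-addons --resource-group $rgName --name $aksClusterName --addons azure-policy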

Examples of compliance checks are restricting the allowed regions or virtual machine SKUs. In the area of security, there are options for restricting pod privileges, permitted ports, or resource allocations for pods.

To control the traffic between pods at a more granular level, the Kubernetes network policy resource is available; AKS supports creating and managing these policies with the feature of the same name. Based on namespace and pod labels, traffic between pods can be restricted; by default, no restriction exists.

You can choose between the Microsoft Azure Network Policies and those of the provider Calico. Note that the Azure policies are only recommended for Linux nodes and require the Azure CNI network plugin. Calico policies also work with the kubenet network and have Windows support in preview, but naturally are not covered by Azure support.[6]
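A hedged sketch: the policy plugin is selected at cluster creation, and a deny-all ingress policy can then be applied with kubectl (the namespace "demo" is a placeholder):

# Choose the network policy plugin when creating the cluster (azure or calico)
az aks create --resource-group $rgName --name $aksClusterName --network-plugin azure --network-policy azure

# Deny all ingress traffic to pods in the namespace "demo" by default
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF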

AAD Integration

Azure Active Directory (AAD) can be integrated for access control to the cluster; here, too, only a single feature needs to be activated. The AAD integration then connects the Azure RBAC system with the Kubernetes RBAC. This allows cluster access to be controlled via AAD group and role assignments, using existing AKS-specific roles or custom ones. Services such as Conditional Access or Privileged Identity Management (PIM) can also be used. Note that this feature can be activated retroactively, but cannot be deactivated again. In addition, Kubernetes RBAC must be enabled in the cluster.[7]
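A minimal sketch of activating the integration on an existing cluster (the group object ID is a placeholder):

# Enable AKS-managed AAD; members of the given AAD group become cluster admins
az aks update --resource-group $rgName --name $aksClusterName --enable-aad --aad-admin-group-object-ids <aad-group-object-id>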

Node Patching / Upgrades

An important topic in the security context is, of course, patching of the operating systems. With AKS, this does not have to be taken care of at all for the master nodes, as they are managed and patched by Azure. On the Linux worker nodes, updates are fetched automatically at night, and the Kured project provides support for the reboots that then become necessary. Additional pods run on the nodes as a DaemonSet, check for the existence of the /var/run/reboot-required file, and can take over rebooting the node, including rescheduling its pods.[8]
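A hedged sketch of installing Kured via its Helm chart (repository URL and chart values as published by the project at the time of writing; the node selector restricts Kured to the Linux nodes):

# Install Kured as a DaemonSet on the Linux nodes
helm repo add kured https://weaveworks.github.io/kured
helm repo update
helm install kured kured/kured --namespace kured --create-namespace --set nodeSelector."kubernetes\.io/os"=linux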

There are no daily updates for the Windows worker nodes, but the AKS upgrade process can be used to obtain the latest patched base images for the nodes.

This process, which is also used to upgrade the Kubernetes version in the cluster, follows the cordon & drain approach. A node is first marked (cordoned) so that the Kubernetes scheduler no longer schedules new pods on it. Running pods are moved to other nodes, and a new node is created that can accommodate pods. Once the marked node no longer hosts any pods, it is deleted and the next node follows. In this way, downtime of the application can be avoided. The upgrade can be performed with the Azure CLI, for example, optionally separately for master nodes and worker nodes.[9]

Snippet for an upgrade command via CLI:

az aks upgrade --resource-group $rgName --name $aksClusterName --kubernetes-version 1.21.1 --control-plane-only

Furthermore, an auto-upgrade channel for the Kubernetes version is in preview, which can update the version automatically at different levels (e.g. patch, stable, or rapid).[10]

Snippet for CLI:

az aks update --resource-group $rgName --name $aksClusterName --auto-upgrade-channel stable


[1] https://docs.microsoft.com/en-us/azure/aks/private-clusters

[2] https://docs.microsoft.com/en-us/azure/aks/concepts-network

[3] https://docs.microsoft.com/en-us/azure/defender-for-cloud/defender-for-container-registries-usage

[4] https://docs.microsoft.com/en-us/azure/defender-for-cloud/defender-for-kubernetes-introduction

[5] https://docs.microsoft.com/en-us/azure/aks/policy-reference

[6] https://docs.microsoft.com/en-us/azure/aks/use-network-policies

[7] https://docs.microsoft.com/en-us/azure/aks/managed-aad

[8] https://docs.microsoft.com/en-us/azure/aks/node-updates-kured

[9] https://github.com/Azure/sg-aks-workshop/tree/master/day2-operations

[10] https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster#set-auto-upgrade-channel
