
Basics of Azure Kubernetes Service (AKS)

Companies are increasingly turning to Kubernetes to manage their containerized applications. At its core lies orchestration: the process of automating, overseeing, and engaging with a large number of containers. Orchestration tools automate the deployment, management, scaling, networking, and availability of container-based applications across a cluster of physical or virtual servers. 

This article explains Kubernetes, its related terminologies, cluster architecture, and the basics of Azure Kubernetes Service (AKS). 

What is Kubernetes? 

Kubernetes (K8s) is a fast-developing, open-source container orchestration platform that also manages the networking and storage components connected to container-based applications. The project was started by Joe Beda, Brendan Burns, and Craig McLuckie, who were soon joined by other Google engineers, and Google announced it publicly in mid-2014. 

Kubernetes focuses on application workloads rather than the underlying infrastructure components. It provides deployment, scalability, load balancing, logging, and monitoring at the container level, and anything that can run in a container can run on Kubernetes. Kubernetes can also be integrated with existing continuous integration and continuous delivery (CI/CD) pipelines to schedule and deploy releases. 

Azure Kubernetes Service (AKS) is a managed Kubernetes service that simplifies deployment and core management duties such as coordinating upgrades. 

This article concentrates on Kubernetes as used through Azure AKS. 

Kubernetes Terminologies 

Containers 

Containers form the essential building blocks of Kubernetes. Applications must be executed inside containers in order for K8s to orchestrate them. 

Pods 

Kubernetes uses pods to run an application instance. A pod represents a single instance of your application and normally runs a single container; although there are advanced scenarios where a pod may contain several containers, pods typically have a 1:1 mapping with a container. 
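As a sketch, a minimal pod manifest might look like the following; the pod name is hypothetical, and the image tag assumes the one built later in this article.

```yaml
# Minimal Pod manifest (illustrative): a single container per pod,
# the common 1:1 mapping described above.
apiVersion: v1
kind: Pod
metadata:
  name: aks-app-pod                            # hypothetical name
spec:
  containers:
    - name: aks-app
      image: myakscreg.azurecr.io/aks-app:v1   # assumed image; built and pushed later in this article
      ports:
        - containerPort: 8080
```

In practice you rarely create bare pods like this; the Deployment object discussed later manages pods for you.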

Nodes 

A Kubernetes node is required to run applications and supporting services. A workload is executed by placing containers into pods that run on nodes. A node may be a virtual or physical machine, depending on the cluster. 

Node Pools 

Node pools are collections of nodes with the same configuration within a Kubernetes cluster. When you create an AKS cluster, a default node pool is created that specifies the initial number and size of the nodes. In AKS, this default node pool contains the underlying VMs that run the agent nodes. 

Clusters 

The cornerstone of Kubernetes is the cluster. A cluster is made up of many nodes and is formed when you deploy Kubernetes. The Kubernetes objects that represent your containerized applications run on top of a cluster. 

Cluster Architecture 

A cluster mainly consists of two components: 

1. Control Plane 

A control plane is automatically created and configured when you create an AKS cluster. Azure manages this control plane as a hidden, managed resource: you pay only for the nodes that are part of the AKS cluster, while the control plane itself is free. It provides the core Kubernetes services and orchestrates your container-based applications. 

AKS offers a single-tenant control plane with a dedicated API server, scheduler, and so on. The user determines the number and size of the nodes, and Azure configures secure communication between the control plane and the nodes. Users can interact with the control plane through the Kubernetes dashboard or through the Kubernetes API using tools such as kubectl. 

Here are the core Kubernetes components that are a part of the control plane:  

  •  kube-apiserver 

The API server exposes the Kubernetes API and serves as the front end for the Kubernetes control plane. Tools such as the Kubernetes dashboard and kubectl communicate with the cluster through the API server. 

  •  etcd 

etcd is a highly available key-value store that maintains the configuration and state of the Kubernetes cluster. It functions as the backing store for all cluster data. 

  •  kube-scheduler 

When applications are created or scaled, the scheduler determines which nodes can handle the workload and starts pods on them. It watches for newly created pods that have not yet been assigned a node and selects a node for them to run on. 
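Resource requests are one of the main signals the scheduler weighs when choosing a node. A hedged sketch of a pod spec with requests (the names and numbers are illustrative):

```yaml
# Sketch: the scheduler will only place this pod on a node that has
# at least 250m of CPU and 128Mi of memory unreserved.
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo      # hypothetical name
spec:
  containers:
    - name: demo
      image: nginx:alpine
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
```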

  •  kube-controller-manager 

The controller manager oversees several smaller controllers that perform tasks such as managing node operations and replicating pods. Although each controller is logically a separate process, they are all compiled into a single binary and run in a single process to reduce complexity. The node, replication, endpoints, service account, and token controllers are a few examples of its constituent controllers. 

2. Nodes 

Kubernetes nodes are required to run applications and supporting services. An AKS cluster consists of one or more nodes: Azure virtual machines (VMs) that run the Kubernetes node components and the container runtime: 

  •  kubelet 

The Kubernetes agent, or kubelet, handles orchestration requests from the control plane and schedules the required containers to run. It runs on every node in the cluster and ensures that containers are running inside their pods. 

  •  kube-proxy 

kube-proxy runs on each node in the cluster and implements part of the Kubernetes Service concept as a network proxy. It maintains network rules on the nodes that allow network communication to pods from network sessions inside or outside the cluster. 
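kube-proxy is what makes a Service address reachable from every node. As an illustrative sketch, a ClusterIP Service fronting pods labeled app: aks-app (the label and ports are hypothetical) could look like:

```yaml
# Illustrative ClusterIP Service: kube-proxy programs the node-level
# network rules that route traffic for this Service to matching pods.
apiVersion: v1
kind: Service
metadata:
  name: aks-app-svc
spec:
  type: ClusterIP
  selector:
    app: aks-app        # hypothetical pod label
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # container port the traffic is forwarded to
```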

Container Runtime 

The container runtime is the component that allows containerized applications to run and interact with additional resources such as the virtual network and storage. AKS uses containerd as its container runtime (earlier AKS versions used Moby). 

Cluster Deployments 

Once an application is running in a container, it is ready to be deployed onto a cluster. A Kubernetes Deployment, managed by the Deployment Controller, represents one or more identical pods. The Deployment specifies the number of replicas (pods) to create, and the Kubernetes scheduler ensures that replacement pods are scheduled on healthy nodes if pods or nodes encounter problems. 
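As a sketch of what such a Deployment might look like for the sample app in this article (the labels, replica count, and image are illustrative):

```yaml
# Illustrative Deployment: three identical replicas of the sample app.
# The Deployment Controller recreates any pod that fails or is evicted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aks-app
  template:
    metadata:
      labels:
        app: aks-app
    spec:
      containers:
        - name: aks-app
          image: myakscreg.azurecr.io/aks-app:v1   # assumed image from the steps below
          ports:
            - containerPort: 8080
```

You would apply such a manifest with kubectl apply -f deployment.yaml and inspect it with kubectl get deployments.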

Why AKS? 

AKS is a clear choice for your cloud solution if you already run some or all of your workload on Azure. You can manage permissions with ease because AKS integrates directly with Azure Active Directory. You can also look forward to excellent integration with Azure DevOps and GitHub, both owned by Microsoft, in addition to good integration with other Azure components. 

Azure is also notable among the major cloud providers for offering a free control plane (master node). Depending on the size of your cluster, that cost may or may not be significant, but it is something to consider. A few small but useful features also make a difference. 

Security updates are installed in your cluster automatically by default, and AKS offers general auto-upgrading as a feature. AKS also supports scale-to-zero for node pools, which is another great feature: when you pair it with the free control plane, you can keep an AKS resource available that costs you nothing while you are not using it. 

Let's look at how you can containerize your application and run it on AKS. 

Containerizing the Application 

  1. Create a new directory on your computer and clone the source code for the web application from here. 
  2. Open cmd in that directory, then build a container image and spin up a container: 

docker build -t aks-app . 

docker run -p 5000:8080 aks-app 
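The docker build command above assumes a Dockerfile is present in the cloned directory. The actual file ships with the sample repository; purely as a hypothetical sketch for a Python web app listening on port 8080, it might resemble:

```dockerfile
# Hypothetical Dockerfile sketch; the sample repository's real file may differ.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The container port mapped by "docker run -p 5000:8080"
EXPOSE 8080
# Assumed entry point
CMD ["python", "app.py"]
```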

  3. Check that the application is running by visiting localhost:5000. 
  4. Create an Azure Container Registry, then push the container image to it: 

# log in to your Azure account 

az login 

az group create --name aks_rg --location eastus 

az acr create --resource-group aks_rg --name myakscreg --sku Basic 

docker tag aks-app myakscreg.azurecr.io/aks-app:v1 

az acr login --name myakscreg 

docker push myakscreg.azurecr.io/aks-app:v1 

az acr repository list --name myakscreg --output table 

az acr update -n myakscreg --admin-enabled true 

Creating a Kubernetes Cluster 

  1. Log in to the Azure portal and search for Kubernetes services. Click the Add button, then configure your cluster. 
  2. Select the resource group created earlier for the registry, i.e., aks_rg. Fill in the other details as required and keep the node count at 1. 
  3. Choose System-assigned as the authentication method under the Authentication tab; otherwise, the cluster will not be allowed to pull the container image from our registry. 
  4. Next, move to the Integrations tab and choose the container registry you created earlier. This step is very important. 
  5. Click Review + create. 
  6. Launch cmd and execute the following command, which downloads the cluster credentials and configures the Kubernetes CLI to use them: 

az aks get-credentials --resource-group aks_rg --name akscluster 

  7. To verify the connection to the cluster, use the kubectl get command to return a list of the cluster nodes: 

kubectl get nodes 

  8. Our cluster is created and the nodes are ready. Now we can check which pods and services are running by executing the following in cmd: 

kubectl get pods 

kubectl get services 

Conclusion 

We hope you now understand the basics of Kubernetes and how you can use it in AKS. We have also explained cluster architecture and other Kubernetes-related terminologies. Xavor offers DevOps and Cloud Services.

If you need any assistance with Azure Kubernetes Service (AKS), contact us at [email protected].   
