How to create a multi-master cluster in Azure - kubernetes

I need to create an Azure Kubernetes Service cluster with 3 master nodes. So far I have worked with single-master clusters; now I need to create a multi-master cluster for production environments.
Is there a way to create an AKS cluster with multiple control planes? Thanks in advance.

As Soundarya mentioned in the comment, the solution can be found here:
Since you are asking about AKS (a managed service from Azure), with HA-enabled clusters you already have more than one master running. Because AKS is a managed offering, you will not have visibility into or control over the masters.
Can I get a way to create an AKS with multiple control planes?
For this you can check the AKS Uptime SLA, which guarantees 99.95% availability of the Kubernetes API server endpoint for your clusters.
Please check this document for more details.
If you are using AKS Engine (unmanaged service), then you can specify the number of masters. Please refer to this document for more details.
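For reference, a minimal sketch of creating an AKS cluster with the Uptime SLA and availability zones enabled could look like the command below. This is not from the original answer: the resource group and cluster names are placeholders, and the exact flag names (for example --uptime-sla) can differ between Azure CLI versions.
# Hedged sketch: assumes the Azure CLI is installed and logged in;
# names are placeholders and flags may vary by CLI version.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --zones 1 2 3 \
  --uptime-sla \
  --generate-ssh-keys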

Related

Reading a secret from etcd in AKS using etcdctl throws Error: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory

To read a secret from etcd in an AKS cluster, I used the command below:
ETCDCTL_API=3 etcdctl --endpoints=<endpoint> --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt --key=/etc/kubernetes/pki/apiserver-etcd-client.key get / --prefix --keys-only
Error: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory.
Where the certificates will be stored by default?
I referred to the doc https://docs.starlingx.io/security/kubernetes/etcd-certificates-c1fc943e4a9c.html for the certificate path.
It seems to me that you have the wrong picture of AKS (and managed Kubernetes solutions in general).
Basically:
Managed Kubernetes solutions (like AKS, GKE, EKS) have some of the cluster components abstracted away from the user (meaning you won't be able to access them).
Kubernetes clusters that are not managed by a cloud provider (like on-premise ones) give the user access to pretty much everything.
The above bullet points are only meant to narrow down the issue. There are a lot of differences between cloud-managed and self-managed solutions, and I encourage you to check them out.
Example reference:
Serverfault.com: What is the point of running self managed Kubernetes cluster
In short terms:
You will not get access to etcd on AKS.
You won't find the etcd certificates on your VM or in Azure Cloud Shell.
Citing official Microsoft documentation:
Control plane
When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for the nodes attached to the AKS cluster. The control plane and its resources reside only on the region where you created the cluster.
The control plane includes the following core Kubernetes components:
kube-apiserver: The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as kubectl or the Kubernetes dashboard.
etcd: To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a key value store within Kubernetes.
kube-scheduler: When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them.
kube-controller-manager: The Controller Manager oversees a number of smaller Controllers that perform actions such as replicating pods and handling node operations.
AKS provides a single-tenant control plane, with a dedicated API server, scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the control plane and nodes. Interaction with the control plane occurs through Kubernetes APIs, such as kubectl or the Kubernetes dashboard.
While you don't need to configure components (like a highly available etcd store) with this managed control plane, you can't access the control plane directly. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs through Azure Monitor logs.
To configure or directly access a control plane, deploy a self-managed Kubernetes cluster using Cluster API Provider Azure.
-- Docs.microsoft.com: Azure: AKS: Concepts clusters workloads: Control plane
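To expand on that last pointer: if you genuinely need direct access to etcd and its certificates, the usual route is a self-managed cluster built with Cluster API Provider Azure. A rough, hedged sketch of the bootstrap is below; it assumes clusterctl is installed, a management cluster already exists, the required Azure credential environment variables are exported, and the cluster name, Kubernetes version and machine counts are placeholders.
# Hedged sketch, not AKS: bootstrap a self-managed cluster with Cluster API Provider Azure.
# Assumes clusterctl, a management cluster, and Azure credentials are already set up.
clusterctl init --infrastructure azure
clusterctl generate cluster my-selfmanaged-cluster \
  --infrastructure azure \
  --kubernetes-version v1.27.3 \
  --control-plane-machine-count 3 \
  --worker-machine-count 2 > my-selfmanaged-cluster.yaml
kubectl apply -f my-selfmanaged-cluster.yaml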

Can I add nodes running on my machine to AWS EKS cluster?

Well, I read the user guide for the AWS EKS service, and I successfully created a managed node group for the EKS cluster.
What I don't know is how to add nodes running on my own machine to the EKS cluster, or whether EKS supports this at all; I didn't find any clue in the documentation. I read the 'self-managed node group' chapter, which covers adding self-managed EC2 instances and an auto-scaling group to the EKS cluster, rather than a private node running on another cloud (like Azure or Google Cloud) or on my machine.
Does EKS support this? If it does, how do I do it?
This is not possible. It is (implicitly) called out in this page. All worker nodes need to be deployed in the same VPC where you deployed the control plane (not necessarily the same subnets though). EKS Anywhere (to be launched later this year) will allow you to deploy a complete EKS cluster (control plane + workers) outside of an AWS region (but it won't allow running the control plane in AWS and workers locally).
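For the path that is supported, adding self-managed EC2 workers (in the same VPC as the control plane) is typically done with eksctl. A hedged sketch, assuming eksctl is installed and the cluster already exists; the cluster name, nodegroup name and instance type below are placeholders:
# Hedged sketch: create a self-managed (unmanaged) nodegroup inside the cluster's VPC.
eksctl create nodegroup \
  --cluster my-eks-cluster \
  --name self-managed-workers \
  --node-type t3.medium \
  --nodes 2 \
  --managed=false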
As far as I know, the EKS service doesn't support adding your own nodes to the cluster. The 'EKS Anywhere' service will, but it is not available yet; it is coming soon.

How to add remote vm instance as worker node in kubernetes cluster

I'm new to Kubernetes and trying to explore it, so my question is:
Suppose I have an existing Kubernetes cluster with 1 master node and 1 worker node. Assume this setup is on AWS. Now I have 1 more VM instance available on Oracle Cloud Platform, and I want to configure that VM as a worker node and attach it to the existing cluster.
Is it possible to do so? Does anybody have any suggestions regarding this?
I would instead divide your clusters up based on region (unless you have a good VPN between your Oracle and AWS infrastructure).
You can then run applications across clusters. If you absolutely must have one cluster that is geographically separated, I would create a master (etcd host) in each region that you have a worker node in.
Communication between worker nodes and master nodes is critical for a Kubernetes cluster. Adding nodes from on-prem to a cloud provider, or from a different cloud provider, will cause a lot of issues from a network perspective.
A VPN connection between AWS and Oracle Cloud would be needed, and every time the worker node would (probably) have to cross an ocean to reach the master node.
EDIT: From the Kubernetes docs: clusters cannot span clouds or regions (this functionality will require full federation support).
https://kubernetes.io/docs/setup/best-practices/multiple-zones/
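If you still want to try it despite the caveats above, and the cluster was built with kubeadm rather than a managed service, the usual mechanics are sketched below. This assumes the API server is reachable from the Oracle VM (for example over a VPN) and that kubeadm, kubelet and a container runtime are already installed on that VM; the address, token and hash are placeholders.
# On the master node: print a join command with a fresh bootstrap token.
kubeadm token create --print-join-command
# On the Oracle Cloud VM: run the printed command, which looks roughly like this.
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>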

Deploying K8S cluster without default worker pool in IBM Cloud

Good day to you.
I am implementing VPC and K8S modules in Terraform to deploy a complete virtual datacenter, including compute resources, in the IBM managed cloud. I would like to have full control over the worker pool attributes, such as
name
flavor
zone
size
and therefore I would like to delete the default worker pool. Ideally this should happen during the Terraform deployment.
Does anyone know, whether it is possible?
I tried to set the worker count to zero and define a specific worker pool, but this gives me a cluster with two worker pools and one worker in the default pool.
Best regards.
Jan
@Jan-Hendrik Palic, unfortunately the IBM Cloud Kubernetes Service API does not support this scenario at the moment. Because Terraform uses the API, there is no way right now to create a cluster without the default worker pool.
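Outside of Terraform, a workaround sometimes used is to accept the default pool at creation time, add the custom pool, and then shrink or remove the default pool afterwards with the CLI. This is only a hedged sketch: the cluster, pool, flavor, zone and subnet values are placeholders, and whether the default pool can actually be removed may depend on the cluster type and API version.
# Hedged sketch: assumes the IBM Cloud CLI with the kubernetes-service plugin is installed.
ibmcloud ks worker-pool create vpc-gen2 --name custom-pool --cluster my-cluster --flavor bx2.4x16 --size-per-zone 2
ibmcloud ks zone add vpc-gen2 --zone us-south-1 --cluster my-cluster --worker-pool custom-pool --subnet-id <subnet-id>
# Then remove (or resize) the default pool; this step may not be permitted in all cases.
ibmcloud ks worker-pool rm --cluster my-cluster --worker-pool default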

Azure AKS node pool is not scaling

I have created an Azure AKS cluster with the autoscale feature enabled, by following the link.
I deployed a Django, Celery and RabbitMQ based application and set up KEDA to scale pods based on RabbitMQ queue length. KEDA is able to scale the pods, but the nodes in the node pool are not scaling.
Can someone help me with it?
The following is the answer I got from the Azure support team on this:
"Unfortunately, the autoscaling feature on virtual machine availability sets is not natively supported by Azure Kubernetes for now. We have the VMSS autoscaler feature, and that too is in the preview phase."
They were focusing on manual scaling for now.
They also mentioned one GitHub repo to refer to, but Azure won't provide any support for it.
It's mentioned as follows:
I have done some quick research; please find the GitHub link where we have a procedure for autoscaling of VM availability sets. Kindly go through the standard deployment section in the link. This is not directly supported by us, and if you have any issues or concerns you can approach GitHub for the same.
click here
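For clusters whose node pools are backed by VM scale sets, enabling the cluster autoscaler on an existing pool is normally done along these lines (a hedged sketch: the resource group, cluster and pool names are placeholders, and at the time of this answer the VMSS autoscaler was still in preview):
# Hedged sketch: assumes the Azure CLI is installed and the pool uses VM scale sets.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5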