Attaching non-Azure VMs to Azure Kubernetes Service (AKS)

In the context of Azure Kubernetes Service (AKS), I would like to deploy some pods to a region not currently supported by Azure (in my case, Mexico). Is it possible to provision a non-Azure VM here in Mexico and attach it as a worker node to my AKS cluster?
Just to be clear, I want Azure to host the Kubernetes control plane. I want to spin up some Azure VMs within various supported regions, then configure a non-Azure VM hosted in Mexico as a Kubernetes node and attach it to the cluster.
(Soon there will be a Microsoft Azure datacenter in Mexico and this problem will be moot. In the meantime, I was hoping to monkey wrench it.)

With AKS you can't have a node pool whose VMs are not managed by Azure. You'll need to run your own Kubernetes cluster if you want to do something like this. The closest you can get to something managed in Azure like AKS is to build your own Azure Arc-enabled Kubernetes cluster, but you'll need some skills with tools like Rancher, Kubespray, kubeadm or something else.
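If you do go the self-managed route, one common pattern is to run the control plane yourself (for example with kubeadm) on Azure VMs and then join the machine in Mexico as a worker node over a VPN or other connectivity. A minimal sketch, assuming kubeadm and placeholder addresses, tokens and hashes (this is not AKS; AKS does not support external nodes):

# On the Azure VM that will host the self-managed control plane
sudo kubeadm init --control-plane-endpoint "<control-plane-ip>:6443" --pod-network-cidr 10.244.0.0/16

# Print a fresh join command (bootstrap token + CA cert hash) for new workers
kubeadm token create --print-join-command

# On the non-Azure machine in Mexico (it must be able to reach port 6443,
# e.g. over a VPN), join it to the cluster as a worker node
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>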

Related

Reading a secret from etcd in AKS using etcdctl throws "Error: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory"

To read a secret from etcd in an AKS cluster, I used the command below:
ETCDCTL_API=3 etcdctl --endpoints=<endpoint> --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt --key=/etc/kubernetes/pki/apiserver-etcd-client.key get / --prefix --keys-only
Error: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory.
Where are the certificates stored by default?
I referred to the doc https://docs.starlingx.io/security/kubernetes/etcd-certificates-c1fc943e4a9c.html for the certificate path.
It seems to me that you have the wrong picture of AKS (and managed Kubernetes solutions in general).
Basically:
Managed Kubernetes solutions (like AKS, GKE, EKS) abstract some of the cluster components away from the user (meaning you won't be able to access them).
Kubernetes clusters that are not managed by a cloud provider (like on-premises ones) give the user access to pretty much everything.
The points above are only meant to narrow down the issue. There are many differences between cloud-managed and self-managed solutions, and I encourage you to check them out.
Example reference:
Serverfault.com: What is the point of running self managed Kubernetes cluster
In short terms:
You will not get access to etcd on AKS.
You won't find the etcd certificates on your VM or in Azure Cloud Shell.
Citing official Microsoft documentation:
Control plane
When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for the nodes attached to the AKS cluster. The control plane and its resources reside only in the region where you created the cluster.
The control plane includes the following core Kubernetes components:
kube-apiserver: The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as kubectl or the Kubernetes dashboard.
etcd: To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a key value store within Kubernetes.
kube-scheduler: When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them.
kube-controller-manager: The Controller Manager oversees a number of smaller Controllers that perform actions such as replicating pods and handling node operations.
AKS provides a single-tenant control plane, with a dedicated API server, scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the control plane and nodes. Interaction with the control plane occurs through Kubernetes APIs, such as kubectl or the Kubernetes dashboard.
While you don't need to configure components (like a highly available etcd store) with this managed control plane, you can't access the control plane directly. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs through Azure Monitor logs.
To configure or directly access a control plane, deploy a self-managed Kubernetes cluster using Cluster API Provider Azure.
-- Docs.microsoft.com: Azure: AKS: Concepts clusters workloads: Control plane
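Since etcd is not reachable on AKS, the supported way to look at a secret is through the API server. A minimal sketch using kubectl, assuming a secret named my-secret with a key named password in the default namespace (both names are hypothetical):

# Read the secret object through the Kubernetes API instead of etcd
kubectl get secret my-secret -n default -o yaml

# Decode a single key from the secret
kubectl get secret my-secret -n default -o jsonpath='{.data.password}' | base64 --decode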

GKE - Hybrid Kubernetes cluster

I've been reading the Google Cloud documentation about hybrid GKE clusters with Connect, or completely on-prem with GKE On-Prem and VMware.
However, I see that with GKE Connect you can manage the on-prem Kubernetes cluster from the Google Cloud dashboard.
What I am trying to find is how to maintain a hybrid cluster with GKE, mixing on-prem and cloud nodes (graphical example omitted).
In that solution, the master node is managed by GCloud, but the ideal solution is to manage multiple master nodes (for high availability) in the cloud and the worker nodes on-prem (graphical example omitted).
Is it possible to apply some or both of the proposed solutions on Google Cloud with GKE?
If you want to maintain hybrid clusters, mixing on-prem and cloud nodes, you need to use Anthos.
Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments.
The primary computing environment for Anthos uses Anthos clusters, which extend GKE for use on Google Cloud, on-premises, or multicloud to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters. With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.
If you want to know more about Anthos in GCP, please follow this link.
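For reference, registering an existing on-prem cluster so it shows up in the Google Cloud console is done through a Connect (fleet) membership. A rough sketch with the gcloud CLI, where the cluster name, context and key file are placeholders and exact flags may differ between gcloud versions:

# Register the on-prem cluster with Connect (fleet membership)
gcloud container hub memberships register my-onprem-cluster \
    --context=my-onprem-context \
    --kubeconfig=/path/to/kubeconfig \
    --service-account-key-file=/path/to/connect-sa-key.json

# Verify the membership is visible in the fleet
gcloud container hub memberships list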

Can I add nodes running on my machine to AWS EKS cluster?

Well, I read the user guide of AWS EKS service. I created a managed node group for the EKS cluster successfully.
I don't know how to add nodes running on my own machine to the EKS cluster, or whether EKS supports this at all. I didn't find any clue in its documentation. I read the 'self-managed node group' chapter, which covers adding self-managed EC2 instances and auto-scaling groups to the EKS cluster, rather than a private node running on another cloud (like Azure or Google Cloud) or on my own machine.
Does EKS support this? If it does, how do I do it?
This is not possible. It is (implicitly) called out in this page. All worker nodes need to be deployed in the same VPC where you deployed the control plane (not necessarily the same subnets though). EKS Anywhere (to be launched later this year) will allow you to deploy a complete EKS cluster (control plane + workers) outside of an AWS region (but it won't allow running the control plane in AWS and workers locally).
As far as I know, the EKS service doesn't support adding your own nodes to the cluster. The 'EKS Anywhere' service will, but it is not available yet; it is coming soon.
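To illustrate what is supported today, worker capacity is added as node groups backed by EC2 instances in the cluster's VPC, for example with eksctl. A minimal sketch, with cluster and node group names as placeholders:

# Add a node group of EC2 instances in the same VPC as the EKS control plane
eksctl create nodegroup \
    --cluster my-eks-cluster \
    --name extra-workers \
    --node-type t3.medium \
    --nodes 2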

How to create a multi-master cluster in Azure

I need to create an Azure Kubernetes Service cluster with 3 master nodes. So far I have worked with single-master clusters; now I need to create a multi-master cluster for production environments.
Is there a way to create an AKS cluster with multiple control plane nodes? Thanks in advance.
As Soundarya mentioned in the comment, the solution can be found here:
As you are asking about AKS (a managed service from Azure) with HA-enabled clusters, you already have more than one master running. Because AKS is a managed service, you will not have visibility into or control over this.
Can I get a way to create an AKS cluster with multiple control planes?
For this you can check the AKS Uptime SLA; the Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters that use Availability Zones, and 99.9% for clusters that don't.
Please check this document for more details.
If you are using AKS Engine (unmanaged service), then you can specify the number of masters. Please refer to this document for more details.
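To make the two options concrete, here is a rough sketch; resource group and cluster names are placeholders, and the --uptime-sla flag is assumed to be available in your Azure CLI version (newer versions expose the same setting as a pricing tier):

# Managed AKS: you never choose a master count, but you can pay for the Uptime SLA
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --uptime-sla

# AKS Engine (unmanaged): the master count is set in the API model JSON,
# e.g. "masterProfile": { "count": 3, ... }, and the cluster is then deployed with aks-engine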

AKS Hybrid setup

I have 1 master node and 2 worker nodes on on-premises servers (i.e. bare metal) running Kubernetes.
In a few months we might need more nodes, and going forward we will be using Azure to provision them.
Can AKS work in combination with the on-prem machines, such that the active master is on-prem, a second master is in Azure, and the additional worker nodes can be scaled up/down in Azure?
Is it possible to achieve the scenario below, where on-prem and Azure work together in the same Kubernetes cluster? If yes, is any third-party tool available to set this up and make life easy?
On-Premises
1 master & 2 worker nodes
+
AKS
1 master & 5 worker nodes (scale up/down)
As far as I know, today you can use the AKS engine to set up nodes on-prem only if you're using Azure Stack Hub, which is an extension of Azure that can run workloads in an on-premises environment by providing Azure services in your datacenter.
Azure Arc can bring two clusters together, but they won't operate as if they were a single cluster.
I found options for you to consider:
Running Kubernetes in a hybrid environment:
Setting up Kubernetes to work in a hybrid cloud environment is absolutely possible today and many companies choose this path as a progressive migration to Azure. You can benefit from the flexibility and scalability of Azure, maintain existing systems running on your local network, and get them to talk to each other seamlessly. This however still requires a non-negligible investment in the infrastructure setup, and maintenance of it.
Azure Arc hybrid management and deployment for Kubernetes clusters:
You can use Azure Arc to register Kubernetes clusters hosted outside of Microsoft Azure, and use Azure tools to manage these clusters alongside clusters hosted in Azure Kubernetes Service (AKS).
The latter option would require you to use Azure Arc.
I haven't used them myself but they seem to fit your use case.
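Registering the existing on-prem cluster with Azure Arc looks roughly like the sketch below (resource group and cluster names are placeholders); note that this gives you a single management view of both clusters, not a single stretched cluster:

# Install the connectedk8s extension for the Azure CLI
az extension add --name connectedk8s

# Register the on-prem cluster (uses the current kubeconfig context)
az connectedk8s connect --name my-onprem-cluster --resource-group my-arc-rg

# Confirm the cluster shows up as an Arc-connected resource
az connectedk8s list --resource-group my-arc-rg --output table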