AKS Hybrid setup - kubernetes

I have 1 master node & 2 worker nodes on on-premises servers, i.e. bare metal running Kubernetes.
Considering that after a few months we might need more nodes, we will be using Azure going forward to provision them.
Can AKS work in combination with the on-prem machines, such that the active master is on-prem, the second master is in Azure, and additional worker nodes can be scaled up/down in Azure?
Is it possible to achieve the scenario below, where on-prem & Azure both serve the same K8s cluster? If yes, is any 3rd-party tool available to set this up and make life easier?
On-Premises
1 master & 2 worker nodes
+
AKS
1 master & 5 worker nodes (scale up/down)

As far as I know, today you can use the AKS engine to set up nodes on-prem only if you're using Azure Stack Hub, which is an extension of Azure that can run workloads in an on-premises environment by providing Azure services in your datacenter.
Azure Arc can bring two clusters together, but they won't operate as if they were a single cluster.

I found two options for you to consider:
Running Kubernetes in a hybrid environment: Setting up Kubernetes to work in a hybrid cloud environment is absolutely possible today, and many companies choose this path as a progressive migration to Azure. You can benefit from the flexibility and scalability of Azure, keep existing systems running on your local network, and get them to talk to each other seamlessly. This still requires a non-negligible investment in setting up and maintaining the infrastructure, however.
Azure Arc hybrid management and deployment for Kubernetes clusters: You can use Azure Arc to register Kubernetes clusters hosted outside of Microsoft Azure, and use Azure tools to manage these clusters alongside clusters hosted in Azure Kubernetes Service (AKS).
The latter option requires Azure Arc. I haven't used either myself, but they seem to fit your use case.
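If you go the Azure Arc route, onboarding an existing on-prem cluster is a short CLI exercise. A minimal sketch, assuming the Azure CLI is installed and your kubeconfig currently points at the on-prem cluster (rg-hybrid and onprem-cluster are placeholder names):

    # Add the Arc-enabled Kubernetes extension to the Azure CLI
    az extension add --name connectedk8s

    # Register the currently selected cluster with Azure Arc
    az connectedk8s connect --resource-group rg-hybrid --name onprem-cluster

The cluster then shows up as an Azure resource and can be managed with much of the same tooling as AKS clusters, but note that it stays a separate cluster rather than becoming extra nodes of an AKS cluster.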

Related

GKE - Hybrid Kubernetes cluster

I've been reading the Google Cloud documentation about hybrid GKE clusters with Connect, or completely on-prem with GKE On-Prem and VMware.
From what I see, GKE with Connect lets you manage an on-prem Kubernetes cluster from the Google Cloud dashboard.
What I am trying to find, however, is how to maintain a hybrid cluster with GKE that mixes on-prem and cloud nodes.
In that solution the master node is managed by Google Cloud, but ideally I would manage multiple master nodes (high availability) in the cloud and the worker nodes on-prem.
Is it possible to implement some or both of the proposed solutions on Google Cloud with GKE?
If you want to maintain hybrid clusters, mixing on-prem and cloud nodes, you need to use Anthos.
Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments.
The primary computing environment for Anthos uses Anthos clusters, which extend GKE for use on Google Cloud, on-premises, or multicloud to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters. With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.
If you want to know more about Anthos in GCP, please follow this link.
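For reference, attaching an existing on-prem cluster with Connect comes down to registering it as a membership. A rough sketch, assuming a Google Cloud service account key with the required roles; the cluster name, context, and file paths below are placeholders:

    # Register an existing on-prem cluster with Connect
    gcloud container hub memberships register my-onprem-cluster \
        --context=my-onprem-context \
        --kubeconfig=/path/to/kubeconfig \
        --service-account-key-file=/path/to/key.json

This gives you central management of the on-prem cluster from the Google Cloud console; it does not merge it with a GKE cluster into a single control plane.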

Attaching non-Azure VMs to Azure Kubernetes Service (AKS)

In the context of Azure Kubernetes Service (AKS), I would like to deploy some pods to a region not currently supported by Azure (in my case, Mexico). Is it possible to provision a non-Azure VM here in Mexico and attach it as a worker node to my AKS cluster?
Just to be clear, I want Azure to host the Kubernetes control plane. I want to spin up some Azure VMs within various supported regions, then configure a non-Azure VM hosted in Mexico as a Kubernetes node and attach it to the cluster.
(Soon there will be a Microsoft Azure datacenter in Mexico and this problem will be moot. In the meantime, I was hoping to monkey-wrench it.)
You can't have a node pool with VMs that are not managed by Azure in AKS. You'll need to run your own k8s cluster if you want to do something like this. The closest you can get to something managed in Azure like AKS is to build your own Azure Arc-enabled Kubernetes cluster, but you'll need some skills with tools like Rancher, Kubespray, kubeadm, or something else.
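If you do end up running your own cluster, a kubeadm-based layout is probably the simplest starting point. A minimal sketch, assuming all VMs can reach each other (for example over a VPN) and the container runtime prerequisites are installed; the IP, token, and hash are placeholders that kubeadm init prints for you:

    # On the control-plane VM (for example an Azure VM in a supported region)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # On each worker, including the non-Azure VM in Mexico
    sudo kubeadm join <control-plane-ip>:6443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

The resulting cluster can then be registered with Azure Arc, which gets you Azure-side management without requiring the nodes themselves to be Azure VMs.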

Highly available Kubernetes cluster

We are starting to migrate our system from Azure Web App services to AKS infrastructure, and recently we had an incident with our test cluster in which connection to all our environments was lost. It was caused by upgrading the Kubernetes version and adding an additional node pool, which broke the route table and left the nodes unable to communicate with each other.
As a result we came up with the following HA infrastructure for our environments:
But that eventually adds more work to the CI/CD pipelines and doesn't seem very logical, as Kubernetes itself should be reliable.
Can I have your comments and thoughts on whether this is best practice or a proper way of moving forward?

Azure Service Fabric - connect to local service fabric cluster from outside the VM it's running on?

We have a 5-node Azure Service Fabric Cluster as our main Production microservices hub. Up until now, for testing purposes, we've just been pushing out separate versions of our applications (the production application with ".Test" appended to the name) to that production SFC.
We're looking for a better approach, namely a separate test Service Fabric cluster. But the issue comes down to cost. The smallest SFC you can create in Azure is 3 nodes. Further, you can't shut down an SFC when it's not being used, which we would also need to do to save on costs.
So now I'm looking at just spinning up a plain Windows VM in Azure and installing the local Service Fabric cluster app (which allows a one-node setup). Is it possible to do this and still be able to communicate with the cluster from outside the VM?
What you are trying to accomplish is to set up a standalone cluster. The steps are documented in these docs.
Yes, you can access the cluster from outside the VM; in simple terms, enable access over the network and open the firewall ports.
Technically both deployments (Guide and DevCluster) are very similar; the main difference is that you have better control over the templates when following the standalone guide, whereas with the development setup you don't have many options and the whole process is automated.
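As a rough illustration of the firewall part: the default client endpoints are TCP 19000 (client connection) and 19080 (Service Fabric Explorer). On the VM you could open them from an elevated prompt with something like the following (adjust the ports if your cluster manifest changes the defaults), and if the VM runs in Azure, allow the same ports in its network security group as well:

    # Allow inbound traffic to the default Service Fabric client endpoints
    # (19000 = client connection, 19080 = Service Fabric Explorer)
    netsh advfirewall firewall add rule name="SFClientEndpoints" dir=in action=allow protocol=TCP localport=19000,19080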
PS: I would highly recommend you have a UAT/Staging cluster with the exact same specs as the production version; the approach you described could be a good idea for a staging environment. Having environments with different specs increases the risk of issues, mainly related to configuration and concurrency.

Can a Kubernetes cluster be formed from a mix of AWS nodes, Azure nodes, and VMware nodes?

Is HA across multiple cloud providers possible, i.e. ONE Kubernetes cluster formed from a mix of Azure nodes, AWS nodes, and VMware nodes? (Consider that all have the same OS image.)
If so, how does dynamic provisioning work?
Can the Kubernetes CSI (Container Storage Interface) help me with this?
That will not work very well. The cloud provider needs to be set on the apiserver & controller-manager, and you can't run multiple copies of those with different configurations.
Now, if you don't need a cloud provider, as in you are just using these as generic VMs, it's workable, but you will not have access to cloud storage via the Kubernetes API, and it's still not a great setup. This would essentially be a cross-region cluster, which is not a supported use case. You are meant to use one cluster per region and arrange for load balancing across them somehow (yes, this is the tricky bit).
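To make the constraint concrete: the legacy in-tree cloud integration is a single cluster-wide setting passed to the control-plane components, roughly like this (the azure.json path is the conventional one on Azure):

    # Cloud integration is one flag for the whole control plane, e.g. for Azure:
    kube-apiserver --cloud-provider=azure ...
    kube-controller-manager --cloud-provider=azure --cloud-config=/etc/kubernetes/azure.json ...

    # There is no supported way to run --cloud-provider=aws for some nodes
    # and --cloud-provider=azure for others within the same cluster.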