How to run an application (like Nextcloud) on-premises with failover to Azure Kubernetes Service

I want to run an application (like Nextcloud) on Kubernetes on-premises that can fail over to Azure Kubernetes Service with the same data on it.
I already have the application running on-premises and in the cloud, but the data needs to be kept in sync.

Related

GKE - Hybrid Kubernetes cluster

I've been reading the Google Cloud documentation about hybrid GKE clusters with Connect, and about running completely on-prem with GKE On-Prem and VMware.
However, I see that with GKE Connect you can manage the on-prem Kubernetes cluster from the Google Cloud dashboard.
What I am trying to find, though, is how to maintain a hybrid cluster with GKE, mixing on-prem and cloud nodes. Graphical example:
In the above solution, the master node is managed by Google Cloud, but the ideal solution would be to manage multiple master nodes (high availability) in the cloud and the worker nodes on-prem. Graphical example:
Is it possible to apply either or both of the proposed solutions on Google Cloud with GKE?
If you want to maintain a hybrid cluster, mixing on-prem and cloud nodes, you need to use Anthos.
Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments.
The primary computing environment for Anthos uses Anthos clusters, which extend GKE for use on Google Cloud, on-premises, or multicloud to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters. With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.
If you want to know more about Anthos on GCP, please follow this link.
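To make that concrete, "using Anthos" for an existing on-prem cluster typically starts with registering it via the GKE Connect agent. The sketch below simply wraps the gcloud registration command in Python; the membership name, kubeconfig path, context, and service-account key file are placeholders, and the exact flags should be verified against the current Anthos documentation.

```python
import subprocess

# Placeholder values -- substitute your own cluster context and credentials.
MEMBERSHIP = "onprem-cluster"                    # name shown in the GCP console
KUBECONFIG = "/home/admin/.kube/config"          # kubeconfig for the on-prem cluster
CONTEXT = "onprem-context"                       # context pointing at that cluster
KEY_FILE = "/home/admin/connect-sa-key.json"     # Connect service account key

# Register the on-prem cluster as a GKE Hub membership so it can be managed
# from the Google Cloud console alongside your GKE clusters.
subprocess.run(
    [
        "gcloud", "container", "hub", "memberships", "register", MEMBERSHIP,
        f"--kubeconfig={KUBECONFIG}",
        f"--context={CONTEXT}",
        f"--service-account-key-file={KEY_FILE}",
    ],
    check=True,
)
```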

Attaching non-azure VMs to Azure Kubernetes Service (AKS)

In the context of Azure Kubernetes Service (AKS), I would like to deploy some pods to a region not currently supported by Azure (in my case, Mexico). Is it possible to provision a non-Azure VM here in Mexico and attach it as a worker node to my AKS cluster?
Just to be clear, I want Azure to host the Kubernetes control plane. I want to spin up some Azure VMs within various supported regions, then configure a non-Azure VM hosted in Mexico as a Kubernetes node and attach it to the cluster.
(Soon there will be a Microsoft Azure datacenter in Mexico and this problem will be moot. In the meantime, I was hoping to monkey-wrench it.)
With AKS, you can't have a node pool with VMs that are not managed by Azure. You'll need to run your own Kubernetes cluster if you want to do something like this. The closest you can get to something managed in Azure like AKS is to build your own Azure Arc-enabled Kubernetes cluster, but you'll need some skills with tools like Rancher, Kubespray, kubeadm, or something else.
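As a rough illustration of that last suggestion, onboarding a self-managed cluster (built with Rancher, kubeadm, etc.) to Azure Arc is essentially two az CLI calls, shown here wrapped in Python. The resource group, cluster name, and location are placeholders, and this assumes the az CLI with the connectedk8s extension is installed and your current kubeconfig context points at the self-managed cluster.

```python
import subprocess

# Placeholder names -- adjust for your environment.
RESOURCE_GROUP = "rg-hybrid-k8s"
CLUSTER_NAME = "onprem-mexico-cluster"
LOCATION = "southcentralus"

# Create a resource group to hold the Arc-enabled cluster resource.
subprocess.run(
    ["az", "group", "create", "--name", RESOURCE_GROUP, "--location", LOCATION],
    check=True,
)

# Onboard the existing self-managed cluster; az uses the current kubeconfig context.
subprocess.run(
    ["az", "connectedk8s", "connect", "--name", CLUSTER_NAME,
     "--resource-group", RESOURCE_GROUP],
    check=True,
)
```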

AKS Hybrid setup

I have 1 master node and 2 worker nodes on on-premises servers, i.e. bare metal, running Kubernetes.
After a few months we might need more nodes, and we will be using Azure going forward to provision them.
Can AKS work in combination with the on-prem machines, such that the active master is on-prem, a second master is in Azure, and the additional worker nodes can be scaled up/down in Azure?
Is it possible to achieve the scenario below, where on-prem and Azure both work together in the same K8s cluster? If yes, is there any 3rd-party tool available to set this up and make life easy?
On-Premises
1 master & 2 worker nodes
+
AKS
1 master & 5 worker nodes (scale up/down)
As far as I know, today you can use AKS Engine to set up nodes on-prem only if you're using Azure Stack Hub, which is an extension of Azure that can run workloads in an on-premises environment by providing Azure services in your datacenter.
Azure Arc can bring two clusters together, but they won't operate as if they were a single cluster.
I found some options for you to consider:
Running Kubernetes in a hybrid environment:
Setting up Kubernetes to work in a hybrid cloud environment is absolutely possible today, and many companies choose this path as a progressive migration to Azure. You can benefit from the flexibility and scalability of Azure, keep existing systems running on your local network, and get them to talk to each other seamlessly. This still requires a non-negligible investment in infrastructure setup and maintenance, however.
Azure Arc hybrid management and deployment for Kubernetes clusters:
You can use Azure Arc to register Kubernetes clusters hosted outside of Microsoft Azure, and use Azure tools to manage these clusters alongside clusters hosted in Azure Kubernetes Service (AKS).
The latter option would require you to use Azure Arc.
I haven't used them myself but they seem to fit your use case.
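To show what "managing them alongside AKS" could look like in practice, here is a minimal sketch that lists the Arc-registered on-prem cluster and the AKS cluster side by side through the az CLI. It assumes both resources live in the same (hypothetical) resource group, and the JSON field names follow the usual ARM resource shape.

```python
import json
import subprocess

RESOURCE_GROUP = "rg-hybrid-k8s"  # hypothetical resource group

def az(*args):
    """Run an az CLI command and return its parsed JSON output."""
    result = subprocess.run(
        ["az", *args, "--output", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

# Arc-registered clusters (e.g. the on-prem master/worker nodes) ...
arc_clusters = az("connectedk8s", "list", "--resource-group", RESOURCE_GROUP)
# ... and the AKS clusters hosting the cloud-side worker nodes.
aks_clusters = az("aks", "list", "--resource-group", RESOURCE_GROUP)

for cluster in arc_clusters + aks_clusters:
    print(cluster["name"], cluster["type"])
```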

How to automate Azure DevOps Kubernetes Service Connection to Cluster?

To deploy services via Azure Devops to my kubernetes cluster, I need to create a Kubernetes Service Connection manually. I want to automate this by creating the service connection dynamically in Azure DevOps so I can delete and recreate the cluster and deployment. Is this possible? How can I do this?
You can create the service endpoint using the Azure DevOps API.
Check this out for API details.
This might be related.
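As a hedged sketch of what that API call could look like with Python and requests: the endpoint below is the Service Endpoints REST API, but the api-version, URL scope (project vs. organization), and the exact payload fields for a Kubernetes connection have changed across versions, so verify them against the current reference or against a GET of an existing connection. The organization, project, PAT, and cluster details are placeholders.

```python
import requests

# Placeholder values -- replace with your own organization, project, and PAT.
ORG = "my-org"
PROJECT = "my-project"
PAT = "<personal-access-token>"  # needs the Service Connections (read & manage) scope

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}"
    "/_apis/serviceendpoint/endpoints?api-version=6.0-preview.4"
)

# Approximate payload for a Kubernetes service connection using a service
# account token; check the field names against the Service Endpoints docs.
payload = {
    "name": "aks-failover-cluster",
    "type": "kubernetes",
    "url": "https://<your-cluster-api-server>",
    "authorization": {
        "scheme": "Token",
        "parameters": {
            "apiToken": "<service-account-token>",
            "serviceAccountCertificate": "<base64-encoded-ca-cert>",
        },
    },
    "data": {"authorizationType": "ServiceAccount"},
}

# Azure DevOps accepts basic auth with an empty username and the PAT as password.
response = requests.post(url, json=payload, auth=("", PAT))
response.raise_for_status()
print("Created service connection with id:", response.json()["id"])
```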

Azure vs On-premise Service Fabric

I am having a bit of trouble finding the differences between the Azure and on-premises versions of Service Fabric. I did read somewhere that the on-premises version does not support auto-scaling, but that is easy to understand.
However, does the on-premises version offer any operational capabilities such as resource managers, visual management of the cluster, etc.?
The core Service Fabric platform is simply a runtime that gets installed on a set of virtual or physical machines. Once you tell those machines how to find each other, they form a cluster and provide a set of management capabilities that includes the Service Fabric Explorer UI, a REST API, and a TCP endpoint for PowerShell. All of that is common whether you're running on Azure, on-premises, or in another public cloud.
What's different in those environments is everything that lives outside of the machines that form the cluster. That includes:
Autoscaling
While Service Fabric can easily handle new machines being added and removed from the cluster, it has no knowledge of how that process actually works, so some external agent needs to handle it. In Azure, that's a virtual machine scale set.
Failure domain/Upgrade domain management
Good management of failure and upgrade domains is critical to ensuring availability and data reliability in Service Fabric. In Azure, clusters are automatically spread across FDs/UDs and maintenance is coordinated to avoid impact to your clusters. In other environments, this is your responsibility.
Cluster setup and management
In Azure, a Service Fabric cluster is a 1st class resource that can be created and managed through the Azure Resource Manager and the Azure portal. Outside of Azure, you must do that management using the cluster configuration JSON template.
Incidentally, just so there's no confusion since there are overloaded terms... you can't currently use the Azure Resource Manager (ARM) with Service Fabric outside of the Azure environment. However, Service Fabric's cluster resource manager is part of the core runtime and is available everywhere.
Diagnostics pipeline
By default, Service Fabric logging (on Windows) is done via ETW. However, without any component to pick up those events from the individual machines in the cluster and ship them somewhere for easy aggregation and inspection, the logs aren't very useful. In Azure, that process is handled by the Windows Azure Diagnostics (WAD) agent, whereas in other environments you are responsible for setting up that pipeline.
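To make the cluster setup and failure/upgrade domain points above a bit more concrete, the sketch below loads a standalone cluster configuration template and prints the fault and upgrade domain assigned to each node, which is exactly the kind of bookkeeping that is your responsibility outside of Azure. The file name and field names follow the standalone-cluster sample configurations (e.g. ClusterConfig.Unsecure.MultiMachine.json), so treat them as assumptions and check them against your own template.

```python
import json

# Path to a standalone Service Fabric cluster configuration template
# (hypothetical file name; use whichever sample you started from).
CONFIG_PATH = "ClusterConfig.Unsecure.MultiMachine.json"

with open(CONFIG_PATH) as f:
    config = json.load(f)

# In the standalone samples each node declares its own fault and upgrade
# domain; on-premises, spreading nodes sensibly across FDs/UDs is up to you.
for node in config["nodes"]:
    print(
        f'{node["nodeName"]:10} ip={node["iPAddress"]:15} '
        f'fd={node["faultDomain"]:12} ud={node["upgradeDomain"]}'
    )
```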
You don't get to use the resource manager on premises. You can access the Service Fabric Explorer at port 19080.
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-deploy-anywhere/
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-visualizing-your-cluster/
PowerShell management and deployment will also work.
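Since Service Fabric Explorer and the cluster's client REST API share the HTTP gateway on port 19080, a quick way to check an on-premises cluster programmatically is to call that API directly. A minimal sketch, assuming an unsecured test cluster reachable at a placeholder host name (a secured cluster would additionally need client certificates):

```python
import requests

# Placeholder endpoint -- the HTTP gateway on port 19080 is the same port
# that serves Service Fabric Explorer.
CLUSTER = "http://my-onprem-sf-cluster:19080"

# GetClusterHealth is part of the Service Fabric client REST API.
response = requests.get(
    f"{CLUSTER}/$/GetClusterHealth",
    params={"api-version": "6.0"},
)
response.raise_for_status()

health = response.json()
print("Aggregated health state:", health["AggregatedHealthState"])
```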