One Service Fabric cluster or multiple clusters? - azure-service-fabric

I am migrating several of my cloud service web/worker roles to Service Fabric.
There will be many (around 5+) Service Fabric services (stateless or stateful). Should we put all of them into one Service Fabric cluster, or into multiple clusters? Is there a best practice for cluster planning?
Also, I will be adding multi-tenant support to my service. Per this post, Service Fabric multi-tenant, I can choose the application-instance-per-customer pattern.
I am wondering if it is a good idea to choose a cluster-per-customer pattern instead?

It depends on your requirements per tenant, but generally it is better to have a single cluster with multiple applications and services:
A single cluster is much easier to manage than multiple clusters.
Service Fabric was designed to host and manage a large number of applications and services in a single cluster.
Running multiple services in a single cluster lets you utilize your cluster resources much more efficiently and use Service Fabric's resource balancing to manage resources effectively.
Standing up a new cluster, depending on size, can take 30 minutes or more; creating an application instance in a cluster takes seconds.
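As a rough sketch of what the application-instance-per-customer pattern looks like on a single cluster, the Service Fabric CLI (sfctl) can stamp out one application instance per tenant from an already-provisioned application type; the type name, version, and tenant names below are placeholders:

```shell
# Hedged sketch: one application instance per tenant on a single cluster.
# Assumes the application type "MyAppType" v1.0.0 is already provisioned
# on the cluster; "contoso" and "fabrikam" are hypothetical tenant names.
for tenant in contoso fabrikam; do
  sfctl application create \
    --app-name "fabric:/MyApp-$tenant" \
    --app-type MyAppType \
    --app-version 1.0.0
done
```

Offboarding a tenant is then just removing that one application instance, while the cluster and its nodes stay untouched.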

Related

Azure Service Fabric and Kubernetes communication within same network

I am looking at strategies for bidirectional communication between applications hosted on separate clusters. Some of them are hosted in Service Fabric and the others in Kubernetes. One of the options is to use a DNS service on the Service Fabric side and its counterpart on Kubernetes. On the other hand, the reverse proxy seems to be a way to go. After going through the options, I am wondering: what is actually the best way to create microservices that can be deployed either in SF or in K8s without worrying about the communication model, one that requires the fewest changes if we suddenly wish to migrate an app from SF to K8s while still keeping it available to the SF apps, and vice versa?

Can EKS worker nodes be setup in different regions

We are currently designing a microservices-based architecture by dividing a monolith app into microservices.
Earlier, the monolith was present in 2 different regions, namely the US and Asia. US instances would get requests from the USA, and the Asian ones would get requests from Asian nations.
Now, we want to use AWS EKS for deploying the microservices. I checked that EKS is deployed across 3 different AZs in a region to maintain HA. In our scenario, do we need 2 different AWS EKS setups, one each in the US and Asia?
Is it possible to use only one EKS cluster with worker nodes in different regions?
The architecture comprises AWS EKS, Spring Boot microservices, Angular 5 apps, Docker, and Kubernetes.
As far as I know, you cannot simply use worker nodes from different regions in an EKS cluster. There is also a still-unanswered question on GitHub: How can we achieve Multi Region Worker nodes for EKS cluster?
What you can do is use Kubernetes Cluster Federation. You will need the kubefedctl tool to join the clusters. I have never done this myself on EKS specifically, but please check the 2 articles below, which show ways to create a federation on EKS.
Federation v2 and EKS
Build a Federation of Multiple Kubernetes Clusters With Kubefed V2
Or create 2 separate clusters in the required regions.
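For illustration, joining two regional clusters to a federation host with kubefedctl looks roughly like this; the context names are hypothetical, and a KubeFed control plane is assumed to be installed on the host cluster:

```shell
# Hedged sketch: join two regional EKS clusters into a KubeFed v2 federation.
# "us-cluster", "asia-cluster", and "host-cluster" are placeholder
# kubeconfig context names; KubeFed must already run on the host cluster.
kubefedctl join us-cluster \
  --cluster-context us-cluster \
  --host-cluster-context host-cluster
kubefedctl join asia-cluster \
  --cluster-context asia-cluster \
  --host-cluster-context host-cluster
```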
Another option to explore is CAPE, which provides KubeFed functionality through an intuitive UI, extended for multi-cluster use. You can set up a host cluster to deploy and manage cluster configuration and app deployment to multiple clusters across regions with a few button clicks. CAPE is free for life up to 10 nodes :)

Multiple environments for websites in Kubernetes

I am a newbie in Kubernetes.
I have 19 LANs with 190 machines in total.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps and their environments that are assigned to each LAN.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) while keeping them isolated.
Is that possible?
I only got a vague idea of what you want to achieve so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an Ingress Controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik; it has great support for HTTP and Kubernetes.
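As a minimal sketch of the ingress piece, assuming an ingress controller such as Traefik is already installed, a rule routing a hostname to an in-cluster Service could look like this (the service name and host are placeholders):

```shell
# Hedged sketch: route app.example.com to a Service named "web" on port 80.
# Assumes an ingress controller (e.g. Traefik) is running in the cluster.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
EOF
```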
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service is really specific to the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet and a headless Service for stateful workloads that need peer discovery. This is obviously a generalization and has many exceptions.
For managing different environments, you could split your resources into different namespaces.
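For example, one namespace per environment keeps dev, QA, and prod isolated while sharing the cluster (the names and manifest path here are illustrative):

```shell
# Sketch: one namespace per environment on a shared cluster.
for env in dev qa prod; do
  kubectl create namespace "$env"
done
# Deploy the same manifests into a given environment with -n:
kubectl apply -n dev -f ./site-manifests/
```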
As for the single portal to manage it all, I don't think anything like that exists, but I might be wrong. Besides, depending on your workflow, you can create your own portal using the Kubernetes API, but it requires a good understanding of Kubernetes itself.

Kubernetes - Load balancing Web App access per connections

It's been a long time since I last came here, and I hope you're doing well :)
I now have the pleasure of working with Kubernetes! So let's start! :)
[THE EXISTING]
I have an operational Kubernetes cluster that I work with every day. It consists of several applications, one of which is of particular interest to us: the web management interface.
I currently have one master and four nodes in my cluster.
For my web application, each pod contains 3 containers: web / mongo / filebeat, and for technical reasons, we decided to assign a maximum of 5 users per web pod.
[WHAT I WANT]
I want to deploy a web pod on each node (web0, web1, web2, web3), which I can already do, and have each session (1 session = 1 user) distributed as follows:
For now, all HTTP requests are processed by web0.
[QUESTIONS]
Am I forced to go through an external load balancer (HAProxy)?
Can I use an internal load balancer by configuring a Service?
Does anyone have experience with the implementation described above?
Thanks in advance to anyone who can help me with this process :)
This generally depends on how and where you've deployed your Kubernetes infrastructure, but you can do this natively with a few options.
Firstly, you'll need to scale your web deployment. This is very simple to do:
kubectl scale --replicas=4 deployment/web
If you're deployed on a cloud provider (such as AWS using kops, or GKE), you can use a Service: just specify its type as LoadBalancer. The Service will spread your users' sessions across the pods.
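A minimal sketch of such a Service, assuming the web pods carry an app: web label and listen on port 8080 (both are assumptions, adjust to your deployment):

```shell
# Hedged sketch: a cloud LoadBalancer Service in front of the web pods.
# The selector label and container port are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF
```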
Another option is to use an Ingress. In order to do this, you'll need an Ingress Controller, such as the nginx-ingress-controller, which is the most feature-rich and widely deployed one.
Both of these options will automatically load-balance your incoming application sessions, but they won't necessarily do it in the order you've described in your image; it will be random across the available web pods.

Tenant isolation with Kubernetes on networking level

We want to run a multi-tenant scenario that requires tenant separation on a network level.
The idea is that every tenant receives a dedicated node and a dedicated network that the tenant's other nodes can join. A tenant's nodes should be able to interact with each other within that network.
Networks should not be able to talk with each other (true network isolation).
Are there any architectural patterns to achieve this?
One Kubernetes cluster per tenant?
One Kubernetes cluster for all tenants, with one subnet per tenant?
One Kubernetes cluster across VPCs (speaking in AWS terms)?
The regular way to deal with multi-tenancy in Kubernetes is to use namespaces. But this is within a single kube cluster, meaning you still have the same underlying networking solution shared by all tenants. That is actually fine, as you have Network Policies to restrict networking in the cluster.
You can obviously run autonomous clusters per tenant, yet that is not exactly multi-tenancy then, just multiple clusters. Networking can be configured at the node level to route as expected, but you'd still be left with the issue of cross-cluster service discovery, etc. Federation can help a bit with that, but I would still advise pursuing the Namespaces + Policies approach.
I see four ways to run multi-tenant k8s clusters at the network level:
Namespaces
Ingress rules
allow/deny and ingress/egress Network Policies
Network-aware Zones
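As a concrete sketch of the Network Policies option, a per-tenant namespace can be locked down so that only pods in the same namespace may talk to each other; the namespace name is illustrative, and enforcement assumes a CNI plugin that supports NetworkPolicy (e.g. Calico):

```shell
# Hedged sketch: allow only same-namespace ingress inside tenant-a.
# An empty podSelector selects all pods in the namespace; the ingress
# "from" podSelector (with no namespaceSelector) matches only pods in
# the same namespace, so cross-tenant traffic is denied.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
EOF
```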