We have two IBM Kubernetes clusters; whenever an issue happens in one cluster we need to fail over to the DR cluster. Can anyone tell me how to do that automatically? The two clusters are in different zones, Montreal & Toronto. We also have IBM Cloud Internet Services.
You could use the CIS Global Load Balancer offering to set up a globally load-balanced and health-checked URL for your applications. You'd create a GLB for the domain app.mydomain.com/app_path, for example, and back it with the VIPs of your cluster ALBs in the same origin pool. Configure a health check on the GLB so traffic is only sent to the endpoints that are healthy.
CIS GLB docs are covered at https://cloud.ibm.com/docs/cis?topic=cis-global-load-balancer-glb-concepts
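For reference, a rough sketch of that setup with the IBM Cloud CLI and the CIS plugin. The monitor -> pool -> GLB structure is the point; the flag names below are illustrative and may differ by plugin version, so check `ibmcloud cis help` for the exact syntax. VIPs and IDs are placeholders.

    # Sketch: health monitor, then an origin pool holding both clusters'
    # ALB VIPs, then a GLB fronting the pool. Flags are illustrative.
    ibmcloud cis glb-monitor-create --type HTTPS --path /healthz
    ibmcloud cis glb-pool-create --name app-pool \
      --origins montreal:<MONTREAL_ALB_VIP>,toronto:<TORONTO_ALB_VIP> \
      --monitor <MONITOR_ID>
    ibmcloud cis glb-create <DNS_DOMAIN_ID> --name app.mydomain.com \
      --default-pools <POOL_ID> --proxied true

With the health check attached, CIS stops resolving to the Montreal VIPs when they go unhealthy and sends traffic to Toronto, which gives you the automatic failover you asked about.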
We're integrating with a new partner that requires us to use a VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster, and only a single pod needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy anything outside our GKE cluster, such as a Compute Engine instance that somehow proxies our traffic (we will of course do it if it's the only/best way to proceed). We're hoping it might be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at Cloud VPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP range for a pod in an existing (public) GKE cluster so it can communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection inside your pod; it's possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection with a specific, limited VPC, and then route that traffic to the pod, which can sit in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Route traffic between the VPCs with VPC peering:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with a dedicated pod IP range (see the sketch after these steps): https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
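Putting the last two steps together, a minimal sketch. Names like my-cluster, vpn-pool, and vpn-pods-range are placeholders, and --pod-ipv4-range assumes a VPC-native cluster with that secondary range already created on the subnet; verify against your gcloud version.

    # Create a small node pool whose pods draw from a dedicated (smaller) range.
    gcloud container node-pools create vpn-pool \
      --cluster=my-cluster \
      --zone=europe-west1-b \
      --num-nodes=1 \
      --pod-ipv4-range=vpn-pods-range

Then pin the partner-facing deployment to that pool via GKE's built-in node-pool label:

    # Deployment sketch: nodeSelector targets the pool created above.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: partner-client        # hypothetical name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: partner-client
      template:
        metadata:
          labels:
            app: partner-client
        spec:
          nodeSelector:
            cloud.google.com/gke-nodepool: vpn-pool
          containers:
          - name: client
            image: partner-client:latest    # placeholder image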
I have two Kubernetes clusters, cluster-a and cluster-b, in Google Cloud (GCP).
Can I call a service exposed with Ambassador in one cluster (cluster-a) from a different cluster (cluster-b) in the same GCP project but a different VPC?
Right now I can call the service by the Ambassador service name (when I do it from within the same cluster).
I have read about Internal TCP/UDP Load Balancing, but it only works when cluster-a and cluster-b are in the same VPC network, and my clusters are in different VPCs.
Is there a different approach to accomplish this?
VPCs on GCP aren't routed to each other by default, so your requests won't reach the remote CIDRs. To fix that, use VPC Network Peering to make the two VPCs reachable from each other.
Note that firewall rules still apply for both VPCs, so you have to create them in order to establish full communication.
Finally, this only enables network communication between your VPCs. If you rule this out as the issue and you're still experiencing a lack of connectivity, it might be related to your Ambassador configuration; in that case, I'd recommend posting information about it, or creating another question specifically for that.
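As a rough sketch (network names, CIDRs, and ports are placeholders; adjust the ports to whatever Ambassador actually listens on):

    # Peering must be created from both sides before it becomes active.
    gcloud compute networks peerings create a-to-b \
      --network=vpc-a --peer-network=vpc-b
    gcloud compute networks peerings create b-to-a \
      --network=vpc-b --peer-network=vpc-a

    # Allow cluster-b's node/pod ranges to reach Ambassador in cluster-a.
    gcloud compute firewall-rules create allow-ambassador-from-b \
      --network=vpc-a --direction=INGRESS --action=ALLOW \
      --rules=tcp:80,tcp:443 \
      --source-ranges=10.8.0.0/14    # placeholder: cluster-b's CIDRs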
We have 2 Kubernetes clusters hosted in different data centers, and we're deploying the applications to both clusters. We have an external load balancer outside the clusters, but the load balancer only accepts static IPs. We don't have control over the clusters and can't provision a static IP. How can we go about this?
We've also tried Kong as an API gateway. We were able to create an upstream whose targets are the load-balanced application endpoints, with different weights, but this doesn't give us active/passive or active/failover behavior. Is there a way we can configure a Kong/nginx upstream to achieve this?
Consider using HAProxy, where you can configure your passive cluster as a backup upstream to get an active/passive setup working, as mentioned in this nice guide about HAProxy:
backup meaning it won’t participate in the load balance unless both the nodes above have failed their health check (more on that later). This configuration is referred to as active-passive since the backup node is just sitting there passively doing nothing. This enables you to economize by having the same backup system for different application servers.
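A minimal sketch of what that looks like in haproxy.cfg; the hostnames, port, and health-check path are placeholders for your two clusters' ingress endpoints:

    frontend apps
        bind *:80
        mode http
        default_backend k8s_clusters

    backend k8s_clusters
        mode http
        option httpchk GET /healthz
        server cluster_a cluster-a.example.com:80 check
        # "backup": only takes traffic once every non-backup server fails its check
        server cluster_b cluster-b.example.com:80 check backup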
Hope it helps!
I am currently trying to build a highly available Kubernetes cluster with 5 worker nodes and 3 masters on on-premise servers. I learned how to implement a high-availability cluster from the documentation, and I understand how an HA cluster is implemented on AWS or Azure using the load balancer functionality of the respective cloud provider.
My confusion is: when creating the same high-availability Kubernetes cluster on my on-premise servers, how can I get equivalent load balancer functionality?
You can use keepalived to set up the load balancer for the masters in your on-premise setup. The keepalived daemon can be used to monitor services or systems and to automatically fail over to a standby if problems occur. One master acts as the active server, and the other two masters stay in backup mode.
I have written a blog post on how to set up a highly available Kubernetes cluster on premise. You can find it at the link below:
https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm
I used keepalived to set up the load balancer in the blog post above on my on-premise cluster.
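For illustration, a minimal keepalived.conf sketch for the active master; the interface, VIP, and password are placeholders. The other two masters run the same block with state BACKUP and lower priorities, and clients/kubeadm target the VIP:

    vrrp_instance K8S_APISERVER {
        state MASTER             # BACKUP on the other two masters
        interface eth0
        virtual_router_id 51
        priority 150             # e.g. 100 and 50 on the backups
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme   # placeholder secret
        }
        virtual_ipaddress {
            192.168.1.100        # the VIP the API server is reached on
        }
    }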
The way we do it is to put a cluster of load balancers (simple nginx reverse proxies in HA) in front of the K8s API and in front of the Ingress.
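For the API-server side, that can be as simple as an nginx stream proxy (IPs below are placeholders); the same pattern works for ingress traffic on 80/443, and you'd run at least two of these behind a VIP for HA:

    stream {
        upstream kube_apiserver {
            server 10.0.0.11:6443;    # master 1
            server 10.0.0.12:6443;    # master 2
            server 10.0.0.13:6443;    # master 3
        }
        server {
            listen 6443;
            proxy_pass kube_apiserver;
        }
    }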
My company mostly specializes in web and mobile development. Some of our clients want their backend or web applications hosted and managed by us, and because of that we have several apps and servers to manage. I'm looking for a solution to have all these servers under one panel and, most of all, to deploy all these applications in high availability. Moreover, we have servers at many different cloud providers, and it would be nice if it were possible to use them.
I've already found and tested a few solutions. Maybe someone has had the same problem and found a better solution, or maybe you can advise which of these is the best?
1. Rancher + DNS Round Robin
This would mean setting up Rancher in HA mode using Cattle or Kubernetes, then setting up a few hosts just for load balancers and achieving HA by using DNS round robin: put the IPs of all load balancers in the DNS records of every web application (see the sketch after the cons below).
Pros:
Easy to set up
Multiple environments. One panel to administer development and production infrastructure.
No single point of failure
Very cheap
Cons:
Leaves failover to the client-side application
Not reliable
When one node is down, some clients see high response times (they need to wait for the request to time out)
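For clarity, the DNS round robin is just multiple A records on the same name (placeholder name and IPs). Resolvers rotate the order, but a dead load balancer's IP stays in rotation until the record is removed or its TTL expires, which is where the timeouts above come from:

    ; one A record per load balancer host
    app.example.com.  300  IN  A  203.0.113.10
    app.example.com.  300  IN  A  203.0.113.11
    app.example.com.  300  IN  A  203.0.113.12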
2. Rancher + Cloudflare Load Balancer
As before, set up Rancher in HA mode using Cattle or Kubernetes, then set up a few hosts just for load balancers and achieve HA by using a Cloudflare load balancer pointing at the Rancher nodes used as load balancers.
Pros:
Easy to set up
Multiple environments. One panel to administer development and production infrastructure.
Theoretically, the Cloudflare LB has a 100% SLA
Cons:
The biggest problem is that the Cloudflare LB uses DNS records for load balancing, so our clients would need to redirect their domains to our DNS servers on Cloudflare or add a CNAME record for our domain. Neither is an ideal solution :/ A CNAME would be bad for SEO, I think.
With many domains and many requests it can get expensive.
Notes: I've tested this solution and it works quite well; after shutting down a node with a load balancer, or with the application, downtime was about 20-60 s, i.e. just the time needed to spin up a new container.
3. Rancher + Floating IP + keepalived
As before, set up Rancher in HA mode using Cattle or Kubernetes and set up a few hosts just for load balancers. Then set up keepalived and a (DigitalOcean) floating IP for the load balancer nodes.
[Diagram: DigitalOcean floating IP failover]
Pros:
Easy to set up
Multiple environments. One panel to administer development and production infrastructure.
No single point of failure
Cons:
Load balancer nodes need to be on DigitalOcean
4. Kubernetes on Google Cloud Platform with Kubernetes Engine
Setting up Kubernetes in HA mode on GCP.
Pros:
Super easy to set up on GCP, just one click
Cons:
I couldn't find an SLA for GCP load balancers, but it's probably a single point of failure and the SLA is not 100%
We would be tied to one cloud provider by this Kubernetes cluster
Having an LB for every application, even a small one, could get expensive
Worse web panel than Rancher's
5. Rancher 2.0: use any of the above depending on the environment
With Rancher 2.0 we could use any of the above solutions, since it allows adding existing Kubernetes clusters to Rancher. So it would work with Kubernetes Engine on GCP. However, it's in alpha and doesn't have an HA deployment yet.
Mostly I'm thinking about setting up option 3, then switching to Rancher 2.0 once it's released, and using GCP with Kubernetes Engine for larger applications. Does someone have a better solution? Or should I use one of the other solutions listed?