Can two load balancers have the same SSL endpoint and certificate? (deployment)

One of my applications is running behind a load balancer on a server in the east region. I have created a replica of the same application and deployed it on a server in the west region.
My question is: can I achieve high availability using two load balancers?
Something like:
Application running in EAST region behind load balancer LB-1 (primary).
If we shut down the above, then the application running in WEST region behind LB-2 should become active.
My thoughts:
Replication of code on deployment: write a Jenkins script that triggers a deploy command to deploy the app to the WEST region whenever a deployment is done in the EAST region.
Checking the health of the primary server/application: write a cron job that checks whether the server in the EAST region is down.
If it is down, then:
a. Using the load balancer PATCH API, remove the mapping of the load balancer in the EAST region.
b. Using the load balancer PATCH API, update the mapping of the load balancer in the WEST region [to match the previous EAST region mappings].
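The cron-driven check in step 2 could be sketched roughly as below. The health-check URL, PATCH endpoints, and request bodies are all hypothetical placeholders, not any particular provider's API; the one real design point is debouncing, so a single transient timeout doesn't flip traffic to the west region.

```python
# Sketch of the cron health check + failover described above.
# All URLs and payloads are hypothetical -- substitute your load
# balancer provider's actual PATCH API.
import urllib.error
import urllib.request

PRIMARY_HEALTH_URL = "https://app.east.example.com/health"  # hypothetical
FAILURE_THRESHOLD = 3  # consecutive failures before failing over


def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False


def should_failover(results: list, threshold: int = FAILURE_THRESHOLD) -> bool:
    """Fail over only after `threshold` consecutive failed checks,
    so one transient timeout doesn't trigger a region switch."""
    if len(results) < threshold:
        return False
    return not any(results[-threshold:])


def failover() -> None:
    """Steps (a) and (b): clear the EAST mapping, promote the WEST one."""
    # Hypothetical PATCH calls -- replace with your provider's real API.
    for url, body in [
        ("https://api.example.com/loadbalancers/lb-east", b'{"mappings": []}'),
        ("https://api.example.com/loadbalancers/lb-west",
         b'{"mappings": ["app.example.com"]}'),
    ]:
        req = urllib.request.Request(
            url, data=body, method="PATCH",
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
```

A cron entry would then run the check every minute, append the result to a small state file, and call `failover()` once `should_failover` returns true.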
Are these feasible?

Note that each dedicated load balancer has a unique DNS host name in CloudHub, and the certificate subject's common name attribute must match that host name to avoid SSL/TLS validation errors in the clients.
If you intend to fail over transparently for the clients, meaning that the next requests go through LB-2, then you should have a DNS CNAME record that matches LB-1 and repoint it to LB-2. If you don't have a DNS CNAME record to point at the other dedicated load balancer, then you need to change the clients' URL to point to LB-2, and you need to be sure that the certificate has a Subject Alternative Name with LB-2's host name, so the certificate is valid for both.
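A quick way to sanity-check whether a certificate's SAN list covers both load balancer host names is a small matching helper. The wildcard rule below (one `*` covering exactly one leftmost label) mimics the common browser behavior and is a simplification of RFC 6125, not a full implementation; the host names are made-up examples.

```python
# Check that every LB host name is covered by some SAN entry.
# Simplified wildcard matching: "*" may only be the leftmost label
# and covers exactly one label (so *.example.com does NOT match
# a.b.example.com).
def san_matches(hostname: str, san: str) -> bool:
    if san.startswith("*."):
        suffix = san[1:]  # e.g. ".example.com"
        return (hostname.count(".") == san.count(".")
                and hostname.lower().endswith(suffix.lower()))
    return hostname.lower() == san.lower()


def uncovered(hostnames, sans):
    """Return the host names NOT covered by any SAN entry."""
    return [h for h in hostnames
            if not any(san_matches(h, s) for s in sans)]
```

If `uncovered(["lb-1.example.com", "lb-2.example.com"], sans)` returns a non-empty list, clients will see TLS validation errors after failing over to the listed hosts.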

Related

Best way to use DNS and ephemeral OCI load balancer IP addresses

We host a number of services in OCI using Kubernetes (OKE) Ingress on private subnets through OCI load balancers (K8s managed). We often need a DNS record to point to the load balancer's floating address.
Inevitably we tear down or rebuild these ingresses when things change. Our problem is that we have no control over the private IPs assigned to these load balancer instances, and thus have to re-point DNS each time, which takes much longer than the Kubernetes deployment.
From what we can see, OCI just picks the next free IP in the subnet range. I've searched through the documentation, but I see no way of reserving internal IP addresses other than for instance VNICs, which don't apply here from what I can see.
On premises we would reserve an IP in the private range to avoid this problem.
What's the best way to deal with this in OCI?
Thanks.
You can try to set the private IP of the load balancer as static; however, it is a feature supported by the REST API only.
You can refer to: https://docs.oracle.com/en/cloud/paas/java-cloud/jscug/stop-start-and-restart-oracle-java-cloud-service-instance-and-individual-nodes.html#GUID-B663B1CE-1B99-40FF-9CDA-9BC76E43134E
https://docs.oracle.com/en/cloud/paas/java-cloud/jsrmr/op-paas-api-v1.1-instancemgmt-identitydomainid-services-jaas-instances-serviceid-hosts-command-post.html
Thank you,
Hemanth.
We create 2 load balancers: the 1st has the DNS pointing to it and has a single backend set pointing to the 2nd load balancer, which is integrated more closely with Kubernetes. Assuming that in your scenario you'd only need to tear down and rebuild the 2nd load balancer, you could then update the backend set of the 1st load balancer to point at the rebuilt one.
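The "repoint the 1st load balancer" step in the pattern above amounts to one backend-set update after each rebuild. The sketch below builds the request body in the shape of the OCI Load Balancer UpdateBackendSet API as I understand it; verify the field names, the health-checker settings, and the endpoint path against the current OCI docs before relying on it, and treat the IP/port values as placeholders.

```python
# Build an UpdateBackendSet request body pointing the stable (1st)
# load balancer at the new private IP of the rebuilt (2nd) one.
# Field names follow my reading of the OCI Load Balancer API --
# double-check against the official reference.
def build_update_backend_set(new_ip: str, port: int = 443) -> dict:
    return {
        "policy": "ROUND_ROBIN",
        "backends": [
            {"ipAddress": new_ip, "port": port, "weight": 1},
        ],
        "healthChecker": {"protocol": "TCP", "port": port},
    }


# This dict would then be PUT to something like
#   /loadBalancers/{loadBalancerId}/backendSets/{backendSetName}
# (or passed to the OCI Python SDK's update_backend_set) -- path from
# memory, so confirm it in the docs.
payload = build_update_backend_set("10.0.1.25")  # hypothetical new IP
```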

kops kubernetes cluster with multiple DNS

There are two parts to this.
I am using kops v1.17.0 to stand up a Kubernetes cluster on EC2 instances. I am following these docs for doing so: https://kubernetes.io/docs/setup/production-environment/tools/kops/
One of the points goes as follows:
kops has a strong opinion on the cluster name: it should be a valid
DNS name.
This got me confused. Can my cluster serve requests for only one DNS name and its subdomains?
I tried this on a domain, example.com. I created a hosted zone for it and created a cluster named example.com.k8s.local.
I pointed this domain to my cluster's load balancer, and I can access example.com. All good till now.
Now I want one of the services in my cluster to be served on abc.com. I created another hosted zone and a new record set within it that points to this load balancer. I expected to visit abc.com and see this service, but all I see is nginx's 404 Not Found.
Is this happening because of the first point I mentioned, or is it a totally separate issue? If it is because of the first point, is there a way around it, or is one cluster always tied to one domain in the kops world?
As far as the first part is concerned: yes, I can serve multiple domains from the same Kubernetes cluster with this setup. Up to a certain version there was a hard requirement that the domain name match the cluster name; that is not the case anymore.
There are a couple of things you need to consider. While issuing a certificate from ACM, make sure all your domains are listed, for example:
example.com
*.example.com
bar.com
*.bar.com
Make sure that all of the domains are validated and are not in pending or any other state.
I think the reason for the second issue was that one of the domains in my certificate generated by ACM was in an invalid state and thus stuck pending.
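A small helper can flag exactly that situation: a domain on the certificate that never finished validation. The input mirrors the shape returned by `aws acm describe-certificate` (a `Certificate` object with `DomainValidationOptions`), to the best of my knowledge; the sample data is made up.

```python
# Return the domains on an ACM certificate that are not fully
# validated (e.g. stuck in PENDING_VALIDATION or FAILED).
# Input shape follows `aws acm describe-certificate` output as I
# recall it -- verify against the ACM docs.
def unvalidated_domains(certificate: dict) -> list:
    return [
        opt["DomainName"]
        for opt in certificate.get("DomainValidationOptions", [])
        if opt.get("ValidationStatus") != "SUCCESS"
    ]


cert = {  # sample describe-certificate fragment (fabricated values)
    "DomainValidationOptions": [
        {"DomainName": "example.com", "ValidationStatus": "SUCCESS"},
        {"DomainName": "*.bar.com", "ValidationStatus": "PENDING_VALIDATION"},
    ]
}
```

Any domain this returns is one that browsers will reject when the certificate is served for it, which matches the nginx-fallback 404 symptom described above.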

How to set up manual failover (PROD to DR) in IKS?

We have two IBM Kubernetes clusters; whenever an issue happens in one cluster, we need to fail over to DR. Can anyone tell me how to do that automatically? Both clusters are in two different zones, Montreal and Toronto. We also have IBM Cloud Internet Services.
You could use the CIS Global Load Balancer offering to set up a globally load-balanced and health-checked URL for your applications. You'd create a GLB for the domain app.mydomain.com/app_path, for example, and then back it with the VIPs for your cluster ALBs in the same origin pool. Configure a health check at the GLB so traffic will be sent only to the endpoints that are healthy.
CIS GLB docs are covered at https://cloud.ibm.com/docs/cis?topic=cis-global-load-balancer-glb-concepts

bare metal kubernetes best practice to externally load balance between services

BACKGROUND
We have a bare-metal Kubernetes cluster, including master01, master02, master03, worker01, ..., worker10. We expect to reach services in the cluster using our domain name company.com. It is possible to assign a public IP to each node, with bandwidth between 1 Mbps and 100 Mbps (the price increases exponentially). As the cluster is not in a public cloud like GCE/AWS, no external load balancer exists.
PROBLEM
I have struggled for a week over the best practice for accessing the services in the cluster at company.com from the Internet. By best practice here I mean load balancing among nodes with minimal public bandwidth expense. Here are the methods I came up with:
1) Assign 1 Mbps to all the nodes in the cluster and buy another machine, named balancer. Assign it 100 Mbps of bandwidth and make the domain company.com point to it. Deploy an nginx service on balancer that proxies all the traffic from the Internet to worker01, ..., worker10;
2) Assign 1 Mbps to all the nodes in the cluster except one of them, say worker01, which gets 100 Mbps. Point company.com to it;
3) Assign 10 Mbps to all the worker nodes and point company.com to all of them, letting DNS do the load-balancing job;
4) Maybe use MetalLB. It seems to be a good solution, but I am quite confused: as it is deployed inside Kubernetes, how does it differ from ingress in my situation? Moreover, as far as I understand, it does not support subdomain load balancing, i.e., assigning a subdomain name to each service the way ingress does.
Which one do you think is the best solution? Maybe there are other methods too.
FYI, we have deployed an ingress in the cluster, so all the services are accessed through it.
Thank you in advance.
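For reference, option 1) could look something like the nginx config below on the 100 Mbps balancer box. The worker IPs and the assumption that the in-cluster ingress listens on port 80 of every worker (e.g. via a NodePort or hostPort) are placeholders; adjust to the actual setup.

```nginx
# Sketch of option 1): a single nginx box proxying to the worker
# nodes, which run the cluster's ingress. IPs/ports are examples.
upstream k8s_workers {
    server 10.0.0.11:80 max_fails=3 fail_timeout=10s;  # worker01
    server 10.0.0.12:80 max_fails=3 fail_timeout=10s;  # worker02
    # ... remaining workers
}

server {
    listen 80;
    server_name company.com *.company.com;  # subdomains still routed by the ingress

    location / {
        proxy_pass http://k8s_workers;
        proxy_set_header Host $host;             # preserve host so the ingress can route by subdomain
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note that because the `Host` header is passed through, per-subdomain routing stays the ingress's job; nginx here only spreads connections across workers and ejects unhealthy ones via `max_fails`.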

Assign external ip to kubernetes pod

Context:
We're working on an integration with one of our clients
In order to get access to their systems, we need to establish a VPN connection
For security reasons, we need to bind this VPN connection to a static IP on our side (basically a layer-4 security check enforced by a Juniper router; we use OpenSwan to connect to it).
To do that, we must connect from that IP; that is, we need to establish a socket connection where the source IP corresponds to that static IP from the router's perspective (and, of course, that routes back to our pod successfully)
The client's side has very limited resources ops-wise, so this security hoop is the only way to connect to their systems
Meanwhile, our current system runs on (AWS) Kubernetes, which:
Is made of transient pods and transient nodes, with shifting IPs
Can assign an ExternalIP to a service (which, in turn, can route it to a pod); however, that by default makes no guarantees about the originating IP of traffic initiated by that pod
For this reason, we set up an external box and assigned an Elastic IP to it as a binding for the VPN, exposing endpoints and calling our Kubernetes services. This introduces a single point of failure: if that box goes down, so does our integration.
Question: in what ways can this be made HA within the Kubernetes world, given the constraints in the first list above?