Best way to use DNS and ephemeral OCI load balancer IP addresses - kubernetes

We host a number of services in OCI using Kubernetes (OKE) Ingress on private subnets through OCI Load Balancers (K8s managed). We often need a DNS record to point to the load balancer's floating address.
Inevitably we tear down or rebuild these ingresses when things change. Our problem is that we have no control over the private IPs assigned to these load balancer instances, and thus have to re-point DNS each time, which takes much longer than the Kubernetes deployment itself.
From what we can see, OCI just picks the next free IP in the subnet range. I've searched through the documentation, but I see no way of reserving internal IP addresses other than for instance VNICs, which don't apply here from what I can see.
On premise we would reserve an IP in the private range to avoid this problem.
What's the best way to deal with this in OCI?
Thanks.

You can try to set the private IP of the load balancer as static; however, this is a feature supported through the REST API only.
You can refer to these:
https://docs.oracle.com/en/cloud/paas/java-cloud/jscug/stop-start-and-restart-oracle-java-cloud-service-instance-and-individual-nodes.html#GUID-B663B1CE-1B99-40FF-9CDA-9BC76E43134E
https://docs.oracle.com/en/cloud/paas/java-cloud/jsrmr/op-paas-api-v1.1-instancemgmt-identitydomainid-services-jaas-instances-serviceid-hosts-command-post.html
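If the load balancer is provisioned by a Kubernetes Service, one thing worth testing first (a sketch assuming OCI honors it for private load balancers, which I haven't confirmed) is the core `spec.loadBalancerIP` field, which some cloud providers honor:

```yaml
# Sketch, not confirmed for OCI private LBs: request a specific address
# via the core spec.loadBalancerIP field, which some providers honor.
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-lb            # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.10.25     # placeholder IP from your private subnet
  selector:
    app: my-app                  # hypothetical selector
  ports:
    - port: 443
      targetPort: 8443
```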
Thank you,
Hemanth.

We create 2 load balancers: the 1st has the DNS pointing to it and has a single backend set pointing to the 2nd load balancer, which is more tightly integrated with Kubernetes. Assuming that in your scenario you'd only need to tear down and rebuild the 2nd load balancer, you could then go and update the backend set of the 1st load balancer.
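A rough sketch of what that 2nd, Kubernetes-managed load balancer could look like, assuming an internal LB via the annotation the OKE docs describe (names and selectors are placeholders):

```yaml
# Sketch of the 2nd (Kubernetes-managed) load balancer: an internal LB
# Service that the 1st, DNS-facing load balancer forwards to.
# The internal-LB annotation is the one OKE documents; names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: inner-lb                 # hypothetical name
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx           # hypothetical: your ingress controller pods
  ports:
    - port: 443
      targetPort: 443
```

When the 2nd load balancer is rebuilt and receives a new private IP, only the 1st load balancer's backend set has to be re-pointed; the DNS record never changes.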

Related

How to keep IP address of a pod static after pod dies

I am new to learning Kubernetes, and I understand that pods have dynamic IPs and require some other "service" resource attached to them to use a fixed IP address. What service do I require, and what is the process of configuring it? And how does AWS ECR fit into all this?
So if I have to communicate from a container in a pod to google.com, can I assume my source is the IP address of the "service" when establishing the connection?
Well, for example on Azure, this feature ([Feature Request] Pod Static IP) is still under request:
See https://github.com/Azure/AKS/issues/2189
Also, as far as I know, you can currently assign an existing IP address to a load balancer service or an ingress controller:
See https://learn.microsoft.com/en-us/azure/aks/static-ip
By default, the public IP address assigned to a load balancer resource created by an AKS cluster is only valid for the lifespan of that resource. If you delete the Kubernetes service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address.
As you said, we need to define a service which selects all the required pods; you would then send requests to this service instead of to the pods.
I would suggest going through https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types.
The type of service you need basically depends on the use case.
I will give a small overview so you get an idea:
- ClusterIP is usually used when pods only receive internal requests.
- NodePort allows external requests but is basically used for testing, not for production.
- If you also have requests coming from outside the cluster, you would usually use a LoadBalancer.
- Then there is another option: Ingress.
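To make that concrete, here is a minimal sketch of a Service (all names are placeholders); switching the `type` field is what moves you between the options above:

```yaml
# Minimal sketch: a ClusterIP Service fronting all pods labeled app=web.
# Switching `type` to NodePort or LoadBalancer changes how it is exposed.
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # hypothetical name
spec:
  type: ClusterIP      # default; NodePort / LoadBalancer for external access
  selector:
    app: web           # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
```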
As for AWS ECR, it's basically a container registry where you store your Docker images and pull them from.

How to automatically update the Service `spec.externalIPs` when a Kubernetes worker is drained/down?

I'm hosting a Kubernetes cluster on VMs/VPSes from a random cloud provider that doesn't provide any Kubernetes offering at all. Each worker node has a dedicated public IP address, and to allow traffic to reach the worker nodes, I'm defining my Service with `spec.externalIPs` set to the fixed list of those IP addresses.
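For illustration, such a Service looks roughly like this (placeholder addresses):

```yaml
# Sketch of the current setup: a Service reachable on the nodes' fixed
# public IPs via spec.externalIPs. Addresses are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical name
spec:
  selector:
    app: web           # hypothetical selector
  ports:
    - port: 80
  externalIPs:
    - 203.0.113.10     # worker node 1
    - 203.0.113.11     # worker node 2
```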
I'm looking for a way to get that list updated automatically when a node is drained/down.
I had a look at the existing operators from https://operatorhub.io/ but I haven't found any that seem to cover my use case.
The idea would be that when the event of a node passing to NotReady is emitted, the Service is updated to list only the nodes that are Ready.
Is there any operator that could allow doing that?
After some time working on this, I finally figured out that this is not possible, at least today; there's no known operator or anything of the sort that could update the field with the IP addresses.
And even if there were, there would still be delays in updating the DNS records.
What I've done instead is to buy another VPS and install HAProxy on it, in order to proxy the Kubernetes API traffic to the master nodes, and the web traffic (both 80 and 443) to the Kubernetes worker nodes.
HAProxy monitors the nodes and adds/removes them automagically and very quickly.
With this, you just need one DNS record pointing to the load balancer (or to a VIP shared by several load balancers, to avoid a SPOF), and HAProxy will do the rest!
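For reference, a minimal sketch of such an haproxy.cfg (this is HAProxy's own config syntax rather than Kubernetes YAML; all hostnames and addresses are placeholders). Omitting the port on the worker server lines makes HAProxy forward to the same port the client connected to, so one backend can carry both 80 and 443:

```
# Sketch of haproxy.cfg on the extra VPS (placeholder IPs): TCP-proxy the
# Kubernetes API to the masters and web traffic to the workers, with health
# checks so drained/down nodes drop out of rotation automatically.
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s_api
    bind *:6443
    default_backend masters

backend masters
    balance roundrobin
    server master01 10.0.0.11:6443 check
    server master02 10.0.0.12:6443 check
    server master03 10.0.0.13:6443 check

frontend web
    bind *:80
    bind *:443
    default_backend workers

backend workers
    balance roundrobin
    # no port on the server lines: HAProxy reuses the client's destination port
    server worker01 10.0.0.21 check port 80
    server worker02 10.0.0.22 check port 80
```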

Having 1 outgoing IP for Kubernetes egress traffic

Current set-up
Cluster specs: Managed Kubernetes on DigitalOcean
Goal
My pods are accessing some websites but I want to use a proxy first.
Problem
The proxy I need to use only accepts 1 IP address in its "allow-list".
My cluster uses multiple nodes with the node autoscaler, so I have multiple, changing IP addresses.
Solutions I am thinking about
Setting up a proxy (Squid? nginx?) outside of the cluster (currently not working when I access an HTTPS website)
Istio could let me set up a gateway? (No knowledge of Istio)
Using GCP managed K8s and following the answers on Kubernetes cluster outgoing traffic IP. But all our stack is on DigitalOcean, and the pricing is better there.
I am curious to know the best practice or the easiest solution, and whether anyone has dealt with such a use case before :)
Best
You could set up all your traffic to go through istio-egressgateway.
Then you could manipulate the istio-egressgateway to always be deployed on the same node of the cluster, and whitelist that IP address.
Pros: super easy. BUT: if you are not using Istio already, setting up Istio just for this may be killing a mosquito with a bazooka.
Cons: you need to make sure the node doesn't change its IP address. Otherwise the istio-egressgateway itself might not get deployed (if the new node lacks the required labels), and you will need to reconfigure everything for the new node (new IP address). Another con is that if traffic goes up and there is an HPA, it will deploy more replicas of the gateway, and all of them will be deployed on the same node. So if you are going to have lots of traffic, maybe it would be a good idea to isolate one node just for this purpose.
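For illustration, pinning the gateway could be done with an IstioOperator overlay like this sketch (the node label is hypothetical and must be applied to your chosen node first, e.g. `kubectl label node <node> egress=true`):

```yaml
# Sketch (IstioOperator overlay): pin the egress gateway to one labeled
# node so all egress traffic leaves from that node's IP.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
      - name: istio-egressgateway
        enabled: true
        k8s:
          nodeSelector:
            egress: "true"   # hypothetical node label
```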
Another option would be, as you are suggesting, a proxy. I would recommend an Envoy proxy directly. I mean, Istio is going to be using Envoy anyway, right? So just get the proxy directly, put it in a pod, and do the same thing as I mentioned before: node affinity, so it will always run on the same node and go out with the same IP.
Pros: you are not installing an entire service mesh control plane for one tiny thing.
Cons: same as before, as you still have the issue of the node IP changing if something goes wrong, plus you will need to manage your own Deployment object, HPA, Envoy proxy configuration, etc., instead of using Istio objects (like Gateway and VirtualService).
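A minimal Envoy bootstrap for such a pod might look like this sketch (it simply TCP-forwards everything to the external allow-listed proxy; the upstream address is a placeholder), combined with the same nodeSelector pinning on its Deployment:

```yaml
# Sketch of a minimal envoy.yaml: listen on 3128 and TCP-forward everything
# to the external allow-listing proxy. Upstream address is a placeholder.
static_resources:
  listeners:
    - name: egress
      address:
        socket_address: { address: 0.0.0.0, port_value: 3128 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.tcp_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                stat_prefix: egress
                cluster: upstream_proxy
  clusters:
    - name: upstream_proxy
      connect_timeout: 5s
      type: STRICT_DNS
      load_assignment:
        cluster_name: upstream_proxy
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: proxy.example.com, port_value: 3128 }
```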
Finally, I see a third option: set up a NAT gateway outside the cluster and configure your traffic to go through it.
Pros: you won't have to configure any Kubernetes object, therefore there is no need for node affinity, and thus no node overwhelming or IP change. Plus you can remove the external IP addresses from your cluster, so it will be more secure (unless you have other workloads that need to reach the internet directly). Also, a single node configured as a NAT gateway will probably be more resilient than a Kubernetes pod running on a node.
Cons: maybe a little more complicated to set up?
And there is this general con: since you can whitelist only 1 IP address, you will always have a single point of failure. Even a NAT gateway can still fail.
The GCP static IP won't help you. What the other post is suggesting is to reserve an IP address so you can always re-use it. But it's not as if that IP address will automatically be added to a random node when another goes down; human intervention is needed. I don't think you can have one specific node hold a static IP address such that, if it goes down, the newly created node picks up the same IP. That service, to my knowledge, doesn't exist.
Now, GCP does offer a very resilient NAT gateway. It is managed by Google, so it shouldn't fail. Not cheap, though.

Bare metal Kubernetes best practice to externally load balance between services

BACKGROUND
We have a bare metal Kubernetes cluster, including master01, master02, master03, worker01, ..., worker10. We want to reach services in the cluster using our domain name company.com. It is possible to assign a public IP to each node, with bandwidth between 1 Mbps and 100 Mbps (the price increases exponentially). As the cluster is not in a public cloud like GCE/AWS, no external load balancer exists.
PROBLEM
I have struggled for a week over the best practices for accessing the services in the cluster via company.com from the Internet. By best practices here I mean load balancing among nodes with minimal public bandwidth expenses. Here are the methods I came up with:
1) Assign 1 Mbps to all the nodes in the cluster and buy another machine named balancer. Assign it 100 Mbps bandwidth and make the domain company.com point to it. Deploy an nginx service on balancer which proxies all the traffic from the Internet to worker01, ..., worker10;
2) Assign 1 Mbps to all the nodes in the cluster except one of them, say worker01, which gets 100 Mbps bandwidth. Point company.com to it;
3) Assign 10 Mbps to all the worker nodes and point company.com to all of them, letting DNS do the load balancing job;
4) Maybe use MetalLB. It seems to be a good solution, but I am quite confused. As it is deployed inside Kubernetes, how does it differ from Ingress in my situation? And moreover, as far as I understand, it does not support subdomain load balancing, i.e., assigning a subdomain name to each service like Ingress does (see the config sketch after this list).
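For reference, a MetalLB configuration is roughly the sketch below (the current CRD-based style; the address range is a placeholder). The key difference from Ingress: MetalLB only assigns and announces an IP for Services of type LoadBalancer; per-subdomain routing remains the job of the Ingress running behind it:

```yaml
# Sketch: MetalLB hands out IPs from this pool to Services of type
# LoadBalancer and announces them via L2. It does L3/L4 address assignment;
# host/path/subdomain routing stays the job of your Ingress behind it.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool            # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.240-203.0.113.250   # placeholder public range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-pool
```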
Which one do you think is the best solution? Maybe there are other methods too.
FYI, we have deployed ingress in the cluster. So all the services are accessed through it.
Thank you in advance.

Kubernetes on GKE / why is the use of a load balancer enforced?

Made my way into Kubernetes through GKE, currently trying it out via kubeadm on bare metal.
In the latter environment, there is no need for any specific load balancer; using nginx-ingress and Ingresses lets one serve services to the web.
Conversely, on GKE, using the same nginx-ingress, or using the GKE-provided L7, you always end up with a billed load balancer.
What's the reason for that, as it seems not to be ultimately needed?
(Reposting my comment above)
In general, when one is receiving traffic from the outside world, that traffic is being sent to one or more non-ACLd public IP addresses.
If you run k8s on bare metal, those machines can have public IPs, and you can just run ingress on one or more of them.
A managed k8s environment, however, for security reasons, will not permit nodes to have public IPs.
Instead, managed load balancers are allowed to have public IPs. Those are configured to know the private node IPs hosting ingress for your cluster and will direct traffic accordingly.
Kubernetes Services have a few types, each building on the previous one: ClusterIP, NodePort, and LoadBalancer. Only the last one will provision a load balancer in a cloud environment, so you can avoid it on GKE without fuss.
The question is, what then? In the best case you end up with an Ingress (I assume we expose ingress, as in your question) that is available on volatile IPs (nodes can be rolled at any time and new ones will get new IPs) and on the high ports given by a NodePort service. Meaning that not only do you have no fixed IP to use, but you would also need to open something like http://<node-ip>:31978, which obviously is crap.
Hence, in the cloud, you have a simple solution of putting a cloud load balancer in front of it with the LoadBalancer service type. This LB will ingest the traffic on ports 80/443 and forward it to the correct backing service/pods.
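For completeness, a sketch of that LoadBalancer Service fronting an ingress controller (labels and names are placeholders):

```yaml
# Sketch: the LoadBalancer Service that fronts the ingress controller on
# GKE, giving it a stable public IP on 80/443. Labels are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx          # hypothetical name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # hypothetical controller label
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```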