Route traffic in Kubernetes based on IP address

We need to test a pod on our production Kubernetes cluster after a data migration, before we expose it to our users. What we'd like to do is route traffic from our internal IP addresses to the correct pods, and all other traffic to a maintenance pod. Is there a way we can achieve this, or do we need to briefly expose an IP address on the cluster so we can access the pods directly?

What ingress controller are you using?
Ingress controllers generally support header-based routing rather than source-IP-based routing.
For example, ingress-nginx supports header-based canary deployments out of the box.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary.
There's a good example here:
https://medium.com/@domi.stoehr/canary-deployments-on-kubernetes-without-service-mesh-425b7e4cc862
Note that the ingress-nginx canary implementation actually requires your services to be in different namespaces.
You could also try configuring ingress-nginx with use-forwarded-headers and employ the X-Forwarded-For header for routing.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers
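As a rough sketch of the header-based canary approach (the hostname, service name, and header name below are hypothetical, not from the question), a second Ingress pointing at the pods under test might look like:

```yaml
# Hypothetical canary Ingress: requests carrying the header
# "X-Canary: always" are routed to the service under test; all
# other traffic keeps hitting the main Ingress for the same host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-under-test
                port:
                  number: 80
```

Internal testers would then send the X-Canary: always header (e.g. via a browser extension or curl -H) to reach the migrated pods, while everyone else lands on the default backend.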

Related

Is it double load-balancing when using MetalLB and a ClusterIP service?

I use MetalLB and Nginx-ingress controller to provide internet access to my apps.
I see that in most configurations, the service is set to ClusterIP, as the ingress will send traffic there.
My question is: does this end up with double load balancing, that is, one from MetalLB to my ingress, and another from my ingress to the pods via ClusterIP?
If so, is this how it is supposed to be, or is there a better way?
MetalLB doesn't receive and forward any traffic, so
"from MetalLB to my ingress"
doesn't really make sense. MetalLB just configures Kubernetes services with an external IP and tells your surrounding infrastructure where to find it. Still, with your setup there will be double load-balancing:
Traffic reaches your cluster and is load-balanced between your nginx pods. Nginx handles the request and forwards it to the application, which results in a second round of load-balancing.
But this makes total sense: if you're using an ingress controller, you don't want all incoming traffic to go through the same pod.
Using an ingress controller with MetalLB can be done and can improve stability while performing updates on your application, but it's not required.
MetalLB is a solution for implementing Kubernetes services of type LoadBalancer when there is no cloud provider to do that for you.
So if you don't need layer-7 load-balancing, then instead of using a service of type ClusterIP behind an ingress controller, you can just use a service of type LoadBalancer. MetalLB will give that service an external IP from your pool and announce it to its peers.
In that case, when traffic reaches the cluster it will only be load-balanced once.
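A minimal sketch of that single-hop setup (the service name, labels, and ports are hypothetical; this assumes MetalLB is installed with an address pool configured):

```yaml
# Hypothetical LoadBalancer Service: MetalLB assigns an external IP
# from its pool and announces it to its peers; kube-proxy then
# balances incoming traffic directly across the matching pods,
# so traffic is load-balanced only once.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```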

OpenShift/OKD, what is the difference between deployment, service, route, ingress?

Could you please explain the use of each OpenShift "Kind" in a few short sentences?
I understand that a Deployment contains data about the image source, pod counts, limits, etc.
With a Route we can determine the URL for a deployment, and the same seems true for an Ingress; what is the difference, and when should we use a Route and when an Ingress?
And what exactly is a Service used for?
Thanks for your help in advance!
Your question cannot be answered simply in short words or one-line answers, so go through the links and explore more.
Deployment: Used to declare and change the desired state of your pods. A Deployment manages one or more identical pods through ReplicaSets.
Service: A Service gives a set of pods a single, stable IP address and DNS name. The member pods may come and go, so the Service automatically connects clients to an appropriate pod; clients never need to know individual pod addresses.
Route: Similar to the Kubernetes Ingress resource, OpenShift's Route was developed with a few additional features, including the ability to split traffic between multiple backends.
Ingress: It offers routing rules for controlling external access to the services in a Kubernetes cluster.
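The traffic-splitting feature of a Route can be sketched like this (the host, service names, and weights are hypothetical):

```yaml
# Hypothetical OpenShift Route splitting traffic 80/20 between a
# stable service and a canary service for the same hostname.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: app.example.com
  to:
    kind: Service
    name: my-app
    weight: 80
  alternateBackends:
    - kind: Service
      name: my-app-canary
      weight: 20
```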
Difference between route and ingress?
OpenShift uses HAProxy to get (HTTP) traffic into the cluster. Other Kubernetes distributions use the NGINX Ingress Controller or something similar. You can find more in this doc.
When to use Route and when Ingress: it depends on your requirements; compare the feature sets of Route and Ingress and choose whichever matches them.
Exact use of Service:
Each pod in a Kubernetes cluster has its own unique IP address, but the IP addresses of the pods in a Deployment change as pods are replaced, so using pod IP addresses directly is impractical. With a Service you always have a consistent IP address, even as the member pods change.
A Service also provides load balancing: clients call a single, dependable IP address, and the Service distributes their requests evenly across its pods.
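A minimal sketch of that (the name, labels, and ports are hypothetical):

```yaml
# Hypothetical ClusterIP Service: a stable virtual IP and DNS name
# (my-app.default.svc.cluster.local) that load-balances across all
# pods whose labels match the selector, however often they change.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```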

GKE: ingres with sub-domain

I used to work with an OpenShift/OKD cluster deployed in AWS, where it was possible to connect the cluster to a domain name from Route53. Then, as soon as I deployed an ingress with some host mappings (where the hosts defined in the ingress were subdomains of the base domain), all the necessary LB rules (Routes in OpenShift) and the subdomain itself were created by OpenShift and were directly available. For example: OpenShift is connected to the domain "somedomain.com", which is registered in Route53. In the ingress I have a host mapping like:
hosts:
  - host: sub1.somedomain.com
    paths:
      - path
After deployment I can reach sub1.somedomain.com. Is this kind of functionality available in GKE?
So far I have only seen mapping to a static IP.
I also read here https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2 that if I need to connect a service to an ingress, the service has to be of type NodePort. Is it really so? In OpenShift this was not required; any normal ClusterIP service could be connected to an ingress.
Thanks in advance!
I think you should consider other Ingress Controllers for your use case.
I'm not an expert on GKE, but as far as I can see from Best practices for enterprise multi-tenancy,
you additionally need to consider how to route multiple Ingress hostnames through a wildcard subdomain, as OpenShift does.
Quoting "Set up HTTP(S) Load Balancing with Ingress":
You can create and configure an HTTP(S) load balancer by creating a Kubernetes Ingress resource,
which defines how traffic reaches your Services and how the traffic is routed to your tenant's application.
By registering Services with the Ingress resource, the Services' naming convention becomes consistent,
showing a single ingress, such as tenanta.example.com and tenantb.example.com.
The routing feature depends on the Ingress Controllers basically.
From what I can find, GKE's default Ingress Controller just creates a Google Cloud HTTP(S) Load Balancer, but it does not account for multi-tenancy by default the way OpenShift does.
In contrast, OpenShift's Ingress Controller is implemented using HAProxy with a dynamic configuration feature, roughly as follows:
LB -tenanta.example.com--> HAProxy(directly forward the tenanta.example.com traffic to the target pod IPs) ---> Target Pods
The type of service exposure depends on the K8s implementation of each cloud provider.
If the ingress controller is a component inside your cluster, a ClusterIP is enough to make your service reachable (internally, from inside the cluster itself).
If the ingress definition configures an external element (in the case of GKE, a load balancer), that element isn't part of the cluster and can't reach the ClusterIP (because it is only accessible internally). A NodePort is required in this case.
So, in your case, either you expose your service as a NodePort, or you configure GKE with another ingress controller, installed locally in the cluster, instead of the default one.
So far GKE does not provide the possibility to dynamically create subdomains. The ideal situation would be if a GKE cluster could be pointed at a DNS zone managed in GCP, with a mimic of OpenShift Routes driven by, for example, ingress annotations.
But the reality right now is that you have to create the subdomain or domain yourself, as well as the IP address which you connect this domain to. That particular GCP IP address can then be attached to an ingress by name using annotations, or used in a LoadBalancer service.
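Attaching a reserved address to an Ingress by name can be sketched like this (the address name and backend service are hypothetical; this assumes a global static IP was already reserved, e.g. with gcloud compute addresses create, and that the DNS record for the subdomain is created manually):

```yaml
# Hypothetical GKE Ingress pinned to a reserved global static IP
# by name; sub1.somedomain.com must still be pointed at that IP
# yourself in your DNS zone.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sub1-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "sub1-ip"
spec:
  rules:
    - host: sub1.somedomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sub1-service
                port:
                  number: 80
```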

How to expose Kubernetes DNS externally

Is it possible for an external DNS server to resolve against the K8s cluster DNS? I want applications residing outside of the cluster to be able to resolve the in-cluster DNS names.
It's possible, there's a good article proving the concept: https://blog.heptio.com/configuring-your-linux-host-to-resolve-a-local-kubernetes-clusters-service-urls-a8c7bdb212a7
However, I agree with Dan that exposing via service + ingress/ELB + external-dns is a common way to solve this. And for dev purposes I use https://github.com/txn2/kubefwd which also hacks name resolution.
Although it may be possible to expose coredns and thus forward requests to Kubernetes, the typical approach I've taken, in AWS, is to use the external-dns controller.
This syncs services and ingresses with providers like AWS. It comes with some caveats, but I've used it successfully in prod environments.
coredns will return cluster-internal IP addresses that are normally unreachable from outside the cluster. The correct answer is the deleted one by MichaelK, suggesting the coredns plugin k8s_external: https://coredns.io/plugins/k8s_external/
k8s_external is already part of coredns. Just edit the config with
kubectl -n kube-system edit configmap coredns and add k8s_external after the kubernetes directive, per the docs:
kubernetes cluster.local
k8s_external example.org
k8s_gateway also handles dns for ingress resources
https://coredns.io/explugins/k8s_gateway/
https://github.com/ori-edge/k8s_gateway (includes helm chart)
You'll also want something like metallb or rancher/klipper-lb handling services with type: LoadBalancer as k8s_gateway won't resolve NodePort services.
MichaelK is the author of k8s_gateway; I'm not sure why his reply was deleted by a moderator.
I've never done that, but technically this should be possible by exposing kube-dns service as NodePort. Then you should configure your external DNS server to forward queries for Kube DNS zone "cluster.local" (or any other you have in Kube) to kube-dns address and port.
In Bind that can be done like this:
zone "cluster.local" {
    type forward;
    forward only;
    forwarders { ANY_NODE_IP port NODEPORT_PORT; };
};
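The kube-dns NodePort step can be sketched as a Service spec fragment like this (the nodePort value is hypothetical, and this is only a sketch, not a tested patch; note DNS uses both UDP and TCP, so both ports should be exposed):

```yaml
# Hypothetical spec changes for the kube-dns Service in kube-system:
# switch it to NodePort so every node exposes cluster DNS on a fixed
# port that an external DNS server can forward to.
spec:
  type: NodePort
  ports:
    - name: dns
      port: 53
      protocol: UDP
      nodePort: 30053
    - name: dns-tcp
      port: 53
      protocol: TCP
      nodePort: 30053
```

With that in place, ANY_NODE_IP and NODEPORT_PORT in the Bind snippet above would be a node's address and 30053.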

How to use multi domain by one ingress on baremetal

I know how I can use one ingress for one domain, but if I have more than one domain like below, what should I do?
How should I handle DNS for the ingress? I do not want to hard-code the domains in ingress.yml.
The Ingress element on the diagram is the ingress controller, but nothing forbids creating individual Ingress resources for each route.
As an alternative solution, you can expose the service as a LoadBalancer and configure an external DNS service to route traffic to the Kubernetes LB Service. Check the ExternalDNS project for more information.
MetalLB and kube-router also could be useful for Bare-Metal/On-Premise K8s setup.
In my opinion, Helm/Ksonnet/Kustomize will help you with Ingress resource management too.
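The ExternalDNS route keeps domains out of ingress.yml entirely; a sketch (the hostname, labels, and service name are hypothetical, and this assumes an ExternalDNS deployment is watching your DNS provider and something like MetalLB assigns the external IP on bare metal):

```yaml
# Hypothetical LoadBalancer Service annotated for ExternalDNS:
# ExternalDNS watches the hostname annotation and creates the DNS
# record pointing at the Service's external IP automatically.
apiVersion: v1
kind: Service
metadata:
  name: app1
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app1.example.com
spec:
  type: LoadBalancer
  selector:
    app: app1
  ports:
    - port: 80
      targetPort: 8080
```

One such Service per domain lets DNS follow the cluster instead of being duplicated into each Ingress manifest.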