Route ingress traffic to a specific pod for some paths - Kubernetes

I have multiple pods that scale up and down automatically.
I am using an ingress as the entry point. I need to route external traffic to a specific pod based on some condition (let's say the path). At the point the request is made, I am sure the specific pod is up.
For example, let's say I have the domain someTest.com, which normally routes traffic to pods 1, 2 and 3 (let's say I identify them by internal IPs - 192.168.1.10, 192.168.1.11 and 192.168.1.13).
When I call someTest.com/specialRequest/12, I need to route the traffic to 192.168.1.12; when I call someTest.com/specialRequest/13, I want to route traffic to 192.168.1.13. For normal cases (someTest.com/normalRequest) I just want the load balancer to do its job normally.
If the pods scale up and 192.168.1.14 appears, I need to be able to call someTest.com/specialRequest/14 and be routed to that pod.
Is there any way I can achieve this?

Yes, you can easily achieve this using a Kubernetes Ingress. Here is some sample code that might help:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: YourHostName.com
    http:
      paths:
      - path: /
        backend:
          serviceName: Service1
          servicePort: 8000
      - path: /api
        backend:
          serviceName: Service2
          servicePort: 8080
      - path: /admin
        backend:
          serviceName: Service3
          servicePort: 80
Please note that the ingress rules reference serviceNames and not pod names, so you will have to create services for your pods. Here is an example of a service which exposes nginx in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    io.kompose.service: nginx
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: nginx

I am not aware of built-in functionality to implement this (if this is what you really want). You can achieve this by building your own operator for Kubernetes. Your operator may provision a Pod+Ingress combo which will do exactly what you want - forward your traffic to a single pod - or you can provision 2 pods and 1 ingress to achieve an HA setup.
Depending on the Ingress you are using, it also may be possible to group multiple ingress resources under the same load balancer.
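For illustration, here is a minimal sketch of what such an operator might provision for one pod. It assumes the pods are managed by a StatefulSet (so each pod automatically carries the statefulset.kubernetes.io/pod-name label); the name my-app-12 and the ports are hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: my-app-12                  # hypothetical per-pod Service created by the operator
spec:
  selector:
    statefulset.kubernetes.io/pod-name: my-app-12   # selects exactly one StatefulSet pod
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-12-ingress
spec:
  rules:
  - host: someTest.com
    http:
      paths:
      - path: /specialRequest/12   # routes this path to the single-pod Service above
        backend:
          serviceName: my-app-12
          servicePort: 80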

Would it be feasible to create another application that can read the path and target the pod directly via a pattern in the naming convention? For example:
${podnamePrefix+param}.${service name}.${namespace}.svc.cluster.local
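That DNS pattern does work when the pods belong to a StatefulSet exposed through a headless Service: each pod then gets a stable name of the form <pod-name>.<service>.<namespace>.svc.cluster.local. A minimal sketch, assuming a hypothetical StatefulSet whose pods carry the label app: my-app:
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None                  # headless: each StatefulSet pod gets its own DNS record
  selector:
    app: my-app
  ports:
  - port: 8080
With this in place, my-app-0.my-app-headless.default.svc.cluster.local resolves directly to pod my-app-0.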

Related

Kubernetes ingress to pod running on same host?

We are just getting started with k8s (bare metal on Ubuntu 20.04). Is it possible for ingress traffic arriving at a host for a load-balanced service to go to a pod running on that host (if one is available)?
We have some apps that use client-side consistent hashing (using the customer ID) to select a service instance to call. The service instances are stateless but maintain in-memory ML models for each customer. So it is useful (but not essential) to have repeated requests for a given customer go to the same service instance. Then we can just use antiAffinity to have one pod per host.
Our existing service discovery mechanism lets the clients find all the instances of the service and the nodes they are running on. All our k8s nodes are running the Nginx ingress controller.
I finally got this figured out. This was way harder than it should be IMO! Update: It's not working. Traffic frequently goes to the wrong pod.
The service needs externalTrafficPolicy: Local (see docs).
apiVersion: v1
kind: Service
metadata:
  name: starterservice
spec:
  type: LoadBalancer
  selector:
    app: starterservice
  ports:
  - port: 8168
  externalTrafficPolicy: Local
The Ingress needs nginx.ingress.kubernetes.io/service-upstream: "true" (service-upstream docs).
The nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com" bit is because our service discovery updates DNS so each instance of the service includes the name of the host it is running on in its DNS name.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starterservice
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - host: starterservice.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: starterservice
            port:
              number: 8168
So now a call to https://starterservice-foo.example.com will go to the instance running on k8s host foo.
I believe Sticky Sessions is what you are looking for. Ingress does not communicate directly with pods, but with services. Sticky sessions try to bind requests from the same client to the same pod by setting an affinity cookie.
This is used for example with SignalR sessions, where the negotiation request has to be on the same host as the following websocket connection.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Affinity mode "balanced" is the default. With it, if your pod count changes, some of your clients will lose their session. Use "persistent" to have users always connect to the same pod (unless it dies, of course). Further reading: https://github.com/kubernetes/ingress-nginx/issues/5944

How can I dynamically start Kubernetes pods when the first request arrives?

Let's say I have multiple endpoints in my application that are exposed as different Kubernetes services via an ingress controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: service1
          servicePort: 80
      - path: /service2
        backend:
          serviceName: service2
          servicePort: 80
Let us say the service2 endpoint does not receive requests for a long time, so a serverless strategy is appropriate for it. Can I configure a Kubernetes ingress controller to dynamically scale the service2 deployment up when a request arrives after a long idle period, and shut down the pods for service2 when no request arrives for a long time?
Nginx ingress cannot be used for serverless. You can use Knative for this use case.
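For illustration, a minimal Knative Service sketch (assuming Knative Serving is installed in the cluster; the image name is hypothetical). Knative scales such a service to zero when it is idle and starts a pod again on the first request:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: service2
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/service2:latest   # hypothetical image
        ports:
        - containerPort: 8080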
I don't think an ingress controller can autoscale pods. Look at the Kubernetes Horizontal Pod Autoscaler (https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), which can be used to horizontally scale the pods of your deployment (or other resources where replicas can be specified).
Check the custom metrics support added to the HorizontalPodAutoscaler.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
Your application could expose a custom metric (e.g. a number of HTTP requests) to a collector say Prometheus which then feeds these custom metrics to the Prometheus Adapter (a custom API server).
Not sure if you could start with 0 replicas and scale to 1 (and above) when a first request arrives.
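A rough sketch of such an HPA, assuming the Prometheus Adapter already exposes a per-pod http_requests_per_second metric (the metric name, deployment name and threshold are hypothetical):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service2
  minReplicas: 1                   # the HPA does not scale to zero by default
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # hypothetical custom metric from the adapter
      target:
        type: AverageValue
        averageValue: "10"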

How to implement multiple services in one ingress controller? The one they gave in the docs is not understandable

I created services, and each service is creating a new load balancer; I don't want to create a new load balancer for each service. For that, I found the ingress controller solution, but it's not happening.
I will try to describe the objects you need in just words.
You don't need to create a load balancer for each service. When you're using an ingress controller (like nginx), the ingress controller itself will be of type LoadBalancer. All your other services need to be something like the ClusterIP type.
Afterwards you can decide how to link your ClusterIP services with the Nginx LoadBalancer: create an ingress for each service or one ingress that exposes each service based on some rule (like paths as #harsh-manvar shows in the post above).
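For reference, a minimal ClusterIP Service sketch for one such backend (the name, selector and ports are placeholders); the ingress rules then point at services like this one:
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: ClusterIP                  # the default; reachable only inside the cluster
  selector:
    app: service-1
  ports:
  - port: 80
    targetPort: 8080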
When you say "it's not happening", it would be good if you could provide details on your setup.
In order for Nginx ingress controller to work, it needs to be defined either as a NodePort or LoadBalancer service type. The examples provided in the nginx documentation are using LoadBalancer. However, LoadBalancer only works when your cluster supports this object (that means running in most cloud providers like AWS/GCP/Azure/DigitalOcean or newer versions of minikube). On the other hand, NodePort will expose the ingress controller on the Kubernetes node where it runs (when using minikube, that usually means a VM of sorts which then needs to be port forwarded to be accessible).
To use ingress in a local environment, you can look into minikube. All you need is to run minikube addons enable ingress and it will deploy an nginx controller for you. Afterwards, all you need to do is define an ingress and depending on your setup you may need to use kubectl port-forward to port forward port 80 on an nginx controller pod to a local port on your machine.
There are different types of services: ClusterIP, NodePort, LoadBalancer and ExternalName. You can specify the type in spec.type. The default, when not specified, is not LoadBalancer but ClusterIP, so in your case, simply leave out the type: LoadBalancer definition and use your serviceName as the backend in your ingress resource. Example:
spec:
  rules:
  - host: your.fully.qualified.host.name
    http:
      paths:
      - backend:
          serviceName: your-internal-service-name
          servicePort: 80
        path: /
Keep in mind that for some cloud providers there's also the possibility to use an internal LoadBalancer without a public IP. This is done by adding an annotation to the service configuration. For Azure AKS it looks like this:
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
For Google's GKE the annotation is cloud.google.com/load-balancer-type: "Internal"
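Applied to a full Service, the GKE variant could look roughly like this (a sketch; the name, selector and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: internal-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # GKE internal load balancer
spec:
  type: LoadBalancer
  selector:
    app: internal-service
  ports:
  - port: 80
    targetPort: 8080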
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/cluster-issuer: wordpress-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - test.test.com
    secretName: prod
  rules:
  - host: test.test.com
    http:
      paths:
      - path: /service-1
        backend:
          serviceName: service-1
          servicePort: 80
      - path: /service-2
        backend:
          serviceName: service-2
          servicePort: 5000
Sharing here the documentation for an ingress that targets multiple services; you can redirect to multiple services from a single ingress.
Using this you can access services like
https://test.test.com/service-1
https://test.test.com/service-2
Following the documentation, you should do the following.
More information: kubernetes.github.com
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /something(/|$)(.*)
For example, the ingress definition above will result in the following rewrites:
rewrite.bar.com/something rewrites to rewrite.bar.com/
rewrite.bar.com/something/ rewrites to rewrite.bar.com/
rewrite.bar.com/something/new rewrites to rewrite.bar.com/new

One Kubernetes Ingress to front multiple clusters? Scope issues

So I have two clusters at the moment (soon to be a few more, once I get this working), ClusterA and ClusterB.
Is it possible for one ingress to interface with services from both clusters?
ClusterA hosts the front end and the ingress, while ClusterB hosts the back end.
The excerpted ingress is below. Everything bar the back end works.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations: {...}
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/test-frontend-ingress
  uid: //
spec:
  backend:
    serviceName: idu-frontend-XYZ
    servicePort: 80
  rules:
  - http:
      paths:
      - backend:
          serviceName: test-backend-app-service
          servicePort: 8080
        path: /api/v2/
      - backend:
          serviceName: idu-frontend-XYZ
          servicePort: 80
        path: /
  tls:
  - secretName: tls-cert-name
status:
  loadBalancer:
    ingress:
    - ip: 123.456.789.012
Back end service URL:
https://console.cloud.google.com/kubernetes/service/asia-southeast1-b/test-backend/default/test-backend-app-service...
URL the ingress tries to point to:
https://console.cloud.google.com/kubernetes/service/asia-southeast1-b/standard-cluster-1/default/test-backend-app-service...
So what I've gathered is that the ingress can only interface with things in the same cluster as it? test-backend and standard-cluster-1 are the cluster names, and both services are in the default namespace. Isn't that kind of pointless, as you can only deploy one thing to each cluster? Unless your images contain multiple apps, in which case it isn't really microservices anymore.
Connecting two Kubernetes clusters is hard, I guess.
Instead you can deploy both services on the same cluster. You can create two deployments and expose them as services, and then the ingress can redirect traffic between them.
Why do you need a cluster per service, though?
If there is no other alternative you will have to do something like this:
https://appscode.com/products/voyager/7.1.1/guides/ingress/http/external-svc/
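As a rough illustration of that idea (a sketch, not the exact Voyager setup from the link): expose the backend in ClusterB on an address reachable from ClusterA, then create an ExternalName Service in ClusterA that points at it, so the ingress in ClusterA can keep using a local service name. The DNS name below is hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: test-backend-app-service
spec:
  type: ExternalName
  externalName: backend.clusterb.example.com   # hypothetical externally reachable address of ClusterB's backend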

How to route 70% of traffic to an ExternalName service and append a URL?

I want to route 70% of the traffic coming to service A to an external endpoint and append a path to the URL.
To achieve this I created an ExternalName-type service which points to the external endpoint, and then use the Traefik ingress controller to divide the traffic by percentage.
My service definition looks something like this:
---
apiVersion: v1
kind: Service
metadata:
  name: wensleydale
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: cheese
    task: wensleydale
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  type: ExternalName
  externalName: www.google.com
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: cheese
    task: test-service
Ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      test-service: 70%
      wensleydale: 30%
  name: cheese
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 80
        path: /
      - backend:
          serviceName: wensleydale
          servicePort: 80
        path: /
What I want in addition is that when traffic goes to test-service, a path is appended.
In my test-service I want the URL to be something like www.google.com/something
I'm open to use other tools to achieve this.
You can do the following:
Use Istio Ingress Gateway instead of a traefik gateway. Istio Ingress Gateway is the recommended way for Ingress control in Istio. See https://istio.io/docs/tasks/traffic-management/ingress/
In the corresponding VirtualService, use the HTTPRewrite directive (https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite):
rewrite:
  uri: /something
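A rough sketch of how the weighted split and the rewrite could fit together in a VirtualService, assuming a Gateway named cheese-gateway already exists and using the service names from the question; note that the rewrite applies to the whole route it is attached to, not to a single weighted destination:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cheese
spec:
  hosts:
  - "*"
  gateways:
  - cheese-gateway                 # hypothetical Gateway name
  http:
  - rewrite:
      uri: /something              # rewrites the path for this route as a whole
    route:
    - destination:
        host: test-service
        port:
          number: 80
      weight: 70
    - destination:
        host: wensleydale
        port:
          number: 80
      weight: 30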
Unfortunately you are hitting a limitation. The traefik ingress docs state this condition on weighting - "The associated service backends must share the same path and host". (https://docs.traefik.io/user-guide/kubernetes/#traffic-splitting) So you can't rewrite the path just for one of the weighted targets. The limitation comes from https://github.com/kubernetes/kubernetes/issues/25485 so you can see the suggestions there, many of which mention istio. (See also https://github.com/zalando/skipper/issues/324)
A simple solution might be to deploy another proxy into the cluster and use that to rewrite the target to the internal service that you can't change. Then your Ingress would be able to use the same path for both.
Another way would be to look at configuring a proxy using a conf file rather than ingress annotations. Configuration snippets may be enough to achieve this but I am not sure. I suspect you'd be best to deploy an additional proxy and expose it externally and configure it directly (avoiding the Ingress abstraction).