Kubernetes ingress with backend serviceName regex/wildcard or selector

Hi all, we have a Flink application with blue-green deployment, which we get using the Flink operator (Flinkk8soperator for Apache Flink). The operator spins up the following three K8s services after a deployment:
my-flinkapp-14hdhsr (Top level service)
my-flinkapp-green
my-flinkapp-blue
The idea is that only one of the two (blue or green) is active at a time and has pods.
A selector pointing to the active one is stored in the top-level my-flinkapp-14hdhsr service, e.g. flink-application-version=blue (or green), as follows:
Labels:      flink-app=my-flinkapp
             flink-app-hash=14hdhsr
             flink-application-version=blue
Annotations: <none>
Selector:    flink-app=my-flinkapp,flink-application-version=blue,flink-deployment-type=jobmanager
I have an ingress defined as follows which I want to use to point to the top level service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Accept-Encoding "";
      sub_filter '<head>' '<head> <base href="/happy-flink-ui/">';
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
  name: flink-secure-ingress-my-flink-app
  namespace: happy-flink-flink
spec:
  rules:
  - host: flinkui-myapp.foo.com
    http:
      paths:
      - path: /happy-flink-ui(/|$)(.*)
        backend:
          serviceName: my-flinkapp-14hdhsr  # This works but....
          servicePort: 8081
The issue I am facing is that the hash at the end of the top-level service name changes on every deployment, as the Flink operator regenerates it (e.g. my-flinkapp-89hddew, etc.).
So I cannot have a static service name in the ingress definition.
So I am wondering whether an ingress can choose a service based on a selector, or based on a regular expression of the service name that accounts for the top-level app service name plus the hash at the end.
The flink-app-hash (i.e. the hash part of the service name, 14hdhsr) is also part of the labels on the top-level service. Is there any way I could leverage that?
Wondering if a default backend could be applied here?
Have folks using the Flink operator solved this a different way?

Unfortunately it is not possible out of the box to use any kind of regex/wildcards/jsonpath/variables/references inside .backend.serviceName of an Ingress resource.
You are able to use regex only in path: see Ingress Path Matching.
And it is not going to be implemented any time soon: the feature request Allow variable references in backend spec has stalled.
It would be very interesting to hear of any possible solution for this. It was previously discussed on Stack Overflow, though without any progress: https://stackoverflow.com/a/60810435/9929015
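One pragmatic workaround (a sketch only, not something the operator or the ingress controller provides): since the current hash is discoverable from the top-level service's labels, a post-deployment step could look the service up and patch the Ingress backend. The label selector below is an assumption based on the labels shown above and may need narrowing so it matches only the top-level service and not the blue/green ones:

# Hypothetical post-deploy step: resolve the current top-level service by
# label, then patch the Ingress to point at it. Adjust the -l selector so it
# uniquely matches the top-level service in your cluster.
SVC=$(kubectl -n happy-flink-flink get svc -l flink-app=my-flinkapp \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n happy-flink-flink patch ingress flink-secure-ingress-my-flink-app \
  --type=json \
  -p "[{\"op\": \"replace\", \"path\": \"/spec/rules/0/http/paths/0/backend/serviceName\", \"value\": \"$SVC\"}]"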

Related

Kubernetes Ingress not forwarding routes

I am fairly new to Kubernetes and have just deployed my first cluster to IBM Cloud. When I created the cluster, I got a dedicated ingress subdomain, which I will refer to as <long-k8subdomain>.cloud for the scope of this post. Now, this subdomain works for my app. For example: <long-k8subdomain>.cloud/ping works from my browser/curl just fine: I get the expected JSON response back. But if I add this subdomain to a CNAME record in my domain provider's DNS settings (I have used Bluehost and IBM Cloud's Internet Services), I get a 404 response back from all routes. However, this response is the default nginx 404 response (it says "nginx" under "404 Not Found"). I believe this means the ingress load balancer is being reached, but the request does not get routed correctly. I am using Kubernetes version 1.20.12_1561 on VPC gen 2, and this is my ingress-config.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Host: <long-k8subdomain>.cloud";
spec:
  rules:
  - host: <long-k8subdomain>.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
I am pretty sure this problem is due to the annotations. Maybe I am using the wrong ones, or I do not have enough. Ideally, I would like something like this: api.<my-domain>.com/ routing correctly. I have also read a little bit about default backends, but I have not dove too much into that just yet. Any help would be greatly appreciated, as I have spent multiple hours trying to fix this.
Some sources I have used:
https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning
https://cloud.ibm.com/docs/containers?topic=containers-ingress-types
https://cloud.ibm.com/docs/containers?topic=containers-comm-ingress-annotations#annotations
Note: The reason why I have the second annotation is that, for some reason, requests without that header were not being routed correctly. That was part of my debugging process and I just ended up leaving it in, as I am not sure whether that annotation is what fixed it.
For the NGINX ingress controller to route requests for your own domain's CNAME record to the service instead of the IBM Cloud one, you need a rule in the ingress where the host identifies your domain.
For instance, if your domain's DNS entry is api.example.com, then change the resource YAML to:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
You should not need the second annotation for this to work.
If you want both of the hosts to work, then you could add a second rule instead of replacing host in the existing one.
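For instance (a sketch reusing the same placeholder names as above), the rules section would then list both hosts, pointing at the same backend service:

spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
  - host: <long-k8subdomain>.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80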

Ingress in Kubernetes

I was doing some research about ingress and it seems I have to create a new ingress resource for each namespace. Is that correct?
I just created 2 separate ingress resources in different namespaces in my GKE cluster, and they seem to use the same LB (which is great for cost), but I would think clashes are then possible (when using the same path). I just tried it: the first one I created is still working on its path, while the newer one on the same path is just not working.
Can someone explain me the correct setup for ingress?
The way Kubernetes works, an ingress controller won't pass a packet to a service that is in a different namespace from the ingress resource. So, if you create an ingress resource in the default namespace, all your services must be in the default namespace as well.
This is something that won't change. EVER. There was a feature request years ago, and the Kubernetes team announced that it's not going to happen: it would introduce a security hole if the ingress controller were able to cross namespaces.
Now, what we do in these situations is actually pretty neat. You will have to do the following:
Say you have 2 services in the namespaces you need, e.g. service1.foo and service2.bar.
Create 2 headless services without selectors and 2 Endpoints objects pointing to the IP addresses of the services service1.foo and service2.bar, in the same namespace as the ingress resource. A headless service without selectors forces kube-dns (or CoreDNS) to look for either an ExternalName-type service or an Endpoints object. The only requirement here is that the headless service and the Endpoints object must have the same name.
Create your ingress resource pointing to the headless services.
It should look like this (for 1 service):
Say the IP address of service1.foo is 10.10.10.10. Your headless service and the Endpoint object would be:
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bait-svc
subsets:
- addresses:
  - ip: 10.10.10.10
  ports:
  - port: 80
    protocol: TCP
and Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: ssl-certs
  rules:
  - host: site1.training.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bait-svc
          servicePort: 80
So, the Ingress points to the bait-svc, and bait-svc points to service1.foo. And you will do this for each service.
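A simpler variant of the same trick (a sketch; ingress controllers differ in how they treat ExternalName services, so verify with yours) replaces the headless-service-plus-Endpoints pair with a single ExternalName service that resolves to the in-cluster DNS name of the target:

apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  type: ExternalName
  # Cluster DNS name of the service in the other namespace
  externalName: service1.foo.svc.cluster.local

This avoids hard-coding the service IP, which can change if the target service is ever recreated.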
UPDATE
I am now thinking it might not work with the GKE Ingress Controller, as on GKE you need a NodePort-type service for the HTTP load balancer to reach the service. As you can see, my example uses the nginx Ingress Controller.
Whether or not it works there, I would recommend using some other Ingress Controller. It's not that the GKE IC is not good. It is quite robust, but you almost always end up hitting some limitation. Other ICs are more flexible.
The behavior of conflicting Ingress routes is undefined and implementation dependent. In most cases it’s just last writer wins.

Define a fallback service for an isomorphic JavaScript app

I have an isomorphic JavaScript app that uses Vue's SSR plugin running on K8s. This app can either be rendered server-side by my Express server with Node, or it can be served straight to the client as with Nginx and rendered in the browser. It works pretty flawlessly either way.
Running it in Express with SSR is a much higher resource use however, and Express is more complicated and prone to fail if I misconfigure something. Serving it with Nginx to be rendered client side on the other hand is dead simple, and barely uses any resources in my cluster.
What I want to do is have a few replicas of a pod running my Express server that's performing the SSR, but if for some reason these pods go down, I want a fallback service on the ingress that will serve from a backup pod with just Nginx serving the client-renderable code.
Setting up the pods is easy enough, but how can I tell an ingress to serve from a different service than normal if the normal service is unreachable and/or responding too slowly to requests?
The easiest way to set up NGINX Ingress to meet your needs is by using the default-backend annotation.
This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set.
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '404'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
In this example NGINX serves custom-http-backend as the primary backend, and if that service fails it falls back to default-http-backend.
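Applied to the question's scenario, the fallback could be an ordinary Deployment plus Service that the annotation points at. A sketch, assuming the client-renderable bundle is baked into an nginx-based image called my-app-static (a name invented here for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-static
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-static
  template:
    metadata:
      labels:
        app: my-app-static
    spec:
      containers:
      - name: nginx
        image: my-app-static:latest  # hypothetical image serving the client-side bundle
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend  # the name referenced by the default-backend annotation
spec:
  selector:
    app: my-app-static
  ports:
  - port: 80
    targetPort: 80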
You can find more details on this example here.

Default Load Balancing in Kubernetes

I've recently started working with Kubernetes clusters. The flow of network calls for a given Kubernetes service in our cluster is something like the following:
External Non-K8S Load Balancer -> Ingress Controller -> Ingress Resource -> Service -> Pod
For a given service, there are two replicas. By looking at the logs of the containers in the replicas, I can see that calls are being routed to different pods. As far as I can see, we haven't explicitly set up any load-balancing policies anywhere for our services in Kubernetes.
I've got a few questions:
1) Is there a default load-balancing policy for K8S? I've read about kube-proxy and random routing. It definitely doesn't appear to be round-robin.
2) Is there an obvious way to specify load balancing rules in the Ingress resources themselves? On a per-service basis?
Looking at one of our Ingress resources, I can see that the 'loadBalancer' property is empty:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"example-service-ingress","namespace":"member"},"spec":{"rules":[{"host":"example-service.x.x.x.example.com","http":{"paths":[{"backend":{"serviceName":"example-service-service","servicePort":8080},"path":""}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2019-02-13T17:49:29Z"
  generation: 1
  name: example-service-ingress
  namespace: x
  resourceVersion: "59178"
  selfLink: /apis/extensions/v1beta1/namespaces/x/ingresses/example-service-ingress
  uid: b61decda-2fb7-11e9-935b-02e6ca1a54ae
spec:
  rules:
  - host: example-service.x.x.x.example.com
    http:
      paths:
      - backend:
          serviceName: example-service-service
          servicePort: 8080
status:
  loadBalancer:
    ingress:
    - {}
I should specify - we're using an on-prem Kubernetes cluster, rather than on the cloud.
Cheers!
The "internal load balancing" between Pods of a Service has already been covered in this question from a few days ago.
Ingress isn't really doing anything special (unless you've been hacking in the NGINX config it uses) - it will use the same Service rules as in the linked question.
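For question 1: the default behaviour comes from kube-proxy, not from the Ingress. In its iptables mode a backend pod is picked at random per connection; the IPVS mode supports round robin and several other schedulers. A quick way to check which mode is configured (a sketch, assuming a kubeadm-style cluster that keeps the kube-proxy configuration in a ConfigMap):

# Print the configured proxy mode; an empty value means the default (iptables).
kubectl -n kube-system get configmap kube-proxy \
  -o jsonpath='{.data.config\.conf}' | grep mode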
If you want or need fine-grained control of how pods are routed to within a service, it is possible to extend Kubernetes' features - I recommend you look into the traffic management features of Istio, as one of its features is to be able to dynamically control how much traffic different pods in a service receive.
I see two options that can be used with k8s:
Use Istio's traffic management and create a DestinationRule. It currently supports three load balancing modes:
Round robin
Random
Weighted least request
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
...
spec:
  ...
  subsets:
  - name: test
    ...
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Use lb_type in the Envoy proxy with Ambassador on k8s. More info about Ambassador is at https://www.getambassador.io.
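A sketch of what that can look like in an Ambassador Mapping (assuming Ambassador's getambassador.io/v2 CRDs; the field names follow its load-balancer docs, so double-check them against the version you run):

apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: example-backend
spec:
  prefix: /backend/
  service: example-service
  load_balancer:
    policy: round_robin  # other policies include least_request, ring_hash, maglev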

Issue while using the kubernetes annotations

I've read the documentation on Kubernetes annotations.
But I couldn't find a basic example of how to use these annotations. For example, I have a deployment YAML like the one below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    test_value: "test"
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
How can I use this annotation named test_value, and where?
Best Regards...
Just as Labels, Annotations are key-value pairs which represent metadata that is attached to a Kubernetes object.
But contrary to Labels, which are internally utilized to find a collection of objects which satisfy specific conditions, the purpose of Annotations is simply to attach relevant metadata, which should not be used as a filter to identify those objects.
What if we wanted to record which person was responsible for generating a specific .yaml file?
We could attach such information to the Kubernetes object, so that when we need to know who created the object, we can simply run kubectl describe ...
Another useful example could be to add an annotation to a Deployment before a rollout, explaining what modifications occurred in the new version of the Deployment object. That information can be retrieved later while checking the history of your deployment revisions.
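A sketch of that rollout use case, using the kubernetes.io/change-cause annotation that kubectl rollout history displays:

# Record why a new revision is being rolled out...
kubectl annotate deployment nginx-deployment \
  kubernetes.io/change-cause="upgrade nginx to 1.13"
# ...and read it back later from the revision history (CHANGE-CAUSE column).
kubectl rollout history deployment nginx-deployment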
But as you have realized with the Ingress example, with Annotations we can also perform advanced configuration on such objects. This is not limited only to Ingress, and for instance you can also provide configuration for running Prometheus on a Kubernetes cluster. You can check the details here.
As mentioned in the Kubernetes documentation, labels have a limited purpose: selecting objects and finding collections of objects that satisfy certain conditions. That puts some limitations on the information you can store in labels. (Valid label values must be 63 characters or less, and must be empty or begin and end with an alphanumeric character ([a-z0-9A-Z]), with dashes (-), underscores (_), dots (.), and alphanumerics between.)
However, annotations are not used to filter objects, so you can put big or small, structured or unstructured data in an annotation, including characters you're not permitted to use in labels. Tools and libraries can retrieve annotations and use them to add features to your cluster.
Here are some examples of information that could be recorded in annotations:
Fields managed by a declarative configuration layer. Attaching these fields as annotations distinguishes them from default values set by clients or servers, and from auto-generated fields and fields set by auto-sizing or auto-scaling systems.
Build, release, or image information like timestamps, release IDs, git branch, PR numbers, image hashes, and registry address.
Pointers to logging, monitoring, analytics, or audit repositories.
Client library or tool information that can be used for debugging purposes: for example, name, version, and build information.
User or tool/system provenance information, such as URLs of related objects from other ecosystem components.
Lightweight rollout tool metadata: for example, config or checkpoints.
Phone or pager numbers of persons responsible, or directory entries that specify where that information can be found, such as a team website.
Options for the Ingress object (nginx, gce).
Well, you are right that Annotations are like Labels. But I saw that we can customize configuration with Annotations, for example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-with-annotations
  annotations:
    nginx.org/proxy-connect-timeout: "30s"
    nginx.org/proxy-read-timeout: "20s"
    nginx.org/client-max-body-size: "4m"
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
The Nginx config can be customized according to the given annotations. So how do I do this? I couldn't find a tutorial.
I'll first give some background regarding annotations.
Annotations Vs Labels
Annotations are quite different from labels.
Labels:
You use labels to group resources that you want to refer to as a whole.
For example, pods with app=run, env=staging could be exposed by a service with a label selector that matches those labels, or managed by a deployment or a daemon set.
Annotations:
Annotations have a few different uses, such as providing descriptions and adding support for fields that are not part of the K8S API.
While labels should be short, annotations can contain much larger sets of data and can reach up to 256KB.
Annotations use cases examples
You can see below a few examples of how annotations are used by the various providers / tools that interact with your cluster.
1 ) Used internally by K8S - below are the annotations that are added to the API-server pod:
kubernetes.io/config.hash: 7c3646d2bcee38ee7dfb851711571ba3
kubernetes.io/config.mirror: 7c3646d2bcee38ee7dfb851711571ba3
kubernetes.io/config.seen: "2020-10-22T01:26:12.671011852+03:00"
kubernetes.io/config.source: file
2 ) If you provision a cluster with kubeadm - this will be added to the API-server pod:
annotations:
  kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.246.38.137:6443
3 ) If you run on amazon-eks you can see that the following annotation is added to your workloads - this is for backward compatibility (read more here):
annotations:
  kubernetes.io/psp: eks.privileged
4 ) There are cases when 3rd-party tools like aws-alb-ingress-controller require you to pass (mandatory) configuration via annotations (because those fields are not supported by the K8S API):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aws-alb-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Role=Backend , Environment=prod , Name=eros-ingress-alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/security-groups: sg-0e3455g455
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/healthcheck-path:
    alb.ingress.kubernetes.io/success-codes: "200"
    alb.ingress.kubernetes.io/certificate-arn:
In your case
Ask yourself what is the reason for adding the annotations.
Then make sure you use a unique prefix for your key in order to avoid collisions.
If you're not sure how to add an annotation to a YAML, you can add it manually:
$ kubectl annotate pod <pod-name> unique.prefix/for-my-key="value"
And then run $ kubectl get po <pod-name> -o yaml to view the annotation that you added manually, and copy the YAML to your VCS.
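To read a single annotation back without dumping the whole object, a jsonpath sketch (dots inside the key must be escaped with a backslash; the slash can stay as-is):

$ kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations.unique\.prefix/for-my-key}'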