I have an isomorphic JavaScript app that uses Vue's SSR plugin running on K8s. This app can either be rendered server-side by my Express server with Node, or it can be served straight to the client with Nginx and rendered in the browser. It works pretty flawlessly either way.
Running it in Express with SSR uses far more resources, however, and Express is more complicated and prone to fail if I misconfigure something. Serving it with Nginx to be rendered client-side, on the other hand, is dead simple and barely uses any resources in my cluster.
What I want to do is have a few replicas of a pod running my Express server that's performing the SSR, but if for some reason these pods go down, I want a fallback service on the ingress that will serve from a backup pod with just Nginx serving the client-renderable code.
Setting up the pods is easy enough, but how can I tell an ingress to serve from a different service than normal if the normal service is unreachable and/or responding too slowly to requests?
The easiest way to set up NGINX Ingress to meet your needs is to use the default-backend annotation.
This annotation is of the form
nginx.ingress.kubernetes.io/default-backend: <svc name> to specify
a custom default backend. This <svc name> is a reference to a
service inside of the same namespace in which you are applying this
annotation. This annotation overrides the global default backend.
This service will handle the response when the service in the
Ingress rule does not have active endpoints. It will also handle the
error responses if both this annotation and the custom-http-errors
annotation are set.
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '404'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
In this example NGINX serves custom-http-backend as the primary backend, and if that service fails (has no active endpoints), it sends the end-user to default-http-backend instead.
You can find more details on this example here.
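Tying this back to your setup, the annotated Ingress above would use your Express SSR service as custom-http-backend and fall back to an Nginx-only service when the SSR pods have no ready endpoints. A sketch of that fallback Service, with hypothetical names and labels for the Nginx pods serving the client-rendered bundle:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend   # matches the default-backend annotation above
  namespace: default
spec:
  selector:
    app: my-app-nginx          # hypothetical label on the Nginx (client-side rendering) pods
  ports:
  - name: http
    port: 80
    targetPort: 80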
I'm fairly new to Kubernetes and I have played around with it for a few days now to get a feeling for it. Trying to set up an Nginx Ingress controller on the Google Cloud platform following this guide, I was able to set everything up as written there - no problems, and I got to see the hello-app output.
However, when I tried replicating this in a slightly different way, I encountered a weird behavior that I am not able to resolve. Instead of using the image --image=gcr.io/google-samples/hello-app:1.0 (as done in the tutorial) I wanted to deploy a standard nginx container with a custom index page to see if I understood stuff correctly. As far as I can tell, all the steps should be the same except for the exposed port: While the hello-app exposes port 8080 the standard port for the nginx container is 80. So, naively, I thought exposing (i.e., creating a service) with this altered command should do the trick:
kubectl expose deployment hello-app --port=8080 --target-port=80
where instead of having target-port=8080 as for the hello-app, I put target-port=80. As far as I can tell, all other things should stay the same, right? In any case, this does not work, and when I try to access the page I get a "404 - Not Found", although the container is definitely running and serving the index page (I checked by using port forwarding from Google Cloud, which apparently makes the page directly accessible for dev purposes). In fact, I also tried several other combinations of ports (although I believe the above one should be the correct one) to no avail. Can anyone explain to me why the routing does not work here?
Notice that in the tutorial the ingress configuration uses the path "/hello":
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "34.122.88.204.nip.io"
    http:
      paths:
      - pathType: Prefix
        path: "/hello"
        backend:
          service:
            name: hello-app
            port:
              number: 8080
You might have updated the port number and service name in the config, but the path is still /hello, which means your request reaches the Nginx container, which has no page at /hello, so it returns a 404.
You hit the endpoint IP/hello (goes to the Nginx ingress controller) -->
it matches the path /hello and forwards the request to the service -->
hello-app (the service forwards the request to the pods) --> Nginx pod (it
doesn't have anything at path /hello, so 404)
The 404 is written by Nginx; in your case it comes either from the Nginx ingress controller or from the container (pod) itself.
So try your ingress config with path: "/" instead and hit the endpoint; you should see the output from Nginx.
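For reference, here is a minimal sketch of that adjustment, reusing the host from the tutorial and the hello-app service you created with kubectl expose (treat the names as assumptions about your setup):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "34.122.88.204.nip.io"
    http:
      paths:
      - pathType: Prefix
        path: "/"            # serve the nginx index page at the root path
        backend:
          service:
            name: hello-app  # the service exposing the nginx deployment (port 8080 -> 80)
            port:
              number: 8080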
Hi all, we have a Flink application with blue-green deployment, which we get using the Flink operator.
Flinkk8soperator for Apache Flink. The operator spins up the following three K8s services after a deployment:
my-flinkapp-14hdhsr (Top level service)
my-flinkapp-green
my-flinkapp-blue
The idea is that only one of the two (blue or green) would be active and have pods at any time.
A selector pointing to the active one would be stored in the top-level my-flinkapp-14hdhsr service, with the selector flink-application-version=blue (or green), as follows:
Labels: flink-app=my-flinkapp
flink-app-hash=14hdhsr
flink-application-version=blue
Annotations: <none>
Selector: flink-app=my-flinkapp,flink-application-version=blue,flink-deployment-type=jobmanager,
I have an ingress defined as follows which I want to use to point to the top level service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Accept-Encoding "";
      sub_filter '<head>' '<head> <base href="/happy-flink-ui/">';
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
  name: flink-secure-ingress-my-flink-app
  namespace: happy-flink-flink
spec:
  rules:
  - host: flinkui-myapp.foo.com
    http:
      paths:
      - path: /happy-flink-ui(/|$)(.*)
        backend:
          serviceName: my-flinkapp-14hdhsr   # This works but....
          servicePort: 8081
The issue I am facing is that the top-level service keeps changing the hash at the end, as the Flink operator changes it on every deployment, e.g. my-flinkapp-89hddew, etc.
So I cannot have a static service name in the ingress definition.
So I am wondering if an ingress can choose a service based on a selector or a regular expression of the service name which can account for the top level app service name plus the hash at the end.
The flink-app-hash (i.e. the hash part of the service name, 14hdhsr) is also part of the labels on the top-level service. Is there any way I could leverage that?
Wondering if default backend could be applied here?
Have folks using Flink operator solved this a different way?
Unfortunately it's not possible out of the box to use any kind of regex/wildcards/jsonpath/variables/references inside .backend.serviceName in the Ingress resource.
You are able to use regex only in path: see Ingress Path Matching.
And it's not going to be implemented soon: the feature request Allow variable references in backend spec has stalled.
Would be very interesting to hear any possible solution for this. It was previously discussed on stack, however without any progress: https://stackoverflow.com/a/60810435/9929015
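For completeness, a minimal sketch of what regex path matching does allow, reusing the host, path, and service names from your Ingress (note that the serviceName itself must remain a literal string):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: regex-path-example
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"   # regex evaluation is allowed for paths
spec:
  rules:
  - host: flinkui-myapp.foo.com
    http:
      paths:
      - path: /happy-flink-ui(/|$)(.*)     # regex works here...
        backend:
          serviceName: my-flinkapp-14hdhsr # ...but this must stay a literal service name
          servicePort: 8081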
I was doing some research about ingress and it seems I have to create a new ingress resource for each namespace. Is that correct?
I just created 2 separate ingress resources in different namespaces in my GKE cluster, and they seem to use the same LB (which is great for cost), but I would think clashes are then possible (when using the same path). I just tried it: the first one I created still works on its path, but the newer one on the same path is just not working.
Can someone explain me the correct setup for ingress?
The way Kubernetes works, an ingress controller won't pass a packet to a service that is in a different namespace from the ingress resource. So, if you create an ingress resource in the default namespace, all your services must be in the default namespace as well.
This is something that won't change. EVER. There was a feature request years ago, and the Kubernetes team announced that it's not going to happen: it would introduce a security hole if the ingress controller were able to cross namespace boundaries.
Now, what we do in these situations is actually pretty neat. You will have to do the following:
Say you have 2 services in the namespaces you need. e.g. service1.foo and service2.bar.
create 2 headless services without selectors and 2 Endpoint objects pointing to the IP addresses of the services service1.foo and service2.bar, in the same namespace as the ingress resource. The headless service without selectors will force kube-dns (or coreDNS) to search for either ExternalName type service or an Endpoint object. Now, the only requirement here is that your headless service and the Endpoint object must have the same name.
Create your ingress resource pointing to the headless services.
It should look like this (for 1 service):
Say the IP address of service1.foo is 10.10.10.10. Your headless service and the Endpoint object would be:
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bait-svc
subsets:
- addresses:
  - ip: 10.10.10.10
  ports:
  - port: 80
    protocol: TCP
and Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: ssl-certs
  rules:
  - host: site1.training.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bait-svc
          servicePort: 80
So, the Ingress points to the bait-svc, and bait-svc points to service1.foo. And you will do this for each service.
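Since ExternalName services were mentioned above, here is an alternative sketch (under the same service1.foo assumption) that avoids hard-coding the IP by pointing at the in-cluster DNS name instead; note that this relies on the ingress controller resolving that name (nginx does):
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  type: ExternalName
  # resolves via cluster DNS to the service in the foo namespace, instead of a fixed IP
  externalName: service1.foo.svc.cluster.local
  ports:
  - name: http
    port: 80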
UPDATE
I am thinking now that it might not work with the GKE Ingress Controller, as on GKE you need a NodePort type service for the HTTP load balancer to reach the service. As you can see, in my example I've used the nginx Ingress Controller.
Whether it works or not, I would recommend using some other Ingress Controller. It's not that the GKE IC is not good. It is quite robust, but you almost always end up hitting some limitation. Other ICs are more flexible.
The behavior of conflicting Ingress routes is undefined and implementation dependent. In most cases it’s just last writer wins.
Is it possible to expose the service by ingress with the type of ClusterIP?
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: my-service-port
    port: 4001
    targetPort: 4001
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service
          servicePort: 4001
I know the service can be exposed with type NodePort, but that may cost one more NAT hop. Could someone show me the fastest way to reach an internal service from the internet in the cloud?
No, ClusterIP is only reachable from within the cluster. An Ingress is essentially just a set of layer 7 forwarding rules; it does not handle the layer 4 requirements of exposing the internals of your cluster to the outside world. At least one NAT step is required.
For Ingress to work, though, you need to have at least one service involved that exposes your workload externally, so NodePort or LoadBalancer. Your ingress controller and the infrastructure of your cluster will determine which of the two service types you need to use.
In the case of Nginx ingress, you need to have a single LoadBalancer service which the ingress will use to bridge traffic from outside the cluster to inside it. After that, you can use clusterIP services for each of your workloads.
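For illustration, a minimal sketch of that single LoadBalancer Service in front of the nginx ingress controller (the name, namespace, and labels are assumptions; most installation methods create an equivalent service for you):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # hypothetical name
  namespace: ingress-nginx
spec:
  type: LoadBalancer               # the single externally reachable entry point
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443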
In your above example, as long as the nginx ingress controller is correctly configured (with a loadbalancer), then the config you are using should work fine.
In short: YES
Now to the elaborate answer...
First things first, let's have a look at what the official documentation says:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
[...]
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer...
What's confusing here is the term load balancer. In the definition above, we are talking about the classic, well-known load balancer from the web world.
That one has nothing to do with Kubernetes!
So, back to the definition: to use an Ingress and make it work, we need a Kubernetes resource called an IngressController. And this resource happens to be a load balancer! That's it.
However, you have to keep in mind that there is a difference between a load balancer in the outside world and a Kubernetes service of type: LoadBalancer.
So in summary (and in order to redirect the traffic from the outside world to your k8s ClusterIP service):
Do you need a load balancer to make your kind: Ingress work? Yes, this is the IngressController Kubernetes resource.
Do you need a Kubernetes service of type: LoadBalancer or type: NodePort to make your kind: Ingress work? Definitely not! A type: ClusterIP service works just fine!
I had sticky sessions working in my dev environment with minikube with the following configuration:
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gl-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "projects/oceanic-isotope-199421/global/addresses/web-static-ip"
spec:
  backend:
    serviceName: gl-ui-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: gl-api-service
          servicePort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
  annotations:
    ingress.kubernetes.io/affinity: 'cookie'
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: gl-api
Now that I have deployed my project to GKE, sticky sessions no longer function. I believe the reason is that the Global Load Balancer configured in GKE does not have session affinity with the NGINX Ingress controller. Has anyone had any luck wiring this up? Any help would be appreciated. I want to establish session affinity: Client Browser > Load Balancer > Ingress > Service. The actual session lives in the pods behind the service. It's an API Gateway (built with Zuul).
Session affinity is not available yet in the GCE/GKE Ingress controller.
In the meantime, as a workaround, you can use the GCE API directly to create the HTTP load balancer. Note that you can't use Ingress at the same time in the same cluster.
1. Use NodePort for the Kubernetes Service. Set the value of the port in spec.ports[*].nodePort, otherwise a random one will be assigned.
2. Disable kube-proxy SNAT load balancing.
3. Create a load balancer from the GCE API, with cookie session affinity enabled. As backend, use the port from step 1.
Good news! Finally they have support for these kind of tweaks as beta features!
Beginning with GKE version 1.11.3-gke.18, you can use an Ingress to configure these properties of a backend service:
Timeout
Connection draining timeout
Session affinity
The configuration information for a backend service is held in a custom resource named BackendConfig, that you can "attach" to a Kubernetes Service.
Together with other sweet beta-features (like CDN, Armor, etc...) you can find how-to guides here:
https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
Based on this: https://github.com/kubernetes/ingress-gce/blob/master/docs/annotations.md
there's no annotation available that could affect the session affinity setting of the Google Cloud Load Balancer (GCLB) that is created as a result of the ingress creation. As such:
1. This has to be turned on by hand: either, as suggested above, by creating the LB yourself, or by letting the ingress controller do so and then changing the backend configuration for each backend (either via the GUI or the gcloud CLI). IMHO the latter seems faster and less error-prone. (Tested: the cookie "GCLB" was returned by the LB after the config change propagated automatically, and subsequent requests including the cookie were routed to the same node.)
2. As rightfully pointed out by Matt-y-er: service.spec "externalTrafficPolicy" has to be set to "Local" to disable forwarding from the node the GCLB selected to another. However:
3. One would still need to ensure that:
- the GCLB does not send traffic to nodes which don't run the pod, or
- there's a pod running on all nodes (and only a single pod, as the externalTrafficPolicy setting would not prevent load balancing over multiple local pods)
With regard to #3, the simple solution:
convert the deployment to a DaemonSet -> there will be exactly one pod on each node (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) at all times
The more complicated solution (but which allows having fewer pods than nodes):
It seems that the GCLB's health check doesn't need to be adjusted, as the Ingress rule definition automatically sets up a health check against the backend (and not against the default healthz service)
supply anti-affinity rules to make sure there's at most a single instance of the pod on each node (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/); see the sketch after the note below
Note: The above anti-affinity version was tested on 24th July 2018 with 1.10.4-gke.2 kubernetes version on a 2 node cluster running COS (default GKE VM image)
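A minimal sketch of such an anti-affinity rule, assuming the pods carry the app: gl-api label used by the Service selector in the question (the Deployment name and image are hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gl-api               # hypothetical deployment name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gl-api
  template:
    metadata:
      labels:
        app: gl-api
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: never schedule two gl-api pods onto the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: gl-api
            topologyKey: kubernetes.io/hostname
      containers:
      - name: gl-api
        image: gl-api:latest   # placeholder image
        ports:
        - containerPort: 8080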
I was trying the GKE tutorial for that on version 1.11.6-gke.6 (the latest available).
Stickiness was not there... the only thing that worked was setting externalTrafficPolicy: "Local" on the service...
spec:
  type: NodePort
  externalTrafficPolicy: Local
I opened a defect with Google about this, and they accepted it, without committing to an ETA.
https://issuetracker.google.com/issues/124064870
For the BackendConfig of the ingress loadbalancer, documentation can be found here:
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
An example snippet for the generated-cookie affinity type is:
spec:
  timeoutSec: 1800
  connectionDraining:
    drainingTimeoutSec: 1800
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800
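To make the snippet above take effect, the BackendConfig has to be referenced from the Service that backs the Ingress. A sketch under the following assumptions: the BackendConfig is named my-backendconfig, the Service is the gl-api-service from the question, and the beta annotation beta.cloud.google.com/backend-config is used (newer GKE versions use cloud.google.com/backend-config):
apiVersion: cloud.google.com/v1beta1   # beta API group for BackendConfig at the time
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800
---
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  annotations:
    # attach the BackendConfig to the service port that the Ingress uses
    beta.cloud.google.com/backend-config: '{"ports": {"8080": "my-backendconfig"}}'
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: gl-api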