Ingress vs Direct Nginx Deployment on an On-Premise Kubernetes Cluster - kubernetes

I am setting up a Kubernetes cluster on on-premise servers. For handling external traffic, I can either run the Nginx Ingress controller behind a NodePort service, or run a plain Nginx Deployment (pods) exposed through a NodePort service.
The only difference I have found so far is that with Ingress I get sticky sessions, which I do not need anyway. So which one should I prefer, and why?
Apart from this, I also have a requirement for Nginx caching of HTML pages (with purging logic). With an Nginx Deployment I can back the cache with a PVC and PV, but how would that work if I use Nginx Ingress instead?

When you expose an Nginx Deployment through a Service you essentially create an L4 load balancer; with Ingress you are creating an L7 load balancer.
If you want to host multiple domains like example1.com, example2.com and so on, having an L7 load balancer makes sense. You can also define a default backend if you want unmatched requests to end up somewhere special, like a particular service or endpoint.
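For contrast, the plain-Deployment option is just a NodePort Service selecting the Nginx pods; a minimal sketch, where the labels and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx          # must match the labels on the Nginx Deployment's pods
  ports:
  - port: 80            # Service port inside the cluster
    targetPort: 80      # container port on the pods
    nodePort: 30080     # example port from the default NodePort range (30000-32767)

Traffic hitting any node on port 30080 is then forwarded at L4 to one of the pods, with no host- or path-based routing.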
Coming to the second part, enabling caching, you can do it in the ingress controller as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mywebsite
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-buffering: "on" # Important!
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 1m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;
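Note that the static-cache zone referenced by proxy_cache must be defined at the http level of the generated nginx.conf. With the ingress-nginx controller this can be done through its ConfigMap; a sketch, where the ConfigMap name and namespace are assumptions that depend on how the controller was installed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration     # assumed name; use your controller's ConfigMap
  namespace: ingress-nginx      # assumed namespace
data:
  # Defines the cache zone used by the proxy_cache directive above.
  http-snippet: |
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:32m max_size=1g inactive=60m use_temp_path=off;

Since the cache lives on the controller pod's filesystem, it is per-replica and ephemeral, which also answers the PV/PVC part of the question: with the ingress controller you normally treat it as a disposable cache rather than mounting persistent storage.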
Say you want to enable it for one path but not others, e.g. for the /static/ path but not for /; then you can have:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  tls:
  - secretName: mysite-ssl
    hosts:
    - mysite.example.com
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysite
          servicePort: http
---
# Leverage nginx-ingress cache for /static/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite-static
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 10m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;
spec:
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /static/
        backend:
          serviceName: mysite
          servicePort: http
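On the purging requirement: proxy_cache_bypass $http_x_purge means any request carrying an X-Purge header skips the cache and, if the fresh response is cacheable, replaces the stored entry, e.g. curl -H 'X-Purge: 1' https://mysite.example.com/static/index.html. That is a refresh-on-demand pattern rather than a true purge (open-source Nginx has no built-in purge module), but it usually covers this kind of purging logic.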
Ultimately the design decision is yours, but honestly it's better to use an ingress controller, as it gives you much more flexibility.
I hope this clears things up for you.

Related

Kubernetes ingress connection types and specific connection

On Kubernetes, I want the first connection to go to the pod using the least CPU, and subsequent connections to be sticky sessions. How do I do this?
I tried the configuration below and sticky sessions work, but I also want the first connection to be routed by least connections, least bandwidth, or something similar.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "stickounet"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-svc
            port:
              number: 80
With a load balancer like Nginx or Traefik, requests can be routed toward less-loaded pods (in ingress-nginx the default is round-robin, but the load-balance option can be set to ewma), and this document describes the process of configuring sticky connections step by step.
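A sketch of the combination on the Ingress above, assuming the ingress-nginx controller: keep the cookie-affinity annotations and add the load-balance hint, so the first request of a session is routed by EWMA (a latency-based score that correlates with load) instead of round-robin:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "stickounet"
    # Routes the first, cookie-less request to the endpoint with the best
    # exponentially-weighted moving-average score instead of round-robin.
    nginx.ingress.kubernetes.io/load-balance: "ewma"

There is no built-in least-connections or least-bandwidth algorithm in ingress-nginx; ewma is the closest available option.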

Ingress on self-hosted Kubernetes on custom interface

I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that can be configured to listen on one or more interfaces (external IPs).
I am not even sure whether that is easily possible, or whether I should just switch to an external solution such as HAProxy or a standalone Nginx.
Required behavior:
192.168.0.1-H"domain.com":443/frontend -> 192.168.0.1 (eth0) -> ingress -> service-frontend
192.168.0.1-H"domain.com":443/backend -> 192.168.0.1 (eth0) -> ingress -> service-backend
88.88.88.88-H"domain.com":443/frontend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
88.88.88.88-H"domain.com":443/backend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
And then later the eth1 interface should be able to be switched on, so that requests on that interface behave the same as on eth0.
I would like to be able to deploy multiple instances of services for load-balancing. I would like to keep the configuration in my namespace (if possible) so I can always delete and apply everything at once.
I'm using this guide as a reference: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
I was able to get something working with minikube, but obviously I could not expose any external IPs and performance was quite bad. For that, I just configured a "kind: Ingress" resource and that was it.
So far, the default ingress controller on microk8s seems to listen on all interfaces, and I can only configure it in its own namespace. Defining my own ingress seems to have no effect.
I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).
For the above scenario, you have to deploy multiple instances of the Nginx ingress controller and give each one a different class name.
Official documentation: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
In this setup you create one Kubernetes Service with a load-balancer IP per controller, each pointing to its respective controller Deployment, and the matching class is then referenced in the Ingress object; see the sketch below.
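A minimal sketch of the class mechanism, where the class names and the --ingress-class flag values are assumptions to adapt to your install: each controller only reconciles Ingresses carrying its own class, so one controller can be bound to the internal interface and another to the external one.

# Controller bound to eth0 started with: --ingress-class=nginx-internal
# Controller bound to eth1 started with: --ingress-class=nginx-external
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-internal
  annotations:
    # Only the controller watching this class will pick this Ingress up.
    kubernetes.io/ingress.class: "nginx-internal"
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - path: /frontend
        backend:
          serviceName: service-frontend
          servicePort: 80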
If you want to serve multiple domains with a single ingress controller, you can easily do that by specifying the host in the Ingress.
Example for two domains:
bar.foo.dev
foo.bar.dev
YAML example
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-bar
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - bar.foo.dev
    secretName: tls-secret-bar
  rules:
  - host: bar.foo.dev
    http:
      paths:
      - backend:
          serviceName: barfoo
          servicePort: 80
        path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-foo
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - foo.bar.dev
    secretName: tls-secret-foo
  rules:
  - host: foo.bar.dev
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: 9000
        path: /(.*)
One potential fix was much simpler than anticipated; no messing with MetalLB or anything else was needed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
...
This does not answer the question of splitting an Ingress across multiple interfaces, but it does solve the problem of restricting public access.
By default, a bare-metal ingress controller will listen on all interfaces, which might be a security issue.
The following solution works without enabling the ingress add-on on microk8s:
Install the ingress controller: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml
Create your Deployment and Service, and add this Ingress resource (all in the same namespace):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
  name: ingress-resource
  namespace: namespace-name
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: service-name
            port:
              number: service-port
        path: /namespace-name/service-name(/|$)(.*)
        pathType: Prefix
kubectl get svc -n ingress-nginx
Now take either the CLUSTER-IP or the EXTERNAL-IP and run:
curl ip/namespace-here/service-here
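For example, with the rewrite above, curl <ip>/namespace-name/service-name/healthz reaches the backend as /healthz: the (/|$)(.*) capture groups put everything after the prefix into $2, which rewrite-target then uses as the upstream path.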

Prometheus dashboard exposed over ingress controller

I am trying to set up Prometheus in a Kubernetes cluster and was able to run it using Helm. I can access the dashboard when I expose prometheus-server as a LoadBalancer service with an external IP.
The same does not work when I configure this service as ClusterIP and make it a backend behind the ingress controller; I receive a 404 error. Any thoughts on how to troubleshoot this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ops-ingress
  annotations:
    #nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /prometheus(/|$)(.*)
        backend:
          serviceName: prometheus-server
          servicePort: 80
With the above ingress definition in place, the URL http://<>/prometheus/ gets redirected to http://<>/graph/ and a 404 error page is rendered. When the URL is adjusted to http://<>/prometheus/graph, some of the web controls render, but with lots of errors in the browser console.
Prometheus might be expecting to have control over the root path (/).
Please change the Ingress to serve it on prometheus.example.com (i.e. move it to its own subdomain) and it should work fine.
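Alternatively, Prometheus can be told that it is served under a subpath via its --web.external-url flag, which makes the /prometheus prefix work without a dedicated subdomain. A sketch of the Helm values, where the value names (server.prefixURL, server.baseURL) are assumptions based on the community prometheus chart:

server:
  # Prometheus generates redirects and asset links under this prefix.
  prefixURL: /prometheus
  baseURL: "http://<>/prometheus"   # keep your real host in place of <>

Without this, Prometheus issues redirects to / (hence the /graph 404s seen above).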
Please change your Ingress configuration file and add a host field:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ops-ingress
  annotations:
    #nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: prometheus.example.com
    http:
      paths:
      - path: /prometheus(/|$)(.*)
        backend:
          serviceName: prometheus-server
          servicePort: 80
then apply the changes by executing:
$ kubectl apply -f your_ingress_configuration_file.yaml
The Host header field in a request provides the host and port information from the target URI, enabling the origin server to distinguish among resources while servicing requests for multiple host names on a single IP address.
Please take a look here: hosts-header.
Ingress definition: ingress.
Useful information: helm-prometheus.
Useful documentation: ingress-path-matching.

IP whitelisting in google container engine with ingress not working

I am trying to whitelist the IPs that can access my application. I created an HTTP load balancer by following this tutorial: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
After creating the NodePort service I created an ingress.yaml file that looks like the one below. I have created a global static IP and set up a domain name.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <global-static-ip>
spec:
  rules:
  - host: <domain_name>
  - http:
      paths:
      - path: /*
        backend:
          serviceName: nginx
          servicePort: 80
The above YAML file works fine and I am able to access the "Welcome to Nginx" page.
But when I add the IPs to be whitelisted, it does not seem to work: IPs that are not whitelisted can still get through.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <global-static-ip>
    ingress.kubernetes.io/whitelist-source-range: "xx.xx.xx.xxx/32"
spec:
  rules:
  - host: <domain_name>
  - http:
      paths:
      - path: /*
        backend:
          serviceName: nginx
          servicePort: 80
Reference:
http://container-solutions.com/kubernetes-quick-tip/
https://docs.giantswarm.io/guides/advanced-ingress-configuration/
I have not worked with Ingress, but per normal Nginx rules you allow the whitelisted IPs and then deny all:
location / {
    # allow/deny are evaluated in order; the first match wins
    allow xx.xx.xx.xxx/32;
    deny all;
    proxy_pass https://xxx.xx.xx.xx:8080;
}
which in turn won't allow your non-whitelisted IPs through.
The references you provided use the Nginx-based ingress controller, whereas Ingress on GKE uses the Google HTTP(S) load balancer, and currently that load balancer does not support firewall rules to allow or deny traffic by IP.
You can either:
block the source IP in the web server or application yourself, or
install an Nginx-based ingress controller, where the whitelist annotation is honored (see the sketch below).
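A sketch of the whitelist on the Nginx ingress controller, reusing the placeholders from the question (note the class annotation and the plain / path; the /* syntax is GCE-specific):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Honored by the Nginx ingress controller (unlike the GCE controller);
    # requests from other source IPs receive a 403.
    nginx.ingress.kubernetes.io/whitelist-source-range: "xx.xx.xx.xxx/32"
spec:
  rules:
  - host: <domain_name>
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80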

How do I implement session affinity with an Ingress controller using the GCE load balancer

I have the following ingress config:
ingressProd.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - hosts:
    - ***.net
    secretName: production-tls
  rules:
  - host: ***.net
    http:
      paths:
      - path: /*
        backend:
          serviceName: wordpress
          servicePort: 80
I'm having difficulty finding resources on how to enable session affinity for the above. I previously used a LoadBalancer service, which worked as intended.
What do I need to investigate?
The current GCE ingress controller doesn't support session affinity, because it is not capable of load balancing the pods directly (it goes through the NodePort service).
If you really need session affinity, the current solution is to deploy an nginx controller in GKE. This link contains the deployment steps.
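Once an Nginx controller is running, a sketch of the same Ingress switched over to it with cookie-based affinity (the cookie name is an arbitrary example):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Cookie-based sticky sessions, handled by the Nginx controller itself:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  tls:
  - hosts:
    - ***.net
    secretName: production-tls
  rules:
  - host: ***.net
    http:
      paths:
      - path: /          # nginx-class Ingresses use plain prefixes, not /*
        backend:
          serviceName: wordpress
          servicePort: 80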