How to set max_conn for nginx ingress? - kubernetes

I have an NGINX Ingress Controller deployed via the official Helm chart. In the docs I saw that I can set a max_conn parameter, but I didn't understand how to set it. I want to set it to 2, so that a maximum of 2 clients can connect to my services. How do I set it? Should I set it in the ingress controller values during the helm install of the Ingress Controller, or in the Ingress manifest?

From this document you can add it as an annotation on the Ingress (note that annotation values must be strings, so quote the number):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    nginx.org/max-conns: "2"
Try this and let me know if it works.
I also found a similar Stack Overflow question with a different approach that may help you resolve your issue.
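Note that nginx.org/max-conns is an annotation of the NGINX Inc. (nginx.org) controller. If you are actually running the community kubernetes/ingress-nginx controller instead, the closest knob I know of is the limit-connections annotation; a hedged sketch (the semantics differ slightly: it limits concurrent connections per client IP rather than connections to the backend):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    # allow at most 2 concurrent connections from a single client IP (assumed value)
    nginx.ingress.kubernetes.io/limit-connections: "2"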

Related

How to set requests per second limit on GKE and Kong Ingress?

I have a cluster on GKE and I want to set a limit for incoming requests, but I cannot find a way to do it using Kong Ingress Controller. I can't find any documentation or info about this specific topic.
Following the steps in this article, I achieved the desired result by adding the rate-limit plugin to my Kong ingress. To do so, first update / create your ingress definition and add the annotations defined below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong # <-- THIS
    plugins.konghq.com: http-ratelimit # <-- THIS
spec:
  ...
Then, to actually set the rate limit, use this definition and apply it in your Kubernetes cluster:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default
config:
  policy: local
  second: 1
plugin: rate-limiting
This will create a restriction of 1 request per second in your ingress. If you want anything different, just change the config section with your own configuration. Check the plugin's documentation for all possible configurations.
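For example, if you wanted a per-minute budget instead, a hedged sketch (the value 100 is assumed; see the Kong rate-limiting plugin docs for all supported fields):
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default
config:
  policy: local
  minute: 100   # allow up to 100 requests per minute (assumed value)
plugin: rate-limiting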

kubernetes ingress service annotations

I am setting up an ingress service following some k8s documentation, but I am not able to understand the following annotations:
kubernetes.io/ingress.class:
nginx.ingress.kubernetes.io/rewrite-target:
Do you know what these annotations do?
Thanks in advance.
kubernetes.io/ingress.class annotation is officially deprecated:
Before the IngressClass resource was added in Kubernetes 1.18, a
similar concept of Ingress class was often specified with a
kubernetes.io/ingress.class annotation on the Ingress. Although this
annotation was never formally defined, it was widely supported by
Ingress controllers, and should now be considered formally deprecated.
Instead, you should use the ingressClassName field:
The newer ingressClassName field on Ingresses is a replacement for
that annotation, but is not a direct equivalent. While the annotation
was generally used to reference the name of the Ingress controller
that should implement the Ingress, the field is a reference to an
IngressClass resource that contains additional Ingress
configuration, including the name of the Ingress controller.
The rewrite annotation works as follows:
In some scenarios the exposed URL in the backend service differs from
the specified path in the Ingress rule. Without a rewrite any request
will return 404. Set the annotation
nginx.ingress.kubernetes.io/rewrite-target to the path expected by
the service.
If the Application Root is exposed in a different path and needs to be
redirected, set the annotation nginx.ingress.kubernetes.io/app-root
to redirect requests for /.
For a more detailed example I strongly suggest you check out this source. It shows exactly how rewriting works.
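To make it concrete, here is a minimal sketch along the lines of the ingress-nginx rewrite example (host, service name, and port are assumed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  namespace: default
  annotations:
    # everything captured by the second group is what the backend will see
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /app(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80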
Since Kubernetes 1.18, the kubernetes.io/ingress.class annotation is deprecated.
You have to create an IngressClass like:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-ingress-class
spec:
  controller: ingress.k8s.aws/alb
And then reference it in your Ingress declaration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: ...
  name: my-fabulous-ingress
  annotations:
    ...
  labels:
    ...
spec:
  ingressClassName: "alb-ingress-class"
  rules:
    ...
Important: Be sure to create the IngressClass before the Ingress (because it is referenced by the Ingress)
Note: If both are in the same manifest, putting the IngressClass block above the Ingress one is enough.
More info:
https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/ingress_class/

How to set ingress-nginx custom errors

I use the Kubernetes ingress-nginx controller and set custom errors on GKE, but I have some problems.
Goal:
If a 50x error occurs in something-web-app, I want to return HTTP status code 200 and the JSON body {"status":200, "message":"ok"}.
Problems:
I have read the custom-errors document but there is no example of how to customize the default-backend.
I do not understand the difference between a ConfigMap and an annotation.
How does the ingress-nginx controller work in the first place?
You can do it in two ways:
Adding annotations to the Ingress
Changing the ingress controller ConfigMap (which is more of a backend/global configuration)
1. Try adding these annotations to your Kubernetes Ingress (default-backend points at the Service that serves your error pages, so use your own Service name, e.g. nginx-errors-svc or error-pages):
nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc
If that doesn't work, add this server-snippet annotation along with the two above:
nginx.ingress.kubernetes.io/server-snippet: |
  location @custom_503 {
    return 404;
  }
  error_page 503 @custom_503;
2. ConfigMap editing
You can apply this ConfigMap to the ingress controller:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration-ext
data:
  custom-http-errors: "502,503,504"
  proxy-next-upstream-tries: "2"
  server-tokens: "false"
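Keep in mind that the controller only reads the ConfigMap it was started with (the --configmap flag on the controller), so these data keys usually belong in that existing ConfigMap rather than a brand-new one. A hedged sketch of merging them in place, assuming the typical Helm-chart name and namespace:
kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --type merge \
  -p '{"data":{"custom-http-errors":"502,503,504","proxy-next-upstream-tries":"2","server-tokens":"false"}}'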
You can also refer to this blog post (in Russian): https://habr.com/ru/company/flant/blog/445596/

How can I update all ingress rules' annotations using kubectl?

Good morning,
I have a k8s cluster where multiple ingress services share a pre-generated, self-managed certificate in GCP.
My problem is that when the certificate expires, I need to update the YAML file with the name of the new cert and apply the modified YAML for each of the ingresses to update the certs. We currently do this by updating an environment variable and redeploying the application. I was thinking of a better way to do it that would not require a redeploy; I was planning to use kubectl patch for this. Has anyone already done something similar?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: cert-abc
    ingress.kubernetes.io/forwarding-rule: fwd-abc
    ingress.kubernetes.io/https-forwarding-rule: https-fwd-abc
    ingress.kubernetes.io/https-target-proxy: tgt-https-abc
    ingress.kubernetes.io/ssl-cert: cert-abc
    ingress.kubernetes.io/static-ip: ip-abc
    ingress.kubernetes.io/target-proxy: tgt-http-abc
    ingress.kubernetes.io/url-map: lb-abc
    kubernetes.io/ingress.global-static-ip-name: sta-ip-abc
  creationTimestamp: 2019-01-29T22:38:10Z
  generation: 2
  name: abc-ingress
  namespace: abc
spec:
  backend:
    serviceName: abc
    servicePort: 80
Thanks in advance for your help.
We have similar challenges. kubectl apply works fine here as Hernan Garcia already pointed out.
A patch can do the same trick.
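For example, a hedged sketch of patching just the certificate annotations in place (the new certificate name cert-xyz is assumed):
kubectl -n abc patch ingress abc-ingress --type merge \
  -p '{"metadata":{"annotations":{"ingress.gcp.kubernetes.io/pre-shared-cert":"cert-xyz","ingress.kubernetes.io/ssl-cert":"cert-xyz"}}}'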
Our choice was in fact to use Helm, which is quite easy to use and makes it easy to update values selectively. Furthermore, you have the option to roll back if something goes wrong, which is nice for automated deployments.

Does GKE support nginx-ingress with static ip?

I have been using the Google Cloud Load Balancer ingress. However, I'm trying to install a nginxinc/kubernetes-ingress controller in a node with a Static IP address in GKE.
Can I use Google's Cloud Load Balancer ingress controller in the same cluster?
How can we use the nginxinc/kubernetes-ingress with a static IP?
Thanks
In case you're using Helm to deploy nginx-ingress:
First, create a static IP address. On Google Cloud, Network Load Balancers (NLBs) only support regional static IPs:
gcloud compute addresses create my-static-ip-address --region us-east4
Then install the nginx-ingress chart with the IP address as the loadBalancerIP parameter:
helm install --name nginx-ingress stable/nginx-ingress --namespace my-namespace --set controller.service.loadBalancerIP=35.186.172.1
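On Helm 3 with the current community chart, a hedged equivalent (assuming you added the chart repo under the name ingress-nginx and reserved the same regional address) would be roughly:
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace my-namespace \
  --set controller.service.loadBalancerIP=35.186.172.1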
First question
As Radek 'Goblin' Pieczonka already pointed out, it is possible to do so.
I just wanted to link you to the official documentation regarding this matter:
If you have multiple Ingress controllers in a single cluster, you can
pick one by specifying the ingress.class annotation, eg creating an
Ingress with an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"
will target the GCE controller, forcing the nginx controller to ignore
it, while an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
will target the nginx controller.
Second question
Since you are using Google Cloud Platform, I can give you further details regarding this implementation of Kubernetes on Google.
Consider that:
By default, Kubernetes Engine allocates ephemeral external IP
addresses for HTTP applications exposed through an Ingress.
However, you can of course use static IP addresses for your Ingress resource;
there is an official step-by-step guide showing how to set up HTTP Load Balancing with Ingress, how to link a static IP to the Ingress resource, and how to promote an "ephemeral" IP that is already in use to a static one.
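For reference, promoting an in-use ephemeral IP is a single gcloud command; a hedged sketch (address name, IP, and region are assumed):
gcloud compute addresses create my-promoted-ip \
  --addresses 35.186.172.1 \
  --region us-east4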
Try to go through it and if you face some issue update the question and ask!
For the nginx-ingress controller, you have to set the external IP on the service:
spec:
  loadBalancerIP: "42.42.42.42"
  externalTrafficPolicy: "Local"
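In context, a minimal sketch of the controller's Service of type LoadBalancer (name, namespace, labels, and the IP itself are assumed):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: "42.42.42.42"     # your reserved regional static IP
  externalTrafficPolicy: Local      # preserve client source IPs
  selector:
    app: nginx-ingress-controller   # must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443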
It is perfectly fine to run multiple ingress controllers inside Kubernetes, but they need to be aware of which Ingress objects they are supposed to instantiate. That is done with a special annotation like:
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
which says that this Ingress is expected to be handled by, and only by, the nginx ingress controller.
As for the IP, some cloud providers allow the loadBalancerIP to be specified; with this you can control the public IP of a service.
Create a static IP:
gcloud compute addresses create my-ip --global
Describe the static IP (this will help you find the allocated address):
gcloud compute addresses describe my-ip --global
Now add these annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: "gce" # <----
    kubernetes.io/ingress.global-static-ip-name: my-ip # <----
Apply the Ingress:
kubectl apply -f ingress.yaml
(Now wait a couple of minutes.)
Run this and it will reflect the new IP:
kubectl get ingress