How to set ingress-nginx custom errors - kubernetes

I use the Kubernetes ingress-nginx controller with custom errors configured on GKE, but I have some problems.
Goal:
If a 50x error occurs in something-web-app, I want to return HTTP status code 200 with the JSON body {"status":200, "message":"ok"}
Problems:
I have read the custom-errors documentation, but there is no example of how to customize the default backend.
I do not understand the difference between the ConfigMap and the annotation approach.
How does the ingress-nginx controller work in the first place?

You can do it in two ways:
Adding annotations to the Ingress
Changing the ingress controller ConfigMap (which acts more like a backend-wide setting)
1. Try adding these annotations to your Kubernetes Ingress (where error-pages is the name of the service that serves your custom error responses):
nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
nginx.ingress.kubernetes.io/default-backend: error-pages
If that doesn't work, add this server-snippet annotation along with the two above (note the nginx named-location syntax uses @):
nginx.ingress.kubernetes.io/server-snippet: |
  location @custom_503 {
    default_type application/json;
    return 200 '{"status":200, "message":"ok"}';
  }
  error_page 503 = @custom_503;
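Putting the annotation approach together, a minimal Ingress sketch might look like the following. The host, service names, and port are placeholders, not part of the original answer; substitute your own:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: something-web-app
  annotations:
    kubernetes.io/ingress.class: nginx
    # intercept these upstream errors and send them to the default backend
    nginx.ingress.kubernetes.io/custom-http-errors: "502,503,504"
    # service that serves the custom error responses
    nginx.ingress.kubernetes.io/default-backend: error-pages
spec:
  rules:
    - host: app.example.com   # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: something-web-app
              servicePort: 80
```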
2. ConfigMap editing
You can apply this ConfigMap to the ingress controller:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration-ext
data:
  custom-http-errors: "502,503,504"
  proxy-next-upstream-tries: "2"
  server-tokens: "false"
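The default-backend annotation must point at a Service that actually serves the error responses. A minimal sketch of such a backend, assuming a hypothetical container image (ingress-nginx ships a reference "custom error pages" image you can substitute, or use your own app that returns the desired JSON):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: error-pages
spec:
  selector:
    app: error-pages
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: error-pages
spec:
  replicas: 1
  selector:
    matchLabels:
      app: error-pages
  template:
    metadata:
      labels:
        app: error-pages
    spec:
      containers:
        - name: error-pages
          image: example/custom-error-pages:latest   # placeholder image
          ports:
            - containerPort: 8080
```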
You can also refer to this blog post: https://habr.com/ru/company/flant/blog/445596/

Related

How to set max_conn for nginx ingress?

I have the Nginx Ingress Controller deployed via the official Helm chart. In the docs I saw that I can set the max_conn parameter, but I didn't understand how to set it. I want to set it to 2, so that a maximum of 2 clients can connect to my services. How do I set it? Should I set it in the ingress controller values during helm install, or in the Ingress manifest?
From this document you can add it in the annotations:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    nginx.org/max-conns: "2"
Try this and let me know if this works
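Note that nginx.org/max-conns is recognized by the NGINX Inc. (nginx.org) controller. If you are running the community kubernetes/ingress-nginx controller instead, the analogous annotation is limit-connections; a sketch, to be verified against your controller version's docs:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # community ingress-nginx: max concurrent connections per client IP
    nginx.ingress.kubernetes.io/limit-connections: "2"
```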
I also found a similar Stack Overflow question with a different approach that may help you resolve your issue.

Is there a way to enable proxy-protocol on Ingress for only one service?

I have a service running in Kubernetes that limits each IP to 2 requests per day.
Since it is behind an ingress proxy, the request IP is always the same, so it is limiting the total amount of requests to 2.
It's possible to turn on proxy protocol with a config like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
But this would turn it on for all services, and since they don't expect proxy-protocol they would break.
Is there a way to enable it for only one service?
It is possible to configure the ingress controller so that it includes the original client IP in the HTTP headers.
For this I had to change the service config.
It's called ingress-nginx-ingress-controller (or similar) and can be found with kubectl get services -A:
spec:
  externalTrafficPolicy: Local
And then configure the ConfigMap with the same name:
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
Restart the pods, and the HTTP requests will then contain the X-Forwarded-For and X-Real-Ip headers.
This method won't break deployments not expecting proxy-protocol.

How to set requests per second limit on GKE and Kong Ingress?

I have a cluster on GKE and I want to set a limit for incoming requests, but I cannot find a way to do it using Kong Ingress Controller. I can't find any documentation or info about this specific topic.
Following the steps in this article, I achieved the desired result by adding the rate-limit plugin to my Kong ingress. To do so, first update or create your ingress definition and add the annotations defined below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong # <-- THIS
    plugins.konghq.com: http-ratelimit # <-- THIS
spec:
  ...
After, to finally set the rate-limit, use this definition and apply it in your kubernetes cluster:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default
config:
  policy: local
  second: 1
plugin: rate-limiting
This will create a restriction of 1 request per second on your ingress. If you want something different, just change the config section to your own configuration. Check the plugin's documentation for all possible configurations.
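For instance, to allow a burst-friendlier per-minute limit instead, only the config section changes; a sketch using the rate-limiting plugin's minute field:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default
config:
  policy: local
  minute: 100   # allow up to 100 requests per minute
plugin: rate-limiting
```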

k8s ExternalName endpoint not found - but working

I deployed a simple test ingress and an externalName service using kustomize.
The deployment works and I get the expected results, but when describing the test-ingress it shows the error: <error: endpoints "test-external-service" not found>.
It seems like a k8s bug. It shows this error, but everything is working fine.
Here is my deployment:
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform
resources:
  - test-ingress.yaml
  - test-service.yaml
generatorOptions:
  disableNameSuffixHash: true
test-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: test-external-service
  namespace: platform
spec:
  type: ExternalName
  externalName: "some-working-external-elasticsearch-service"
test-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-external
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache_bypass $http_upgrade;
spec:
  rules:
    - host: testapi.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test-external-service
              servicePort: 9200
Here, I connected the external service to a working elasticsearch server. When browsing to testapi.mydomain.com ("mydomain" was replaced with our real domain of course), I'm getting the well known expected elasticsearch results:
{
  "name" : "73b40a031651",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "Xck-u_EFQ0uDHJ1MAho4mQ",
  "version" : {
    "number" : "7.10.1",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "1c34507e66d7db1211f66f3513706fdf548736aa",
    "build_date" : "2020-12-05T01:00:33.671820Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
So everything is working. But when describing the test-ingress, there is the following error:
test-external-service:9200 (<error: endpoints "test-external-service" not found>)
What is this error? Why am I getting it even though everything is working properly? What am I missing here?
This is how the kubectl describe ingress command works.
The kubectl describe ingress command calls the describeIngressV1beta1 function, which calls the describeBackendV1beta1 function to describe the backend.
As can be seen in the source code, the describeBackendV1beta1 function looks up the endpoints associated with the backend service; if it doesn't find appropriate endpoints, it generates an error message (as in your example):
func (i *IngressDescriber) describeBackendV1beta1(ns string, backend *networkingv1beta1.IngressBackend) string {
    endpoints, err := i.client.CoreV1().Endpoints(ns).Get(context.TODO(), backend.ServiceName, metav1.GetOptions{})
    if err != nil {
        return fmt.Sprintf("<error: %v>", err)
    }
    ...
In the Integrating External Services documentation, you can find that ExternalName services do not have any defined endpoints:
ExternalName services do not have selectors, or any defined ports or endpoints, therefore, you can use an ExternalName service to direct traffic to an external service.
A Service is a Kubernetes abstraction that uses labels to choose the pods to route traffic to.
Endpoints track the IP addresses of the objects the service sends traffic to; they are created when a service selector matches a pod label.
This is the case for Kubernetes services of type ClusterIP, NodePort or LoadBalancer.
In your case, you use a Kubernetes service of type ExternalName, where the endpoint is a server outside of your cluster or in a different namespace, so Kubernetes displays that error message when you describe the ingress.
Usually we do not create an ingress that points to a service of type ExternalName, because we are not supposed to externally expose a service that is already exposed. The Kubernetes ingress expects a service of type ClusterIP, NodePort or LoadBalancer, which is why you get that unexpected error when you describe the ingress.
If you only access that ExternalName from within the cluster, it would be better to skip the ingress and use the service URI instead (test-external-service.<namespace>.svc.cluster.local:9200).
Anyway, if you insist on using the Ingress, you can create a headless Service without a selector and then manually create an Endpoints object with the same name as the service. Follow the example here.
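A sketch of that headless-service-plus-manual-Endpoints workaround; the IP below is a placeholder for your external Elasticsearch server (Endpoints require an IP address, not a hostname):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-external-service
  namespace: platform
spec:
  clusterIP: None   # headless, and deliberately no selector
  ports:
    - port: 9200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: test-external-service   # must match the Service name exactly
  namespace: platform
subsets:
  - addresses:
      - ip: 203.0.113.10   # placeholder: IP of the external server
    ports:
      - port: 9200
```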

kubernetes ingress service annotations

I am setting up an ingress service following some k8s documentation, but I am not able to understand the following annotations:
kubernetes.io/ingress.class:
nginx.ingress.kubernetes.io/rewrite-target:
Do you know what these annotations do?
Thanks in advance.
The kubernetes.io/ingress.class annotation is officially deprecated:
Before the IngressClass resource was added in Kubernetes 1.18, a
similar concept of Ingress class was often specified with a
kubernetes.io/ingress.class annotation on the Ingress. Although this
annotation was never formally defined, it was widely supported by
Ingress controllers, and should now be considered formally deprecated.
Instead, you should use the ingressClassName field:
The newer ingressClassName field on Ingresses is a replacement for
that annotation, but is not a direct equivalent. While the annotation
was generally used to reference the name of the Ingress controller
that should implement the Ingress, the field is a reference to an
IngressClass resource that contains additional Ingress
configuration, including the name of the Ingress controller.
The rewrite annotation works as follows:
In some scenarios the exposed URL in the backend service differs from
the specified path in the Ingress rule. Without a rewrite any request
will return 404. Set the annotation
nginx.ingress.kubernetes.io/rewrite-target to the path expected by
the service.
If the Application Root is exposed in a different path and needs to be
redirected, set the annotation nginx.ingress.kubernetes.io/app-root
to redirect requests for /.
For a more detailed example, I strongly suggest you check out this source. It shows exactly how rewriting works.
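As a sketch of how rewrite-target is typically used with a capture group (service name and paths are illustrative): a request to /something/foo is rewritten to /foo before it reaches the backend.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    # $2 refers to the second capture group in the path regex below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /something(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-service   # illustrative backend service
                port:
                  number: 80
```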
Since Kubernetes 1.18, the kubernetes.io/ingress.class annotation is deprecated.
You have to create an IngressClass like:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-ingress-class
spec:
  controller: ingress.k8s.aws/alb
And then reference it in your Ingress declaration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: ...
  name: my-fabulous-ingress
  annotations:
    ...
  labels:
    ...
spec:
  ingressClassName: "alb-ingress-class"
  rules:
    ...
Important: Be sure to create the IngressClass before the Ingress, because it is referenced by the Ingress.
Note: If they are in the same manifest, putting the IngressClass block above the Ingress one is enough.
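Optionally, you can also mark an IngressClass as the cluster default, so Ingresses that omit ingressClassName are still handled; a sketch using the annotation from the Kubernetes docs:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-ingress-class
  annotations:
    # Ingresses with no ingressClassName fall back to this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: ingress.k8s.aws/alb
```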
More info:
https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/ingress_class/