I'm trying to set up a simple ingress with path rewriting to pass requests to my backend services.
Ref.: https://haproxy-ingress.github.io/v0.10/docs/configuration/keys/#rewrite-target
The ingress controller uses this image: quay.io/jcmoraisjr/haproxy-ingress:v0.10-beta.1.
Here is the sample YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "myapp-apis-ingress"
  namespace: "my-namespace"
  labels:
    app: myapp
    tier: ingress
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    haproxy.org/rewrite-target: "/"
spec:
  rules:
    - host: "myapp.mydomain"
      http:
        paths:
          - path: /api/v1/hello
            pathType: Prefix
            backend:
              service:
                name: "myapp-hello-svc"
                port:
                  number: 8080
Expected behaviour:
It should route requests from https://myapp.mydomain/api/v1/hello/* to the GKE service at myapp-hello-svc:8080/*
Actual behaviour:
It routes everything to myapp-hello-svc:8080/api/v1/hello/* (the myapp-hello-svc pod receives GET /api/v1/hello/*).
I've tried some other combinations and rules, but none of them seemed to work either.
Any ideas what I may have missed here?
Thanks!
UPDATE: WORKAROUND
Since I still can't find a solution to this, I decided to work around it by adding an NGINX ingress controller to the K8S cluster and routing the traffic for the new APIs through it instead.
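For what it's worth, one thing that may be worth checking before falling back to NGINX: haproxy.org/ is the annotation prefix used by the HAProxy Technologies ingress controller, while the jcmoraisjr/haproxy-ingress controller referenced above documents its configuration keys as annotations under the ingress.kubernetes.io/ prefix. A hedged sketch of the same ingress with only the annotation prefix changed (untested here):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "myapp-apis-ingress"
  namespace: "my-namespace"
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    # assumption: this controller reads its keys under the ingress.kubernetes.io/ prefix
    ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
    - host: "myapp.mydomain"
      http:
        paths:
          - path: /api/v1/hello
            pathType: Prefix
            backend:
              service:
                name: "myapp-hello-svc"
                port:
                  number: 8080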
Related
I am trying to deploy Jaeger all-in-one image in a kubernetes cluster.
Jaeger is not at the root of the URL; it's accessible through https://somedomain.com/xyz/jaeger.
I have an ingress rule which seems to point correctly to a Service, which in turn correctly references the pod in a deployment (I can see all this in the Rancher UI).
But somehow when I try to access it, nginx throws a 502 Bad Gateway error.
This is what the ingress rule looks like:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: somedomain.com
      http:
        paths:
          # Jaeger
          - path: /xyz/jaeger(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: jaeger
                port:
                  number: 16868
Then in the pod definition I tried using the QUERY_BASE_PATH env var, setting it to /xyz/jaeger, but that made no difference at all.
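For reference, a minimal sketch of how that env var would be wired into the all-in-one container (the deployment name, container name, and image tag here are assumptions, not taken from the original manifests):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger                 # assumed name
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
        - name: jaeger
          image: jaegertracing/all-in-one:1.35   # assumed tag
          env:
            - name: QUERY_BASE_PATH
              value: /xyz/jaeger                 # serve the query UI under this base path
          ports:
            - containerPort: 16686               # Jaeger query/UI port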
The problem was an incorrect port being specified: 16868 instead of 16686.
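In other words, only the backend port in the ingress rule needed to change, since the Jaeger query/UI service listens on 16686:
            backend:
              service:
                name: jaeger
                port:
                  number: 16686   # was 16868 (digits transposed)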
I've deployed a default nginx ingress controller v1.5.1 via Helm (kubernetes.github.io/ingress-nginx chart v4.4.0) to my AKS cluster, which is running Kubernetes v1.24.6.
I have created the following ingress to reach my app service/pod. The idea is to remove the prefix path (/api/v1) entirely in the rewrite.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: my-ingress
  namespace: my-namespace
  labels:
    app: my-app
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /api/v1(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080
This results in a 504 (150s timeout). The app pod is healthy, and when I remove the rewrite-target annotation the controller becomes responsive again.
What is unusual is that I see no request log entries in the controller pod whatsoever when attempting to use the rewrite-target annotation. I've added nginx.ingress.kubernetes.io/enable-rewrite-log: "true" but it makes no difference.
Following documentation here: https://kubernetes.github.io/ingress-nginx/examples/rewrite/
What am I missing?
The problem was the version of Kubernetes on the cluster. I created a new cluster running v1.23.12 using the same ingress and had no issues with the path rewrite working as expected. To confirm, I tried v1.24.3 and v1.24.6 again on new clusters and in both instances reproduced the timeout.
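For reference, with the regex path and rewrite-target above, ingress-nginx substitutes the second capture group into the upstream path, so on a working cluster version the mapping looks roughly like this:
# path: /api/v1(/|$)(.*)   rewrite-target: /$2
# request /api/v1           -> $2 = ""          -> backend receives /
# request /api/v1/users/42  -> $2 = "users/42"  -> backend receives /users/42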
I am trying to set up Prometheus in a k8s cluster and am able to run it using Helm. I can access the dashboard when I expose prometheus-server as a LoadBalancer service using an external IP.
The same does not work when I configure this service as ClusterIP and make it a backend of the ingress controller. I'm receiving a 404 error; any thoughts on how to troubleshoot this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ops-ingress
  annotations:
    #nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /prometheus(/|$)(.*)
            backend:
              serviceName: prometheus-server
              servicePort: 80
With the above ingress definition in place, the URL http://<>/prometheus/ gets redirected to http://<>/graph/ and then a 404 error page is rendered. When the URL is adjusted to http://<>/prometheus/graph, some of the web controls render, with lots of errors in the browser console.
Prometheus might be expecting to have control over the root path (/).
Please change the Ingress host to prometheus.example.com (that is, move it to a subdomain) and it should work fine.
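If moving Prometheus to a subdomain isn't an option, the same reasoning suggests telling Prometheus itself that it is served under the /prometheus prefix. A hedged sketch using the Prometheus server's own flags (how these are passed through the Helm chart's values depends on the chart version, so treat the placement as an assumption):
# Fragment of the prometheus-server container spec; the flag names below are
# Prometheus's own, but the wiring through the chart values is assumed.
containers:
  - name: prometheus-server
    args:
      - --web.external-url=http://<>/prometheus/   # externally visible URL prefix
      - --web.route-prefix=/                       # keep serving on / behind the ingress rewrite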
Please change your Ingress configuration file and add a host field:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ops-ingress
  annotations:
    #nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: prometheus.example.com
      http:
        paths:
          - path: /prometheus(/|$)(.*)
            backend:
              serviceName: prometheus-server
              servicePort: 80
then apply the changes by executing the command:
$ kubectl apply -f your_ingress_configuration_file.yaml
The Host header field in a request provides the host and port information from the target URI, enabling the origin server to distinguish among resources while servicing requests for multiple host names on a single IP address.
Please take a look here: hosts-header.
Ingress definition: ingress.
Useful information: helm-prometheus.
Useful documentation: ingress-path-matching.
I can't get the nginx controller to route based on the hostname. The YAML below doesn't work: traffic goes to the default backend and I get a 404. However, if I remove the value for host, the ingress controller successfully routes traffic to my-service. The service works fine if I place it behind a load balancer, but I want to have multiple services working for different host names, so I want to use an ingress controller and a single IP. Thoughts?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: test1.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80
The YAML looks slightly different from the rewrite example located here. The YAML is valid, and kubectl apply or create should work, but it won't produce the results you are expecting. Do you need the rewrite annotation, or could you remove it and have the backend service respond without issue? If you don't need to rewrite anything, try trimming the YAML down to just:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
spec:
  rules:
    - host: test1.mydomain.com
      http:
        paths:
          - backend:
              serviceName: my-service
              servicePort: 80
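Since the end goal is multiple services behind a single IP keyed on host name, here is a rough sketch of how that extends once the single-host rule works (the second host and service name are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
spec:
  rules:
    - host: test1.mydomain.com
      http:
        paths:
          - backend:
              serviceName: my-service
              servicePort: 80
    - host: test2.mydomain.com            # placeholder second host
      http:
        paths:
          - backend:
              serviceName: my-other-service   # placeholder second service
              servicePort: 80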
Is it possible to have a fallback service for Kubernetes ingresses in the event that none of the normal pods are live/ready? In other words, how would you go about presenting a friendly "website down" page to visitors if all pods crashed or went down somehow?
Right now, a page appears that says "default backend - 404" if that happens.
Here's what we tried, to no avail:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  backend:
    serviceName: website-down-service
    servicePort: 80
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
For reference, we're testing locally with Minikube and deploying to the cloud on Google's Container Engine.
If you're using NGINX, then the default-backend annotation should do the trick. Sample:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/default-backend: fallback-backend
spec:
  <your spec here>
For the Nginx Ingress Controller there is a flag --default-backend-service, which currently points to the service showing the "default backend - 404" message. Just replace it with the service you want. See https://github.com/kubernetes/ingress/tree/master/controllers/nginx#command-line-arguments
If you're using another Ingress Controller, I expect it to have a similar option.
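For context, that flag is set on the controller's own Deployment rather than on an Ingress resource. A rough sketch of where it would sit (the deployment name, namespace, and image are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller      # assumed name
  namespace: ingress-nginx            # assumed namespace
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      containers:
        - name: nginx-ingress-controller
          image: registry.k8s.io/ingress-nginx/controller:v1.5.1   # assumed image
          args:
            - /nginx-ingress-controller
            # point the fallback at your "website down" service instead of the stock 404 backend
            - --default-backend-service=your-namespace/website-down-service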