Ingress routing rules to access prometheus server - kubernetes

I have deployed Prometheus server (2.13.1) on Kubernetes (1.17.3) and I am able to access it on http://my.prom.com:9090
But I want to access it on http://my.prom.com:9090/prometheus, so I added the following ingress rules, but they are not
working.
First Try:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /prometheus
  name: approot
  namespace: default
spec:
  rules:
  - host: my.prom.com
    http:
      paths:
      - backend:
          serviceName: prometheus-svc
          servicePort: 9090
        path: /
This results in a 404 error.
Second Try:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: my.prom.com
    http:
      paths:
      - backend:
          serviceName: prometheus-svc
          servicePort: 9090
        path: /prometheus(/|$)(.*)
Now when I access the URL http://my.prom.com:9090/prometheus in a browser, it gets changed to http://my.prom.com:9090/graph and shows a 404 error.

Prometheus is not aware of what you are trying to achieve, and that's why it's redirecting to an unknown destination.
You have to tell Prometheus to accept traffic on the new path, as can be seen here and here.
Referring to the second link, you have to include - "--web.route-prefix=/" and - "--web.external-url=http://my.prom.com:9090/prometheus" in your Prometheus deployment.
Then I had to modify the prometheus deployment to accept traffic on the new path (/prom). This was covered in the Securing Prometheus API and UI Endpoints Using Basic Auth documentation:
In your env it should look like this:
> grep web deploy.yaml
- "--web.enable-lifecycle"
- "--web.route-prefix=/"
- "--web.external-url=http://my.prom.com:9090/prometheus"

I was experiencing this issue while deploying Prometheus via the community Helm chart (https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) and figured it may be helpful to share my findings with others here.
My override values.yaml looks like this:
server:
  prefixURL: /
  baseURL: https://my-cluster-external-hostname/prometheus
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    path: "/prometheus(/|$)(.*)"
    hosts:
      - my-cluster-external-hostname
    tls:
      - secretName: cluster-tls-secret
        hosts:
          - my-cluster-external-hostname
  service:
    servicePort: 9090
Now the resulting deployment spec for prometheus-server ends up with these args (note the order and specifically that --web.route-prefix is at the top):
--web.route-prefix=/
--storage.tsdb.retention.time=15d
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
--web.external-url=https://my-cluster-external-hostname/prometheus
This does NOT work as the /-/healthy endpoint results in a 404 (from kubectl describe pod prometheus-server):
Warning Unhealthy 9s (x9 over 49s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
Warning Unhealthy 9s (x3 over 39s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 404
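For context, the chart points the liveness and readiness probes at Prometheus's /-/healthy and /-/ready endpoints, so a prefix mismatch makes them return 404. A rough sketch of that probe stanza (the timings and exact fields are assumptions and vary by chart version):
# Sketch of the prometheus-server container probes; values are illustrative only.
livenessProbe:
  httpGet:
    path: /-/healthy   # 404s when the web flags move this endpoint under a prefix
    port: 9090
readinessProbe:
  httpGet:
    path: /-/ready
    port: 9090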
After much trial and error, I realized the order of those arguments seemed to matter so I changed my Helm chart values.yaml as follows:
server:
  # prefixURL: /                                               # <-- commented out
  # baseURL: https://my-cluster-external-hostname/prometheus   # <-- commented out
  extraFlags:   # <-- added this section to specify my args manually
    - web.enable-lifecycle
    - web.route-prefix=/
    - web.external-url=https://my-cluster-external-hostname/prometheus
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    path: "/prometheus(/|$)(.*)"
    hosts:
      - my-cluster-external-hostname
    tls:
      - secretName: cluster-tls-secret
        hosts:
          - my-cluster-external-hostname
  service:
    servicePort: 9090
The resulting deployment from this values.yaml puts the arguments in the apparently proper order, which makes the health check endpoint available (internally) and allows accessing Prometheus from outside the cluster. Notice where --web.route-prefix is located now.
--storage.tsdb.retention.time=15d
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
--web.route-prefix=/
--web.external-url=https://my-cluster-external-hostname/prometheus
I also submitted a bug to the community Prometheus chart:
https://github.com/prometheus-community/helm-charts/issues/1594

Add the following in deploy-prometheus.yml:
args:
- --web.enable-lifecycle
- --web.route-prefix=/
- --web.external-url=https://localhost:9090/prometheus/
And in the Prometheus VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: prometheus-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - prometheus-gateway
  http:
  - match:
    - uri:
        prefix: /prometheus/
    rewrite:
      uri: /
    route:
    - destination:
        host: prometheus
        port:
          number: 9090
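The VirtualService above references a gateway named prometheus-gateway, which is not shown. A minimal sketch of such a Gateway (the selector, port, and hosts below are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: prometheus-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # assumed: the default Istio ingress gateway
  servers:
  - port:
      number: 9090
      name: http
      protocol: HTTP
    hosts:
    - "*"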

Related

Warning: Rejected - All hosts are taken by other resources

I'm trying to set up the Nginx ingress controller to manage two paths on the same hostname in a bare-metal-based cluster.
In the app1 namespace I have the below Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
  namespace: app1
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
And in the app2 namespace I have the below Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress
  namespace: app2
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
My app1-service was applied first and is running fine; now that I have applied the second app2-service, it shows the below warning and I am not able to access it in the browser.
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
How do I configure my nginx ingress resources to connect multiple service paths on the same hostname?
The default Nginx Ingress controller doesn't support having different Ingress resources with the same hostname. You can have one Ingress resource that contains multiple paths, but in this case all apps should live in one namespace, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
  namespace: app1
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
Splitting ingresses between namespaces is currently not supported by the standard Nginx Ingress controller.
You may however take a look at an alternative implementation of Nginx Ingress by Nginx Inc. They have support for Mergeable Ingresses.
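For illustration, a rough sketch of that mergeable-ingress pattern, where a master Ingress owns the host and minion Ingresses in each namespace contribute their paths (the resource names here are hypothetical):
# Master: owns web.example.com; no paths of its own.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-master
  namespace: default
  annotations:
    nginx.org/mergeable-ingress-type: master
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
---
# Minion: lives in the app1 namespace and contributes the /app1 path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-minion
  namespace: app1
  annotations:
    nginx.org/mergeable-ingress-type: minion
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80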

Ingress configuration file not applied with error unknown directive "stub-status" after updating to version 1.19.10

I am trying to apply YAML files with ingress configuration for 3 different objects. The issue appeared after I updated the ingress version via Helm to the latest one, 1.19.10, and this is the error I get when I do "kubectl apply -f ingress1.yaml":
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
Name: "be-ingress", Namespace: "env-prod"
for: "prod\\company apps\\Ingress\\be-ingress.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denienginx: [emerg] unknown directive "stub-status" in /tmp/nginx/nginx-cfg3860722829:256
nginx: configuration file /tmp/nginx/nginx-cfg3860722829 test failed
I also have a FluxCD system to automate the CD cycle, and it gives the same error.
My ingress configuration example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: env-prod
  name: be-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod3
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - test-be.prod.company.io
    secretName: letsencrypt-prod3
  rules:
  - host: "test-be.prod.company.io"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: be-test
            port:
              number: 80

K8s ingress socket.io 503

I have a Node.js app with socket.io running on EKS behind an ELB. I have configured an Ingress with the following YAML.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/websocket-services: ws-gateway
    nginx.org/websocket-services: ws-gateway
  name: web-socket
  namespace: default
spec:
  defaultBackend:
    service:
      name: ws-gateway
      port:
        number: 9090
  rules:
  - host: ws.example.com
    http:
      paths:
      - backend:
          service:
            name: ws-gateway
            port:
              number: 9090
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - ws.example.com
    secretName: ws-tls
However, when testing this I am receiving 503 Service Temporarily Unavailable. The domain name, service name, and port are correct. The request is not reaching the app either.
Is there something that I am missing in the configuration?
The issue was on the service side. The containers were restarting unexpectedly because of a lost connection to another service, which caused a 503 every time I tried to access it.

Is it possible to use same hostname with multiple Ingress resources running in different namespaces?

I want to use the same hostname, let's say example.com, with multiple Ingress resources running in different namespaces, i.e. monitoring and myapp. I'm using the Kubernetes nginx-ingress controller.
haproxy-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy-ingress
  namespace: myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    # fill in host here
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: haproxy
            port:
              number: 80
grafana-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      # only match /grafana and paths under /grafana/
      - path: /grafana(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
When I do curl example.com it redirects me to the deployment running in namespace one (as expected), but when I do curl example.com/grafana it still redirects me to the namespace one deployment.
Please help.
Yes, it is possible.
There can be two issues in your case.
One is that you don't need the regex path for the grafana ingress. A simple /grafana path will be fine with path type Prefix, since with path type Prefix any /grafana/... request will be routed to the associated service. So the manifest file will be:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
spec:
  tls:
  - hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /grafana
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
And the second issue can be that the related service or deployment is not under the same namespace monitoring. Please make sure the deployment/service/secret and other resources needed for grafana remain under the same namespace monitoring.
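As an aside, when Grafana is exposed under a subpath like /grafana without URL rewriting, Grafana itself usually needs to know about that prefix as well. A sketch of the relevant environment variables on the Grafana deployment (the deployment and container names here are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana
        env:
        - name: GF_SERVER_ROOT_URL             # tell Grafana it is served under /grafana
          value: "https://example.com/grafana"
        - name: GF_SERVER_SERVE_FROM_SUB_PATH  # serve assets and redirects from that subpath
          value: "true"
        ports:
        - containerPort: 3000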

Kubernetes ingress edit: HTTP 400 Bad request - The plain HTTP request was sent to HTTPS

Could there be any reason why a webapp which loads up perfectly fine gives an *HTTP 400 Bad request - The plain HTTP request was sent to HTTPS port* error after the webapp's ingress has been edited manually, or edited through an automated job which updates the ingress to modify the whitelisted IPs?
Apparently, this issue gets fixed when we redeploy the webapp after purging the webapp deployment...
Any pointers to this would be great, as this happens on our PROD env and is not reproducible on any lower envs.
Points to note:
- The Nginx Ingress controller setup is the same across the lower envs and the Prod env.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/ingress.allow-http: "false"
    ingress.kubernetes.io/proxy-body-size: 20M
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/whitelist-source-range: xxx.yy.zz.pp/32, yyy.ss.dd.kkl/32
    ingress.kubernetes.io/whitelist-source-range-status: unlocked
  creationTimestamp: 2018-11-29T15:34:05Z
  generation: 5
  labels:
    app: xyz-abc-pqr
  name: xxxx-webapp-yyyy-zzzz
  namespace: nspsace-name
  resourceVersion: "158237270"
  selfLink: /apis/extensions/v1beta1/namespaces/nspsace-name/ingresses/xxxx-webapp-yyyy-zzzz
  uid: 348f892e-f3ec-11e8-aa6f-0a0340041348
spec:
  rules:
  - host: ssssssss.wwwwweewwerwerw.co.uk
    http:
      paths:
      - backend:
          serviceName: xxxx-webapp-yyyy-zzzz
          servicePort: 443
        path: /
  - host: xxxx-webapp-yyyy-zzzz.bbbbv.lasdfkla.ajksdfh.ioohhsaf.pp
    http:
      paths:
      - backend:
          serviceName: xxxx-webapp-yyyy-zzzz
          servicePort: 443
        path: /
  tls:
  - hosts:
    - ssssssss.wwwwweewwerwerw.co.uk
    - xxxx-webapp-yyyy-zzzz.bbbbv.lasdfkla.ajksdfh.ioohhsaf.pp
    secretName: xxxx-webapp-yyyy-zzzz-server-tls
status:
  loadBalancer:
    ingress:
    - {}
There may be something wrong with the ingress controller and how it updates its configuration. I'm assuming you are using an nginx ingress controller, so you can inspect the configs before and after:
$ kubectl cp <nginx-ingress-controller-pod>:nginx.conf nginx.conf.before
$ kubectl edit ingress <your-ingress>
$ kubectl cp <nginx-ingress-controller-pod>:nginx.conf nginx.conf.after
$ diff nginx.conf.before nginx.conf.after
You can see that this may happen with nginx because of something like this: Dealing with nginx 400 "The plain HTTP request was sent to HTTPS port" error.