Helm for kubernetes-dashboard not creating ingress - kubernetes-helm

I'm trying to get kubernetes-dashboard up and running under KIND, but no Ingress is being created even though I think I changed values.yaml to do that. Here is what I have for that section — any idea what I'm missing or doing wrong?
ingress:
  ## If true, Kubernetes Dashboard Ingress will be created.
  ##
  enabled: true
  ## Kubernetes Dashboard Ingress labels
  labels:
    key: value
  ## Kubernetes Dashboard Ingress annotations
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    ## If you plan to use TLS backend with enableInsecureLogin set to false
    ## (default), you need to uncomment the below.
    ## If you use ingress-nginx < 0.21.0
    # nginx.ingress.kubernetes.io/secure-backends: "true"
    ## if you use ingress-nginx >= 0.21.0
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  ## Kubernetes Dashboard Ingress Class
  # className: "example-lb"
  ## Kubernetes Dashboard Ingress paths
  ## Both `/` and `/*` are required to work on gce ingress.
  paths:
    - /
    - /*
  ## Custom Kubernetes Dashboard Ingress paths. Will override default paths.
  ##
  customPaths:
    - pathType: ImplementationSpecific
      backend:
        service:
          name: ssl-redirect
          port:
            name: use-annotation
    - pathType: ImplementationSpecific
      backend:
        service:
          name: >-
            {{ include "kubernetes-dashboard.fullname" . }}
          port:
            # Don't use string here, use only integer value!
            number: 443
  # Kubernetes Dashboard Ingress hostnames
  # Must be provided if Ingress is enabled
  #
  hosts:
    - local.com
  # Kubernetes Dashboard Ingress TLS configuration
  # Secrets must be manually created in the namespace
  #
  tls:
    - secretName: kubernetes-dashboard-tls
      hosts:
        - local.com
The upgrade itself runs without errors:
helm upgrade -f dashboard/values.yaml dashboard dashboard
Release "dashboard" has been upgraded. Happy Helming!
NAME: dashboard
LAST DEPLOYED: Fri Dec 10 16:41:46 2021
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 7
TEST SUITE: None
$ kubectl get pods
NAME                                              READY   STATUS    RESTARTS   AGE
dashboard-kubernetes-dashboard-5d89cf78dd-g6tmb   1/1     Running   0          94m
But:
$ kubectl get ingress
No resources found in kubernetes-dashboard namespace.
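When a chart renders no Ingress at all, a frequent culprit is the ingress: block sitting at the wrong nesting level (or with broken indentation) in the values file, so the chart's enabled guard never sees it. A minimal Python sketch of that lookup logic — the dicts below are hypothetical values trees for illustration, not the real kubernetes-dashboard chart:

```python
# Sketch of how a chart guard like `{{- if .Values.ingress.enabled }}`
# decides whether to render the Ingress template at all.

def ingress_rendered(values: dict) -> bool:
    # Helm renders the Ingress manifest only when ingress.enabled is truthy
    return bool(values.get("ingress", {}).get("enabled"))

# Correct nesting: `ingress:` is a top-level key in values.yaml
good_values = {"ingress": {"enabled": True, "hosts": ["local.com"]}}

# Wrong nesting: the block was accidentally indented under another key,
# so .Values.ingress is empty and the template is skipped silently
bad_values = {"app": {"ingress": {"enabled": True}}}

print(ingress_rendered(good_values))  # True  -> Ingress manifest rendered
print(ingress_rendered(bad_values))   # False -> "No resources found"
```

Running helm template with your values file and grepping the output for "kind: Ingress" is a quick way to check whether the chart renders the Ingress at all, before upgrading the release.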

I ended up creating my own ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - {{ .Values.apps.nameSpace }}.{{ .Values.apps.domain }}
      secretName: my-tls-secret
  rules:
    - host: {{ .Values.apps.nameSpace }}.{{ .Values.apps.domain }}
      http:
        paths:
          - pathType: Prefix
            path: /dashboard(/|$)(.*)
            backend:
              service:
                name: dashboard-kubernetes-dashboard
                port:
                  number: 443

Related

Ingress configuration file not applied with error unknown directive "stub-status" after updating to version 1.19.10

I am trying to apply YAML files with ingress configuration for 3 different objects. The issue appeared after I updated the ingress version via Helm to the latest one, 1.19.10. This is the error I get when I run "kubectl apply -f ingress1.yaml":
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
Name: "be-ingress", Namespace: "env-prod"
for: "prod\\company apps\\Ingress\\be-ingress.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: nginx: [emerg] unknown directive "stub-status" in /tmp/nginx/nginx-cfg3860722829:256
nginx: configuration file /tmp/nginx/nginx-cfg3860722829 test failed
I also have a FluxCD system automating the CD cycle, and it gives the same error.
My ingress configuration example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: env-prod
  name: be-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod3
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - test-be.prod.company.io
      secretName: letsencrypt-prod3
  rules:
    - host: "test-be.prod.company.io"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: be-test
                port:
                  number: 80

Kubernetes: Issue with 2 Ingress object with regex path

I have 2 ingress objects
first-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt"
    acme.cert-manager.io/http01-edit-in-place: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-service
                port:
                  number: 80
second-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    cert-manager.io/cluster-issuer: "letsencrypt"
    acme.cert-manager.io/http01-edit-in-place: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /test(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: second-service
                port:
                  number: 80
The expectation is:
www.example.com/test/whatever -> second-service
www.example.com -> first-service
What I see is that both www.example.com/test/whatever and www.example.com reach the first-service.
If I change the second-ingress to replace the regex with a static path, it works: www.example.com/test/whatever hits the second-service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt"
    acme.cert-manager.io/http01-edit-in-place: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /test
            pathType: Prefix
            backend:
              service:
                name: second-service
                port:
                  number: 80
Any idea why the regex does not work? I need the rewrite-target rule, which is the reason I use the regex.
The regex that you posted is exactly the same as in the example from the NGINX Ingress Controller docs, but the YAML file is wrong (one mistake with the double spec; more information below). I tested your YAML files on Kubernetes v1.21, NGINX Controller v1.0.2 and Cert Manager v1.6.1.
A few notes/thoughts about what may be wrong:
Instead of using spec: twice (second-ingress.yaml), use spec only once. I did not observe a change in the behaviour of the prefixes themselves, but in the kubectl get ing output there was no 443 port listed for the second ingress, and when I ran kubectl get ing second-ingress -o yaml, the TLS part was missing.
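The effect of the duplicated key can be sketched as a last-one-wins merge, which is how the repeated spec: mapping ends up being resolved here: the first spec — the one holding tls — is silently discarded. Plain Python dicts are used for illustration; no YAML library is involved:

```python
# Simulate a mapping with a duplicated "spec" key, as in second-ingress.yaml.
pairs = [
    ("spec", {"tls": [{"hosts": ["www.example.com"],
                       "secretName": "example-tls"}]}),
    ("spec", {"rules": [{"host": "www.example.com"}]}),
]

# dict() keeps only the last value for a duplicated key,
# mirroring how the duplicate spec: was resolved
merged = dict(pairs)

print(merged["spec"])           # only the rules survive
print("tls" in merged["spec"])  # False -> explains the missing 443 port
```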
Before (wrong setup):
second-ingress.yaml file:
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
spec:
  rules:
kubectl get ing output (missing port 443 for the second ingress):
user@cloudshell:~/ingress-two-services $ kubectl get ing
NAME             CLASS    HOSTS             ADDRESS       PORTS     AGE
first-ingress    <none>   www.example.com   xx.xx.xx.xx   80, 443   5h7m
second-ingress   <none>   www.example.com   xx.xx.xx.xx   80        50m
kubectl get ing second-ingress -o yaml output (missing tls part):
user@cloudshell:~/ingress-two-services $ kubectl get ing second-ingress -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  ...
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              service:
                name: second-service
                port:
                  number: 80
            path: /test(/|$)(.*)
            pathType: Prefix
status:
  ...
After (good setup):
second-ingress.yaml file:
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
  rules:
kubectl get ing output:
user@cloudshell:~/ingress-two-services $ kubectl get ing
NAME             CLASS    HOSTS             ADDRESS       PORTS     AGE
first-ingress    <none>   www.example.com   xx.xx.xx.xx   80, 443   5h18m
second-ingress   <none>   www.example.com   xx.xx.xx.xx   80, 443   61m
kubectl get ing second-ingress -o yaml output:
user@cloudshell:~/ingress-two-services $ kubectl get ing second-ingress -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  ...
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              service:
                name: second-service
                port:
                  number: 80
            path: /test(/|$)(.*)
            pathType: Prefix
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
status:
  ...
Another thing worth noting is that the modified configuration for the second ingress behaves differently from the one that does not work for you.
Using nginx.ingress.kubernetes.io/rewrite-target: /$2 + path: /test(/|$)(.*):
www.example.com/test/ will rewrite the request to the pod to /
www.example.com/test/whatever will rewrite the request to the pod to /whatever
However, using only path: /test:
www.example.com/test/ will rewrite the request to the pod to /test
www.example.com/test/whatever will rewrite the request to the pod to /test/whatever
Please make sure that you are using the setup your app is designed for.
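The two rewrite behaviours above can be reproduced with Python's re module (nginx uses PCRE-style regexes, and these particular patterns behave identically here):

```python
import re

def rewrite(path: str) -> str:
    # nginx.ingress.kubernetes.io/rewrite-target: /$2
    # combined with path: /test(/|$)(.*)
    m = re.fullmatch(r"/test(/|$)(.*)", path)
    return "/" + m.group(2) if m else path

print(rewrite("/test/"))          # -> /
print(rewrite("/test/whatever"))  # -> /whatever

# With only `path: /test` and no rewrite-target annotation, the
# request path is forwarded to the pod unchanged:
# /test/whatever stays /test/whatever.
```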
Other tips:
make sure the ingress is applied by running the kubectl get ing command
get the logs of the NGINX Ingress Controller: get the name of the Ingress Controller pod (kubectl get pods -n ingress-nginx) and then run kubectl logs -n ingress-nginx ingress-nginx-controller-{...}. Check how your requests are handled and which service they are forwarded to
check the logs of the pods from the deployments and see how they handle requests (kubectl logs {pod-name})
if you are using an outdated version of Kubernetes, consider upgrading it

Kubernetes Ingress on Docker Desktop

I am trying to use the NGINX ingress to access the Kubernetes dashboard on my local PC. The steps I followed are:
Getting nginx ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
Getting kubernetes dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Applying this ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
    - host: "kubernetes.docker.internal"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
Checking that my hosts file has this line:
127.0.0.1 kubernetes.docker.internal
If I try to open http://kubernetes.docker.internal/ in my browser I get "HTTP Error 400: this page isn't working", while in Postman I get an error 400 with the message "Client sent an HTTP request to an HTTPS server."
How can I resolve this?
I resolved it by adding an annotations section to the ingress YAML.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: "kubernetes.docker.internal"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443

Ingress routing rules to access prometheus server

I have deployed Prometheus server (2.13.1) on Kubernetes (1.17.3), and I am able to access it at http://my.prom.com:9090.
But I want to access it at http://my.prom.com:9090/prometheus, so I added the following ingress rules, but they are not working.
First Try:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /prometheus
  name: approot
  namespace: default
spec:
  rules:
    - host: my.prom.com
      http:
        paths:
          - backend:
              serviceName: prometheus-svc
              servicePort: 9090
            path: /
This results in 404 error
Second Try:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
    - host: my.prom.com
      http:
        paths:
          - backend:
              serviceName: prometheus-svc
              servicePort: 9090
            path: /prometheus(/|$)(.*)
Now when I access the URL http://my.prom.com:9090/prometheus in a browser, it gets changed to http://my.prom.com:9090/graph and shows a 404 error.
Prometheus is not aware of what you are trying to achieve, and that's why it's redirecting to an unknown destination.
You have to tell Prometheus to accept traffic on the new path, as can be seen here and here.
Following the second link, you have to include - "--web.route-prefix=/" and - "--web.external-url=http://my.prom.com:9090/prometheus" in your Prometheus deployment.
Then I had to modify the prometheus deployment to accept traffic on the new path (/prom). This was covered in the Securing Prometheus API and UI Endpoints Using Basic Auth documentation:
In your env it should look like this:
> grep web deploy.yaml
    - "--web.enable-lifecycle"
    - "--web.route-prefix=/"
    - "--web.external-url=http://my.prom.com:9090/prometheus"
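The reason both flags are needed together: Prometheus defaults --web.route-prefix to the path component of --web.external-url, while the nginx rewrite has already stripped that prefix before the request reaches the pod. A small sketch of that defaulting, using the URL from this question:

```python
from urllib.parse import urlparse

external_url = "http://my.prom.com:9090/prometheus"

# Without an explicit --web.route-prefix, Prometheus serves everything
# under the path of the external URL:
default_route_prefix = urlparse(external_url).path or "/"
print(default_route_prefix)  # /prometheus

# But the ingress rewrite-target already turns /prometheus/graph into
# /graph before proxying, so the deployment must override the default:
explicit_route_prefix = "/"  # what actually arrives after the rewrite
```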
I was experiencing this issue while deploying Prometheus via the community Helm chart and figured it may be helpful to share my findings with others here: https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus
My override values.yaml looks like this:
server:
  prefixURL: /
  baseURL: https://my-cluster-external-hostname/prometheus
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    path: "/prometheus(/|$)(.*)"
    hosts:
      - my-cluster-external-hostname
    tls:
      - secretName: cluster-tls-secret
        hosts:
          - my-cluster-external-hostname
  service:
    servicePort: 9090
Now the resulting deployment spec for prometheus-server ends up with these args (note the order and specifically that --web.route-prefix is at the top):
--web.route-prefix=/
--storage.tsdb.retention.time=15d
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
--web.external-url=https://my-cluster-external-hostname/prometheus
This does NOT work as the /-/healthy endpoint results in a 404 (from kubectl describe pod prometheus-server):
Warning Unhealthy 9s (x9 over 49s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
Warning Unhealthy 9s (x3 over 39s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 404
After much trial and error, I realized the order of those arguments seemed to matter, so I changed my Helm chart values.yaml as follows:
server:
  # prefixURL: /                                              # <-- commented out
  # baseURL: https://my-cluster-external-hostname/prometheus  # <-- commented out
  extraFlags: # <-- added this section to specify my args manually
    - web.enable-lifecycle
    - web.route-prefix=/
    - web.external-url=https://my-cluster-external-hostname/prometheus
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    path: "/prometheus(/|$)(.*)"
    hosts:
      - my-cluster-external-hostname
    tls:
      - secretName: cluster-tls-secret
        hosts:
          - my-cluster-external-hostname
  service:
    servicePort: 9090
The resulting deployment from this values.yaml puts the arguments in the apparently proper order, which makes the health-check endpoint available (internally) and allows accessing Prometheus from outside the cluster. Notice where --web.route-prefix is located now.
--storage.tsdb.retention.time=15d
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
--web.route-prefix=/
--web.external-url=https://my-cluster-external-hostname/prometheus
I also submitted a bug to the community Prometheus chart:
https://github.com/prometheus-community/helm-charts/issues/1594
Add the following to deploy-prometheus.yml:
args:
  - --web.enable-lifecycle
  - --web.route-prefix=/
  - --web.external-url=https://localhost:9090/prometheus/
And in the Prometheus VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: prometheus-vs
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - prometheus-gateway
  http:
    - match:
        - uri:
            prefix: /prometheus/
      rewrite:
        uri: /
      route:
        - destination:
            host: prometheus
            port:
              number: 9090

Kibana dashboard not loading - {"statusCode":404,"error":"Not Found","message":"not found"}

I am installing kibana with helm like so
values = [
<<-EOT
replicas: 3
healthCheckPath: /admin/kibana/app/kibana
kibanaConfig:
kibana.yml: |
server.basePath: "/admin/kibana"
server.rewriteBasePath: true
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: kong
kubernetes.io/tls-acme: "true"
path: /admin/kibana
I want Kibana to be served at the path /admin/kibana, e.g. https://my-server.com/admin/kibana.
I see the error {"statusCode":404,"error":"Not Found","message":"not found"}
In the logs
"res":{"statusCode":404,"responseTime":24,"contentLength":9},"message":"GET / 404 24ms - 9.0B"}
The pods are running fine, which means the health check is passing at /admin/kibana.
I have server.basePath set as per the documentation. What else is missing?
If I port-forward 5601 from my box:
kubectl port-forward svc/kibana 5601:5601
I can access Kibana at localhost:5601/admin/kibana, but not on the domain.
The ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: kong
    kubernetes.io/tls-acme: "true"
  labels:
    app: kibana
    heritage: Tiller
    release: kibana
  name: kibana-kibana
spec:
  rules:
    - host: xxxx.xxxx.app
      http:
        paths:
          - backend:
              serviceName: kibana-kibana
              servicePort: 5601
            path: /admin/kibana
  tls:
    - hosts:
        - xxxx.xxxx.app
      secretName: wildcard-alchemy-tls
The Kong ingress was stripping the path by default. Hence the issue.
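A sketch of why path stripping breaks Kibana here, assuming Kong's default strip-path behaviour for routes. This is a simplified model for illustration, not Kong's actual implementation:

```python
def proxy_path(request_path: str, route_prefix: str, strip_path: bool) -> str:
    """Compute the upstream path a proxy would forward for a matched route."""
    if not request_path.startswith(route_prefix):
        return request_path
    if strip_path:
        # Remove the matched prefix before proxying; normalize to at least "/"
        stripped = request_path[len(route_prefix):]
        return stripped if stripped.startswith("/") else "/" + stripped
    return request_path

# With stripping, Kibana (configured with basePath /admin/kibana) receives
# a path it does not serve -- matching the "GET / 404" line in the logs:
print(proxy_path("/admin/kibana", "/admin/kibana", True))            # -> /
print(proxy_path("/admin/kibana/app/kibana", "/admin/kibana", True)) # -> /app/kibana

# Without stripping, the full path Kibana expects reaches it:
print(proxy_path("/admin/kibana/app/kibana", "/admin/kibana", False))
# -> /admin/kibana/app/kibana
```

With the Kong ingress controller, stripping is typically controlled per route; for example, a konghq.com/strip-path annotation set to "false" lets the full /admin/kibana path reach Kibana.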