GKE - exposing Grafana externally not working using GCP Ingress - kubernetes

I have Prometheus/Grafana enabled on GKE (in the 'monitoring' namespace):
Karans-MacBook-Pro:ingress-ns karanalang$ kc get svc -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.80.9.14 <none> 3000/TCP 7d4h
prometheus-operated ClusterIP None <none> 9090/TCP 7d4h
prometheus-operator ClusterIP None <none> 8080/TCP 7d4h
I'm trying to expose Grafana using an Ingress. Below is the Ingress description;
the path is '/grafana'.
Karans-MacBook-Pro:ingress-ns karanalang$ kc describe ingress ingress-grafana -n monitoring
Name: ingress-grafana
Namespace: monitoring
Address: 34.117.119.113
Default backend: default-http-backend:80 (10.76.0.8:8080)
Rules:
Host Path Backends
---- ---- --------
*
/grafana grafana:3000 (10.76.0.5:3000)
Annotations: ingress.kubernetes.io/backends: {"k8s-be-31823--45a575f79c8f25d8":"HEALTHY","k8s1-45a575f7-monitoring-grafana-3000-2fa5518a":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-tkept4vc-monitoring-ingress-grafana-y11p7u0i
ingress.kubernetes.io/target-proxy: k8s2-tp-tkept4vc-monitoring-ingress-grafana-y11p7u0i
ingress.kubernetes.io/url-map: k8s2-um-tkept4vc-monitoring-ingress-grafana-y11p7u0i
kubernetes.io/ingress.class: gce
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 23m loadbalancer-controller UrlMap "k8s2-um-tkept4vc-monitoring-ingress-grafana-y11p7u0i" created
Normal Sync 23m loadbalancer-controller TargetProxy "k8s2-tp-tkept4vc-monitoring-ingress-grafana-y11p7u0i" created
Normal Sync 23m loadbalancer-controller ForwardingRule "k8s2-fr-tkept4vc-monitoring-ingress-grafana-y11p7u0i" created
Normal IPChanged 23m loadbalancer-controller IP is now 34.117.119.113
Normal Sync 5m12s (x7 over 23m) loadbalancer-controller Scheduled for sync
When I curl /grafana, it shows 'Found.' (the redirect to the login page).
However, when I open the same URL in the browser, it gives '404 Not Found'.
Karans-MacBook-Pro:ingress-ns karanalang$ curl 34.117.119.113/grafana
Found.
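For reference, running the same curl with -v (just a sketch of the check) should show where the redirect points via the Location header:
curl -v 34.117.119.113/grafana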
What needs to be done to debug/fix this?
tia!

Based on the description of your ingress-grafana Ingress resource, this is normal behavior. You only use the /grafana path in the rules, but the 'Found.' response from curl is a redirect to the /login page, and since you don't have a rule for any other path, the browser's follow-up request gets the 404.
To solve this problem, you can change the path in the ingress-grafana Ingress resource to /* as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: grafana
            port:
              number: 3000
This will allow the redirects issued by the Grafana service to be routed correctly.
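Alternatively, if you want to keep serving Grafana under the /grafana prefix instead of routing everything with /*, Grafana itself has to be told about the sub-path so that its redirects (e.g. to the login page) stay under /grafana, and the Ingress would likely also need a /grafana/* rule. A minimal sketch, assuming the Grafana Deployment is named grafana (only the Service is shown above) and using the load balancer IP as the external URL:
kubectl set env deployment/grafana -n monitoring \
  GF_SERVER_ROOT_URL=http://34.117.119.113/grafana/ \
  GF_SERVER_SERVE_FROM_SUB_PATH=true
These environment variables map to the root_url and serve_from_sub_path settings in Grafana's [server] configuration section.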

Related

Access kubernetes-dashboard using ingress (404 Not Found)

I'm relatively new to k8s and was following a tutorial to get familiar with it. There was an example on exposing kubernetes-dashboard via Ingress and I tried it.
I configured kubernetes-dashboard by running the following, as per its documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
But unlike the tutorial, kubernetes-dashboard was exposed via port 443:
service/dashboard-metrics-scraper ClusterIP 10.108.119.138 <none> 8000/TCP 50m
service/kubernetes-dashboard ClusterIP 10.100.58.17 <none> 443/TCP 50m
So I changed the Ingress configuration YAML accordingly:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: ingress-dashboard
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: k8s-dashboard.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
Then I described the Ingress, got the IP, and added an entry for it in /etc/hosts:
kubectl describe ingress ingress-dashboard -n kubernetes-dashboard
Name: ingress-dashboard
Labels: <none>
Namespace: kubernetes-dashboard
Address: 192.168.49.2
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
k8s-dashboard.com
/ kubernetes-dashboard:443 (172.17.0.6:8443)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 24m (x2 over 25m) nginx-ingress-controller Scheduled for sync
/etc/hosts change
192.168.49.2 k8s-dashbaord.com
When I tried to access k8s-dashbaord.com, I got a 404 Not Found from nginx. So it seems like the Ingress is running but it cannot reach the service.
The IP mapped to the Ingress rule seems to be wrong though (172.17.0.6:8443), because that is not the IP of the service.
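For reference, the Backends column in kubectl describe ingress lists the service's endpoints (pod IP and container port), so it can be compared directly against the endpoints object (a quick check, nothing beyond kubectl here):
kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard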
What am I doing wrong here?
P.S. If I just use a proxy (kubectl proxy) and access the dashboard, it works fine.

ingress nginx 404 not found

I created a Kubernetes cluster and installed the ingress-nginx controller. I am getting a 404 Not Found if I go to the ingress-nginx-controller load balancer's external address, which is aa************.us-east-1.elb.amazonaws.com.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm install ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx
To get the services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.219.162 aa************.us-east-1.elb.amazonaws.com 80:32091/TCP,443:32305/TCP 154m
ingress-nginx-controller-admission ClusterIP 10.100.208.135 <none> 443/TCP 154m
My ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: factory
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: factory.**.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: factory
            port:
              number: 80
  - host: api.factory.**.com # myfactoryapi-factorydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: factory
            port:
              number: 8082
All my namespaces:
kubectl get namespace
NAME STATUS AGE
default Active 34d
ingress-nginx Active 160m
kerberos-factory Active 34d
kube-node-lease Active 34d
kube-public Active 34d
kube-system Active 34d
mongodb Active 8d
To get my Ingress:
kubectl get ing -n kerberos-factory
NAME CLASS HOSTS ADDRESS PORTS AGE
factory <none> factory.**.com,api.factory.**.com a****.us-east-1.elb.amazonaws.com 80 65m
To describe the Ingress:
kubectl describe ing -n kerberos-factory
Name: factory
Namespace: kerberos-factory
Address: a********.us-east-1.elb.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
factory.***.com
/ factory:80 (192.168.34.220:80)
api.factory.***.com
/ factory:8082 (192.168.34.220:8082)
Annotations: kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 3m8s (x3 over 4m51s) nginx-ingress-controller Scheduled for sync
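One check that may help narrow this down (a sketch; the masked hostname from above is kept as a placeholder): send a request to the ELB with a Host header that matches one of the Ingress rules, since a request to the bare ELB address matches no host rule and falls through to the controller's 404 default backend:
curl -H "Host: factory.**.com" http://aa************.us-east-1.elb.amazonaws.com/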
Why am I getting 404 Not Found?

Kubernetes ingress redirects to 504

I'm trying to learn Kubernetes with a couple of Raspberry Pis at home. I'm running Pi-hole in the cluster, which has worked; the issue I'm facing now is a redirect issue with Ingress.
My ingress.yaml file:
## pihole.ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: pihole
  name: pihole-ingress
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: pihole.192.168.1.230.nip.io
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: pihole-web
            port:
              number: 80
output of kubectl describe ingress:
Name: pihole-ingress
Namespace: pihole
Address: 192.168.1.230
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
pihole.192.168.1.230.nip.io
/ pihole-web:80 (10.42.2.7:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 18s (x12 over 11h) nginx-ingress-controller Scheduled for sync
Output of get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller-admission ClusterIP 10.43.240.186 <none> 443/TCP 22h
ingress-nginx-controller LoadBalancer 10.43.64.54 192.168.1.230 80:31093/TCP,443:30179/TCP 22h
I'm able to get into the pod and curl the cluster IP to get the output I expect, but when I try to visit pihole.192.168.1.230, I get a 504 error. I'm hoping someone can assist with my Ingress so it routes to the pihole-web service. Please let me know if there's any additional information I can provide.
EDIT:
kubectl get po -n pihole -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pihole-7d4dc6b8d8-vclxz 1/1 Running 0 9h 10.42.2.8 node02.iad <none> <none>
kubectl get svc -n pihole
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pihole-web ClusterIP 10.43.102.198 <none> 80/TCP,443/TCP 9h
pihole-dhcp NodePort 10.43.191.110 <none> 67:32021/UDP 9h
pihole-dns-udp NodePort 10.43.214.15 <none> 53:31153/UDP 9h
pihole-dns-tcp NodePort 10.43.168.6 <none> 53:32754/TCP 9h
Another edit: since this question was originally posted and the above edit was made, the Pi-hole pod IP changed from 10.42.2.7 to 10.42.2.8.
I checked the logs for the ingress controller and saw the following. Hoping someone can help me decipher this:
2021/09/03 17:52:35 [error] 1938#1938: *3132346 upstream timed out (110: Operation timed out) while connecting to upstream, client: 10.42.1.1, server: pihole.192.168.1.230.nip.io, request: "GET / HTTP/1.1", upstream: "http://10.42.2.8:80/", host: "pihole.192.168.1.230.nip.io", referrer: "http://pihole.192.168.1.230.nip.io/"
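The 'upstream timed out' error says the controller pod could not even open a connection to the Pi-hole pod at 10.42.2.8:80, which points at pod-to-pod networking between nodes rather than at the Ingress rule itself. A check worth running (a sketch, assuming curl is available in the controller image and that kubectl exec can target the Deployment):
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- curl -sv --max-time 5 http://10.42.2.8:80/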

Kubernetes Dashboard & Ingress on Docker Desktop

I am trying to access kubernetes dashboard on my local PC through Ingress. The steps I've done so far are:
Install Nginx Ingress by:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
PS D:\dev\kubernetes-dashboard-ingress> kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-7rzdl 0/1 Completed 0 148m
pod/ingress-nginx-admission-patch-295pf 0/1 Completed 0 148m
pod/ingress-nginx-controller-7fc74cf778-jz6ts 1/1 Running 0 148m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.106.183.115 localhost 80:30673/TCP,443:32591/TCP 148m
service/ingress-nginx-controller-admission ClusterIP 10.103.188.122 <none> 443/TCP 148m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 148m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-7fc74cf778 1 1 1 148m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 16s 148m
job.batch/ingress-nginx-admission-patch 1/1 16s 148m
Install kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
When I inspect kubernetes dashboard namespace, I notice that the service is running on port 443:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.110.109.6 <none> 8000/TCP 135m k8s-app=dashboard-metrics-scraper
kubernetes-dashboard ClusterIP 10.110.230.166 <none> 443/TCP 135m k8s-app=kubernetes-dashboard
So I created an Ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "my-dashboard.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
and after applying this rule:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get ingress -n kubernetes-dashboard -o wide
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard-ingress <none> my-dashboard.com localhost 80 121m
I then added the following entry to my Windows hosts file:
127.0.0.1 my-dashboard.com
However, I get nothing when I try to access the dashboard through my browser (http://my-dashboard.com). Have I missed anything?
I was following the tutorial here: https://www.youtube.com/watch?v=X48VuDVv0do. The tutorial was done using minikube, so the dashboard there was available on port 80, whereas the one I installed directly from GitHub above is available on port 443. Do I need to configure some certificate/secret? I noticed that a few secrets were created by kubernetes-dashboard:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get secret -n kubernetes-dashboard -o wide
NAME TYPE DATA AGE
default-token-97skl kubernetes.io/service-account-token 3 140m
kubernetes-dashboard-certs Opaque 0 140m
kubernetes-dashboard-csrf Opaque 1 140m
kubernetes-dashboard-key-holder Opaque 2 140m
kubernetes-dashboard-token-rwgs4 kubernetes.io/service-account-token 3 140m
And if I describe the Ingress:
PS D:\dev\kubernetes-dashboard-ingress> kubectl describe ingress dashboard-ingress -n kubernetes-dashboard
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: dashboard-ingress
Namespace: kubernetes-dashboard
Address: localhost
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
my-dashboard.com
/ kubernetes-dashboard:443 (10.1.0.106:8443)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 7m4s (x10 over 144m) nginx-ingress-controller Scheduled for sync
I know I can access the dashboard using kubectl proxy - but I would like to test out Ingress (learning it). Thank you in advance!
I'm running the following:
Docker Desktop 3.2.2 (61853)
Engine: 20.10.5
Compose: 1.28.5
Kubernetes: v1.19.7
Your service name seems to be wrong:
You listed your services:
PS D:\dev\kubernetes-dashboard-ingress> kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.110.109.6 <none> 8000/TCP 135m k8s-app=dashboard-metrics-scraper
kubernetes-dashboard ClusterIP 10.110.230.166 <none> 443/TCP 135m k8s-app=kubernetes-dashboard
In your ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "my-dashboard.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-dashboard # <<< This line should be kubernetes-dashboard
            port:
              number: 443
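In other words, backend.service.name has to exactly match the name of the Service in the same namespace. A quick way to confirm the name and that the corrected Ingress resolves to endpoints (a sketch):
kubectl get svc,endpoints -n kubernetes-dashboard
kubectl describe ingress dashboard-ingress -n kubernetes-dashboard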
OK, figured out the issue. My request (in Chrome) went through the corporate proxy, which did not forward the request on to my Kubernetes cluster. After adding 'my-dashboard.com' to the no-proxy list, I can access it through the browser.
Thank you Thomas for the pointer!

Should Kubernetes host nodes be able to access services running in the cluster?

I am running:
Kubernetes v1.19.7 (On-premise, VMs. Provisioned via Kubespray)
MetalLB
Calico
nginx-ingress
Summary: Services are refusing to respond when queried from the host nodes. Is this even supposed to work? If not, I can stop banging my head against this particular wall...
I am able to access service.foo.com from anywhere on my local network; however, if I use something like cURL to make a request to service.foo.com from any of the host nodes, I get "Connection refused" errors (but I can ping the service with no issue). I get the same behavior from within any pod running on the k8s cluster.
This is making things particularly difficult since I'm trying to set up an OIDC provider for gating access to the k8s dashboard, and the host nodes need to be able to query the provider.
Network Setup:
kube service addresses: 10.233.0.0/18
pods cidr: 10.233.64.0/18
MetalLB config:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.31.75-172.16.31.79
Ingress controller Service described:
Name: foo-com-ic-nginx-ingress
Namespace: default
Labels: app.kubernetes.io/instance=foo-com-ic
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=foo-com-ic-nginx-ingress
helm.sh/chart=nginx-ingress-0.8.0
Annotations: <none>
Selector: app=foo-com-ic-nginx-ingress
Type: LoadBalancer
IP Families: <none>
IP: 10.233.48.18
IPs: <none>
IP: 172.16.31.76
LoadBalancer Ingress: 172.16.31.76
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31445/TCP
Endpoints: 10.233.105.18:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31173/TCP
Endpoints: 10.233.105.18:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30406
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal nodeAssigned 9m4s (x4 over 43m) metallb-speaker announcing from node "node4"
The service's Ingress described:
Name: my-service
Namespace: default
Address: 172.16.31.76
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
SNI routes service.foo.com
Rules:
Host Path Backends
---- ---- --------
service.foo.com / my-service:80 (10.233.96.27:80)
Annotations: kubernetes.io/ingress.class: service.com
meta.helm.sh/release-name: my-service
meta.helm.sh/release-namespace: default
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 46m (x2 over 46m) nginx-ingress-controller Configuration for default/my-service was added or updated
Just in case someone comes across this issue while researching their own: I was eventually able to work around this. By chance I noticed that I could curl the service from the node that the ingress controller pod was running on.
My work-around was to change my ingress controller's installation kind from "deployment" to "daemonset". Now that the ingress controller pod runs on every node, I am able to access the service from every node in the cluster.
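For reference, a sketch of that change with Helm; the release name (foo-com-ic) and repo alias (nginx-stable) are assumptions based on the Service labels above, and whether the nginx-ingress chart version in use exposes the controller kind this way should be checked against its values:
helm upgrade foo-com-ic nginx-stable/nginx-ingress --set controller.kind=daemonset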