AKS ingress address is empty. Grafana not being exposed through ingress - kubernetes

I have an AKS cluster where my application runs without any problem. I have two deployments (backend and frontend), and I expose them publicly using ClusterIP services and an ingress controller. I am trying to do the same with my Grafana deployment, but it is not working: I get a 404 error. If I port-forward, I can access it at http://localhost:3000 and log in successfully. The failing part is the ingress: it is created, but its address is empty. Of my three ingresses, two show the public IP in the ADDRESS column and are reachable through their URLs, while the Grafana URL is not working.
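A quick first check is to compare the failing ingress with the two working ones and to confirm the nginx controller actually picked it up (the controller label and namespace below are assumptions; adjust them to however nginx-ingress is installed):

# Compare class, hosts and ADDRESS across all ingresses
kubectl get ingress --all-namespaces
# Events here often explain why no address was assigned
kubectl describe ingress loki-grafana-ingress -n ingress-basic
# Did the controller load the grafana ingress at all? (label/namespace assumed)
kubectl logs -n ingress-basic -l app.kubernetes.io/name=ingress-nginx | grep -i grafana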
Here is my Grafana Ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: loki-grafana-ingress
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  tls:
    - hosts:
        - xxx.xxx.me
      secretName: sec-tls
  rules:
    - host: xxx.xxx.me
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: loki-grafana
              servicePort: 80
status:
  loadBalancer: {}
Attached are the Ingresses created:
[screenshot: Ingresses]
Grafana Service
kind: Service
apiVersion: v1
metadata:
  name: loki-grafana
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  ports:
    - name: service
      protocol: TCP
      port: 80
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/name: grafana
  clusterIP: 10.0.189.19
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
When doing kubectl port-forward --namespace ingress-basic service/loki-grafana 3000:80, Grafana is reachable at http://localhost:3000:
[screenshot: Grafana_localhost]
[screenshot: Grafana_URL]

Related

Kong throws no Route matched with those values

I'm trying to set up Kong in Kubernetes. While following the official documentation at https://docs.konghq.com/kubernetes-ingress-controller/2.6.x/guides/using-kongingress-resource/ I'm unable to make it work.
Every request I make to the URL ends with no route matched:
curl -X GET https://<$PROXY_IP>/<service_name> -L
{"message":"no Route matched with those values"}%
Kong logs don't throw any errors and I can see the route and service in Kong admin.
My k8s objects are the following:
Ingress Config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/override: <service_name>
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:05Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: <service_name>
  namespace: <service_ns>
  resourceVersion: "18823478"
  uid: 10f15c94-0c56-4d27-b97d-0ad5bdd549ec
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - backend:
              service:
                name: <service_name>
                port:
                  number: 8080
            path: /<service_name>
            pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
      - ip: 172.16.44.236
KongIngress:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  annotations:
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:05Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: <service_name>
  namespace: services
  resourceVersion: "18823487"
  uid: 9dd47a3c-598a-48fe-ba60-29779156b58d
route:
  methods:
    - GET
  strip_path: true
Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    konghq.com/override: <service_name>
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:04Z"
  labels:
    app.kubernetes.io/instance: <service_name>
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: <service_name>
    app.kubernetes.io/version: latest
    helm.sh/chart: <local_chart>
  name: authenticate
  namespace: services
  resourceVersion: "18820083"
  uid: a3112c22-4df7-40b0-8fd3-cde8516354cc
spec:
  clusterIP: 172.16.45.71
  clusterIPs:
    - 172.16.45.71
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/instance: <service_name>
    app.kubernetes.io/name: <service_name>
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I've been battling this issue for days now and can't seem to figure it out.
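One way to see what Kong actually built from these objects is to query its admin API directly; the service name, namespace and admin port below are assumptions based on a default Helm install, so adjust them to your setup:

# Expose the Kong admin API locally (service name/namespace/port assumed)
kubectl port-forward -n kong service/kong-kong-admin 8001:8001
# List the routes and services Kong registered, then compare paths/methods with the failing curl
curl -s http://localhost:8001/routes
curl -s http://localhost:8001/services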

Ingress return 404

Rancher ingress returns 404 for the service.
Setup: I have 6 VMs: one Rancher server at x.x.x.51 (which the DNS name domain.company points to, with TLS) and 5 cluster VMs (one master and 4 workers, x.x.x.52-56).
My service, gvm-gsad, running in the gvm namespace:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    meta.helm.sh/release-name: gvm
    meta.helm.sh/release-namespace: gvm
  creationTimestamp: "2021-11-15T21:14:21Z"
  labels:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gvm-gsad
    app.kubernetes.io/version: "21.04"
    helm.sh/chart: gvm-1.3.0
  name: gvm-gsad
  namespace: gvm
  resourceVersion: "3488107"
  uid: c1ddfdfa-3799-4945-841d-b6aa9a89f93a
spec:
  clusterIP: 10.43.195.239
  clusterIPs:
    - 10.43.195.239
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: gsad
      port: 80
      protocol: TCP
      targetPort: gsad
  selector:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/name: gvm-gsad
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
My ingress configuration (the ingress controller is the default one from Rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.16.12.53"],"port":443,"protocol":"HTTPS","serviceName":"gvm:gvm-gsad","ingressName":"gvm:gvm","hostname":"dtl.miproad.ad","path":"/gvm","allNodes":true}]'
  creationTimestamp: "2021-11-16T19:22:45Z"
  generation: 10
  name: gvm
  namespace: gvm
  resourceVersion: "3508472"
  uid: e99271a8-8553-45c8-b027-b259a453793c
spec:
  rules:
    - host: domain.company
      http:
        paths:
          - backend:
              service:
                name: gvm-gsad
                port:
                  number: 80
            path: /gvm
            pathType: Prefix
  tls:
    - hosts:
        - domain.company
status:
  loadBalancer:
    ingress:
      - ip: x.x.x.53
      - ip: x.x.x.54
      - ip: x.x.x.55
      - ip: x.x.x.56
When I access it at https://domain.company/gvm I get a 404.
However, when I change the service to NodePort, I can access it at x.x.x.52:PORT normally, meaning the deployment is running fine and the problem is just some configuration issue in the ingress.
I checked this question: "rancher 2.x thru ingress controller returns 404", but it does not help.
Thank you in advance!
Figured out the solution.
domain.company points to the Rancher server (x.x.x.51), while the ingress controller runs on the other nodes (x.x.x.53, .54, .55, .56).
So the solution is to create a new DNS record, gvm.domain.company, pointing to any of the ingress nodes (x.x.x.53, .54, .55, .56); you can put a load balancer in front or use round-robin DNS.
Then the ingress definition uses host gvm.domain.company with path "/".
Hope it helps others!
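A rough sketch of the reworked rule (only the host, path and TLS host change compared to the ingress above; the certificate must also cover the new name):

spec:
  rules:
    - host: gvm.domain.company
      http:
        paths:
          - backend:
              service:
                name: gvm-gsad
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - gvm.domain.company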

gRPC connection between two different meshes is reset

I have two different clusters (EKS, v1.18) with their own meshes (v1.9.0).
I have a Thanos deployment on cluster A and a Prometheus deployment on cluster B (with the Thanos sidecar running too). The goal is to have Thanos query these sidecars in remote clusters, via an internal load balancer (a classic ELB), to proxy queries to each cluster (block persistence using S3 or similar is out of scope for this issue).
The resources for Gateway, Virtual Service and Service are in place in cluster B, and I can run Thanos locally when connected to the network and connect to the sidecars in cluster B successfully using gRPC.
The ServiceEntry for the FQDN from cluster B has been created in cluster A, resolution works, routing is correct, but the deployment in cluster A can't connect to cluster B.
Istio sidecars (from source workload, Thanos, in cluster A) show that the connection is being reset:
[2021-02-26T14:41:03.509Z] "POST /thanos.Store/Info HTTP/2" 0 - http2.remote_reset - "-" 5 0 4998 - "-" "grpc-go/1.29.1" "50912787-d528-994f-b8ad-78dd42081fea" "thanos.dev.integrations.internal.fqdn:10901" "-" - - 172.20.65.175:10901 172.30.9.174:37594 - default
I don't see the incoming request in cluster B's ingress gateway (I have a public one and a private one, I checked both just to be sure).
I have tried:
- forcing an upgrade from HTTP/1.1 to HTTP/2 using a DestinationRule (see the sketch after this list)
- forcing TLS to be disabled using a DestinationRule
- excluding the private LB CIDR range to bypass the proxy
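For reference, a DestinationRule along the following lines can express the first two attempts (the resource name is assumed; h2UpgradePolicy and tls.mode are standard DestinationRule traffic-policy fields):

---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: thanos-integrations-dev  # name assumed
  namespace: thanos
spec:
  host: thanos.dev.integrations.internal.fqdn
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE  # force HTTP/1.1 -> HTTP/2 towards the upstream
    tls:
      mode: DISABLE               # disable Istio-originated TLS for this host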
Resources (Cluster A)
ServiceEntry:
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: thanos-integrations-dev
  namespace: thanos
spec:
  hosts:
    - thanos.dev.integrations.internal.fqdn
  location: MESH_EXTERNAL
  ports:
    - name: grpc-thanos-int-dev
      number: 10901
      protocol: GRPC
  resolution: DNS
Resources (Cluster B)
Gateway:
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    meta.helm.sh/release-name: istio-routing-layer
    meta.helm.sh/release-namespace: istio-system
  creationTimestamp: "2021-02-25T11:37:49Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: istio-routing-layer
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: istio-routing-layer
    app.kubernetes.io/version: 0.0.1
    helm.sh/chart: istio-routing-layer-0.0.1
  name: thanos
  namespace: istio-system
spec:
  selector:
    istio: internal-ingressgateway
  servers:
    - hosts:
        - thanos.dev.integrations.internal.fqdn
      port:
        name: grpc-thanos
        number: 10901
VirtualService:
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    meta.helm.sh/release-name: istio-routing-layer
    meta.helm.sh/release-namespace: istio-system
  creationTimestamp: "2021-02-25T11:37:49Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: istio-routing-layer
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: istio-routing-layer
    app.kubernetes.io/version: 0.0.1
    helm.sh/chart: istio-routing-layer-0.0.1
spec:
  gateways:
    - thanos
  hosts:
    - thanos.dev.integrations.internal.fqdn
  http:
    - route:
        - destination:
            host: thanos-sidecar.prometheus.svc.cluster.local
            port:
              number: 10901
Service:
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-thanos-istio
    meta.helm.sh/release-namespace: prometheus
  creationTimestamp: "2021-02-25T14:31:02Z"
  labels:
    app.kubernetes.io/instance: prometheus-thanos-istio
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-thanos-istio
    app.kubernetes.io/version: 0.0.1
    helm.sh/chart: prometheus-thanos-istio-0.0.1
spec:
  clusterIP: None
  ports:
    - name: grpc-thanos
      port: 10901
      protocol: TCP
      targetPort: grpc
  selector:
    app: prometheus
    component: server
  sessionAffinity: None
  type: ClusterIP
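One way to check how the sidecar in cluster A resolved the ServiceEntry is istioctl proxy-config; the pod name below is a placeholder:

# Inspect the Envoy cluster built for the external FQDN
istioctl proxy-config cluster <thanos-pod> -n thanos --fqdn thanos.dev.integrations.internal.fqdn
# Check which endpoints were resolved for port 10901
istioctl proxy-config endpoint <thanos-pod> -n thanos --port 10901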

Ingress -> Cluster IP back-end - got ERR_CONNECTION_REFUSED

I have an ingress defined as:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wp-ingress
  namespace: wordpress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 25m
spec:
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: wordpress
              servicePort: 6002
And the back-end is a ClusterIP service listening on port 6002.
When I try to reach the ingress by its ADDRESS in the browser, I get ERR_CONNECTION_REFUSED.
I suspect it has to do with the back-end?
Q: What could the problem be, and how do I analyse it to make it work?
See the picture below: it is on GCP, all IPs are resolved, and everything seems connected to each other.
The nginx-ingress controller (with default backend) was installed as a Helm chart:
helm install --namespace wordpress --name wp-nginx-ingress stable/nginx-ingress --tls
UPDATE:
I do not use HTTPS for the back-end yet, so I tried removing the redirect from the ingress YAML (dropped nginx.ingress.kubernetes.io/ssl-redirect: "true"), but it did not help.
UPDATE 2: the wordpress Service YAML, taken from the YAML tab of the running service in GCP -> GKE -> Services:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-30T04:11:12Z"
  labels:
    app.kubernetes.io/instance: wordpress
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: wordpress
    helm.sh/chart: wordpress-9.0.4
  name: wordpress
  namespace: wordpress
  resourceVersion: "2518308"
  selfLink: /api/v1/namespaces/wordpress/services/wordpress
  uid: 7dac1a73-723c-11ea-af1a-42010a800084
spec:
  clusterIP: xxx.xx.xxx.xx
  ports:
    - name: http
      port: 6002
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/instance: wordpress
    app.kubernetes.io/name: wordpress
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
UPDATE 3:
I tried:
kubectl -n wordpress exec -it wordpress-xxxx-xxxx -- /bin/bash
curl http://wordpress.wordpress.svc.cluster.local:6002
and it works: it returns the HTML from WordPress.
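ERR_CONNECTION_REFUSED usually means nothing is listening on the IP/port being hit, so it is worth comparing the ingress ADDRESS with the controller's LoadBalancer Service (the label selector below is an assumption based on the stable/nginx-ingress chart defaults):

# The ADDRESS on the ingress...
kubectl get ingress wp-ingress -n wordpress
# ...should match the EXTERNAL-IP of the controller Service, and ports 80/443 should be exposed
kubectl get svc -n wordpress -l app=nginx-ingress
# The controller pods should also be Running
kubectl get pods -n wordpress -l app=nginx-ingress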

503 Service Temporarily Unavailable Nginx + Kibana + AKS

I have deployed Kibana in AKS with server.basepath set to /logs, since I want it served under a subpath. I am trying to access the Kibana service through the nginx ingress controller, but it returns 503 Service Temporarily Unavailable even though the Service and Pod are running. Please help me with this.
Kibana Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kibana
      app.kubernetes.io/instance: icy-coral
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kibana
        app.kubernetes.io/instance: icy-coral
    spec:
      containers:
        - name: kibana
          image: "docker.elastic.co/kibana/kibana:7.6.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 5601
              protocol: TCP
          env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch:9200
            - name: SERVER_BASEPATH
              value: /logs
            - name: SERVER_REWRITEBASEPATH
              value: "true"
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
Kibana service:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5601
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: kibana
    app.kubernetes.io/instance: icy-coral
Kibana Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
  annotations:
    ingress.kubernetes.io/send-timeout: "600"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: ""
      http:
        paths:
          - path: /logs/?(.*)
            backend:
              serviceName: kibana
              servicePort: 80
Ensure Kibana is running:
kubectl logs -n kube-logging deploy/kibana
Check that the endpoints for the service are not empty:
kubectl describe svc kibana -n kube-logging
Check the ingress is correctly configured:
kubectl describe ingress kibana
Check the ingress controller logs:
kubectl logs -n <ingress-namespace> nginx-ingress-controller-.....
Update:
An Ingress can only reference Services in its own namespace, so try moving the Ingress to the kube-logging namespace.
Check out this issue: https://github.com/kubernetes/kubernetes/issues/17088
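A minimal sketch of that change, assuming everything else stays the same: give the Ingress an explicit namespace so it lands next to the kibana Service (or pass -n kube-logging when applying it):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging  # same namespace as the kibana Service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /logs/?(.*)
            backend:
              serviceName: kibana
              servicePort: 80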