Ingress config causes "All backend services are in UNHEALTHY state" - Kubernetes

I'm setting up a JanusGraph instance on Kubernetes (GKE) and my internal load balancer works fine. However, when using an Ingress to expose an HTTPS endpoint, I can't get past the following status:
All backend services are in UNHEALTHY state
My Service configuration is:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-04-24T05:53:47Z"
  labels:
    app: janusgraph
    chart: janusgraph-0.2.1
    heritage: Tiller
    release: janusgraph
  name: janusgraph-service
  namespace: default
  resourceVersion: "476142"
  selfLink: /api/v1/namespaces/default/services/janusgraph-service
  uid: f6c80efa-85ef-11ea-983b-42010a960015
spec:
  clusterIP: 10.43.250.31
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31038
    port: 8182
    protocol: TCP
    targetPort: 8182
  selector:
    app: janusgraph
    release: janusgraph
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.236.228.15
My Ingress configuration is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-31038--126ca7f10f7eb9fb":"Unknown"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-dispatchdqs--126ca7f10f7eb9fb
    ingress.kubernetes.io/target-proxy: k8s-tp-default-dispatchdqs--126ca7f10f7eb9fb
    ingress.kubernetes.io/url-map: k8s-um-default-dispatchdqs--126ca7f10f7eb9fb
  creationTimestamp: "2020-04-24T06:48:39Z"
  generation: 1
  name: dispatchdqs
  namespace: default
  resourceVersion: "492988"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/dispatchdqs
  uid: a1641ba9-85f7-11ea-9f9f-42010a960046
spec:
  backend:
    serviceName: janusgraph-service
    servicePort: 8182
status:
  loadBalancer:
    ingress:
    - ip: 35.201.92.97
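For reference, this is roughly how I've been checking the backend health on the GCE side (a sketch; it assumes gcloud is pointed at the right project, and the backend service name is taken from the ingress.kubernetes.io/backends annotation above):

# List the backend services the GKE ingress controller created
gcloud compute backend-services list

# Show the health of the backend referenced by the annotation above
gcloud compute backend-services get-health k8s-be-31038--126ca7f10f7eb9fb --global

# Inspect the health checks the load balancer is using
gcloud compute health-checks list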
Thanks for your help in advance.

Related

Kong throws no Route matched with those values

I'm trying to set up Kong in Kubernetes. While following the official documentation at https://docs.konghq.com/kubernetes-ingress-controller/2.6.x/guides/using-kongingress-resource/ I'm unable to make it work.
Every request I make to the URL ends up with no route matched:
curl -X GET https://<$PROXY_IP>/<service_name> -L
{"message":"no Route matched with those values"}
Kong logs don't show any errors, and I can see the route and service in the Kong admin.
My k8s objects are the following.
Ingress Config:
kind: Ingress
metadata:
  annotations:
    konghq.com/override: <service_name>
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:05Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: <service_name>
  namespace: <service_ns>
  resourceVersion: "18823478"
  uid: 10f15c94-0c56-4d27-b97d-0ad5bdd549ec
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - backend:
          service:
            name: <service_name>
            port:
              number: 8080
        path: /<service_name>
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 172.16.44.236

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  annotations:
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:05Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: <service_name>
  namespace: services
  resourceVersion: "18823487"
  uid: 9dd47a3c-598a-48fe-ba60-29779156b58d
route:
  methods:
  - GET
  strip_path: true

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    konghq.com/override: <service_name>
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:04Z"
  labels:
    app.kubernetes.io/instance: <service_name>
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: <service_name>
    app.kubernetes.io/version: latest
    helm.sh/chart: <local_chart>
  name: authenticate
  namespace: services
  resourceVersion: "18820083"
  uid: a3112c22-4df7-40b0-8fd3-cde8516354cc
spec:
  clusterIP: 172.16.45.71
  clusterIPs:
  - 172.16.45.71
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: <service_name>
    app.kubernetes.io/name: <service_name>
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I've been battling this issue for days now and I can't seem to figure it out.
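In case it helps, this is roughly how I've been inspecting what Kong registered (a sketch; it assumes the Kong admin API has been made reachable on localhost:8001, e.g. via kubectl port-forward, and service/port names vary per install):

# List the routes and services Kong knows about
curl -s http://localhost:8001/routes
curl -s http://localhost:8001/services

# The failing request again (note the KongIngress above only allows GET)
curl -i -X GET https://<$PROXY_IP>/<service_name>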

Ingress returns 404

Rancher ingress returns 404 for the service.
Setup: I have 6 VMs: one Rancher server at x.x.x.51 (where the DNS name domain.company points, with TLS), and 5 cluster VMs (one master and 4 workers, x.x.x.52-56).
My service, gvm-gsad, running in the gvm namespace:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    meta.helm.sh/release-name: gvm
    meta.helm.sh/release-namespace: gvm
  creationTimestamp: "2021-11-15T21:14:21Z"
  labels:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gvm-gsad
    app.kubernetes.io/version: "21.04"
    helm.sh/chart: gvm-1.3.0
  name: gvm-gsad
  namespace: gvm
  resourceVersion: "3488107"
  uid: c1ddfdfa-3799-4945-841d-b6aa9a89f93a
spec:
  clusterIP: 10.43.195.239
  clusterIPs:
  - 10.43.195.239
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: gsad
    port: 80
    protocol: TCP
    targetPort: gsad
  selector:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/name: gvm-gsad
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
My ingress configuration (the ingress controller is the default one from Rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.16.12.53"],"port":443,"protocol":"HTTPS","serviceName":"gvm:gvm-gsad","ingressName":"gvm:gvm","hostname":"dtl.miproad.ad","path":"/gvm","allNodes":true}]'
  creationTimestamp: "2021-11-16T19:22:45Z"
  generation: 10
  name: gvm
  namespace: gvm
  resourceVersion: "3508472"
  uid: e99271a8-8553-45c8-b027-b259a453793c
spec:
  rules:
  - host: domain.company
    http:
      paths:
      - backend:
          service:
            name: gvm-gsad
            port:
              number: 80
        path: /gvm
        pathType: Prefix
  tls:
  - hosts:
    - domain.company
status:
  loadBalancer:
    ingress:
    - ip: x.x.x.53
    - ip: x.x.x.54
    - ip: x.x.x.55
    - ip: x.x.x.56
When I access it at https://domain.company/gvm I get a 404. However, when I change the service to NodePort, I can access it at x.x.x.52:PORT normally, meaning the deployment is running fine and it's just a configuration issue in the ingress.
I checked this one: rancher 2.x thru ingress controller returns 404, but it does not help.
Thank you in advance!
Figured out the solution.
domain.company points to Rancher (x.x.x.51), whereas the ingress is running on the worker nodes (x.x.x.53, .54, .55, .56).
So the solution is to create a new DNS record, gvm.domain.company, pointing to any of the ingress nodes (x.x.x.53, .54, .55, .56); you can put a load balancer in front or use round-robin DNS.
Then the ingress definition uses host gvm.domain.company and path "/", as sketched below.
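For completeness, a minimal sketch of the reworked Ingress spec (host switched to gvm.domain.company, path set to /; everything else mirrors the original resource, so treat it as illustrative rather than exact):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gvm
  namespace: gvm
spec:
  rules:
  # host now matches the new DNS record instead of domain.company
  - host: gvm.domain.company
    http:
      paths:
      - backend:
          service:
            name: gvm-gsad
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - gvm.domain.company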
Hope it helps others!

AKS ingress address is empty. Grafana not being exposed through ingress

I have an AKS cluster where I'm running my application without any problem. I have two deployments (backend and frontend), and I'm using a service of type ClusterIP plus an ingress controller to expose them publicly. I'm trying to do the same with my Grafana deployment, but it's not working: I get a 404 error. If I run a port-forward I can access it at http://localhost:3000 and log in successfully. The failing part is the ingress: it is created, but with an empty address. I have three ingresses; two of them show the public IP in the Address column and are accessible through their URLs, while the Grafana URL is not working.
Here is my Grafana Ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: loki-grafana-ingress
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  tls:
  - hosts:
    - xxx.xxx.me
    secretName: sec-tls
  rules:
  - host: xxx.xxx.me
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          serviceName: loki-grafana
          servicePort: 80
status:
  loadBalancer: {}
Attached is a screenshot of the Ingresses created.
Grafana Service
kind: Service
apiVersion: v1
metadata:
  name: loki-grafana
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  ports:
  - name: service
    protocol: TCP
    port: 80
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/name: grafana
  clusterIP: 10.0.189.19
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
When doing kubectl port-forward --namespace ingress-basic service/loki-grafana 3000:80:
(screenshots: Grafana at localhost, Grafana URL)
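These are roughly the commands I've been using to compare the working ingresses with the Grafana one (a sketch; the controller label selector in the last command is an assumption and may differ per install):

# Compare the ADDRESS column of the three ingresses
kubectl get ingress -n ingress-basic

# Look for sync events/errors on the Grafana ingress
kubectl describe ingress loki-grafana-ingress -n ingress-basic

# Confirm the nginx ingress controller is running (label is an assumption)
kubectl get pods -n ingress-basic -l app.kubernetes.io/component=controller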

Traefik Ingress not routing traffic

I have deployed a Kubernetes cluster on Vagrant machines with the following configuration:
one master and two worker nodes.
Two services are deployed, named nodeport-svc-rc and nodeport-svc-rs.
Services config:
# nodeport-svc-rc
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc-rc
spec:
  type: NodePort
  ports:
  - port: 5001
    targetPort: 5001
    nodePort: 30001
  selector:
    app: controller

# nodeport-svc-rs
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc-rs
spec:
  type: NodePort
  ports:
  - port: 5002
    targetPort: 5002
    nodePort: 30002
  selector:
    app: controller-rs
Ingress Config:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /demo
        backend:
          serviceName: nodeport-svc-rc
          servicePort: 5001
      - path: /demof
        backend:
          serviceName: nodeport-svc-rs
          servicePort: 5002
Traefik detects the ingress resource and lists the backend services on its dashboard, but no Frontends are detected and no IP addresses show up under Backends.
Entry in the /etc/hosts file: XXX.XXX.X.X example.com
I'm unable to route traffic using the ingress. If I hit example.com/demo from the browser, I get a "Site can't be reached" error. Where am I going wrong? Can someone help me?
# sudo kubectl describe ing
Name:             ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host         Path    Backends
  ----         ----    --------
  example.com
               /demo   nodeport-svc-rc:5001 (10.244.171.95:5001,10.244.171.96:5001,10.244.235.150:5001)
               /demof  nodeport-svc-rs:5002 (10.244.171.98:5002,10.244.235.157:5002,10.244.235.158:5002)
Annotations:      ingress.kubernetes.io/auth-secret: mysecret
                  ingress.kubernetes.io/auth-type: basic
                  kubernetes.io/ingress.class: traefik
Events:           <none>
When I hit the NodePort services directly at example.com:30001 or example.com:30002, they respond successfully.
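For reference, this is the kind of request I've been testing with (a sketch; <node-ip> is a placeholder for a worker node address, and it assumes Traefik's HTTP entrypoint is exposed on port 80 of that node):

# Bypass /etc/hosts and send the expected Host header straight to a node
curl -v -H "Host: example.com" http://<node-ip>/demo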
Edit: below is the Traefik controller config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7.26-alpine
        name: traefik-ingress-lb
        args:
        - --web
        - --kubernetes
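If it matters, here is a minimal sketch of how a basic auth secret for the ingress.kubernetes.io/auth-secret annotation can be created (user and password are placeholders; I'm assuming Traefik 1.7 reads the secret from the same namespace as the Ingress, i.e. default here):

# Generate an htpasswd entry and store it in the secret referenced by the Ingress
htpasswd -cb auth myuser mypassword
kubectl create secret generic mysecret --from-file=auth -n default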

Services can't communicate because DNS is not resolving in Kubernetes

I configured my services with type ClusterIP, and I want them to communicate with each other.
Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-backend-deployment
  name: app-backend
spec:
  type: ClusterIP
  ports:
  - port: 8020
    protocol: TCP
    targetPort: 8100
  selector:
    app: app-backend
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-backend
  name: app-backend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-backend
  template:
    metadata:
      labels:
        app: app-backend
    spec:
      containers:
      - name: app-backend
        image: app-backend
        ports:
        - containerPort: 8100
        imagePullPolicy: Never
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-conf # name of configMap
data:
  BACKEND_SERVICE_HOST: app-backend:8020
That is what I pass to the frontend service, and I want to make a REST call through the DNS name, for example http://app-backend:8020/get/1. But as I can see in the console, the app cannot resolve the DNS name: net::ERR_NAME_NOT_RESOLVED.
I also checked DNS from a pod with nslookup:
busybox nslookup app-backend.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: app-backend.default.svc.cluster.local
Address: 10.106.41.36
And compared it to:
kubectl describe svc app-backend
Name: app-backend
Namespace: default
Labels: app=app-backend-deployment
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"...
Selector: app=app-backend
Type: ClusterIP
IP: 10.106.41.36
Port: <unset> 8020/TCP
TargetPort: 8100/TCP
As you can see, the Address matches the service IP, but I don't know where to look to find out why DNS resolution doesn't work. kubectl version: Client "v1.15.5", Server "v1.17.3".
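A quick way to test resolution and connectivity from inside the cluster (a sketch; /get/1 is just the example endpoint from above):

# Throwaway pod that calls the service by its cluster DNS name
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://app-backend.default.svc.cluster.local:8020/get/1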
Because the frontend is served to the local machine (that's how Angular works), its REST requests cannot go through Kubernetes DNS to reach the backend service; I need to route them through the Ingress. Due to the different annotations, I have to use two Ingresses. Maybe there is a better way with just one, but when I try to use only one Ingress I can't find a way to make both paths work with the same annotation.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: app-backend-ingress
spec:
  rules:
  - host: app.io
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: app-backend
          servicePort: 8020
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-frontend-ingress
spec:
  rules:
  - host: app.io
    http:
      paths:
      - path: /
        backend:
          serviceName: app-frontend
          servicePort: 80