503 Service Temporarily Unavailable Nginx + Kibana + AKS - kubernetes

I have deployed Kibana in AKS with server.basePath set to /logs, since I want it served under a subpath. I am trying to access the Kibana service through the NGINX ingress controller, but it returns 503 Service Unavailable even though the Service and Pod are running. Please help me with this.
Kibana Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kibana
      app.kubernetes.io/instance: icy-coral
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kibana
        app.kubernetes.io/instance: icy-coral
    spec:
      containers:
        - name: kibana
          image: "docker.elastic.co/kibana/kibana:7.6.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 5601
              protocol: TCP
          env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch:9200
            - name: SERVER_BASEPATH
              value: /logs
            - name: SERVER_REWRITEBASEPATH
              value: "true"
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
Kibana service:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5601
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: kibana
    app.kubernetes.io/instance: icy-coral
Kibana Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  labels:
    app.kubernetes.io/name: kibana
    helm.sh/chart: kibana-0.1.0
    app.kubernetes.io/instance: icy-coral
    app.kubernetes.io/managed-by: Tiller
  annotations:
    ingress.kubernetes.io/send-timeout: "600"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: ""
      http:
        paths:
          - path: /logs/?(.*)
            backend:
              serviceName: kibana
              servicePort: 80

Ensure Kibana is running:
kubectl logs -n kube-logging deploy/kibana
Check that the endpoints of the service are not empty:
kubectl describe svc kibana -n kube-logging
Check that the ingress is configured correctly:
kubectl describe ingress kibana
Check the ingress-controller logs:
kubectl logs -n <ingress-namespace> nginx-ingress-controller-.....
Update:
An Ingress can only reference Services in its own namespace, so try moving the Ingress to the kube-logging namespace.
Check out this: https://github.com/kubernetes/kubernetes/issues/17088
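For reference, a minimal sketch of the same Ingress moved into the kube-logging namespace (timeout annotations trimmed here for brevity):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /logs/?(.*)
            backend:
              serviceName: kibana
              servicePort: 80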

Related

Kong throws no Route matched with those values

I'm trying to set up Kong in Kubernetes. While following the official documentation at https://docs.konghq.com/kubernetes-ingress-controller/2.6.x/guides/using-kongingress-resource/ I'm unable to make it work.
Every request I make to the URL ends up with no route:
curl -X GET https://<$PROXY_IP>/<service_name> -L
{"message":"no Route matched with those values"}
Kong's logs don't show any errors, and I can see the route and service in the Kong admin API.
My k8s objects are the following.
Ingress Config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/override: <service_name>
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:05Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: <service_name>
  namespace: <service_ns>
  resourceVersion: "18823478"
  uid: 10f15c94-0c56-4d27-b97d-0ad5bdd549ec
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - backend:
              service:
                name: <service_name>
                port:
                  number: 8080
            path: /<service_name>
            pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
      - ip: 172.16.44.236
---
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  annotations:
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:05Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: <service_name>
  namespace: services
  resourceVersion: "18823487"
  uid: 9dd47a3c-598a-48fe-ba60-29779156b58d
route:
  methods:
    - GET
  strip_path: true
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    konghq.com/override: <service_name>
    meta.helm.sh/release-name: <service_name>
    meta.helm.sh/release-namespace: services
  creationTimestamp: "2022-09-23T14:03:04Z"
  labels:
    app.kubernetes.io/instance: <service_name>
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: <service_name>
    app.kubernetes.io/version: latest
    helm.sh/chart: <local_chart>
  name: authenticate
  namespace: services
  resourceVersion: "18820083"
  uid: a3112c22-4df7-40b0-8fd3-cde8516354cc
spec:
  clusterIP: 172.16.45.71
  clusterIPs:
    - 172.16.45.71
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/instance: <service_name>
    app.kubernetes.io/name: <service_name>
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I've been battling this issue for days now and I can't seem to figure it out.
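One way to see exactly which routes and services Kong built from these objects is to query the Kong admin API; the port-forward target below is an assumption (the admin service name and port depend on how Kong was installed):

# Port-forward the Kong admin API (service name/port are assumptions; adjust for your install)
kubectl port-forward -n kong service/kong-admin 8001:8001

# List what Kong actually registered; compare the paths/methods with the request being sent
curl -s http://localhost:8001/routes
curl -s http://localhost:8001/services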

GKE Kubernetes LoadBalancer returns connection reset by peer

I have encountered a strange problem with my cluster.
In my cluster I have a Deployment and a LoadBalancer Service exposing that Deployment.
It worked like a charm, but suddenly the LoadBalancer started to return an error:
curl: (56) Recv failure: Connection reset by peer
The error shows up while the pod and the load balancer are running and have no errors in their logs.
What I already tried:
deleting the pod
redeploying the Service + Deployment from scratch
but the issue persists.
My Service YAML:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME","app.kubernetes.io/version":"latest"},"name":"APP-NAME","namespace":"namespacex"},"spec":{"ports":[{"name":"web","port":3000}],"selector":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-08-03T07:55:00Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/name: APP-NAME
    app.kubernetes.io/version: latest
  name: APP-NAME
  namespace: namespacex
  resourceVersion: "14583904"
  uid: 7fb4d7e6-4316-44e5-8f9b-7a466bc776da
spec:
  clusterIP: 10.4.18.36
  clusterIPs:
    - 10.4.18.36
  externalTrafficPolicy: Cluster
  ports:
    - name: web
      nodePort: 30970
      port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/name: APP-NAME
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: xx.xxx.xxx.xxx
My Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP-NAME
  labels:
    app.kubernetes.io/name: APP-NAME
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "latest"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: APP-NAME
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      annotations:
        checksum/config: 5e6ff0d6fa64b90b0365e9f3939cefc0a619502b32564c4ff712067dbe44ab90
        checksum/secret: 76e0a1351da90c0cef06851e3aa9e7c80b415c29b11f473d4a2520ade9c892ce
      labels:
        app.kubernetes.io/name: APP-NAME
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      serviceAccountName: APP-NAME
      containers:
        - name: APP-NAME
          image: 'docker.io/xxxxxxxx:latest'
          imagePullPolicy: "Always"
          ports:
            - name: http
              containerPort: 3000
          livenessProbe:
            httpGet:
              path: /balancer/
              port: http
          readinessProbe:
            httpGet:
              path: /balancer/
              port: http
          env:
            ...
          volumeMounts:
            - name: config-volume
              mountPath: /home/app/config/
          resources:
            limits:
              cpu: 400m
              memory: 256Mi
            requests:
              cpu: 400m
              memory: 256Mi
      volumes:
        - name: config-volume
          configMap:
            name: app-config
      imagePullSecrets:
        - name: secret
The issue in my case turned out to be a network component (a firewall) blocking the outbound connection after deeming the cluster 'unsafe' for no apparent reason.
So in essence it was not a Kubernetes issue but an IT one.
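If you hit the same symptom, a quick way to separate a Kubernetes problem from something on the network path (which was the culprit here) is to compare an in-cluster request with an external one; the names below come from the manifests above, and the curl image is just an example:

# From inside the cluster: hit the Service directly
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://APP-NAME.namespacex.svc.cluster.local:3000/balancer/

# From outside the cluster: hit the LoadBalancer IP
curl -v http://<LOAD_BALANCER_IP>:3000/balancer/

If the in-cluster request succeeds while the external one is reset, the problem is on the path to the cluster rather than in Kubernetes.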

AKS ingress address is empty. Grafana not being exposed through ingress

I have an AKS cluster where I am running my application without any problem. I have two deployments (backend and frontend) and I am using a Service of type ClusterIP and an ingress controller to expose them publicly. I am trying to do the same with my Grafana deployment, but it's not working: I get a 404 error. If I port-forward instead, I can access it over http://localhost:3000 and log in successfully. The failing part is the ingress: it is created, but its address stays empty. I have three ingresses; two of them show the public IP in the Address column and are accessible through their URLs, while the Grafana URL is not working.
Here is my Grafana Ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: loki-grafana-ingress
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  tls:
    - hosts:
        - xxx.xxx.me
      secretName: sec-tls
  rules:
    - host: xxx.xxx.me
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: loki-grafana
              servicePort: 80
status:
  loadBalancer: {}
Attached is a screenshot of the created Ingresses.
Grafana Service:
kind: Service
apiVersion: v1
metadata:
  name: loki-grafana
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  ports:
    - name: service
      protocol: TCP
      port: 80
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/name: grafana
  clusterIP: 10.0.189.19
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
When doing kubectl port-forward --namespace ingress-basic service/loki-grafana 3000:80, Grafana works on http://localhost:3000 (screenshots of Grafana on localhost and of the Grafana URL attached).
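A sketch of the checks that usually narrow down an Ingress with an empty address (resource names come from the manifests above; the controller's namespace and labels are assumptions that depend on how ingress-nginx was installed):

# Events at the bottom of describe often say why the Ingress was not admitted
kubectl describe ingress loki-grafana-ingress -n ingress-basic

# Confirm the Service actually has endpoints behind it
kubectl get endpoints loki-grafana -n ingress-basic

# Find the ingress-nginx controller and look for sync errors for this Ingress
kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx
kubectl logs -n ingress-basic -l app.kubernetes.io/component=controller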

Get functional YAML files from Helm

Is there a way to intercept the YAML files from Helm after it has built them, but right before the creation of the objects?
What I'm doing now is to create the objects and then export them back out with:
for file in $(kubectl get OBJECT -n maesh -oname); do kubectl get $file -n maesh --export -oyaml > $file.yaml; done
This works fine; I only have to create the object directories beforehand. I was just wondering if there is a cleaner way of doing this.
And, by the way, the reason is that Traefik's service mesh (maesh) is still in its infancy, and the only way to install it is through Helm. They don't yet have the plain manifests in their repo.
You can run:
helm template .
This will output something like:
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
---
# Source: my-app/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "release-name-my-app-test-connection"
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['release-name-my-app:80']
  restartPolicy: Never
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-my-app
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
        app.kubernetes.io/instance: release-name
    spec:
      containers:
        - name: my-app
          image: "nginx:stable"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
---
# Source: my-app/templates/ingress.yaml
and that is a valid file of k8s objects.
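If you want each rendered manifest in its own file rather than one stream, helm template can also write the output to a directory; the chart path and target directory below are just examples:

# Write one file per rendered template under ./manifests
helm template . --output-dir ./manifests

# Or keep the single stream and apply it later
helm template . > all-objects.yaml
kubectl apply -f all-objects.yaml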

Minikube Ingress: unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found

I am running minikube with the below configuration:
Environment:
minikube version: v0.25.2
macOS version: 10.12.6
DriverName: virtualbox
ISO: minikube-v0.25.1.iso
I created an Ingress resource to map the service messy-chimp-emauser to the path /.
But when I roll out the changes to minikube, I get the below logs in the nginx-ingress-controller pod:
5 controller.go:811] service default/messy-chimp-emauser does not have any active endpoints
5 controller.go:245] unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found
5 controller.go:245] unexpected error reading configmap kube-system/udp-services: configmap kube-system/udp-services was not found
And hence I am getting HTTP 503 when trying to access the service from the browser.
Steps to reproduce
STEP 1
minikube addons enable ingress
STEP 2
kubectl create -f kube-resources.yml
(the actual image has been replaced with k8s.gcr.io/echoserver:1.4)
kube-resources.yml
apiVersion: v1
kind: Service
metadata:
  name: messy-chimp-emauser
  labels:
    app: messy-chimp-emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: emauser
  selector:
    app: messy-chimp-emauser
    release: messy-chimp
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: messy-chimp-emauser
  labels:
    app: emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emauser
      release: messy-chimp
  template:
    metadata:
      labels:
        app: emauser
        release: messy-chimp
    spec:
      containers:
        - name: emauser
          image: "k8s.gcr.io/echoserver:1.4"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: messy-chimp-ema-chart
  labels:
    app: ema-chart
    chart: ema-chart-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: messy-chimp-emauser
              servicePort: emauser
Please suggest what I should check here.
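Not a full answer, but the first controller log line above ("does not have any active endpoints") points at the Service not selecting the pod; a minimal check for that:

# If this shows no addresses, the Service selector / targetPort does not match the pod
kubectl get endpoints messy-chimp-emauser
kubectl describe svc messy-chimp-emauser
kubectl get pods --show-labels

In the manifests above, the Service selects app: messy-chimp-emauser while the pod template is labelled app: emauser, and targetPort: http names a port the container does not declare; either of those would leave the endpoints empty.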