Why does GKE show backend instances as not healthy?

Following this blog post I created a GKE Kubernetes cluster.
I then deployed Keycloak instances, and if I use a load balancer:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: load-balancer
  name: load-balancer
  namespace: keycloak
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      name: keycloak
  selector:
    app: keycloak
I can reach Keycloak.
After that, following the Google documentation, I created an Ingress to access Keycloak over HTTPS:
managed-certificate.yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: keycloak
spec:
  domains:
    - mydomain.com
    - www.mydomain.com
keycloak-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: keycloak-service
  namespace: keycloak
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: keycloak
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8443
  externalTrafficPolicy: Cluster
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  namespace: keycloak
  annotations:
    kubernetes.io/ingress.global-static-ip-name: keycloak
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: keycloak-service
      port:
        number: 443
I also added liveness and readiness probes to the Keycloak deployment definition.
But with this configuration GKE reports that the backend instances are unhealthy, even though they are healthy and running.
I've read in some related questions on Stack Overflow that this is an issue with the NEG (network endpoint group). Should I add firewall rules for the NEG and the Ingress?
If so, what should the rules be?
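(For reference: with container-native load balancing, the load balancer's health checks come from Google's documented probe ranges 130.211.0.0/22 and 35.191.0.0/16, so those ranges have to be allowed through to the serving port. The health check can also be pointed at an explicit port and path with a BackendConfig attached to the Service, as one of the related answers below does. A rough sketch with placeholder names; the type, port, and path here are assumptions and must match something the Keycloak pods actually answer on:)
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: keycloak-backendconfig   # placeholder name
  namespace: keycloak
spec:
  healthCheck:
    checkIntervalSec: 15
    type: HTTPS                  # assumption: the pods serve TLS on 8443
    port: 8443                   # must be a port the pods respond on
    requestPath: /health         # must return HTTP 200
(It would be attached to keycloak-service with the annotation cloud.google.com/backend-config: '{"default": "keycloak-backendconfig"}', next to the existing NEG annotation.)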
EDIT: keycloak-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak-deployment
  namespace: keycloak
  labels:
    app: keycloak
spec:
  replicas: 2
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:latest
          env:
            - name: DB_VENDOR
              value: "POSTGRES"
            - name: DB_ADDR
              value: "postgres"
            - name: DB_DATABASE
              value: "keycloak"
            - name: DB_USER
              value: "keycloak"
            - name: DB_SCHEMA
              value: "public"
            - name: DB_PASSWORD
              value: "password"
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "password"
            - name: KEYCLOAK_STATISTICS
              value: all
            - name: JDBC_PARAMS
              value: "useSSL=false"
            - name: JGROUPS_DISCOVERY_PROTOCOL
              value: "JDBC_PING"
            - name: JGROUPS_DISCOVERY_PROPERTIES
              value: datasource_jndi_name=java:jboss/datasources/KeycloakDS,info_writer_sleep_time=500,initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING ( own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, created timestamp default current_timestamp, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name))"
          resources:
            limits:
              memory: 512Mi
              cpu: "1"
            requests:
              memory: 256Mi
              cpu: "0.2"
          startupProbe:
            httpGet:
              path: /health
              port: 9990
            initialDelaySeconds: 120
            failureThreshold: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 9990
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9990
            successThreshold: 3

Related

Zonal network endpoint group unhealthy even though the container application is working properly

I've created a Kubernetes cluster on Google Cloud, and even though the application is running properly (which I've checked by running requests inside the cluster), it seems that the NEG health check is not working properly. Any ideas on the cause?
I've tried changing the service from NodePort to LoadBalancer and different ways of adding annotations to the service. I was thinking that perhaps it might be related to the HTTPS requirement on the Django side.
# [START kubernetes_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moner-app
  labels:
    app: moner-app
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: moner-app
  template:
    metadata:
      labels:
        app: moner-app
    spec:
      containers:
        - name: moner-core-container
          image: my-template
          imagePullPolicy: Always
          resources:
            requests:
              memory: "128Mi"
            limits:
              memory: "512Mi"
          startupProbe:
            httpGet:
              path: /ht/
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
            failureThreshold: 30
            timeoutSeconds: 10
            periodSeconds: 10
            initialDelaySeconds: 90
          readinessProbe:
            initialDelaySeconds: 120
            httpGet:
              path: "/ht/"
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
            periodSeconds: 10
            failureThreshold: 3
            timeoutSeconds: 10
          livenessProbe:
            initialDelaySeconds: 30
            failureThreshold: 3
            periodSeconds: 30
            timeoutSeconds: 10
            httpGet:
              path: "/ht/"
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
          volumeMounts:
            - name: cloudstorage-credentials
              mountPath: /secrets/cloudstorage
              readOnly: true
          env:
            # [START_secrets]
            - name: THIS_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: GRACEFUL_TIMEOUT
              value: '120'
            - name: GUNICORN_HARD_TIMEOUT
              value: '90'
            - name: DJANGO_ALLOWED_HOSTS
              value: '*,$(THIS_POD_IP),0.0.0.0'
          ports:
            - containerPort: 5000
          args: ["/start"]
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy:1.16
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=moner-dev:us-east1:core-db=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          resources:
            requests:
              memory: "64Mi"
            limits:
              memory: "128Mi"
          volumeMounts:
            - name: cloudsql-oauth-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
        - name: cloudstorage-credentials
          secret:
            secretName: cloudstorage-credentials
      # [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
    - name: moner-core-http
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: moner-app
# [END service]
---
# [START certificates_setup]
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
    - domain.com
    - app.domain.com
# [END certificates_setup]
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
      - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: moner-ssl
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: moner-svc
      port:
        name: moner-core-http
Apparently, you didn't have a GCP firewall rule allowing traffic on port 5000 to your GKE nodes. Creating an ingress firewall rule with source range 0.0.0.0/0 and port TCP 5000, targeted at your GKE nodes, could allow your setup to work even with port 5000.
I'm still not sure why, but I managed to get it working when I moved the service to port 80 and kept the health check on 5000.
Service config:
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
    - name: moner-core-http
      port: 80
      protocol: TCP
      targetPort: 5000
  selector:
    app: moner-app
# [END service]
Backend config:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
      - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/

Connect to PostgreSQL from inside a Kubernetes cluster

I set up a series of VMs, 192.168.2.(100,105,101,104), where the Kubernetes master is on 100 and the two workers are on 101 and 104. I also set up Postgres on 192.168.2.105 and followed this tutorial, but it is still unreachable from within the cluster. I tried it with minikube inside a test VM, where minikube and Postgres were installed in the same VM, and it worked just fine.
I changed the Postgres config file to listen on * instead of localhost, and allowed 0.0.0.0/0 in pg_hba.conf.
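(For reference, those settings usually look roughly like the lines below; this is only a sketch assuming password authentication, and the exact file locations and auth method can differ per installation.)
# postgresql.conf
listen_addresses = '*'

# pg_hba.conf
host    all    all    0.0.0.0/0    md5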
I installed postgresql-12 and postgresql-client-12 on the VM 192.168.2.105 (listening on 5432). Now I have added a headless service to Kubernetes, which is as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.168.2.105
    ports:
      - port: 5432
In my deployment I define this to access the database:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:11.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: 'my-service:5432'
            - name: DB_DATABASE
              value: postgres
            - name: DB_PASSWORD
              value: admin
            - name: DB_SCHEMA
              value: public
            - name: DB_USER
              value: postgres
            - name: DB_VENDOR
              value: POSTGRES
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
Also, the VMs are bridged, not on NAT.
What am I doing wrong here?
The first thing we have to do is create the headless service with a custom endpoint. The IP in my solution is specific to my machine.
Endpoint with service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-service
subsets:
  - addresses:
      - ip: 192.168.2.105
    ports:
      - port: 5432
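(Strictly speaking, the Service above is a regular ClusterIP Service with manual Endpoints, which works fine. A truly headless variant would only differ by setting clusterIP to None; a sketch:)
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  clusterIP: None        # headless: DNS resolves directly to the Endpoints address
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432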
As for my particular setup, I haven't defined any ingress or load balancer, so I'll change the service type from LoadBalancer to NodePort after it's deployed.
Now I deployed Keycloak with the following .yaml file:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: https
      port: 8443
      targetPort: 8443
  selector:
    app: keycloak
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:11.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin" # TODO give username for master realm
            - name: KEYCLOAK_PASSWORD
              value: "admin" # TODO give password for master realm
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: # <Node-IP>:<LoadBalancer-Port/ NodePort>
            - name: DB_DATABASE
              value: "keycloak" # Database to use
            - name: DB_PASSWORD
              value: "admin" # Database password
            - name: DB_SCHEMA
              value: public
            - name: DB_USER
              value: "postgres" # Database user
            - name: DB_VENDOR
              value: POSTGRES
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
After filling in all of these values, Keycloak connects successfully to the Postgres server that is hosted on another machine, outside of the Kubernetes master and worker nodes!

How to set up an HTTPS load balancer in Kubernetes

I have a requirement to make my application support requests over HTTPS and to block the HTTP port. I want to use a certificate provided by my company, so do I need JKS certs or some other type? I'm not sure how to set up HTTPS in GKE. I have seen a couple of pieces of documentation, but they are not clear. This is my current Kubernetes deployment file. Please let me know how I can configure it.
apiVersion: v1
kind: Service
metadata:
  name: oms-integeration-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
  selector:
    app: integeration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: integeration
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: integeration
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8081",
            "--backend=127.0.0.1:8080",
            "--service=oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog",
            "--rollout_strategy=managed",
          ]
        - name: integeration-container
          image: us.gcr.io/gcp-dsw-oms-int-{{env}}/gke/oms-integ-service:{{tag}}
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 500M
          env:
            - name: LOGGING_FILE
              value: "integeration-container"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
You have to create a Secret that contains your SSL certificate and then reference that Secret in your ingress spec, as explained here.
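A minimal sketch of what that can look like, with placeholder names and data. The kubernetes.io/tls Secret type expects PEM-encoded material (a company-issued JKS keystore would typically need to be converted to PEM first), and such a Secret is usually generated with kubectl create secret tls:
apiVersion: v1
kind: Secret
metadata:
  name: oms-tls                              # placeholder name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate PEM>  # placeholder
  tls.key: <base64-encoded private key PEM>  # placeholder
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - secretName: oms-tls                    # references the Secret above
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
With this in place the GCE ingress controller provisions the HTTPS frontend from the certificate in the Secret; blocking plain HTTP is a separate step (the kubernetes.io/ingress.allow-http: "false" annotation).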

I can't connect my ingress to my service

I have a problem with my ingress and my service: when I connect to the IP of my server, I am not redirected to the service I have associated with port 80, which is my website. Here are the configuration files and the description of the ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: bookstack
            - name: MYSQL_PASS
              value: pass
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_USER
              value: user
          image: mysql:5.7
          name: mysql
          ports:
            - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
        - env:
            - name: namespace
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: podname
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: nodename
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: DB_DATABASE
              value: bookstack
            - name: DB_HOST
              value: mysql
            - name: DB_PASSWORD
              value: root
            - name: DB_USERNAME
              value: root
          image: solidnerd/bookstack:latest
          name: bookstack
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
    - name: http-port
      port: 80
      protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears on my ingress:
Name:             http
Namespace:        bookstack
Address:
Default backend:  bookstack:http-port (10.36.0.22:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     bookstack:http-port (10.36.0.22:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events:           <none>
It doesn't return any external IP for me to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.
The main problem was that I didn't have the load balancer that Google Kubernetes Engine offers by default activated; without it active, I couldn't get an external IP because there was no balancer. There are two solutions: either activate GKE's default load balancer, or create a Service of type LoadBalancer.
It is also important to define the readinessProbe and livenessProbe inside the deployment.
An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
There wouldn't be an external IP, specifically because a NodePort service exposes that specific port on all the nodes in your cluster. So, essentially, you would have to point an external load balancer or the traffic source at each of the nodes in your cluster on that specific NodePort.
Note that if you are using externalTrafficPolicy: Local, only the nodes that have pods for your service will reply.
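For illustration, that policy is a field on the Service spec; a minimal sketch based on the bookstack Service above:
apiVersion: v1
kind: Service
metadata:
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only nodes running a bookstack pod answer on the NodePort
  ports:
    - name: http-port
      port: 80
      protocol: TCP
  selector:
    app: bookstack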

Unhealthy load balancer on GCE

I have a couple of services and their load balancers work fine. Now I keep facing an issue with a service that runs fine, but when a load balancer is applied I cannot get it to work, because one service seems to be unhealthy, and I cannot figure out why. How can I get that service healthy?
Here are my k8s YAML files.
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - name: api
              containerPort: 8080
Service.yaml
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - name: http
      port: 8080
    - name: external
      port: 80
      targetPort: 80
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - foo.bar.io
      secretName: api-tls
  rules:
    - host: foo.bar.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 80
The problem was solved by configuring the ports in the correct way. The container, Service, and LB ports need (obviously) to be aligned. I also added initialDelaySeconds to the readiness probe.
LB:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    # kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - api.foo.io
      secretName: api-tls
  rules:
    - host: api.foo.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 8080
Service:
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobarbar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 45
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080