I am trying to deploy my Tomcat on Kubernetes, but when I run kubectl create -f deploy-tomcat.yaml I always get the same error:
error from server (need to declare liveness (found 0), need to declare readiness (found 0))
deploy-tomcat.yaml :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat-image
        ports:
        - containerPort: 8080
I suggest adding a livenessProbe and a readinessProbe to your manifest (at the container level), e.g.:
readinessProbe:
  tcpSocket:
    port: 8080
livenessProbe:
  tcpSocket:
    port: 8080
Please note that it is not default Kubernetes behavior to enforce the existence of these probes; I assume this requirement was added to your cluster by a validating admission webhook.
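For completeness, here is roughly how your manifest could look with both probes added at the container level (names and ports are taken from your question; the probe timings are only illustrative and should be tuned to your app's startup time):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat-image
        ports:
        - containerPort: 8080
        # the container is restarted if this check keeps failing
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
        # the pod only receives Service traffic once this check passes
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10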
Related
We recently updated the deployment of a Dropwizard service running on Docker and Kubernetes.
It was working correctly before: the readiness probe pinged the healthcheck on the internal cluster IP and got 200s. Since the update, the healthcheck pings return a 301 and the service is considered down.
I've noticed that the healthcheck is now "Default kubernetes L7 Loadbalancing health check for NEG." (port set to 80), whereas it was previously "Default kubernetes L7 Loadbalancing health check.", where the port was configurable.
The kube file is deployed via CircleCI; the manifest with the readiness probe is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pes-${CIRCLE_BRANCH}
  namespace: ${GKE_NAMESPACE_NAME}
  annotations:
    reloader.stakater.com/auto: 'true'
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ***
  template:
    metadata:
      labels:
        app: ***
    spec:
      containers:
      - name: ***
        image: ***
        envFrom:
        - configMapRef:
            name: ***
        - secretRef:
            name: ***
        command: ['./gradlew', 'run']
        resources: {}
        ports:
        - name: pes
          containerPort: 5000
        readinessProbe:
          httpGet:
            path: /api/healthcheck
            port: pes
          initialDelaySeconds: 15
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: ***
  namespace: ${GKE_NAMESPACE_NAME}
spec:
  ports:
  - name: pes
    port: 5000
    targetPort: pes
    protocol: TCP
  selector:
    app: ***
  type: LoadBalancer
Any ideas on how this needs to be configured in GCP?
I have a feeling that the new deployment has changed from a legacy health check to a non-legacy one, but I have no idea what else needs to be set up for it to work. Does the kube file handle creating firewall rules, or does that need to be done manually?
I have been reading the docs at https://cloud.google.com/load-balancing/docs/health-check-concepts?hl=en
EDIT:
The issue is now resolved. After the GKE version was updated, it creates a NEG health check by default. We disabled this by adding the annotation below to the Service manifest.
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":false}'
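Applied to the Service from above, the manifest then looks roughly like this (masked names and CI variables kept as placeholders):
apiVersion: v1
kind: Service
metadata:
  name: ***
  namespace: ${GKE_NAMESPACE_NAME}
  annotations:
    # opt this Service out of NEG-based (container-native) load balancing
    cloud.google.com/neg: '{"ingress":false}'
spec:
  ports:
  - name: pes
    port: 5000
    targetPort: pes
    protocol: TCP
  selector:
    app: ***
  type: LoadBalancer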
I have a GKE cluster that I'm trying to get working with HTTPS load balancing.
So far I have:
deployment
service (x 2 -- see below)
ingress
SSL cert -- google managed version
All of these seem to be working, but I'm getting a 502 error when connecting to the hostname via https:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
When trying to trace this down I found a debugging post but when combing through it I found that his ingress shows ports 80,443 ... while I can never get mine to show anything but port 80.
This is even after I split my service into two separate services, one on port 443 and one on port 80, and pointed the ingress only at the 443 service; it still shows up with just port 80, and I'm still getting the 502 error.
The YAML for the deployment (asked by the commenter below):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/myapp-dev/myapp-container:latest
        ports:
        - containerPort: 8080
The YAML for the '443 service':
apiVersion: v1
kind: Service
metadata:
  name: my-service443
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8080
And the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.global-static-ip-name: "kubething"
    networking.gke.io/managed-certificates: clearspring-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: my-service443
      port:
        name: https
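(For reference, the clearspring-cert referenced in the annotation above is a GKE ManagedCertificate defined separately; it looks roughly like this, with the domain only a placeholder:)
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: clearspring-cert
spec:
  domains:
  - example.com   # placeholder; the real hostname goes here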
I don't understand (a) why the ingress is showing only port 80 and (b) why I'm still getting 502 errors.
Thanks much for any help whatsoever!
It looks like it was missing readiness and liveness probes; when I changed the deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleardev-deployment
  labels:
    app: clearspring
spec:
  replicas: 2
  selector:
    matchLabels:
      app: clearspring
  template:
    metadata:
      labels:
        app: clearspring
    spec:
      containers:
      - name: clearspring
        image: gcr.io/clearspring-dev/clearspring-container:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
Then the status changed from UNHEALTHY to Unknown ... but I was still getting the 502 error.
The liveness probe did its job: the process was not listening on port 8080 on all interfaces, just on 127.0.0.1. I fixed that ... still not working, but I added EXPOSE 8080 to the Dockerfile, and now I guess I need to look at firewall rules because the liveness/readiness probes can't connect.
Note that I had to delete and recreate the cluster to get this far ... I think. At first I tried just updating the deployment, and I didn't get any change from UNHEALTHY.
I'm not sure if this is considered a best practice, but for ease of management I have created a Deployment that consists of 2 containers (an Api Event server and an Api server). The Api server can send events that need to be processed by the Api Event server and returned back. It is easier for me to manage these in one pod, which allows localhost access between them, so I don't have to worry about defining ClusterIP services for all my environments.
One of my concerns is that if, say, the Api Event server exits with an error, the pod will still be active because the Api server continues to run. Is there a way to tell Kubernetes to terminate a pod if one of its containers fails?
Here is my deployment; only port 8080 is exposed to the public via a LoadBalancer service. Perhaps I can somehow add liveness and readiness probes to both of these containers?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: development-api
  template:
    metadata:
      labels:
        app: development-api
    spec:
      containers:
      - name: development-api-server
        image: <my-server-image>
        ports:
        - containerPort: 8080
          protocol: TCP
      - name: development-events-server
        image: <my-events-image>
        ports:
        - containerPort: 3000
          protocol: TCP
Use liveness and readiness probes.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
In your case:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: development-api
  template:
    metadata:
      labels:
        app: development-api
    spec:
      containers:
      - name: development-api-server
        image: <my-server-image>
        ports:
        - containerPort: 8080
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: 8080
      - name: development-events-server
        image: <my-events-image>
        ports:
        - containerPort: 3000
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: 3000
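If the containers expose HTTP health endpoints, an httpGet probe is usually more meaningful than a plain TCP check; for example, for the events server (the /healthz path and timings here are only assumptions, use whatever your server actually serves):
livenessProbe:
  httpGet:
    path: /healthz   # assumed endpoint; replace with your server's real health route
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
Note that a failing liveness probe restarts only the failing container, not the whole pod, which in this setup is usually what you want.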
When I create a GCE ingress, Google Load Balancer does not set the health check from the readiness probe. According to the docs (Ingress GCE health checks) it should pick it up.
Expose an arbitrary URL as a readiness probe on the pods backing the Service.
Any ideas why?
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  selector:
    matchLabels:
      app: frontend-prod
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: frontend-prod
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - image: app:latest
        readinessProbe:
          httpGet:
            path: /healthcheck
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 5
        name: frontend-prod-app
      - env:
        - name: PASSWORD_PROTECT
          value: "1"
        image: nginx:latest
        readinessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        name: frontend-prod-nginx
Service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: frontend-prod
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-prod-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: frontend-prod-ip
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: frontend-prod
    servicePort: 80
So apparently, you need to include the container port in the PodSpec.
This does not seem to be documented anywhere.
e.g.
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
Thanks, Brian! https://github.com/kubernetes/ingress-gce/issues/241
This is now possible in the latest GKE (I am on 1.14.10-gke.27, not sure if that matters)
Define a readinessProbe on your container in your Deployment.
Recreate your Ingress.
The health check will point to the path in readinessProbe.httpGet.path of the Deployment yaml config.
Update by Jonathan Lin below: This has been fixed very recently. Define a readinessProbe on the Deployment. Recreate your Ingress. It will pick up the health check path from the readinessProbe.
GKE Ingress health check path is currently not configurable. You can go to http://console.cloud.google.com (UI) and visit Load Balancers list to see the health check it uses.
Currently the health check for an Ingress is GET / on each backend: specified on the Ingress. So all your apps behind a GKE Ingress must return HTTP 200 OK to GET / requests.
That said, the health checks you specified on your Pods are still used by the kubelet to make sure your Pod is actually functioning and healthy.
Google has recently added support for a CRD that can configure your Backend Services, including their health checks:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config
  namespace: prod
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 8080
    type: HTTP # case-sensitive
    requestPath: /healthcheck
See here.
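To take effect, the BackendConfig has to be referenced from the Service that backs the Ingress. A minimal sketch, assuming a recent GKE version where the annotation is cloud.google.com/backend-config (older versions used the beta.cloud.google.com prefix) and reusing the names from above:
apiVersion: v1
kind: Service
metadata:
  name: frontend-prod
  namespace: prod
  annotations:
    # attach the BackendConfig above to port 80 of this Service
    cloud.google.com/backend-config: '{"ports": {"80": "backend-config"}}'
spec:
  type: NodePort
  selector:
    app: frontend-prod
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP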
Another reason why Google Cloud Load Balancer does not pick up the GCE health check configuration from a Kubernetes Pod readiness probe could be that the Service is configured as "selectorless" (the selector attribute is empty and you manage endpoints directly).
This is the case with e.g. kube-lego: see https://github.com/jetstack/kube-lego/issues/68#issuecomment-303748457 and https://github.com/jetstack/kube-lego/issues/68#issuecomment-327457982.
The original question does have a selector specified in the Service, so this hint doesn't apply there; it serves visitors who have the same problem with a different cause.
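For reference, a "selectorless" Service is one where you create the Endpoints object yourself, roughly like this (the names and the IP are made up for illustration):
apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  # no selector here, so Kubernetes will not manage the Endpoints
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-backend   # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.42          # example address, maintained by hand
  ports:
  - port: 80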
I have a situation where I have zero endpoints available for one service. To test this, I specially crafted a yaml descriptor that uses a simple node server to set and retrieve the ready/live status for a pod:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-deployment
  labels:
    app: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodejs
        image: nodejs_server
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /is_alive
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /is_ready
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
  labels:
    app: nodejs
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nodejs
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  backend:
    serviceName: nodejs-service
    servicePort: 80
The node server has methods to set and retrieve the liveness and readiness.
When the app starts I can see that 3 replicas are created and all of them are ready. OK, now I manually set their readiness status to false [from outside the ingress]. One pod is correctly removed from the endpoints, so no traffic is routed to it [that's OK, as this is the expected behavior]. When I set the readiness status to false for all pods, the endpoints list is empty [still the expected behavior].
At that point I cannot set ready=true from outside the ingress, as traffic is not routed to any pod. Is there a way, for example, to trigger a restart of the pod when readiness is not achieved after n tries or n seconds? Or when the endpoints list is empty?
Well, that is perfectly normal and expected behaviour. What you can do, on the side, is forward traffic from localhost to a particular pod with kubectl port-forward. That way you can access the pod directly, without ingresses etc., and set its readiness back to ok. If you want a restart when a pod is not ready for too long, just use the same endpoint for the liveness probe, but trigger it only after more failed attempts (a higher failureThreshold).
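Applied to the deployment above, that could look roughly like this (the thresholds are only illustrative): the liveness probe reuses /is_ready but only fails after many consecutive misses, so a pod that stays not-ready long enough gets restarted:
livenessProbe:
  httpGet:
    path: /is_ready        # same endpoint as the readiness probe
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 18     # ~3 minutes of not-ready before the container is restarted
readinessProbe:
  httpGet:
    path: /is_ready
    port: 8080
  initialDelaySeconds: 5
  timeoutSeconds: 3
  periodSeconds: 10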