I've tried to use a readiness probe to check my pod's health against an internal API while deploying. Below is the readiness config in my deployment YAML:
readinessProbe:
  initialDelaySeconds: 10
  httpGet:
    host: dev.domain_name.com
    scheme: HTTP
    path: /api/v1/healthCheck
    port: 5050
I don't want the port number here, as my health check path is dev.domain_name.com/api/v1/healthCheck. If I remove the port, I get the error below.
Deployment.apps "app_name" is invalid: spec.template.spec.containers[0].readinessProbe.httpGet.port: Invalid value: 0: must be between 1 and 65535, inclusive
Is there any way to declare a readiness probe without a port number or any other alternative?
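For reference, port is a required field of httpGet, so one possible workaround is to state the default HTTP port explicitly; a minimal sketch, assuming dev.domain_name.com serves the health check on the standard port 80 (which makes the probe URL the same as one written without a port):

readinessProbe:
  initialDelaySeconds: 10
  httpGet:
    host: dev.domain_name.com
    scheme: HTTP
    path: /api/v1/healthCheck
    port: 80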
I get the following error:
Warning Unhealthy 14m (x4 over 15m) kubelet Liveness probe failed: Get "https://10.244.1.13:8443/healthz": remote error: tls: bad certificate
The server is configured with TLS support.
In https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ it is written that: "If scheme field is set to HTTPS, the kubelet sends an HTTPS request skipping the certificate verification." So it is not clear why we get this error.
What you have mentioned in the description is quite different from the subject.
Please check whether you have configured this only in the readiness probe; you need to configure it in both the readiness and liveness probes.
If you want to send an HTTPS request to the service, you have to change the scheme accordingly:
livenessProbe:
  httpGet:
    path: /
    port: 443
    scheme: HTTPS
readinessProbe:
  httpGet:
    path: /
    port: 443
    scheme: HTTPS
To avoid the error, the kubelet should be configured with the CA certificate of the server.
My readiness probe specifies HTTPS and 8200 as the port to check my HashiCorp Vault pod.
readinessProbe:
  httpGet:
    scheme: HTTPS
    path: '/v1/sys/health?activecode=200'
    port: 8200
Once the pod is running, kubectl describe pod shows this:
Readiness probe failed: Error checking seal status: Error making API request.
URL: GET http://127.0.0.1:8200/v1/sys/seal-status
Code: 400. Raw Message:
Client sent an HTTP request to an HTTPS server.
I have found this similar problem. See the whole answer here.
According to this documentation:
The http/https scheme is controlled by the tlsDisable value.
When set to true, changes URLs from https to http (such as the VAULT_ADDR=http://127.0.0.1:8200 environment variable set on the Vault pods).
To turn tlsDisable off (i.e. keep TLS enabled):
global:
  tlsDisable: false
NOTE:
Vault should always be used with TLS in production to provide secure communication between clients and the Vault server. It requires a certificate file and key file on each host where Vault is running.
See also this documentation. There one can find many examples of using readiness and liveness probes, e.g.:
readinessProbe:
  exec:
    command:
    - /bin/sh
    - -ec
    - vault status
  initialDelaySeconds: 5
  periodSeconds: 5
I'm using the JanusGraph Docker image - https://hub.docker.com/r/janusgraph/janusgraph
In my Kubernetes deployment I initialise the remote graph using a Groovy script mounted into docker-entrypoint-initdb.d.
This works as expected, but if the remote host is not ready, the JanusGraph container throws an exception and stays in the running state.
Because of this, Kubernetes will not attempt to restart the container. Is there any way to configure this JanusGraph container so that it terminates in case of an exception?
As @Gavin has mentioned, you can use probes to check whether containers are working. A liveness probe is used to know when a container has failed: if the container is unresponsive, Kubernetes can restart it.
Readiness probes indicate when a container is ready to accept traffic. The readiness probe is used to control which pods are used as the backends for a Service. A pod is considered ready when all of its containers are ready; if a pod is not ready, it is removed from the Service endpoints.
Kubernetes supports three mechanisms for implementing liveness and readiness probes:
1) making an HTTP request against a container
These probes have additional fields that can be set on httpGet:
host: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
path: Path to access on the HTTP server. Defaults to /.
httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
port: Name or number of the port to access on the container. Number must be in the range 1 to 65535.
Read more: http-probes.
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
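Since the port can be referenced by name, the container has to declare that name on one of its ports; a minimal sketch of how a named containerPort pairs with the probe above (the name liveness-port and the number 8080 are illustrative):

ports:
- name: liveness-port
  containerPort: 8080
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port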
2) opening a TCP socket against a container
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
3) running a command inside a container
livenessProbe:
  exec:
    command:
    - sh
    - /tmp/status_check.sh
  initialDelaySeconds: 10
If the command exits with a status code other than 0, the probe is considered failed.
You can also add extra parameters to probes, such as initialDelaySeconds, which indicates the number of seconds after the container has started before liveness or readiness probes are initiated. See: configuring-probes.
In every case, also add restartPolicy: Never to your pod definition; the default is Always.
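restartPolicy is a pod-level field, a sibling of containers rather than part of the container spec; a minimal sketch of where it sits, reusing the JanusGraph image from the question:

spec:
  restartPolicy: Never # pod-level field; the default is Always
  containers:
  - name: janusgraph
    image: janusgraph/janusgraph:latest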
A readinessProbe could be employed here, with a command like janusgraph show-config or something similar which will exit with code -1:
spec:
  containers:
  - name: liveness
    image: janusgraph/janusgraph:latest
    readinessProbe:
      exec:
        command:
        - janusgraph
        - show-config
Kubernetes will stop routing traffic to the pod if the readinessProbe fails. A livenessProbe could also be used here, in case this container needs to be restarted if the remote host ever becomes unavailable.
Consider enabling JanusGraph server metrics, which could then be used with Prometheus for additional monitoring or even with the livenessProbe itself.
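Independently of metrics, a simple liveness probe could just verify that the JanusGraph (Gremlin) server port is accepting connections; a minimal sketch, assuming the default server port 8182 and illustrative timings:

livenessProbe:
  tcpSocket:
    port: 8182
  initialDelaySeconds: 30
  periodSeconds: 10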
In my deployment file, I created the liveness probe and readiness probe in the following manner:
livenessProbe:
  httpGet:
    path: /rest/assets/get
    port: 4000
    httpHeaders:
    - name: Authorization
      value: Basic cnBjOnUzSGRlM0xvaWI1SGpEcTFTZGVoQktpU1NBbHE=
    - name: Accept
      value: application/json
  initialDelaySeconds: 60 # wait this period after starting the first time
  periodSeconds: 30 # polling interval
  timeoutSeconds: 30 # expect a response within this time period
readinessProbe:
  httpGet:
    path: /rest/assets/get
    port: 4000
    httpHeaders:
    - name: Authorization
      value: Basic cnBjOnUzSGRlM0xvaWI1SGpEcTFTZGVoQktpU1NBbHE=
    - name: Accept
      value: application/json
Both these probes work fine.
However, I also have a GCE ingress controller, and the health checks corresponding to it are failing.
When I checked the health checks, I saw that they were not created the same as the readiness probe. Instead I see this in the description: Default kubernetes L7 Loadbalancing health check.
How can I change the health check so that it matches the readiness probe?
For example, the health checks have an option of comparing the response to the response field in the health checks.
I would expect a response of "Unauthorized Access", so I added it in the response field. However, it does not seem to work.
I searched for 'readi', 'ready', 'live' etc. in the Kubernetes swagger. I only see
io.k8s.api.core.v1.PodReadinessGate
thank you
That's something you would define yourself. For example, the following YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe: # this block performs liveness probes
      httpGet:
        path: /healthz
        port: 80
    readinessProbe: # this block performs readiness probes
      httpGet:
        path: /
        port: 80
So, a pod with nginx. I can simply add the highlighted blocks to the YAML file and there it is: kubelet will check on them. Of course you have to have something actually serving there (/healthz, in this example), otherwise you will get a 404.
You can add some configuration to the probes, like the other answer suggests. There are some more options than those.
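For instance, timing and threshold options can be tuned per probe; a sketch of the same nginx readiness probe with a few of those extra fields (the values are illustrative):

readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5 # wait before the first probe
  periodSeconds: 10 # probe every 10 seconds
  timeoutSeconds: 2 # fail the probe if no response within 2 seconds
  failureThreshold: 3 # mark the container unready after 3 consecutive failures
  successThreshold: 1 # one success marks it ready again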
According to Configure Liveness and Readiness Probes, services can be configured to use:
liveness command
TCP liveness probe
liveness HTTP request
So if your service uses HTTP requests for liveness and readiness, you will see a livenessProbe section in the pod definition (and the same for readinessProbe):
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
See the full example here.
There is no way to check the state of Liveness and Readiness probes directly.
You can check the resulting state of the pod which reflects changes in the state of Liveness and Readiness probes, but with some delay caused by threshold values.
Using kubectl describe pod you can also see some events at the bottom, but only after they have occurred; you can't get them as a direct reply to a request.
You can also look into the REST requests that run under the hood of kubectl commands. All you need to do is add a verbosity flag to the kubectl command:
-v, --v=0: Set the level of log output to debug-level (0~4) or trace-level (5~10)
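For example (the pod name is just a placeholder):

kubectl describe pod <pod-name> -v=8

At verbosity 8, kubectl logs the HTTP requests it sends to the API server, including the request and response contents.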