Readiness probe failed: remote error: tls: bad certificate - kubernetes

I get the following error:
Warning Unhealthy 14m (x4 over 15m) kubelet Liveness probe failed: Get "https://10.244.1.13:8443/healthz": remote error: tls: bad certificate
The server is configured with TLS support.
In https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ it is written that: "If scheme field is set to HTTPS, the kubelet sends an HTTPS request skipping the certificate verification." So it is not clear why we get this error.

What you have mentioned in the description differs from the subject: the subject says "Readiness probe failed", while the event shows a liveness probe failure. Check whether you have configured the scheme only in the readiness probe; you need to configure it in both the readiness and the liveness probe. If you want to send an HTTPS request to the service, you have to set the scheme accordingly:
livenessProbe:
  httpGet:
    path: /
    port: 443
    scheme: HTTPS
readinessProbe:
  httpGet:
    path: /
    port: 443
    scheme: HTTPS

Note that the kubelet skips certificate verification for HTTPS probes, so remote error: tls: bad certificate is an alert sent by the server itself. This typically means the server requires client certificate authentication, which the kubelet does not provide; to avoid the error, expose the health endpoint without requiring a client certificate.
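If the health port cannot be served without client certificate authentication, a minimal workaround sketch is to switch to a tcpSocket probe, which only checks that the port accepts TCP connections and never performs a TLS handshake:
livenessProbe:
  tcpSocket:
    port: 8443   # the same port the HTTPS probe targeted
  initialDelaySeconds: 10
  periodSeconds: 10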

Related

EKS Readiness probe failed - httpGet.port: Invalid value

I've tried to use a readiness probe to check my pod's health while deploying, using an internal API. Below is my readiness config in the deployment YAML:
readinessProbe:
  initialDelaySeconds: 10
  httpGet:
    host: dev.domain_name.com
    scheme: HTTP
    path: /api/v1/healthCheck
    port: 5050
I don't want the port number here, as my health check path is dev.domain_name.com/api/v1/healthCheck. If I remove the port, I get the error below.
Deployment.apps "app_name" is invalid: spec.template.spec.containers[0].readinessProbe.httpGet.port: Invalid value: 0: must be between 1 and 65535, inclusive
Is there any way to declare a readiness probe without a port number or any other alternative?
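A possible workaround, assuming the health endpoint is also reachable on the scheme's default port: the port field is required by the API (hence the validation error), but setting it to 80 for an HTTP probe produces the same effective URL with no explicit port:
readinessProbe:
  initialDelaySeconds: 10
  httpGet:
    host: dev.domain_name.com
    scheme: HTTP
    path: /api/v1/healthCheck
    port: 80   # default HTTP port, so the URL resolves to dev.domain_name.com/api/v1/healthCheck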

Why is my kubernetes readiness probe for hashicorp vault hitting http when I specify https?

My readiness probe specifies HTTPS and 8200 as the port to check my hashicorp vault pod.
readinessProbe:
  httpGet:
    scheme: HTTPS
    path: '/v1/sys/health?activecode=200'
    port: 8200
Once the pod is running, kubectl describe pod shows this:
Readiness probe failed: Error checking seal status: Error making API request.
URL: GET http://127.0.0.1:8200/v1/sys/seal-status
Code: 400. Raw Message:
Client sent an HTTP request to an HTTPS server.
I have found this similar problem. See the whole answer here.
According to this documentation:
The http/https scheme is controlled by the tlsDisable value.
When set to true, changes URLs from https to http (such as the VAULT_ADDR=http://127.0.0.1:8200 environment variable set on the Vault pods).
To turn it off:
global:
  tlsDisable: false
NOTE:
Vault should always be used with TLS in production to provide secure communication between clients and the Vault server. It requires a certificate file and key file on each host where Vault is running.
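As a sketch of where those files plug in with the Vault Helm chart (the mount path and file names below are assumptions for illustration, not values from the thread):
global:
  tlsDisable: false
server:
  standalone:
    config: |
      listener "tcp" {
        address       = "[::]:8200"
        tls_cert_file = "/vault/userconfig/vault-tls/tls.crt"  # assumed secret mount path
        tls_key_file  = "/vault/userconfig/vault-tls/tls.key"  # assumed secret mount path
      }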
See also this documentation. One can find there many examples of using readiness and liveness probes, e.g.:
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -ec
      - vault status
  initialDelaySeconds: 5
  periodSeconds: 5

How to terminate janusgraph container in case any exception is thrown

I'm using the janusgraph docker image - https://hub.docker.com/r/janusgraph/janusgraph
In my Kubernetes deployment I initialise the remote graph using a Groovy script mounted into docker-entrypoint-initdb.d.
This works as expected, but if the remote host is not ready, the JanusGraph container throws an exception and stays in the running state. Because of this, Kubernetes will not attempt to restart the container. Is there any way to configure this JanusGraph container to terminate in case of an exception?
As @Gavin has mentioned, you can use probes to check whether containers are working. A liveness probe is used to know when a container has failed; if a container is unresponsive, the kubelet can restart it.
Readiness probes inform when the container is available for accepting traffic. The readiness probe is used to control which pods are used as the backends for a service: a pod is considered ready when all of its containers are ready, and if a pod is not ready, it is removed from the service Endpoints.
Kubernetes supports three mechanisms for implementing liveness and readiness probes:
1) making an HTTP request against a container
This probe has additional fields that can be set on httpGet:
host: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
path: Path to access on the HTTP server. Defaults to /.
httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
port: Name or number of the port to access on the container. Number must be in the range 1 to 65535.
Read more: http-probes.
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
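As the field list above notes, a host name is usually better supplied as a Host header through httpHeaders; a minimal sketch (the header value here is a hypothetical example):
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
    httpHeaders:
      - name: Host
        value: app.internal.example   # hypothetical virtual host name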
2) opening a TCP socket against a container
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
3) running a command inside a container
livenessProbe:
  exec:
    command:
      - sh
      - /tmp/status_check.sh
  initialDelaySeconds: 10
If the command exits with a status code different from 0, the probe is considered failed.
You can also add further parameters to probes, such as initialDelaySeconds, the number of seconds after the container has started before liveness or readiness probes are initiated. See: configuring-probes.
In every case, also add restartPolicy: Never to your pod definition; the default is Always.
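restartPolicy sits at the pod spec level, next to containers; a minimal sketch combining it with the exec probe above (the pod name is hypothetical, the image and script path are the examples already used here):
apiVersion: v1
kind: Pod
metadata:
  name: janusgraph   # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
    - name: janusgraph
      image: janusgraph/janusgraph:latest
      livenessProbe:
        exec:
          command:
            - sh
            - /tmp/status_check.sh
        initialDelaySeconds: 10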
A readinessProbe could be employed here with a command like janusgraph show-config, or something similar that exits with a non-zero code on failure:
spec:
  containers:
    - name: liveness
      image: janusgraph/janusgraph:latest
      readinessProbe:
        exec:
          command:
            - janusgraph
            - show-config
If the readinessProbe fails, Kubernetes stops routing traffic to the pod rather than terminating it. A livenessProbe could be used here too, so that the container is restarted if the remote host ever becomes unavailable.
Consider enabling JanusGraph server metrics, which could then be used with Prometheus for additional monitoring or even with the livenessProbe itself.

istio failing with "failed checking application ports"

Using istio 1.0.2 and kubernetes 1.12 on GKE.
When deploying a web application, the pod never reaches the healthy status.
My main pod spits out healthy logs.
However, my sidecar, i.e. the istio-proxy container reads:
* failed checking application ports. listeners="0.0.0.0:15090","10.8.48.10:53","10.8.63.194:15443","10.8.63.194:443","10.8.58.47:15011","10.8.54.249:42422","10.8.48.44:443","10.8.58.10:44134","10.8.54.34:443","10.8.63.194:15020","10.8.49.250:8080","10.8.63.194:31400","10.8.63.194:15029","10.8.63.194:15030","10.8.60.185:11211","10.8.49.0:53","10.8.61.194:443","10.8.48.1:443","10.8.48.180:80","10.8.51.133:443","10.8.63.194:15031","10.8.63.194:15032","0.0.0.0:9901","0.0.0.0:9090","0.0.0.0:80","0.0.0.0:3000","0.0.0.0:8060","0.0.0.0:15010","0.0.0.0:8080","0.0.0.0:20001","0.0.0.0:7979","0.0.0.0:9091","0.0.0.0:9411","0.0.0.0:15004","0.0.0.0:15014","0.0.0.0:3030","10.8.33.8:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 5000
5000 is indeed the port my web app is listening on.
Any suggestions?
If there is a mismatch between the deployment port and the service port, this can cause some issues in combination with the readiness of the sidecar.
Add the annotation readiness.status.sidecar.istio.io/applicationPorts to your deployment like this:
annotations:
  readiness.status.sidecar.istio.io/applicationPorts: "5000"
You can add multiple ports by using comma separation.
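In context, the annotation belongs on the pod template inside the Deployment; a sketch with two ports (the second port is a hypothetical example of comma separation):
spec:
  template:
    metadata:
      annotations:
        readiness.status.sidecar.istio.io/applicationPorts: "5000,9090"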
@mkrobi I got this working as suggested in this post by adding the following:
readinessProbe:
  httpGet:
    path: /
    port: 8080
    scheme: HTTP
to the containers in my deployment. Make sure to change port 8080 to 5000.

how to avoid encoding in kubernetes http liveness and readiness probes?

My application has a health check at the endpoint /service?cmd=watchdog. When I configure an HTTP liveness probe in Kubernetes, the endpoint gets URL-encoded as it is applied to the pods: when I describe the pod, the health check shows as /service%3fcmd=watchdog, and that does not work for me.
in my deployment.yaml:
httpGet:
  path: "/service/?cmd=watchdog"
  port: 8080
Is there any way to avoid the encoding / work around this situation?
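A possible workaround, assuming the container image ships a shell and wget: move the check into an exec probe so the kubelet never rewrites the path:
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - wget -q -O /dev/null 'http://localhost:8080/service/?cmd=watchdog'
  initialDelaySeconds: 10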