k8s: Unable to read environment variable in livenessProbe exec

I am trying to use the below command to curl the pod itself using its pod IP, because currently I don't want the kubelet to call my pod directly for the health check.
livenessProbe:
  exec:
    command:
    - curl
    - $POD_IP:9990/admin/ping
  initialDelaySeconds: 3
  periodSeconds: 5
but the env variable $POD_IP is not recognized here:
Could not resolve host: $POD_IP
How do I configure this so that the env var can be read by curl in the command?
reference:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#execaction-v1-core

Try the following. The probe's exec command is run directly by the container runtime, not through a shell, so $POD_IP is passed to curl as a literal string; wrapping the command in a shell makes the expansion happen:
command:
- bash
- -c
- curl $POD_IP:9990/admin/ping
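This works because bash expands $POD_IP before curl runs. For completeness, here is a minimal sketch (assuming your image provides bash and curl) of wiring up POD_IP itself via the downward API, so the shell has something to expand:

env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP   # the pod's own IP, injected by the kubelet
livenessProbe:
  exec:
    command:
    - bash
    - -c
    - curl $POD_IP:9990/admin/ping
  initialDelaySeconds: 3
  periodSeconds: 5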

Try escaping the $ with \$; this has worked for me.
livenessProbe:
  exec:
    command:
    - curl
    - \$POD_IP:9990/admin/ping
  initialDelaySeconds: 3
  periodSeconds: 5

Kubernetes: How to reference an environment variable in a config file

As per the Kubernetes docs, to reference an environment variable in a config, use the expression $(ENV_VAR).
But this is not working for me.
In the readiness and liveness probe APIs, I am getting the token value as the literal $(KUBERNETES_AUTH_TOKEN) instead of abcdefg:
containers:
- env:
  - name: KUBERNETES_AUTH_TOKEN
    value: "abcdefg"
  lifecycle:
    preStop:
      exec:
        command:
        - "sh"
        - "-c"
        - |
          curl -H "Authorization: $(KUBERNETES_AUTH_TOKEN)" -H "Content-Type: application/json" -X GET http://localhost:9443/pre-stop
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
There is an issue open about a very similar question here.
Anyway, I want to share some YAML lines.
This is not intended for a production env; obviously it's just to play around with the commands.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ngtest
  name: ngtest
spec:
  volumes:
  - name: stackoverflow-volume
    hostPath:
      path: /k8s/stackoverflow-eliminare-vl
  initContainers:
  - name: initial-container
    volumeMounts:
    - mountPath: /status/
      name: stackoverflow-volume
    image: bash
    env:
    - name: KUBERNETES_AUTH_TOKEN
      value: "abcdefg"
    command:
    # this only writes a text file, to show the purpose of initContainers (here you could create a bash script)
    - "sh"
    - "-c"
    - "echo $(KUBERNETES_AUTH_TOKEN) > /status/$(KUBERNETES_AUTH_TOKEN).txt" # with this line, substitution occurs with the $() form
  containers:
  - image: nginx
    name: ngtest
    ports:
    - containerPort: 80
    env:
    - name: KUBERNETES_AUTH_TOKEN
      value: "abcdefg"
    volumeMounts:
    - mountPath: /status/
      name: stackoverflow-volume
    lifecycle:
      preStop:
        exec:
          command:
          - "sh"
          - "-c"
          - |
            echo $KUBERNETES_AUTH_TOKEN > /status/$KUBERNETES_AUTH_TOKEN.txt &&
            echo 'echo $KUBERNETES_AUTH_TOKEN > /status/anotherFile2.txt' > /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh && . /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh &&
            echo 'echo $(KUBERNETES_AUTH_TOKEN) > /status/anotherFile3.txt' > /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh && . /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh &&
            echo 'curl -H "Authorization: $KUBERNETES_AUTH_TOKEN" -k https://www.google.com/search?q=$KUBERNETES_AUTH_TOKEN > /status/googleresult.txt && exit 0' > /status/exec-a-query-on-google.sh && . /status/exec-a-query-on-google.sh
          # the first two lines work
          # the third one, with $(KUBERNETES_AUTH_TOKEN), does not
          # the last one, which creates a bash script, works, and this could be a solution
    resources: {}
    # # so, instead of using the http liveness probe
    # livenessProbe:
    #   failureThreshold: 3
    #   httpGet:
    #     path: /status
    #     port: 80
    #     scheme: HTTP
    #     httpHeaders:
    #     - name: Authorization
    #       value: $(KUBERNETES_AUTH_TOKEN)
    # you can use exec to call the endpoint from a bash script.
    # this reads the file written by the initContainer (optional);
    # here I omitted the URL call, but you can copy it from the examples above
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - "cat /status/$KUBERNETES_AUTH_TOKEN.txt && echo -$KUBERNETES_AUTH_TOKEN- `date` top! >> /status/resultliveness.txt && exit 0"
      initialDelaySeconds: 15
      periodSeconds: 5
  dnsPolicy: ClusterFirst
  restartPolicy: Always
This creates a pod with a hostPath volume (only to show you the output of the files), where files will be created based on the commands in the YAML.
More details are in the YAML above.
If you go onto your cluster machine, you can view the files produced.
Anyway, you should use ConfigMaps, Secrets and/or https://helm.sh/docs/chart_template_guide/values_files/, which allow you to create your own charts and separate your config values from the YAML templates; a Secret-based variant is sketched below.
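As a hedged sketch of that last suggestion (the Secret name auth-token and key token are hypothetical), the token could be sourced from a Secret instead of a literal value:

env:
- name: KUBERNETES_AUTH_TOKEN
  valueFrom:
    secretKeyRef:
      name: auth-token   # hypothetical Secret holding the token
      key: token         # hypothetical key inside that Secret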
Hopefully it helps.
P.S. This is my first answer on StackOverflow, please don't be too rude with me!
I don't think that's possible (at least, not that way; kiggyttass' answer has an alternative that may work), and really your readiness and liveness endpoints shouldn't require authentication.
However, there is a hacky way around this. I don't know if it's available to you, but it works for me. First, you'll need to have the KUBERNETES_AUTH_TOKEN variable set in the environment in which you're running kubectl; then you need to use the envsubst tool, like so:
cat k8s.yaml | envsubst | kubectl apply -f -
If you use this approach, you'll have to remove the parentheses around the env var.
One final note: the header you're using requires a hint as to the nature of the token, so it should maybe be:
value: Bearer $KUBERNETES_AUTH_TOKEN
Or Basic, or whatever. Also, I believe the token itself must be base64 encoded.
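To make the round trip concrete, a small sketch (k8s.yaml and the token value are taken from this answer and the question; the comments show what kubectl receives after substitution):

# k8s.yaml contains, after removing the parentheses:
#   value: Bearer $KUBERNETES_AUTH_TOKEN
export KUBERNETES_AUTH_TOKEN=abcdefg
cat k8s.yaml | envsubst | kubectl apply -f -
# kubectl receives:
#   value: Bearer abcdefg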

startup probes not working with exec as expected

I have a sample web app and Redis that I am running in Kubernetes.
I am using probes for the basic checks, as below.
Now I want to make sure that Redis is up and running before the application starts.
The code snippet below is from the web app.
When I run the command nc -zv <redis service name> 6379 manually it works well, but when I use it as the command in the startupProbe it gives me errors. I think the way I am passing the command is not right; can someone help me understand what is wrong?
The error I get:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "nc -zv redis 6379": executable file not found in $PATH: unknown
readinessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 20
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 30
  periodSeconds: 5
startupProbe:
  exec:
    command:
    - nc -zv redis 6379
  failureThreshold: 20
  periodSeconds: 5
The command has to be entered in the proper format, since it is an array: each argument goes in its own element, and the first element must be an executable on the container's PATH (as written, the whole string "nc -zv redis 6379" was being looked up as a single binary name). The below code is in the expected format.
startupProbe:
  exec:
    command:
    - nc
    - -zv
    - redis
    - "6379"
  failureThreshold: 30
  periodSeconds: 5
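Alternatively, a sketch that assumes the image ships a POSIX shell: the original one-string form can be kept by handing it to a shell, which then does the word splitting:

startupProbe:
  exec:
    command:
    - sh
    - -c
    - nc -zv redis 6379   # the shell splits this into nc, -zv, redis, 6379
  failureThreshold: 20
  periodSeconds: 5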

Kubernetes postStart hook leads to race condition

I use MySQL on Kubernetes with a postStart hook which should run a query after the start of the database.
This is the relevant part of my template.yaml:
spec:
  containers:
  - name: ${{APP}}
    image: ${REGISTRY}/${NAMESPACE}/${APP}:${VERSION}
    imagePullPolicy: Always
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - hostname && sleep 12 && echo $QUERY | /opt/rh/rh-mysql80/root/usr/bin/mysql -h localhost -u root -D grafana -P 3306
    ports:
    - name: tcp3306
      containerPort: 3306
    readinessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 15
      timeoutSeconds: 1
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 120
      timeoutSeconds: 1
When the pod starts, the PVC for the database gets corrupted and the pod crashes. When I restart the pod, it works. I guess the query runs when the database is not up yet. I guess this might get fixed with the readinessProbe, but I am not an expert on these topics.
Did anyone else run into a similar issue and know how to fix it?
Note that postStart will be called at least once, but may also be called more than once. This makes postStart a bad place to run a query.
You can set the pod's restartPolicy: OnFailure and run the query in a separate MySQL container. Start your second container with a wait, then run your query. Note that your query should produce an idempotent result, or your data integrity may break; consider the case where the pod is re-created with the existing data volume.
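A hedged sketch of that second container (the image tag and the QUERY value are placeholders; the host, user, and database are carried over from the question, and mysqladmin ping is used as the wait signal):

- name: run-query
  image: mysql:8.0             # hypothetical image that ships the mysql client tools
  env:
  - name: QUERY
    value: "SELECT 1;"         # placeholder; the real query goes here
  command:
  - /bin/sh
  - -c
  - |
    # block until the database container in the same pod answers, then run the query once
    until mysqladmin ping -h 127.0.0.1 --silent; do sleep 2; done
    echo "$QUERY" | mysql -h 127.0.0.1 -u root -D grafana -P 3306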

Unable to import local minikube cluster to rancher on Mac (cattle-cluster-agent fails on :8080/health)

I have installed Rancher on Mac and used custom port numbers; I am able to connect to localhost:443 and work through the Rancher GUI.
docker run --privileged -d --restart=unless-stopped -p 980:80 -p 981:443 --name rancher rancher/rancher
I then created a local minikube cluster and tried to import it into Rancher via the GUI.
minikube -p dxmcs3 start
As suggested by Rancher in the GUI, I ran the following to import it; since the minikube pod needs to be able to access the Rancher endpoint on my host machine (Mac), I updated the CATTLE_SERVER host to point to https://host.minikube.internal:981 prior to running:
curl --insecure -sfL https://localhost:981/v3/import/zsh5mtnkkrtz7tj7scbpb59q5tsjzmhg7r5476z7gdnh4xgjczt7cd_c-6sfj7.yaml | sed 's/https:\/\/localhost:981/https:\/\/host.minikube.internal:981/g' | kubectl apply -f -
The deployments go well, but the cluster's import shows up as "pending" forever in the Rancher UI.
On digging, I noticed that the rancher/rancher-agent:v2.5.9 pod fails to start. More specifically, the "cluster-register" container fails its health check on startup. It tries http://(pod-ip):8080/health and fails.
containers:
- env:
  - name: CATTLE_FEATURES
  - name: CATTLE_IS_RKE
    value: "false"
  - name: CATTLE_SERVER
    value: https://host.minikube.internal:981
  - name: CATTLE_CA_CHECKSUM
    value: d8e8de5d121fac709e414123ff792458931530569555a3d233d555deb62a9490
  - name: CATTLE_CLUSTER
    value: "true"
  - name: CATTLE_K8S_MANAGED
    value: "true"
  - name: CATTLE_CLUSTER_REGISTRY
  image: rancher/rancher-agent:v2.5.9
  imagePullPolicy: IfNotPresent
  name: cluster-register
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /health
      port: 8080
      scheme: HTTP
    initialDelaySeconds: 2
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 1
I'd appreciate the community's help with resolving this, or any pointer to a page I can refer to. I really want to be able to have Rancher manage multiple minikube-based local k8s environments for my learning and experimentation.

Define a livenessProbe with secret httpHeaders

I want to define a livenessProbe with an httpHeader whose value is secret.
This syntax is invalid:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: X-Custom-Header
      valueFrom:
        secretKeyRef:
          name: my-secret-key
          value: secret
If I specify my-secret-key with value secret as an environment variable named MY_SECRET_KEY, the following could work:
livenessProbe:
  exec:
    command:
    - curl
    - --fail
    - -H
    - "X-Custom-Header: $MY_SECRET_KEY"
    - 'http://localhost:8080/healthz'
Unfortunately it doesn't, due to the way the quotations are being evaluated. If I type the command curl --fail -H "X-Custom-Header: $MY_SECRET_KEY" http://localhost:8080/healthz directly in the container, it works.
I've also tried many combinations of single quotes and escaping the double quotes.
Does anyone know of a workaround?
Here are some examples with curl and wget (note that the double quotes around $AUTH_TOKEN matter: single quotes would suppress the shell's expansion):
exec:
  command:
  - /bin/sh
  - -c
  - 'curl -H "Authorization: Bearer $AUTH_TOKEN" http://example.com'
exec:
  command:
  - /bin/sh
  - -c
  - "wget --spider --header \"Authorization: Bearer $AUTH_TOKEN\" http://some.api.com/spaces/${SPACE_ID}/entries"
One workaround I can think of is to create a bash script to run this health check, and put your secret data into the environment as usual.
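A minimal sketch of that script-based approach (the path /healthcheck.sh is hypothetical, and it assumes curl is present in the image):

#!/bin/sh
# /healthcheck.sh, baked into the image; the secret value arrives via the environment
curl --fail -H "X-Custom-Header: $MY_SECRET_KEY" http://localhost:8080/healthz

The probe then simply invokes the script, so all quoting and expansion is handled by the shell inside the container:

livenessProbe:
  exec:
    command:
    - /bin/sh
    - /healthcheck.sh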