How to concatenate secret files mounted in a volume in Kubernetes

I have several secrets that are mounted and need to be read as a properties file. It seems Kubernetes can't mount them as a single file, so I'm trying to concatenate the files after the pod starts. I tried running a cat command in a postStart handler, but it seems to execute before the secrets are mounted, because I get this error:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword": stat cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword: no such file or directory: unknown
Here is the YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: K8S_ID
spec:
  selector:
    matchLabels:
      app: K8S_ID
  replicas: 1
  template:
    metadata:
      labels:
        app: K8S_ID
    spec:
      containers:
      - name: K8S_ID
        image: IMAGE_NAME
        ports:
        - containerPort: 8080
        env:
        - name: PROPERTIES_FILE
          value: "/properties/dbPassword"
        volumeMounts:
        - name: secret-properties
          mountPath: "/properties"
        lifecycle:
          postStart:
            exec:
              command: ["cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
      volumes:
      - name: secret-properties
        secret:
          secretName: secret-properties
          items:
          - key: SECRET_ITEM
            path: dbPassword
          - key: S3Key
            path: S3Key
          - key: S3Secret
            path: S3Secret

You need to run your command in a shell, like this:
...
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh","-c","cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
...
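The original command failed because the exec form of command does not invoke a shell, so the whole string "cat ... >> ..." is looked up as a single executable name; wrapping it in /bin/sh -c fixes that. One caveat: Secret volumes are normally mounted read-only, so appending to a file inside the Secret mount itself may still fail at runtime. A sketch of an alternative that sidesteps both the read-only mount and any postStart timing concerns is to concatenate in an initContainer into a shared emptyDir (the volume name properties-combined and the busybox image are illustrative, not from the original manifest):

      initContainers:
      - name: combine-properties            # illustrative name
        image: busybox
        command: ["/bin/sh", "-c", "cat /secret-src/dbPassword /secret-src/S3Secret /secret-src/S3Key > /properties/dbPassword"]
        volumeMounts:
        - name: secret-properties           # the existing Secret volume, mounted at a scratch path
          mountPath: /secret-src
        - name: properties-combined         # writable emptyDir shared with the main container
          mountPath: /properties

The main container would then mount properties-combined at /properties instead of the Secret (and the pod spec would add an emptyDir volume named properties-combined), so PROPERTIES_FILE keeps pointing at /properties/dbPassword.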

Related

Kubernetes initContainers to copy file and execute as part of Lifecycle Hook PostStart

I am trying to execute a script as part of a StatefulSet deployment. I have added the script as a ConfigMap and mount it as a volume inside the pod definition, then use the lifecycle postStart exec command to execute it. It fails with a permission issue.
Based on certain articles, I found that we should copy this file in an initContainer and then use that (I am not sure why we should do this or what difference it would make).
Still, I tried it and it gives the same error.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap-initscripts
data:
  poststart.sh: |
    #!/bin/bash
    echo "It`s done"
Here is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
spec:
  ....
  serviceName: postgres-service
  replicas: 1
  template:
    ...
    spec:
      initContainers:
      - name: "postgres-ghost"
        image: alpine
        volumeMounts:
        - mountPath: /scripts
          name: postgres-scripts
      containers:
      - name: postgres
        image: postgres
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "/scripts/poststart.sh" ]
        ports:
        - containerPort: 5432
          name: dbport
        ....
        volumeMounts:
        - mountPath: /scripts
          name: postgres-scripts
      volumes:
      - name: postgres-scripts
        configMap:
          name: postgres-configmap-initscripts
          items:
          - key: poststart.sh
            path: poststart.sh
The error I am getting is a permission error.
The postStart hook will be called at least once, but may be called more than once, so it is not a good place to run a script.
The poststart.sh file mounted from the ConfigMap will not have the execute bit set, hence the permission error.
It is better to run the script in an initContainer. Here's a quick example that does a simple chmod; in your case you can execute the script there instead:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: busybox
data:
  test.sh: |
    #!/bin/bash
    echo "It's done"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  volumes:
  - name: scripts
    configMap:
      name: busybox
      items:
      - key: test.sh
        path: test.sh
  - name: runnable
    emptyDir: {}
  initContainers:
  - name: prepare
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["ash","-c"]
    args: ["cp /scripts/test.sh /runnable/test.sh && chmod +x /runnable/test.sh"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
    - name: runnable
      mountPath: /runnable
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["ash","-c"]
    args: ["while :; do . /runnable/test.sh; sleep 1; done"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
    - name: runnable
      mountPath: /runnable
EOF

Kubernetes: Cannot VolumeMount emptydir within init container

I am trying to use the amazon/aws-cli Docker image to download all files from an S3 bucket in an initContainer and mount the same volume into the main container.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-deployment
  name: test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-deployment
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      securityContext:
        fsGroup: 2000
      serviceAccountName: "s3-sa" # Name of the SA we're using
      automountServiceAccountToken: true
      initContainers:
      - name: data-extension
        image: amazon/aws-cli
        volumeMounts:
        - name: data
          mountPath: /data
        command:
        - aws s3 sync s3://some-bucket/ /data
      containers:
      - image: amazon/aws-cli
        name: aws
        command: ["sleep","10000"]
        volumeMounts:
        - name: data
          mountPath: "/data"
      volumes:
      - name: data
        emptyDir: {}
But it does not seem to work; it causes the init container to go into CrashLoopBackOff.
error:
Error: failed to start container "data-extension": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "aws s3 sync s3://some-bucket/ /data": stat aws s3 sync s3://some-bucket/ /data: no such file or directory: unknown
Your command needs to be updated; with the exec form, each argument has to be its own list item:
...
command:
- "aws"
- "s3"
- "sync"
- "s3://some-bucket/"
- "/data"
...
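The exec-style command runs no shell, so the single string "aws s3 sync s3://some-bucket/ /data" was looked up as a literal binary name, which is exactly the "no such file or directory" error shown above. If you prefer to keep the command as one string, a shell wrapper should also work (a sketch, assuming /bin/sh is present in the amazon/aws-cli image):

      command: ["/bin/sh", "-c"]
      args: ["aws s3 sync s3://some-bucket/ /data"]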

k8s docker container mounts the host, but fails to output log files

The k8s Docker container mounts a host path, but fails to output log files to the host. Can you tell me the reason?
The Kubernetes YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: test
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: db
        image: postgres:11.0-alpine
        command:
        - "docker-entrypoint.sh"
        - "postgres"
        - "-c"
        - "logging_collector=on"
        - "-c"
        - "log_directory=/var/lib/postgresql/log"
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: log-fs
          mountPath: /var/lib/postgresql/log
      volumes:
      - name: log-fs
        hostPath:
          path: /var/log

Run consul agent with config-dir caused not found exception

In our docker-compose.yaml we have:
version: "3.5"
services:
  consul-server:
    image: consul:latest
    command: "agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=./usr/src/app/consul.d/"
    volumes:
    - ./consul.d/:/usr/src/app/consul.d
In the consul.d folder we have statically defined our services. It works fine with docker-compose.
But when trying to run it on Kubernetes with this configmap:
ahmad@ahmad-pc:~$ kubectl describe configmap consul-config -n staging
Name: consul-config
Namespace: staging
Labels: <none>
Annotations: <none>
Data
====
trip.json:
----
... omitted for clarity ...
and consul.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: consul-server
  name: consul-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: consul-server
  template:
    metadata:
      labels:
        io.kompose.service: consul-server
    spec:
      containers:
      - image: quay.io/bitnami/consul:latest
        name: consul-server
        ports:
        - containerPort: 8500
        #env:
        #- name: CONSUL_CONF_DIR # Consul seems not respecting this env variable
        #  value: /consul/conf/
        volumeMounts:
        - name: config-volume
          mountPath: /consul/conf/
        command: ["agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/"]
      volumes:
      - name: config-volume
        configMap:
          name: consul-config
I got the following error:
ahmad@ahmad-pc:~$ kubectl describe pod consul-server-7489787fc7-8qzhh -n staging
...
Error: failed to start container "consul-server": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/\":
stat agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/:
no such file or directory": unknown
But when I run the container without the command: agent... line and bash into it, I can list the files mounted in the right place.
Why does Consul give me a "not found" error even though the folder exists?
To execute a command in the pod you have to define the command in the command field and the arguments for that command in the args field. The command field is the same as ENTRYPOINT in Docker, and the args field is the same as CMD.
In this case you define /bin/sh as the ENTRYPOINT and "-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/" as the arguments, so it executes consul agent ...:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: consul-server
  name: consul-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: consul-server
  template:
    metadata:
      labels:
        io.kompose.service: consul-server
    spec:
      containers:
      - image: quay.io/bitnami/consul:latest
        name: consul-server
        ports:
        - containerPort: 8500
        env:
        - name: CONSUL_CONF_DIR # Consul does not seem to respect this env variable
          value: /consul/conf/
        volumeMounts:
        - name: config-volume
          mountPath: /consul/conf/
        command: ["/bin/sh"]
        args: ["-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/"]
      volumes:
      - name: config-volume
        configMap:
          name: consul-config
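The same command can also be expressed without a shell by splitting every argument into its own list item (a sketch, assuming the consul binary is on the image's PATH, which is worth verifying for your image tag):

        command: ["consul"]
        args: ["agent", "-server", "-bootstrap", "-ui", "-enable-script-checks=true", "-client=0.0.0.0", "-data-dir=/bitnami/consul/data/", "-config-dir=/consul/conf/"]

Either way, the key point is that a single string such as "agent -server ..." is never split into separate arguments unless a shell does the splitting.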

Is it possible to use a bash script to do the liveness test in pod?

I'm currently setting up a Kubernetes cluster with 3 nodes on 3 different VMs, and each node runs 1 pod with the following Docker image: ethereum/client-go:stable.
The problem is that I want to do the health check using a bash script (because I have to test a lot of things), but I don't understand how I can get this file into each container deployed with my YAML deployment file.
I've tried adding a wget command to the YAML file to download my health check script from my GitHub repo, but it didn't seem very clean to me; maybe there is another way?
My current deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: goerli
  name: goerli-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: goerli
  template:
    metadata:
      labels:
        app: goerli
    spec:
      containers:
      - image: ethereum/client-go:stable
        name: goerli-geth
        args: ["--goerli", "--datadir", "/test2"]
        env:
        - name: LASTBLOCK
          value: "0"
        - name: FAILCOUNTER
          value: "0"
        ports:
        - containerPort: 30303
          name: geth
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - /test/health.sh
          initialDelaySeconds: 60
          periodSeconds: 100
        volumeMounts:
        - name: test
          mountPath: /test
      restartPolicy: Always
      volumes:
      - name: test
        hostPath:
          path: /test
I expect to put the health check script in /test/health.sh.
Any ideas?
This could be a perfect use case for an init container. Since the init container and the application container can use different images, they have different file systems inside the pod, so we need to use an emptyDir volume to share state.
For further detail, follow the link: init-containers
Thanks to Suresh Vishnoi:
A way to resolve my problem is to use an init container, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: goerli
  name: goerli-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: goerli
  template:
    metadata:
      labels:
        app: goerli
    spec:
      containers:
      - image: ethereum/client-go:stable
        name: goerli-geth
        args: ["--goerli", "--datadir", "/test2"]
        env:
        - name: LASTBLOCK
          value: "0"
        - name: FAILCOUNTER
          value: "0"
        ports:
        - containerPort: 30303
          name: geth
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - /test/health.sh
          initialDelaySeconds: 60
          periodSeconds: 100
        volumeMounts:
        - name: test
          mountPath: /test
      initContainers:
      - name: healthcheck
        image: ethereum/client-go:stable
        command: ["wget", "-O", "/test/health.sh", "https://My-script-bash"]
        volumeMounts:
        - name: test
          mountPath: "/test"
      restartPolicy: Always
      volumes:
      - name: test
        emptyDir: {}
The downloaded file will be visible in /test/health.sh
If you're using Helm, look at chart tests: https://github.com/helm/helm/blob/master/docs/chart_tests.md. This covers readiness, though, not liveness.
For an advanced liveness probe, I'd run some kind of healthcheck sidecar that does all the advanced tests continuously via localhost and exposes a single /healthcheck endpoint, then use that endpoint in a liveness probe.
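As a rough sketch of that pattern (the sidecar name, its image, and port 8080 are all illustrative placeholders, not from the original question), the probe on the main container would point at the sidecar's endpoint:

      containers:
      - name: goerli-geth
        image: ethereum/client-go:stable
        livenessProbe:
          httpGet:
            path: /healthcheck
            port: 8080              # port the sidecar listens on
          initialDelaySeconds: 60
          periodSeconds: 100
      - name: healthcheck-sidecar   # hypothetical sidecar running the bash checks
        image: my-healthcheck-image # placeholder image
        ports:
        - containerPort: 8080

Containers in a pod share the network namespace, so the probe against port 8080 on the main container reaches the sidecar's endpoint, and a failing check restarts the geth container.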